\section{Introduction}
One of the most important recent discoveries in particle physics was the discovery of neutrino oscillations in the atmospheric Super-Kamiokande \cite{Fukuda:1998mi}, solar SNO \cite{Ahmad:2002jz} and reactor KamLAND \cite{Eguchi:2002dm} experiments (1998-2002). In 2015 the Nobel Prize was awarded to T.~Kajita and A.~McDonald ``for the discovery of neutrino oscillations, which shows that neutrinos have masses''. The small neutrino masses driving neutrino oscillations are the only evidence in particle physics of the existence of new physics beyond the Standard Model.
The origin of neutrino masses, which are many orders of magnitude smaller than the quark and lepton masses, is the major open problem of neutrino physics.
In the first part of this talk I will consider the phenomenon of neutrino oscillations. In the second part I will discuss a possible (and plausible) origin of neutrino masses and mixing.
\section{Basics of Neutrino Oscillations}
The idea of neutrino oscillations was put forward by B.~Pontecorvo in Dubna in 1957-58 \cite{Pontecorvo:1957qd}. This idea was further developed by B.~Pontecorvo and V.~Gribov (1969) \cite{Gribov:1968kq} and by B.~Pontecorvo and S.~Bilenky (1975-1989) (see, for example, \cite{Bilenky:1978nj}). The idea of flavor neutrino mixing was proposed in \cite{Maki:1962mu}.
We know from experiments on the investigation of the ``invisible'' decay $Z^{0}\to \nu_{l}+\bar\nu_{l}$ (LEP, SLC) that three flavor left-handed neutrinos $\nu_{e},\nu_{\mu},\nu_{\tau}$ (and right-handed antineutrinos) exist in nature. The left-handed fields of the flavor neutrinos $\nu_{lL}$~$(l=e,\mu,\tau)$ enter into the Standard Model CC and NC weak interactions
\begin{equation}\label{CC}
\mathcal{L^{CC}_{I}}=-\frac{g}{2\sqrt{2}}j^{CC}_{\alpha}W^{\alpha}+
\mathrm{h.c.},\quad j^{CC}_{\alpha}=
2\sum_{l=e,\mu,\tau}\bar\nu_{lL}\gamma_{\alpha}l_{L}
\end{equation}
and
\begin{equation}\label{NC}
\mathcal{L^{NC}_{I}}=-\frac{g}{2\cos\theta_{W}}j^{NC}_{\alpha}
Z^{\alpha}, \quad j^{NC}_{\alpha}=
\sum_{l=e,\mu,\tau}\bar\nu_{lL}\gamma_{\alpha}\nu_{lL}.
\end{equation}
From the observation of neutrino oscillations it follows that {\em flavor neutrino fields are mixed}
\begin{equation}\label{mix}
\nu_{lL}(x)=\sum^{3}_{i=1}U_{li}~\nu_{iL}(x).
\end{equation}
Here $\nu_{i}(x)$ is the field of a neutrino (Dirac or Majorana) with mass $m_{i}$ and $U$ is the $3\times3$ PMNS mixing matrix.
It follows from (\ref{mix}) that if all neutrino mass-squared differences are small, the states of flavor neutrinos with definite momentum, produced in the decays $\pi^{+}\to \mu^{+}+\nu_{\mu}$ (accelerator and atmospheric neutrinos),
$(A,Z)\to (A,Z+1)+e^{-}+\bar\nu_{e}$ (reactor antineutrinos), etc., are given by
\begin{equation}\label{states}
|\nu_{l}\rangle=\sum^{3}_{i=1}U^{*}_{li}~|\nu_{i}\rangle~~
(l=e,\mu,\tau),
\end{equation}
where $|\nu_{i}\rangle$ is the state of neutrino with mass $m_{i}$, momentum $\vec{p}$ and energy $E_{i}\simeq p +\frac{m^{2}_{i}}{2E}$~ ($p^{2}\gg m^{2}_{i}$).
If at $t=0$ flavor neutrino $\nu_{l}$ was produced, the state of neutrino at the time $t$ will be a coherent superposition
\begin{equation}\label{transit}
|\nu_{l}\rangle_{t}=\sum_{i}U^{*}_{li}~e^{-iE_{i}t}|\nu_{i}\rangle =\sum_{l'}\mathcal{A}(\nu_{l}\to \nu_{l'})|\nu_{l'}\rangle.
\end{equation}
Here
\begin{equation}\label{transit1}
\mathcal{A}(\nu_{l}\to \nu_{l'})=e^{-ipL}\sum_{i}U_{l'i}
e^{-i\frac{m^{2}_{i}L}{2E}}U^{*}_{li}
\end{equation}
is the amplitude of the transition $\nu_{l}\to \nu_{l'}$ during the time $t$; $L\simeq t$ is the source-detector distance.
The amplitude $\mathcal{A}(\nu_{l}\to \nu_{l'})$ is {\em a coherent sum} over $i$ of products of three factors: the amplitude of the transition $\nu_{l}\to\nu_{i}$ ($U^{*}_{li}$), the amplitude of propagation in the state $\nu_{i}$ ($e^{-iE_{i}t}$), and the amplitude of the transition $\nu_{i}\to\nu_{l'}$ ($U_{l'i}$).
Neutrino oscillations in vacuum are the result of {\em interference
between the different $i$-amplitudes}. Taking into account the unitarity of the mixing matrix $U$ and the arbitrariness of a common phase, from (\ref{transit}) and (\ref{transit1}) we find a convenient expression for the transition probability (see \cite{Bilenky:2015xwa})
\begin{equation}\label{Probabil2}
\mathrm{P}(\nu_{l}\to \nu_{l'})= |\sum^{3}_{i=1}U_{l'i}
e^{-i\frac{m^{2}_{i}L}{2E}}U^{*}_{li}|^{2}=|\delta_{l'l}
-2i\sum_{i\neq r}e^{-i\Delta_{ri}}U_{l'i}U^{*}_{li}\sin\Delta_{ri}|^{2}.
\end{equation}
Here
\begin{equation}\label{Probabil1}
\Delta_{ri}=\frac{\Delta m^{2}_{ri}L}{4E},\quad\Delta m^{2}_{ik}= m^{2}_{k}- m^{2}_{i}
\end{equation}
and $r$ is an arbitrary, fixed index.
The second term of (\ref{Probabil2}) describes neutrino oscillations. We see from this expression that neutrino oscillations take place if
\begin{itemize}
\item at least one neutrino mass-squared difference is different from zero;
\item there is a neutrino mixing ($U\neq I$).
\end{itemize}
Let us consider the simplest case of two flavors, say $\mu$ and $\tau$. Choosing $r=1$, from (\ref{Probabil2}) we have
\begin{equation}\label{Probabil3}
\mathrm{P}(\nu_{l}\to \nu_{l'})=|\delta_{l'l}
-2ie^{-i\Delta_{12}}U_{l'2}U^{*}_{l2}\sin\Delta_{12}|^{2}.
\end{equation}
For two flavors the mixing matrix has the following general form
\begin{equation}\label{Probabil4}
U=\left(
\begin{array}{c c}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta \\
\end{array}
\right).
\end{equation}
From (\ref{Probabil3}) and (\ref{Probabil4}) we find the standard two-neutrino appearance and disappearance transition probabilities
\begin{equation}\label{Probabil5}
\mathrm{P}(\nu_{\mu}\to \nu_{\tau})=\sin^{2}2\theta \sin^{2}\frac{\Delta m^{2}_{12}L}{4E},~~ \mathrm{P}(\nu_{\mu}\to \nu_{\mu})=1-\sin^{2}2\theta \sin^{2}\frac{\Delta m^{2}_{12}L}{4E}.
\end{equation}
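The two-flavor formulas above are simple to check numerically. The following sketch (added here as an illustration; it uses the standard practical-units conversion $\Delta m^{2}L/4E \to 1.267\,\Delta m^{2}[\mathrm{eV}^{2}]\,L[\mathrm{km}]/E[\mathrm{GeV}]$, and the parameter values are assumed, not taken from this paper) verifies that the appearance and survival probabilities sum to one and that oscillations vanish without a mass splitting or without mixing:

```python
import numpy as np

def two_flavor_prob(dm2_ev2, L_km, E_GeV, sin2_2theta):
    """Two-flavor vacuum oscillation probabilities.

    Returns (appearance, survival), with the phase
    Delta = Dm^2 L / (4E) = 1.267 * Dm^2[eV^2] * L[km] / E[GeV].
    """
    delta = 1.267 * dm2_ev2 * L_km / E_GeV
    p_app = sin2_2theta * np.sin(delta) ** 2
    return p_app, 1.0 - p_app

# Illustrative (assumed) atmospheric-like parameters
p_app, p_surv = two_flavor_prob(2.5e-3, 295.0, 0.6, 1.0)
assert abs(p_app + p_surv - 1.0) < 1e-12       # probabilities sum to one
assert two_flavor_prob(0.0, 295.0, 0.6, 1.0)[0] == 0.0   # no mass splitting -> no oscillations
assert two_flavor_prob(2.5e-3, 295.0, 0.6, 0.0)[0] == 0.0  # no mixing -> no oscillations
```

The last two assertions illustrate the two conditions for oscillations listed above.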
Atmospheric, solar, long-baseline accelerator and reactor data are described by three-neutrino mixing. In this case
the transition probabilities depend on six parameters: the atmospheric and solar mass-squared differences $\Delta m^{2}_{A}$ and $\Delta m^{2}_{S}$, three mixing angles $\theta_{12}$, $\theta_{23}$, $\theta_{13}$ and one CP phase $\delta$.
Neutrino masses are usually labeled in such a way that $m_{1}$ and $m_{2}$ are connected with solar neutrinos.
From the condition of the MSW resonance \cite{Wolfenstein:1977ue,Mikheev:1986wj} it follows that $\Delta m^{2}_{12}>0$. The solar mass-squared difference is defined as $\Delta m^{2}_{S}=\Delta m^{2}_{12}$.
For $m_{3}$ there are two possibilities
\begin{enumerate}
\item Normal ordering (NO):\quad $m_{3}>m_{2}>m_{1}$.
\item Inverted ordering (IO):\quad $m_{2}>m_{1}>m_{3}$.
\end{enumerate}
The atmospheric mass-squared difference can be determined as follows:
\begin{equation}\label{Atm}
\Delta m^{2}_{A}=\Delta m^{2}_{23}~ (\mathrm{NO})\quad \Delta m^{2}_{A}=|\Delta m^{2}_{13}|~(\mathrm{IO}).
\end{equation}
Using (\ref{Atm}), from (\ref{Probabil2}) we can easily find that the probabilities of $\nua{l}\to \nua{l'}$ transitions take the form of a sum of atmospheric, solar and interference terms \cite{Bilenky:2015xwa}. In the NO case we have
\begin{eqnarray}
&&P^{\mathrm{NO}}(\nua{l}\to \nua{l'})
=\delta_{l' l }
-4|U_{l 3}|^{2}(\delta_{l' l} - |U_{l' 3}|^{2})\sin^{2}\Delta_{A}\nonumber\\&&-4|U_{l 1}|^{2}(\delta_{l' l} - |U_{l' 1}|^{2})\sin^{2}\Delta_{S}
-8~[\mathrm{Re}~(U_{l' 3}U^{*}_{l 3}U^{*}_{l'
1}U_{l 1})\cos(\Delta_{A}+\Delta_{S})\nonumber\\
&&\pm ~\mathrm{Im}~(U_{l' 3}U^{*}_{l 3}U^{*}_{l'
1}U_{l 1})\sin(\Delta_{A}+\Delta_{S})]\sin\Delta_{A}\sin\Delta_{S}.
\label{Genexp5}
\end{eqnarray}
In the IO case we find
\begin{eqnarray}
&&P^{\mathrm{IO}}(\nua{l}\to \nua{l'})
=\delta_{l' l }
-4|U_{l 3}|^{2}(\delta_{l' l } - |U_{l' 3}|^{2})\sin^{2}\Delta_{A}\nonumber\\&&-4|U_{l 2}|^{2}(\delta_{l' l} - |U_{l' 2}|^{2})\sin^{2}\Delta_{S}
-8~[\mathrm{Re}~(U_{l' 3}U^{*}_{l 3}U^{*}_{l'
2}U_{l 2})\cos(\Delta_{A}+\Delta_{S})\nonumber\\
&&\mp ~\mathrm{Im}~(U_{l' 3}U^{*}_{l 3}U^{*}_{l'
2}U_{l 2})\sin(\Delta_{A}+\Delta_{S})]\sin\Delta_{A}\sin\Delta_{S},
\label{Genexp6}
\end{eqnarray}
where $\Delta_{A,S}=\frac{\Delta m^{2}_{A,S}L}{4E}$.
Comparison of these expressions shows that
$P^{\mathrm{NO}}(\nua{l}\to \nua{l'})$ and $P^{\mathrm{IO}}(\nua{l}\to \nua{l'})$ differ by the interchange $U_{l(l') 1}\leftrightarrows U_{l(l') 2}$ and by the
sign of the last term.
In conclusion, we present in Table~\ref{tab:I} the results of a global analysis of the existing neutrino oscillation data \cite{Esteban:2020cvm}.
\begin{table}
\caption{Values of neutrino oscillation parameters obtained from the global fit of existing data \cite{Esteban:2020cvm}}
\label{tab:I}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline Parameter & Normal Ordering& Inverted Ordering\\
\hline $\sin^{2}\theta_{12}$& $0.310^{+0.013}_{-0.012}$& $0.310^{+0.013}_{-0.012}$
\\
\hline $\sin^{2}\theta_{23}$& $0.582^{+0.015}_{-0.019}$& $ 0.582^{+0.015}_{-0.018}$
\\
\hline $\sin^{2}\theta_{13}$ & $ 0.02240^{+0.00065}_{-0.00066}$& $0.02263^{+0.00065}_{-0.00066}$
\\
\hline $\delta $~(in $^{\circ}$) & $(217^{+40}_{-28})$& $ (280^{+25}_{-28})$
\\
\hline $\Delta m^{2}_{S}$& $(7.39^{+0.21}_{-0.20})\cdot 10^{-5}~\mathrm{eV}^{2}$&$(7.39^{+0.21}_{-0.20})\cdot 10^{-5}~\mathrm{eV}^{2}$\\
\hline $\Delta m^{2}_{A}$& $(2.525^{+0.033}_{-0.031})\cdot 10^{-3}~\mathrm{eV}^{2}$&$(2.512^{+0.034}_{-0.031})\cdot 10^{-3}~\mathrm{eV}^{2}$\\
\hline
\end{tabular}
\end{center}
\end{table}
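For illustration, the global-fit values in Table~\ref{tab:I} can be fed into the general expression for the transition probability. The sketch below is a hedged illustration added here (not part of the original paper): it assumes the standard parametrization of the PMNS matrix in terms of $\theta_{12},\theta_{23},\theta_{13},\delta$, the practical-units phase $1.267\,\Delta m^{2}[\mathrm{eV}^{2}]L[\mathrm{km}]/E[\mathrm{GeV}]$, and $m_{1}=0$ (the lightest mass is unknown). It checks the unitarity of $U$ and that the probabilities $\mathrm{P}(\nu_{\mu}\to\nu_{l'})$ sum to one:

```python
import numpy as np

def pmns(s12sq, s23sq, s13sq, delta_deg):
    """PMNS matrix in the standard parametrization (assumed here)."""
    s12, s23, s13 = np.sqrt([s12sq, s23sq, s13sq])
    c12, c23, c13 = np.sqrt([1 - s12sq, 1 - s23sq, 1 - s13sq])
    d = np.exp(1j * np.deg2rad(delta_deg))
    return np.array([
        [c12 * c13, s12 * c13, s13 * d.conj()],
        [-s12 * c23 - c12 * s23 * s13 * d, c12 * c23 - s12 * s23 * s13 * d, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * d, -c12 * s23 - s12 * c23 * s13 * d, c23 * c13],
    ])

def prob(U, m2_ev2, L_km, E_GeV, l, lp):
    """Coherent amplitude sum |sum_i U_{l'i} e^{-i m_i^2 L/2E} U*_{li}|^2."""
    phases = np.exp(-2j * 1.267 * np.asarray(m2_ev2) * L_km / E_GeV)
    return abs(np.sum(U[lp] * phases * U[l].conj())) ** 2

# NO best-fit values from the table; m1 = 0 is an assumption
U = pmns(0.310, 0.582, 0.02240, 217.0)
m2 = [0.0, 7.39e-5, 7.39e-5 + 2.525e-3]          # m_i^2 in eV^2
total = sum(prob(U, m2, 295.0, 0.6, 1, lp) for lp in range(3))
assert np.allclose(U @ U.conj().T, np.eye(3))    # unitarity of U
assert abs(total - 1.0) < 1e-10                  # probabilities sum to one
```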
The study of neutrino oscillations is now entering a high-precision era. High-precision measurements (at the \% level) are necessary in order to solve such fundamental problems of neutrino physics as
\begin{itemize}
\item What is the neutrino mass ordering?
\item Is $CP$ violated in the lepton sector, and what is the precise value of the CP phase $\delta$?
\end{itemize}
Neutrino oscillation experiments allow one to determine the two neutrino mass-squared differences $\Delta m^{2}_{S}$ and $\Delta m^{2}_{A}$. The lightest neutrino mass and, correspondingly, the absolute values of the neutrino masses are at present unknown.
In the recent tritium KATRIN experiment \cite{Aker:2019uuj} the following bound was found:
\begin{equation}\label{Katrin}
m_{\beta}=(\sum_{i}|U_{ei}|^{2}m^{2}_{i})^{1/2}<1.1~ \mathrm{eV}. \end{equation}
From different recent cosmological measurements the following bound was obtained \cite{Aghanim:2018eyx}:
\begin{equation}\label{Cosmo}
\sum_{i}m_{i}<0.12~\mathrm{eV}.
\end{equation}
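Even with the lightest mass unknown, the bounds above can be confronted with the oscillation data. The minimal sketch below (an illustration added here; it assumes normal ordering with $m_{1}=0$ and uses the NO values from Table~\ref{tab:I}) computes $m_{\beta}$ and $\sum_{i}m_{i}$ for the minimal spectrum:

```python
import numpy as np

# Minimal NO spectrum with m1 = 0 (an assumption; the lightest mass is unknown)
dm2_S, dm2_A = 7.39e-5, 2.525e-3               # eV^2, NO global-fit values
s12sq, s13sq = 0.310, 0.02240
m = np.array([0.0, np.sqrt(dm2_S), np.sqrt(dm2_S + dm2_A)])   # masses in eV

# |U_ei|^2 in the standard parametrization (assumed)
Ue_sq = np.array([(1 - s12sq) * (1 - s13sq), s12sq * (1 - s13sq), s13sq])
m_beta = np.sqrt(np.sum(Ue_sq * m**2))         # effective beta-decay mass

assert np.sum(m) < 0.12    # minimal NO spectrum is consistent with the cosmological bound
assert m_beta < 1.1        # and lies far below the current KATRIN bound
```

For this minimal spectrum $m_{\beta}\approx 9\cdot 10^{-3}$ eV and $\sum_i m_i \approx 6\cdot 10^{-2}$ eV, well below both bounds.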
\section{On the Origin of Small Neutrino Masses}
Before starting the discussion of a possible origin of neutrino masses, we would like to recall that particles with spin 1/2 can be Dirac or Majorana particles.
A Dirac field $\psi(x)$ is a complex (non-Hermitian) four-component field which satisfies the Dirac equation. If the Lagrangian is invariant under a global transformation $\psi(x)\to e^{i\Lambda}\psi(x)$ ($\Lambda$ is a constant), a charge is conserved and $\psi(x)$ is a field of particles and antiparticles, which have opposite charges, the same masses (due to $CPT$ invariance) and helicities $\pm 1$.
A Majorana field $\chi(x)$ is a two-component field which satisfies the Dirac equation and the Majorana condition
\begin{equation}\label{Maj}
\chi(x)=\chi^{c}(x)=C\bar \chi^{T}(x),~~C\gamma^{T}_{\alpha}C^{-1}=-\gamma_{\alpha},~C^{T}=-C. \end{equation}
There is no global invariance of the Lagrangian in the Majorana case. A Majorana field is a two-component field of truly neutral particles with helicities $\pm 1$.
Neutrino masses and mixing are generated by {\em a neutrino mass term}. The first neutrino mass term was proposed by V.~Gribov and B.~Pontecorvo \cite{Gribov:1968kq} in 1969. At that time it was established that the Lagrangian of the weak interaction had the $V-A$ current~$\times$~current form
\begin{equation}\label{Current}
\mathcal{L}=-\frac{G_{F}}{\sqrt{2}}j^{CC}(j^{CC})^{\dag}
\end{equation}
where the leptonic current was given by the expression
\begin{equation}\label{Current1}
j^{CC,\mathrm{lep}}_{\alpha}
=2(\bar\nu_{eL}\gamma_{\alpha}e_{L}+\bar\nu_{\mu L}\gamma_{\alpha}\mu_{L}).
\end{equation}
Gribov and Pontecorvo asked themselves the following question: is it possible to introduce neutrino masses and mixing if the neutrino fields are the left-handed $\nu_{eL}$, $\nu_{\mu L}$?\footnote{It was a common opinion at that time that left-handed neutrinos are massless.}
They understood that {\em if the total lepton number $L=L_{e}+L_{\mu}$ is not conserved}, it is possible to build a neutrino mass term from the $\nu_{eL}$, $\nu_{\mu L}$ fields alone. In fact, taking into account that the $C$-conjugated field
$\nu_{lL}^{c}=C\bar\nu_{lL}^{T}$ is right-handed, in the general three-flavor case (see \cite{Bilenky:1987ty}) we have the following unique mass term
\begin{equation}\label{Mjmass}
\mathcal{L}^{\mathrm{M}}(x)=-\frac{1}{2}\sum_{l',l}
\bar\nu_{l'L}(x)M^{\mathrm{M}}_{l'l}\nu_{lL}^{c}(x) +\mathrm{h.c.},\quad M^{\mathrm{M}}=(M^{\mathrm{M}})^{T}.
\end{equation}
The matrix $M^{\mathrm{M}}$ can be diagonalized as follows
\begin{equation}\label{Mjmass1}
M^{\mathrm{M}}=U~m~U^{T}, \quad U^{\dag}~U=1,
\end{equation}
where $m_{ik}=m_{i}\delta_{ik},~ m_{i}>0$. From (\ref{Mjmass}) and (\ref{Mjmass1}) we find
\begin{equation}\label{Mjmass2}
\mathcal{L}^{\mathrm{M}}(x)=-\frac{1}{2}\sum^{3}_{i=1}m_{i}~
\bar\nu_{i}(x)\nu_{i}(x),
\end{equation}
where $\nu_{i}(x)$, the field of neutrino with mass $m_{i}$,
satisfies the Majorana condition
\begin{equation}\label{Mj}
\nu_{i}(x)=\nu^{c}_{i}(x) =C\bar\nu^{T}_{i}(x).
\end{equation}
The flavor field $\nu_{lL}(x)$ is a ``mixed'' field
\begin{equation}\label{Mj1}
\nu_{lL}(x) =\sum^{3}_{i=1}U_{li}\nu_{iL}(x),\quad (l=e,\mu,\tau)
\end{equation}
The mass term $\mathcal{L}^{\mathrm{M}}$ is called the Majorana mass term. It is the only possible mass term built solely from the left-handed flavor fields $\nu_{lL}$. We would like to stress that in the framework of the purely phenomenological approach we have discussed, the neutrino masses $m_{i}$ (and the mixing matrix $U$) are free parameters. There is no clue why neutrino masses are much smaller than the lepton and quark masses.
The origin of neutrino masses and neutrino mixing is an open problem, and many different models exist. It is commonly assumed that {\em the Standard Model neutrinos are massless particles}.
The masses of quarks and leptons are of Standard Model origin.
They are generated by $SU_{L}(2)\times U_{Y}(1)$ invariant Yukawa interactions. In the case of leptons the Yukawa Lagrangian has the form
\begin{equation}\label{Yukawa}
\mathcal{L}^{Y}_{I}=-\sqrt{2}\sum_{l',l}\bar\psi_{l'L}
Y_{l'l}l_{R}\phi+\mathrm{h.c.}
\end{equation}
Here
\begin{equation}\label{doublets}
\psi_{lL}=\left(
\begin{array}{c}
\nu_{lL} \\
l_L \\
\end{array}
\right)~~~(l=e,\mu,\tau),~~\phi=\left(
\begin{array}{c}
\phi_{+} \\
\phi_{0}\\
\end{array}
\right)
\end{equation}
are the lepton and Higgs doublets, $l_{R}$ is a singlet, and $Y$ is a dimensionless complex matrix. After spontaneous symmetry breaking and the diagonalization of $Y$ we come to the Dirac mass term
\begin{equation}\label{Dir}
\mathcal{L}^{Y}_{I}(x)=-\sum_{l=e,\mu,\tau}m_{l}~\bar l(x)~l(x).
\end{equation}
Here $l(x)=l_{L}(x)+l_{R}(x)$ is the Dirac field of the leptons
$l^{-}$ ($Q=-1$) and antileptons $l^{+}$ ($Q=+1$).
The lepton mass $m_{l}$ is given by the relation
\begin{equation}\label{lepmass}
m_{l}=y_{l}~v \quad (l=e,\mu,\tau).
\end{equation}
Here $y_{l}$ is a Yukawa coupling (eigenvalue of the matrix $Y$) and $v=(\sqrt{2}G_{F})^{-1/2}\simeq 246~\mathrm{GeV}$ is the Higgs vev (electroweak scale).
All SM masses (the masses of quarks, leptons, the $W^{\pm}$ and $Z^{0}$ bosons, and the mass of the Higgs boson) are proportional to $v$.\footnote{This is connected with the fact that $v$ is the only parameter of the Standard Model which has the dimension of mass.}
If neutrino masses are also of Standard Model origin, then:
\begin{itemize}
\item Right-handed singlets $\nu_{lR}$ enter into the Lagrangian.
\item The total lepton number is conserved and neutrinos with definite masses $\nu_{i}$ are Dirac particles.
\end{itemize}
Neutrino masses are given by the expression
\begin{equation}\label{Numass}
m_{i}=y^{\nu}_{i}~v.
\end{equation}
The Yukawa couplings are determined by the masses. For the quarks and the lepton of the third family we have
\begin{equation}\label{3fam}
y_{t}\simeq 7\cdot 10^{-1},~~ y_{b}\simeq 2\cdot 10^{-2},~~y_{\tau}\simeq 7\cdot 10^{-3}.
\end{equation}
The absolute values of the neutrino masses are not known at present. However, assuming normal ordering of the neutrino masses and using a conservative cosmological bound ($\sum_{i}m_{i}< 1$~eV), for the largest neutrino mass $m_{3}$ and the neutrino Yukawa coupling $y^{\nu}_{3}$ we find the following bounds
\begin{equation}\label{bounds}
(5\cdot 10^{-2}\lesssim m_{3}\lesssim 3\cdot 10^{-1})~ \mathrm{eV},\quad
2\cdot 10^{-13}\lesssim y^{\nu}_{3}\lesssim 10^{-12}.
\end{equation}
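The Yukawa bound follows directly from $m_{i}=y^{\nu}_{i}v$, as the short check below shows (an illustration added here, using the paper's convention $m=y\,v$ with $v=246$ GeV):

```python
# y_i^nu = m_i / v, with v = 246 GeV (the paper's convention m = y v)
v_eV = 246e9                 # electroweak scale in eV

y_low = 5e-2 / v_eV          # m3 = 5e-2 eV  ->  y ~ 2e-13
y_high = 3e-1 / v_eV         # m3 = 3e-1 eV  ->  y ~ 1.2e-12
assert 1e-13 < y_low < 3e-13
assert 1e-12 < y_high < 2e-12
```

Both values are indeed roughly ten orders of magnitude below $y_{t}\simeq 0.7$.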
The Yukawa couplings of the quarks and lepton of the same family differ by about two orders of magnitude. The neutrino Yukawa coupling $y^{\nu}_{3}$ is about ten orders of magnitude smaller than the Yukawa couplings of the top and bottom quarks and the $\tau$-lepton. It is very unlikely that {\em neutrino masses are of the same Standard Model origin as the masses of leptons and quarks}. The Standard Model with left-handed, massless $\nu_{e}$, $\nu_{\mu}$, $\nu_{\tau}$ (without right-handed neutrino fields) is the minimal theory, originally proposed by Weinberg and Salam. We come to the conclusion that in order to generate the small neutrino masses observed in neutrino oscillation experiments, {\em we need a new, beyond the Standard Model mechanism.}
A general method which allows one to describe the effects of beyond-the-Standard-Model physics is the method of effective Lagrangians.
An effective Lagrangian is a non-renormalizable Lagrangian of dimension five or higher, built from the Standard Model fields
and invariant under $SU(2)_{L}\times U(1)_{Y}$ transformations.
Effective Lagrangians are generated by beyond-the-Standard-Model interactions of SM particles with heavy particles
whose masses are much larger than $v$. In the electroweak region such interactions induce processes with virtual heavy particles, which are described by effective Lagrangians (the fields of the heavy particles are ``integrated out''). A typical example is the four-fermion, dimension-six Fermi effective Lagrangian of the weak interaction.
In order to build an effective Lagrangian which generates a neutrino mass term, let us consider the $SU_{L}(2)\times U_{Y}(1)$ invariant of dimension $M^{5/2}$
\begin{equation}\label{inv}
(\tilde{\phi }^{\dag}~ \psi_{lL}),
\end{equation}
where $\tilde{\phi }=i\tau_{2}\phi^{*}$ is a conjugated Higgs doublet. After spontaneous symmetry breaking we have
\begin{equation}\label{inv1}
(\tilde{\phi }^{\dag}~ \psi_{lL})\to \frac{v}{\sqrt{2}}~\nu_{lL}.
\end{equation}
From (\ref{inv1}) it is obvious that (as in the Gribov-Pontecorvo case) we can build an effective Lagrangian which generates a neutrino mass term only if {\em the total lepton number is not conserved}. We come to the following unique expression for the effective Lagrangian (Weinberg \cite{Weinberg:1979sa})
\begin{equation}\label{Weinb}
\mathcal{L}_{I}^{\mathrm{W}}=-\frac{1}{\Lambda}~\sum_{l',l}
\overline{(\tilde{\phi }^{\dag} \psi_{l'L})}X_{l'l}(\tilde{\phi }^{\dag}~ \psi_{lL})^{c}+\mathrm{h.c.}
\end{equation}
Here $X$ is a $3\times 3$ dimensionless symmetric matrix and $\Lambda$ is a parameter with the dimension $M$ (the operator in $\mathcal{L}_{I}^{\mathrm{W}}$ has dimension $M^{5}$). The parameter $\Lambda$ characterizes the scale of beyond-the-SM physics.
In connection with the non-conservation of $L$ by the Lagrangian (\ref{Weinb}), we would like to make the following general remark.
Global invariance and the conservation of $L$ (and $B$) are not fundamental symmetries of QFT \cite{Weinberg:1980bf,Witten:2017hdv}. Local gauge symmetry ensures the conservation of $L$ (and $B$) by the Standard Model Lagrangian. It is natural to expect that a beyond-the-Standard-Model theory does not conserve $L$ (and $B$).
After spontaneous symmetry breaking, from (\ref{Weinb}) we come to the Majorana mass term
\begin{equation}\label{Weinb1}
\mathcal{L}^{\mathrm{M}}= -\frac{1}{2}\,\sum_{l',l}
\bar\nu_{l'L}\,\frac{v^{2}}{\Lambda}~ X_{l'l} ~\nu^{c}_{lL}+\mathrm{h.c.}=-\frac{1}{2}\sum^{3}_{i=1}m_{i}~\bar \nu_{i}\nu_{i}.
\end{equation}
Here $\nu_{i}= \nu^{c}_{i}$ is the field of the Majorana neutrino with the ``seesaw mass''
\begin{equation}\label{seemass}
m_{i}=\frac{v^{2}}{\Lambda}~x_{i}=\frac{v}{\Lambda}\cdot
(x_{i}v),
\end{equation}
where $x_{i}$ is an eigenvalue of the matrix $X$. In (\ref{seemass}), $(x_{i}v)$ is a ``typical'' SM mass. Thus, the generation of neutrino masses via the effective-Lagrangian mechanism leads to the suppression factor
\begin{equation}\label{seemass1}
\frac{v}{\Lambda}=\frac{\mathrm{EW~scale}}{\mathrm{scale~of~a~new~ physics}}.
\end{equation}
There are two unknown parameters ($x_{i}$ and $\Lambda$) in (\ref{seemass}); thus, the values of the neutrino masses cannot be predicted. However, if $\Lambda\gg v$, the Majorana neutrino masses $m_{i}$ are naturally much smaller than the masses of leptons and quarks.
Notice that, assuming $x_{3}\simeq 1$ (like the Yukawa coupling of the top quark), for the scale of the new physics responsible for neutrino masses we find
\begin{equation}\label{scale}
\Lambda \simeq ( 10^{14}-10^{15})~\mathrm{GeV}.
\end{equation}
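This estimate is one line of arithmetic, checked below (an illustration added here; $v=246$ GeV and the $m_{3}$ range from the bounds above, with $x_{3}\simeq 1$):

```python
# Lambda ~ v^2 / m_3 from m_i = (v^2/Lambda) x_i with x_3 ~ 1
v_eV = 246e9                        # electroweak scale in eV

lam_hi = v_eV**2 / 5e-2 / 1e9       # GeV, for m_3 = 5e-2 eV  -> ~1.2e15 GeV
lam_lo = v_eV**2 / 3e-1 / 1e9       # GeV, for m_3 = 3e-1 eV  -> ~2.0e14 GeV
assert 1e15 < lam_hi < 2e15
assert 1e14 < lam_lo < 3e14
```

The scale indeed falls in the $10^{14}$-$10^{15}$ GeV window quoted above, intriguingly close to the grand-unification scale.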
\section{On the Origin of the Weinberg Effective Lagrangian}
In this section we will briefly discuss a possible origin of the Weinberg effective Lagrangian (\ref{Weinb}). We will start with the simplest and most economical scenario. Let us assume that
the lepton-Higgs pairs interact with heavy Majorana leptons $N_{i}=N^{c}_{i}$~($i=1,2,\dots,n$), $SU_{L}(2)$ singlets, via the $SU_{L}(2)\times U_{Y}(1)$ invariant interaction
\begin{equation}\label{heavy}
\mathcal{L}_{I}=-\sqrt{2}\sum_{l, i}(\bar \psi_{l L}\tilde{\phi })y_{li}~N_{iR}+\mathrm{h.c.}
\end{equation}
Here $y_{li}$ are dimensionless constants.
{\em In the tree approximation}, for low-energy processes with virtual heavy leptons at $Q^{2}\ll M^{2}_{i}$, we obtain the Weinberg effective Lagrangian with
\begin{equation}\label{heavy1}
\frac{1}{\Lambda}X_{l'l}=\sum^{n}_{i=1}y_{l'i}~\frac{1}{M_{i}}~y_{li}.
\end{equation}
Thus, the masses of the heavy Majorana leptons determine the scale $\Lambda$.
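As a toy numerical illustration of this seesaw suppression (all numbers, and the random real Yukawa matrix, are assumptions chosen only for illustration), one can build the light Majorana mass matrix $(m_{\nu})_{l'l}=v^{2}\sum_{i}y_{l'i}M_{i}^{-1}y_{li}$ and verify that its eigenvalues come out far below the eV scale:

```python
import numpy as np

# Toy type-I seesaw: m_nu = v^2 * y M^{-1} y^T (illustrative numbers only)
rng = np.random.default_rng(0)
v = 246e9                                   # electroweak scale in eV
M = np.diag([1e23, 3e23, 1e24])             # heavy masses ~1e14-1e15 GeV, in eV (assumed)
y = 0.1 * rng.standard_normal((3, 3))       # toy real Yukawa matrix (assumed)

m_nu = v**2 * (y @ np.linalg.inv(M) @ y.T)  # light Majorana mass matrix (symmetric)
light = np.abs(np.linalg.eigvalsh(m_nu))    # light neutrino masses in eV
assert np.all(light < 1.0)                  # naturally sub-eV despite O(0.1) couplings
```

Despite Yukawa couplings of order $0.1$, the light masses are driven to the sub-eV range purely by the heaviness of the $N_{i}$.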
The mechanism we have discussed is called the type-I seesaw mechanism \cite{Minkowski:1977sc,GellMann:1980vs,Yanagida:1979as,
Glashow:1979nm,Mohapatra:1980yp}. Notice that the Weinberg effective Lagrangian can also be generated by the interaction of heavy
triplet scalar bosons with a Higgs pair and a lepton pair (type-II seesaw mechanism), and by the interaction of
lepton-Higgs pairs with heavy Majorana triplet leptons (type-III seesaw mechanism).
There exist numerous {\em radiative neutrino mass models} which lead to the Weinberg effective Lagrangian and, correspondingly, to the Majorana neutrino mass term. In these models the values of the neutrino masses $m_{i}$ are suppressed by loop mechanisms, which require the existence of various beyond-the-Standard-Model particles with masses that could be much smaller than $10^{15}$~GeV (see the review \cite{Cai:2017jrq}).
\section{Conclusion}
In the first part of this report we considered a convenient phenomenology of neutrino oscillations in vacuum. In the second part we discussed a possible (and plausible) origin of neutrino masses.
The approach we considered is based on the following
general assumptions:
\begin{enumerate}
\item There exist beyond-the-Standard-Model interaction(s) of the SM lepton and Higgs doublets with new particles whose masses are much larger than the electroweak scale $v$.
\item Standard Model neutrinos are massless left-handed particles.
\end{enumerate}
The beyond-the-SM interactions (after the fields of the heavy particles are integrated out) generate an effective Lagrangian in the electroweak region. From assumption 2 it follows that, independently of the type of model (tree-level or radiative), the only possible effective Lagrangian is the $L$-violating, dimension-five Weinberg Lagrangian, which leads to the most economical Majorana mass term.
The effective-Lagrangian method of generating neutrino masses can explain (and, apparently, was inspired by) the smallness of neutrino masses. The values of the neutrino masses $m_{i}$, the neutrino mixing angles and the $CP$ phase are unknown parameters which depend on the model and {\em cannot be predicted.}
However, the following features are common to all models we discussed (and in this sense are model independent):
\begin{enumerate}
\item The number of neutrinos with definite masses $\nu_{i}$ is equal to the number of lepton flavors (three).
\item Neutrinos with definite masses $\nu_{i}$ are Majorana particles.
\end{enumerate}
Thus, the effective-Lagrangian method of neutrino mass generation
predicts that there are {\em no transitions of flavor neutrinos into sterile states.} As is well known, indications in favor of a fourth (sterile) neutrino with mass in the range $10^{-1}\lesssim m_{4}\lesssim 10$~eV were obtained in different short-baseline neutrino experiments. About 25 years ago, in the accelerator LSND experiment, indications in favor of $\bar\nu_{\mu}\to \bar\nu_{e}$ transitions were found. Later, these indications were confirmed by the MiniBooNE
accelerator experiment, in which $\nua{\mu}\to \nua{e}$ transitions were studied. A sterile-neutrino anomaly was also found in a reanalysis of old reactor neutrino data and in the analysis of the data of the GALLEX and SAGE gallium calibration experiments (see the recent review \cite{Boser:2019rta}).
Several new short-baseline reactor, accelerator, atmospheric and source neutrino experiments are running or in preparation at present. From the existing data it is not possible to draw definite conclusions on the existence of sterile neutrinos (see the talks presented at the NEUTRINO2020 conference, http://nu2020.fnal.gov).
Notice, however, that a recent combined analysis of the data of the reactor Daya Bay and Bugey-3 experiments and of the accelerator MINOS+ experiment allows one to exclude at 90\% CL the LSND and MiniBooNE allowed regions for $\Delta m^{2}_{14}< 5~\mathrm{eV}^{2}$ \cite{Adamson:2020jvo}; in the new reactor DANSS experiment \cite{Shitov} the best-fit point in the allowed region of previous reactor experiments is excluded at $5\sigma$, etc.
The study of neutrinoless double $\beta$-decay, $(A,Z)\to (A,Z+2)+e^{-}+e^{-}$, is the most sensitive probe which could allow us to reveal the Majorana nature of the neutrinos with definite masses $\nu_{i}$. In recent experiments the following lower limits on the half-lives of the $0\nu\beta\beta$-decay of different nuclei were reached: $T_{1/2}(^{76}\mathrm{Ge}) > 9\cdot 10^{25}$~yr (GERDA) \cite{Agostini:2019hzm},
$T_{1/2}(^{136}\mathrm{Xe}) > 10.7\cdot 10^{25}$~yr (KamLAND-Zen) \cite{KamLAND-Zen:2016pfg}, $T_{1/2}(^{130}\mathrm{Te}) > 3.2\cdot 10^{25}$~yr (CUORE) \cite{Adams:2019jhp}.
Half-lives about one to two orders of magnitude larger are expected if neutrinos are Majorana particles. Future $0\nu\beta\beta$ experiments plan to reach such sensitivities (see \cite{Detwiler}).
Summarizing, we have discussed a plausible (apparently, the most plausible) scenario for the origin of neutrino masses, based on the fundamental hypothesis of total lepton number violation by beyond-the-SM interactions. Crucial tests of this scenario can be realized in experiments on
\begin{itemize}
\item The search for light sterile neutrinos.
\item The search for neutrinoless double $\beta$-decay.
\end{itemize}
\section{Introduction}
Black holes have long fascinated theorists and presented them with various puzzles. Are different pieces of Hawking radiation completely uncorrelated or, rather, entangled on long time scales? Does information contained in infalling objects come out in the radiation and how can it be recovered? While our understanding of quantum gravity is still incomplete, such general questions can hopefully be answered using a very simple model. A black hole is characterized by a Hilbert space of dimension $d=2^S$, where the \emph{coarse-grained entropy}\footnote{In this paper, we define entropy with the binary logarithm.} $S$ is proportional to the horizon area. The quantum evolution over a sufficiently long time interval $t$ (longer than the so-called scrambling time) is described by a Haar-random unitary operator $U$. This approach was pioneered by Page~\cite{Page93,Page93a}, who considered a black hole forming from particles in some pure state and partially evaporating. The emitted radiation $R$ and the remaining black hole $B$ are described by Hilbert spaces of dimensions $d_R$, $d_B$ such that $d_Rd_B=d$. The state $|\Psi\rangle$ of the whole system is pure but so complex that it may be regarded as random. Page found that if $d_R\ll d_B$, then the entanglement entropy is with high accuracy equal to $\log_2d_R$. Thus, a chunk of radiation is uncorrelated unless it includes half of the original black hole. On the other hand, if $d_R\gg d_B$, then some correlations exist and their specific form depends on $|\Psi\rangle$.
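Page's statement is easy to verify numerically. The sketch below (an illustration added here, not from the paper; the dimensions $d_R=4$, $d_B=256$ are assumed for concreteness) samples a random pure state, computes the entanglement entropy of $R$ from the Schmidt spectrum, and checks that it is close to $\log_2 d_R$:

```python
import numpy as np

def ent_entropy(psi, dR, dB):
    """Entanglement entropy (base 2, matching the paper's convention) of subsystem R."""
    schmidt = np.linalg.svd(psi.reshape(dR, dB), compute_uv=False) ** 2
    schmidt = schmidt[schmidt > 1e-15]
    return -np.sum(schmidt * np.log2(schmidt))

rng = np.random.default_rng(1)
dR, dB = 4, 256                                 # d_R << d_B
psi = rng.standard_normal(dR * dB) + 1j * rng.standard_normal(dR * dB)
psi /= np.linalg.norm(psi)                      # normalized complex Gaussian = Haar-random state

S = ent_entropy(psi, dR, dB)
assert abs(S - np.log2(dR)) < 0.1               # Page: S ~ log2 d_R for d_R << d_B
```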
Hayden and Preskill studied an interesting variant of the information recovery problem. Their paper~\cite{Hayden07} is both profound and fun to read! In short, Alice wants to destroy her confidential diary and tosses it into a black hole. One may model Alice's secret by a quantum state $|\psi\rangle$ of some system $A$ that is added to the original black hole $B$. Bob tries to spy on Alice by capturing some Hawking radiation (subsystem $D$ in Figure~\ref{fig_Hayden_Preskill_decoding}a). The important difference from Page's setting is that $B$ is maximally entangled with another system $B'$, which is in Bob's possession. In this situation, Bob does not have to wait until half of the black hole evaporates. In fact, $|\psi\rangle$ can be extracted from $D$ and $B'$ by applying some unitary decoder $V$, provided $\log_2d_D\ge\log_2d_A+\epsilon$. The parameter $\epsilon$ is related to the decoding fidelity and may be taken as a constant. Thus, the black hole acts as a quantum information mirror, reflecting whatever falls in it almost immediately. The delay is equal to the scrambling time $t_{\text{scr}}$ plus the time needed to radiate $\log_2d_A+\epsilon$ qubits. This ground-breaking work has led to recent studies that largely focused on the physics of scrambling~\cite{Sekino08, Lashkari13, Shenker:2013pqa, Maldacena:2016aa,Kitaev:2014t2}. It turns out that there are many good information scramblers, but black holes are the fastest ones among equilibrium systems at a given temperature: they satisfy the estimate $t_{\text{scr}}\approx(2\pi T)^{-1}\ln S$.
\begin{figure}
\centering\(\displaystyle
\begin{array}{@{}c@{\hspace{2cm}}c@{}}
\figbox{1.0}{fig-HPdecoding} &
\figbox{1.0}{fig-HPdecoding-1}\: =\: \figbox{1.0}{fig-HPdecoding-2}
\vspace{5pt}\\
\text{a)} & \text{b)}
\end{array}
\)
\caption{The Hayden-Preskill decoding problem (a) and its variant with a reference system (b).}
\label{fig_Hayden_Preskill_decoding}
\end{figure}
However, there is an important caveat. Hayden and Preskill showed that the decoding task is \emph{information-theoretically} possible in the sense that there exists a unitary operator $V$ that reconstructs $|\psi\rangle$ by acting only on the Hawking radiation $D$ and the auxiliary system $B'$. But it is not clear how complex this operator is and whether finding it from $U$ is a computationally tractable problem. One can argue that the decoding complexity is at least polynomial in $d_A$, i.e.\ exponential in the number of qubits that constitute Alice's diary. This can be seen from the classical analogue of the Hayden-Preskill problem, where Alice's secret $a$ and the black hole state $b$ are binary words, and $U$ is replaced by an invertible function. Bob having access to $B'$ means that he knows $b$. Let us consider $b$ as a fixed parameter and express the radiation state as $r=f(a)$. Since $r$ is just a small part of the overall state, the function $f$ need not be invertible; we may rather regard it as completely random. The condition $d_D\gg d_A$ guarantees that $a$ can be recovered from $r$, but the only general method to find $a$ is exhaustive search.
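The classical analogue can be made completely concrete. The toy sketch below (an illustration added here; the bit sizes and the use of a random dictionary for $f$ are assumptions) fixes $b$, radiates $r=f(a)$ through a random function, and recovers $a$ only by exhaustive search over all $2^{n_A}$ candidates:

```python
import random

# Classical toy model: fix b; the radiation is r = f(a) for a random function f.
# With d_D >> d_A the secret is recoverable w.h.p., but only by brute force.
random.seed(42)
nA, nD = 6, 12                      # Alice: 6 bits, radiation: 12 bits (d_D >> d_A)
f = {a: random.randrange(2**nD) for a in range(2**nA)}   # random, not invertible

secret = 37
r = f[secret]                       # Bob observes the "radiation"
recovered = [a for a in range(2**nA) if f[a] == r]       # exhaustive search
assert secret in recovered
```

The search cost is $2^{n_A}$ evaluations of $f$, i.e. polynomial in $d_A$, matching the complexity lower bound argued in the text.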
Thus, the real question is how the decoding complexity scales with the black hole size. The answer is not obvious. On the one hand, Harlow and Hayden argued that it is exponentially hard to process the Hawking radiation of an old (more than half-evaporated) black hole so as to produce the standard EPR state~\cite{Harlow:2013tf}. We simply assume this state to be available. On the other hand, a physical process has recently been discovered that is akin to Hayden-Preskill decoding. First, Gao, Jafferis and Wall~\cite{Gao16} showed that the Einstein-Rosen bridge in the AdS black hole geometry can be made traversable if one arranges a momentary coupling (at time $0$) between the opposite boundaries so as to generate negative Casimir energy. In this setup, a signal sent from one boundary at a particular time $-t$ before the interaction is turned on reaches the other boundary at time $t$. Although the signal travels through the bulk, there is a holographically dual description strictly in terms of the boundaries. Its relation to the Hayden-Preskill problem and some aspects of the bulk-boundary correspondence were elucidated by Maldacena, Stanford and Yang~\cite{Traversable2017}. However, this particular decoding scheme works due to special properties of the operator $U$ at ``early times'', i.e.\ for $t<t_{\text{scr}}$.
Before attempting a solution for a random $U$ (or more generally, for the late times), let us slightly simplify the problem along the lines of the original Hayden and Preskill paper. Instead of estimating the worst-case recovery fidelity, we will take some average over $|\psi\rangle$. This idea is captured by a standard trick: one considers Alice's diary as part of a fixed entangled state $|\xi\rangle$ that also includes some reference system $R$. We assume that $|\xi\rangle=(I_R\otimes \Xi)|\text{EPR}\rangle_{RR'}$, where $R'$ represents the information content of Alice's diary and $\Xi:\,R'\to A$ is an isometric embedding. Bob's goal is to reconstruct $|\text{EPR}\rangle_{RR'}$ as shown in Figure~\ref{fig_Hayden_Preskill_decoding}b. The number $\log_2 d_A\ge\log_2d_R$ is interpreted as the increase of the coarse-grained black hole entropy. It follows from standard thermodynamics that $\ln d_A=E/T$, where $T$ is Hawking's temperature and $E$ is the energy of Alice's diary (including the rest energy).
\section{Notation, basic assumptions, and summary of results}
We will extensively use diagrams like those in Figure~\ref{fig_Hayden_Preskill_decoding}b. In general, nodes (e.g.\ $U$ and $|\xi\rangle$) are tensors, and the connecting lines represent the contraction of indices. A few additional rules formalize the idea that the upward direction is time. In particular, lines are labeled at places where they go vertically. The same line may carry a label $B$ at one point and $B'$ at a different point if it bends and reverses direction. Pairs of labels such as $B$ and $B'$ refer to dual Hilbert spaces. For each ket-vector $|\psi\rangle=\sum_{j}c_{j}|j\rangle\in A$, there is a dual vector $|\psi^*\rangle=\sum_{j}c_{j}^*|j\rangle\in A'$. It is just $\langle\psi|$ under a different name, but we keep them separate. Ket-vectors are associated with upward lines and bra-vectors with downward lines:
\begin{equation}
|\psi\rangle=\figbox{1.0}{fig-ket-psi}\,,\qquad\quad
|\psi^*\rangle=\figbox{1.0}{fig-ket-psistar}\,,\qquad\quad
\langle\psi|=\figbox{1.0}{fig-bra-psi}\,,\qquad\quad
\langle\psi^*|=\figbox{1.0}{fig-bra-psistar}\,.
\end{equation}
For operators, we put their mathematical symbols in boxes and change $X$ to $X^T$ when the box is rotated by $180^\circ$. For example,
\begin{equation}
\figbox{1.0}{fig-X-1} \,\,=\,\, \figbox{1.0}{fig-XT-1}\,\,
= \sum_{j,k}X_{jk}|j,k\rangle.
\end{equation}
A dot on a line with label $B$ is equivalent to an overall factor of $d_B^{-1/2}$:
\begin{equation}
\figbox{1.0}{fig-EPR-B}\: =|\text{EPR}\rangle_{BB'}
=\frac{1}{\sqrt{d_B}}\sum_{j}|j,j\rangle
=\: \frac{1}{\sqrt{d_B}}\,\figbox{1.0}{fig-EPR-B-1}\:.
\end{equation}
A triangle corresponds to the embedding $\Xi:\,R'\to A$ multiplied by $d_R^{-1/2}$, but its exact meaning depends on the orientation. For example,
\begin{equation}
\figbox{1.0}{fig-xi-1}\: = \frac{1}{\sqrt{d_R}}\,\Xi\,,\qquad\quad
\figbox{1.0}{fig-xi}\: = (I_R\otimes \Xi)|\text{EPR}\rangle_{RR'}\,.
\end{equation}
When reasoning about decoding, it is convenient to refer to the given state of the world:
\begin{equation}
|\widetilde{\Psi}\rangle=\: \figbox{1.0}{fig-world-state}
\:,\qquad\qquad
\tilde{\rho}=|\widetilde{\Psi}\rangle\langle\widetilde{\Psi}|.\label{eq-world-state}
\end{equation}
We omit the tildes if $d_R=d_A$. Information-theoretically, the decoding is possible if the black hole has lost all memory of Alice's diary and become uncorrelated with $R$, i.e.\ if $\tilde{\rho}_{RC} \approx\tilde{\rho}_{R}\otimes\tilde{\rho}_{C}$. In particular, it is sufficient for $\tilde{\rho}_{RC}$ to be close to the maximally mixed state. This condition can be quantified using the parameter
\begin{equation}\label{delta}
\delta=d_{R}d_{C}\Tr\tilde{\rho}_{RC}^2-1\ge 0.
\end{equation}
If $\delta$ is small, then the decoding can be achieved with high fidelity, and our algorithms do exactly that. The number $\delta$ can be found from the diagrammatic expressions
\begin{equation}
\tilde{\rho}_{RC}=\: \frac{1}{d_B}\figbox{1.0}{fig-rho_RC}\:\,,
\qquad\quad
\Tr\tilde{\rho}_{RC}^2=\: \frac{1}{d_B^2}\figbox{1.0}{fig-rho_RC-1}\:\,.
\end{equation}
After some rearrangement, we get this answer:
\begin{equation}\label{Delta}
\delta =d_Ad_R\Delta-1,\qquad
\text{where}\quad \Delta=\:\figbox{1.0}{fig-Delta}\:\,.
\end{equation}
If $d_R=d_A$, then $\Delta$ is also related to the R\'{e}nyi-$2$ mutual information between subsystems $R$ and $DB'$ for the density matrix $\rho$:
\begin{equation}\label{Delta_from_I2}
\Delta=2^{-I^{(2)}(R,DB')},\qquad\quad
I^{(2)}(R,DB')= S^{(2)}_{R} + S^{(2)}_{DB'} - S^{(2)}_{RDB'}\,.
\end{equation}
Here, $S^{(2)}_{DB'} =-\log_{2}\Tr(\rho_{DB'}^2) =-\log_{2}\Tr(\rho_{RC}^2)$ is the R\'{e}nyi-$2$ entropy of $DB'$ (equal to that of the complementary system $RC$), and it is clear that $S^{(2)}_{R}=\log_2d_A$ and $S^{(2)}_{RDB'}=S^{(2)}_{C}=\log_2d_C$.
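Since $\delta$ in Eq.~(\ref{delta}) is defined through a purity, it can be evaluated directly on a few qubits. The Python sketch below uses hypothetical toy dimensions ($d_A=d_R=2$, $d_B=16$, $d_C=2$, $d_D=16$, with $\Xi$ taken to be the identity), and a Haar-random $U$ as a stand-in for the black hole dynamics; none of these numerical choices come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; d_R = d_A and Xi is the identity.
dA, dB, dC, dD = 2, 16, 2, 16
d = dA * dB            # = dC * dD

def haar_unitary(n, rng):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(d, rng)

# World state on (R, C, D, B'): Psi[r, c, dd, b] = U[(c,dd),(r,b)] / sqrt(d).
Psi = U.reshape(dC, dD, dA, dB).transpose(2, 0, 1, 3) / np.sqrt(d)

# Reduced state rho_RC and the purity-based parameter delta.
rho_RC = np.einsum('rcdb,sedb->rcse', Psi, Psi.conj()).reshape(dA * dC, dA * dC)
purity = np.trace(rho_RC @ rho_RC).real
delta = dA * dC * purity - 1     # d_R = d_A here
```

By construction $\delta\ge 0$, and for these sizes one expects $\delta$ of order $d_Ad_R/d_D^2=1/64$, consistent with the bound derived in Section~\ref{sec_OTOC}.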
We will construct two decoding algorithms that recover the entangled state $|\xi\rangle$ of the reference system and Alice's diary with fidelity $1-O(\delta)$. The bound $\delta\le d_Ad_R/d_D^2$ holds for a large class of operators $U$, hence, the algorithms work if $d_D\gg\sqrt{d_Ad_R}$. The first procedure successfully performs the task with probability $\approx\frac{1}{d_{A}d_{R}}$ or signals a failure. Roughly speaking, Bob tries to guess the content of Alice's diary. Classically, this is how one solves the equation $f(a)=r$ for an arbitrary function $f$: one guesses some candidate solution $a'$, calculates $r'=f(a')$, and compares it with $r$. In the quantum case, Bob prepares a copy of the entangled state $|\xi\rangle$ in separate subsystems $A'$, $R'$ and applies the operator $U^{*}$ to $A'B'$. Then he projects the captured Hawking radiation $D$ and its counterpart $D'$ onto the standard EPR state, see Figure~\ref{Figure-probabilistic}. If successful, the projection has the effect of ``teleporting'' Alice's part of $|\xi\rangle$ to subsystem $R'$. The second procedure is deterministic in the sense that it never aborts. It replaces the postselection with Grover's search, which involves applying $U^{*}$ and $U^{T}$ about $\sqrt{d_{A}d_{R}}$ times. Thus, this deterministic decoder has complexity $\mathcal{O}\bigl(\sqrt{d_{A}d_{R}}\,\mathcal{C}\bigr)$, where $\mathcal{C}$ is the complexity of implementing $U$.
\begin{figure}
\centering\includegraphics{fig-probabilistic}
\caption{Probabilistic decoding by postselecting on an EPR state. The actual decoder, denoted by $V$, corresponds to the shaded area.}
\label{Figure-probabilistic}
\end{figure}
Now, let us discuss some physical assumptions that went into the definition of the problem. As already mentioned, we approximate the black hole thermal state by the maximally mixed state of dimension $d=2^S$. More exactly, each eigenvalue $Z^{-1}e^{-E_j/T}$ of the thermal density matrix is replaced by either $d^{-1}=Z^{-1}e^{-E/T}$ or $0$. This change is not extensive and amounts to neglecting energy fluctuations, which are relatively small in the thermodynamic limit. Yet the trace norm distance between the exact and approximate states is not small, therefore the decoding may work in one case but not the other. We believe that our algorithms can be adapted to a more realistic setting, though this involves some technical issues. At least, the probabilistic procedure generalizes to thermal states under the following assumptions. Let $\rho_{AB}$ be the thermal density matrix of the black hole that has absorbed the full energy of Alice's diary and let $\rho_{CD}=U\rho_{AB}U^\dag$. Our calculations work if $\rho_{AB}$ and $\rho_{CD}$ factor as $\rho_A\otimes\rho_B$ and $\rho_C\otimes\rho_D$, respectively. Unfortunately, this condition is problematic for physical reasons. Indeed, if $\rho_{CD}$ is thermal (which is almost true because the black hole evaporates adiabatically), the condition $\rho_{CD}=\rho_C\otimes\rho_D$ is equivalent to $H=H_C+H_D$. In other words, subsystems $C$ and $D$ do not interact! Of course, the Hawking radiation quanta eventually decouple from the black hole, but it is difficult to draw a sharp line between the two subsystems such that the density matrix factors. This issue can hopefully be addressed by soft partitioning of the subsystem $D$. After all, it is sufficient to verify that $D$ and $D'$ have enough entanglement, but we can be less aggressive in testing the qubits that are close to the boundary with $C$. Developing such a technique is a separate problem, and we set it aside.
\section{Condition on $U$ and out-of-time-order correlators}\label{sec_OTOC}
To obtain a good upper bound on the fidelity parameter $\delta$ or the related number $\Delta$ (see Eq.~(\ref{Delta})), we will assume that the evolution operator $U=e^{-iHt}$ is ``perfectly scrambling''. Quantum information scrambling is related to out-of-time-order correlators (OTOCs)~\cite{Shenker:2013pqa, Roberts:2014isa, Kitaev:2014t1, Maldacena:2016aa}. They have the general form $\corr{W(t)Y(0)Z(t)X(0)}$, where $O(0)=O$,\, $O(t)=U^{\dag}OU$, and the quantum average\footnote{\label{foot_OTOC}One may consider more general averages: $\Tr\bigl( W(t)\rho^{\alpha_3} Y(0)\rho^{\alpha_2} Z(t)\rho^{\alpha_1} X(0)\rho^{\alpha_0}\bigr)$, where $\alpha_3+\alpha_2+\alpha_1+\alpha_0=1$. In many cases, this change of the definition can be compensated by conjugating $X,Y,Z,W$ by suitable powers of $\rho$, or equivalently, by changing $Z(t)$ to $Z(t-i\alpha_1/T)$, etc. The most convenient choice is $\alpha_3=\alpha_2=\alpha_1=\alpha_0=1/4$.} of an operator $O$ is defined as $\langle O\rangle=\Tr O\rho$. In our case, $\rho=d^{-1}I_{AB}$, the operators $X$ and $Y$ act on subsystem $A$, whereas $Z$ and $W$ act on subsystem $D$. The (almost) perfect scrambling is defined as follows:
\begin{equation}\label{late-OTOC}
\bcorr{W(t)\,Y(0)\,Z(t)\,X(0)}
\approx \corr{WZ}\corr{Y}\corr{X}
+\corr{W}\corr{Z}\corr{YX}
-\corr{W}\corr{Z}\corr{Y}\corr{X}.
\end{equation}
This property holds for the Haar-random $U$, where the above equation becomes exact if we average the left-hand side over $U$ and subtract $(d^2-1)^{-1}\ccorr{WZ}\ccorr{YX}$ from the right-hand side. The derivation is given in Appendix~\ref{sec_averaging}. We work in the $d\to\infty$ limit; therefore, the correction just mentioned is negligible. More generally, equation~(\ref{late-OTOC}) characterizes the late-time asymptotics of OTOCs. It is expected to be true for a large class of operators $U$, provided $X$, $Y$, $Z$, $W$ act on sufficiently small subsystems.
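The factorization~(\ref{late-OTOC}) can be probed numerically for a Haar-random $U$. The setup below is a hypothetical toy: five qubits, with $X$, $Y$ acting on the first qubit (playing the role of $A$) and $Z$, $W$ on the last (playing the role of $D$); the left-hand side is averaged over a few hundred Haar samples to suppress sample-to-sample fluctuations.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 5, 500            # five qubits, 500 Haar samples (toy choices)
d = 2**n

def haar_unitary(m, rng):
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def rand_herm2(rng):
    """Random 2x2 Hermitian, normalized so that <H^2> = 1 on its site."""
    g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    h = (g + g.conj().T) / 2
    return h / np.sqrt(np.trace(h @ h).real / 2)

Irest = np.eye(2**(n - 1))
X = np.kron(rand_herm2(rng), Irest)   # acts on the first qubit ("A")
Y = np.kron(rand_herm2(rng), Irest)
Z = np.kron(Irest, rand_herm2(rng))   # acts on the last qubit ("D")
W = np.kron(Irest, rand_herm2(rng))

avg = lambda O: np.trace(O).real / d  # <O> with rho = I/d

samples = []
for _ in range(N):
    U = haar_unitary(d, rng)
    Wt = U.conj().T @ W @ U           # W(t)
    Zt = U.conj().T @ Z @ U           # Z(t)
    samples.append(np.trace(Wt @ Y @ Zt @ X).real / d)
lhs = np.mean(samples)

rhs = (avg(W @ Z) * avg(Y) * avg(X)
       + avg(W) * avg(Z) * avg(Y @ X)
       - avg(W) * avg(Z) * avg(Y) * avg(X))
```

With the operators normalized per site, both sides are $O(1)$ numbers; the residual difference should be at the level of the exact $O(d^{-2})$ correction plus Monte Carlo noise of order $N^{-1/2}$ times the per-sample fluctuation.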
If $d_R=d_A$, one can express $\Delta=2^{-I^{(2)}(R,DB')}$ using a formula derived in Refs.~\cite{Hosur:2015ylk,Roberts:2017aa}:
\begin{equation}
2^{-I^{(2)}(R,DB')} =\langle\text{OTOC}\rangle_{\text{ave}}.
\end{equation}
To handle the general case, we will devise some new notation. The idea is to combine $X$, $Y$ in the definition of OTOC into one object and $Z$, $W$ into another object:
\begin{equation}
\begin{aligned}
\OTOC(Y^T\otimes X,\,W\otimes Z^T)
&=\:\frac{1}{d}\Tr\bigl(U^{\dag}WU\,Y\,U^{\dag}ZU\,X\bigr)\\[5pt]
&=\: \frac{1}{d}\:\figbox{1.0}{fig-OTOC}
\end{aligned}
\end{equation}
By linearity, this definition extends to arbitrary operators $L=\sum_{j}Y_j^T\otimes X_j$ and $M=\sum_{k}W_k\otimes Z_k^T$ acting on $A'A$ and $DD'$, respectively. In this notation,
\begin{equation}
\Delta=\OTOC(L,M),\qquad \text{where}\quad\:
L=d_A\,\figbox{1.0}{fig-L}\:,\qquad
M=\figbox{1.0}{fig-M}\:.
\end{equation}
Now, let us derive a special case of Eq.~(\ref{late-OTOC}) for $\OTOC(L,M)$. The left-hand side of the original equation is equal to $\OTOC(Y^T\otimes X,\,W\otimes Z^T)$. The right-hand side contains these linear functions of $Y^T\otimes X$:
\begin{equation}
\corr{Y}\corr{X}=\,\figbox{1.0}{fig-Yav-Xav}\,,\qquad\quad
\corr{YX}=\,\figbox{1.0}{fig-YXav}\,,
\end{equation}
as well as similar expressions with $W$ and $Z$. We can apply the same graphical rules to $L$ and $M$ and evaluate the resulting diagrams. Thus, if $U$ is almost perfectly scrambling, then
\begin{equation}
\OTOC(L,M)\approx \frac{1}{d_Ad_R}+\frac{1}{d_D^2}-\frac{1}{d_Ad_Rd_D^2}.
\end{equation}
Assuming that the error in this approximation is smaller than the last term, we conclude that
\begin{equation}
\Delta \le \frac{1}{d_Ad_R}+\frac{1}{d_D^2},\qquad
\delta=d_Ad_R\Delta-1\le \frac{d_Ad_R}{d_D^2}.
\end{equation}
\section{Probabilistic decoder}\label{sec:protocol}
The probabilistic decoding begins with the state $|\widetilde{\Psi}\rangle$ defined in Eq.~\eqref{eq-world-state}. As a very first step, Bob creates a copy of $|\xi\rangle$ on $A'R'$ and applies $U^{*}$. The result is:
\begin{align}
|\Psi_{\text{in}}\rangle = (I_{RC}\otimes I_{D}\otimes U^{*}\otimes I_{R'}) ( |\widetilde{\Psi}\rangle \otimes |\overleftrightarrow{\xi} \rangle )
= \: \figbox{1.0}{fig-in-state}\:, \label{eq-in-state}
\end{align}
where $|\overleftrightarrow{\xi} \rangle$ is obtained by swapping $R$ and $A$ in $|\xi \rangle$. Then Bob applies a projector $P_{D}$ onto the EPR pair on $DD'$. It is defined as an operator acting only on $DD'C'R'$ because $R$ and $C$ are not accessible to Bob:
\begin{align}
P_{D}= \bigl(|\text{EPR}\rangle_{DD'}\langle\text{EPR}|_{DD'}\bigr) \otimes I_{C'R'} = \: \figbox{1.0}{fig-projector-D}\:.
\end{align}
The projection succeeds with probability $\langle\Psi_{\text{in}}| I_{RC}\otimes P_{D} |\Psi_{\text{in}}\rangle =\Delta$, where $\Delta$ is given by Eq.~(\ref{Delta}). Note that $\Delta\ge\frac{1}{d_{A}d_{R}}$ because $\delta\ge 0$. The normalized output state is
\begin{align}
|\Psi_{\text{out}}\rangle
= \frac{1}{\sqrt{\Delta}} (I_{RC}\otimes P_{D}) |\Psi_{\text{in}}\rangle
= \: \frac{1}{\sqrt{\Delta}} \:\figbox{1.0}{fig-out-state}\:.
\end{align}
By definition, the decoding is exact if $|\Psi_{\text{out}}\rangle$ contains the EPR pair on $RR'$. In general, we do not expect it to be exact and should estimate the associated fidelity.
We will first argue abstractly that if the Hayden-Preskill decoding is information-theoretically possible, then our probabilistic decoder also works. This argument is not meant as a rigorous proof, but it provides some insight and reveals the connection between our scheme and quantum teleportation. A more concrete analysis will follow.
Suppose that a perfect decoder $V$ exists that works as shown in Figure \ref{fig_Hayden_Preskill_decoding}b. Then for any operator $X_R$ there is some operator $X_{DB'}$ (namely, $X_{DB'}=V^{\dag}(I_{E}\otimes X_{R}^T)V$) such that
\begin{align}
\figbox{1.0}{fig-X-R} = \: \figbox{1.0}{fig-X-DB}.
\end{align}
Furthermore, if $X_R$ is unitary, then $X_{DB'}$ is also unitary. A similar argument holds for $U^{*}$ by taking complex conjugates. Hence,
\begin{align}\label{fig_teleportation}
\figbox{1.0}{fig-teleportation-left} \: = \: \figbox{1.0}{fig-teleportation-right}.
\end{align}
Here, we have glued the diagrams with $U$ and $U^*$ together, removed one dot from the bottom and added it on the top. The right-hand side is, essentially, the state $|\Psi_{\text{out}}\rangle$ up to trivial changes. The whole equation is equivalent to the condition that $(X_{R}\otimes I_{CDD'C'}\otimes X^*_{R'})|\Psi_{\text{out}}\rangle =|\Psi_{\text{out}}\rangle$ for all unitaries $X_R$. It follows that $|\Psi_{\text{out}}\rangle$ contains an EPR pair on $RR'$, and thus, fulfills the decoding requirement.
When $R$, $A$, $D$, $D'$, $A'$, $R'$ are single qubits, the probabilistic decoding has the effect of ``teleporting'' Alice's quantum state to subsystem $R'$ by postselecting on a particular Bell state. While ordinary quantum teleportation succeeds even if the Bell measurement outcome is different, in the probabilistic decoder, projections onto wrong Bell basis states imply failure of decoding in general, since $R'$ may remain entangled with $CC'$.
The fidelity of the probabilistic decoder is
\begin{equation}
F = \langle \Psi_{\text{out}}|P_{R}|\Psi_{\text{out}}\rangle
=\frac{\langle \Psi_{\text{in}} | P_{R} (I_{RC}\otimes P_{D}) | \Psi_{\text{in}}\rangle }{\Delta}, \qquad \text{where}\quad\:
P_{R} = \:\figbox{1.0}{fig-PR}.
\end{equation}
The numerator in the expression for $F$ can be lower bounded by $\bigl|\langle\text{EPR}|_{RCD}| \Psi_{\text{in}}\rangle\bigr|^2$, where
\begin{equation}\label{EPR_Psi_in}
\begin{split}
\langle \text{EPR} |_{RCD} | \Psi_{\text{in}}\rangle &= \: \figbox{1.0}{fig-bound-1}\\
&= \: \figbox{1.0}{fig-bound-2} \: = \frac{1}{\sqrt{d_{R}d_{A}}}.
\end{split}
\end{equation}
Therefore,
\begin{equation}
F \geq \frac{1}{d_{R}d_{A}\Delta} = \frac{1}{1+\delta}.
\end{equation}
We conclude that if $U$ is almost perfectly scrambling and $d_{D}\gg \sqrt{d_{A}d_{R}}$, then $F \approx 1 $.\medskip
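The whole probabilistic protocol can be simulated directly on a few qubits. The sketch below uses hypothetical toy dimensions ($d_A=d_R=2$, $d_B=16$, $d_C=2$, $d_D=16$, $\Xi$ the identity) and a Haar-random $U$ as a stand-in for the black hole dynamics: it builds $|\Psi_{\text{in}}\rangle$ from two copies of $U$ (one complex-conjugated), postselects on the EPR pair on $DD'$, and measures the fidelity with the EPR pair on $RR'$.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical toy dimensions; d_R = d_A and Xi is the identity.
dA, dB, dC, dD = 2, 16, 2, 16
d = dA * dB

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U4 = haar_unitary(d, rng).reshape(dC, dD, dA, dB)   # U[(c,d),(a,b)]

# |Psi_in> on (R, C, D, D', C', R'):
# Psi_in[r,c,d,D',C',q] = sum_b conj(U)[C',D',q,b] U[c,d,r,b] / (dA*sqrt(dB))
Psi_in = np.einsum('EFqb,cdrb->rcdFEq', U4.conj(), U4) / (dA * np.sqrt(dB))

# Project DD' onto the EPR pair; the squared norm is the success probability.
v = np.einsum('rcddEq->rcEq', Psi_in) / np.sqrt(dD)
p = np.vdot(v, v).real                 # equals Delta
delta = dA * dA * p - 1
v /= np.sqrt(p)

# Fidelity of the decoded state with |EPR>_{RR'}.
w = np.einsum('rcEr->cE', v) / np.sqrt(dA)
F = np.vdot(w, w).real
```

The fidelity obtained this way should obey the exact bound $F\ge 1/(1+\delta)$ derived above, and for these toy sizes it comes out close to $1$.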
The above calculation generalizes to factorizable inputs and outputs by replacing each dot with the square root of the corresponding density matrix. (Note that $\rho_R$ is still maximally mixed.) Using this definition and the condition $\rho_C\otimes\rho_D=U(\rho_A\otimes\rho_B)U^{\dag}$, the dots on the $C$ and $D$ lines in the first diagram in~(\ref{EPR_Psi_in}) can be moved through $U$. Since the dot on the $R$ line corresponds to the factor $d_R^{-1/2}$ and each triangle to the operator $d_R^{-1/2}\Xi$, we arrive at the following bound:
\begin{equation}
F\ge \frac{d_R^{-3}
\bigl(\Tr(\Xi^{\dag}\sqrt{\rho_A}\,\Xi)\bigr)^2}{\Delta}.
\end{equation}
The expressions for OTOCs in Section~\ref{sec_OTOC} are generalized by interspersing $WYZX$ with $\rho^{1/4}$ as described in footnote~\ref{foot_OTOC}. Expectation values like $\corr{X}$ are defined with respect to the density matrix, whereas $\corr{XY}$ involves two copies of $\sqrt{\rho}$. If $U$ is almost perfectly scrambling, then
\begin{equation}
\Delta\ge \frac{1}{\widetilde{d_A}d_R}+\frac{1}{\widetilde{d_D}^2},\qquad
\text{where}\quad
\widetilde{d_A}
=\bigl(d_R^{-1}\Tr(\Xi^{\dag}\sqrt{\rho_A}\,\Xi)^{2}\bigl)^{-1},\quad\:
\widetilde{d_D}
=\bigl(\Tr\rho_D^3\bigr)^{-1/2}.
\end{equation}
In this case, the fidelity bound becomes
\begin{equation}
F\ge \frac{\bigl(d_R^{-1}\Tr(\Xi^{\dag}\sqrt{\rho_A}\,\Xi)\bigr)^2}
{d_R^{-1}\Tr(\Xi^{\dag}\sqrt{\rho_A}\,\Xi)^{2}}\,
\frac{1}{1+\widetilde{d_A}d_R/\widetilde{d_D}^2}.
\end{equation}
Qualitatively, the first factor is close to $1$ if the image of the embedding $\Xi$ lies in the ``typical subspace'' of $\rho_A$.
\section{Deterministic decoder}\label{sec:Grover}
Unfortunately, the success probability of the aforementioned decoding algorithm scales as $\frac{1}{d_{A}d_{R}}$. Now we present a modified decoder that is deterministic (albeit not exact). It incorporates a procedure similar to Grover's search algorithm~\cite{Grover:1996}.
The initial step is the same as in the probabilistic decoding so that the subsequent procedure is applied to the state $|\Psi_{\text{in}}\rangle$ defined by Eq.~\eqref{eq-in-state}. Let $P_{A}$ be the projector onto $|\overleftrightarrow{\xi}\rangle$ on $A'R'$ and define another projector $\widetilde{P}_{A}=(I_{D}\otimes U^{*}\otimes I_{R'})P_{A}(I_{D}\otimes U^{T}\otimes I_{R'})$ acting on $DD'C'R'$:
\begin{align}
P_{A} = \figbox{1.0}{fig-PA}, \qquad\quad \widetilde{P}_{A} = \figbox{1.0}{fig-tilde-PA}\,\:.
\end{align}
Consider the following unitary operators:
\begin{align}
W_{D} = 1 - 2P_{D}, \qquad\quad
\widetilde{W}_{A} = 2\widetilde{P}_{A} - 1.
\end{align}
Bob's decoding strategy is to apply $W=\widetilde{W}_{A}W_{D}$ multiple ($\approx \frac{\pi \sqrt{d_{A}d_{R}} }{4}$) times to obtain a good approximation of $|\Psi_{\text{out}}\rangle$.
To analyze the algorithm, it will be convenient to define the following operator acting on subsystem $DD'C'R'$:
\begin{align}
\Pi \equiv \widetilde{P}_{A}P_{D}\widetilde{P}_{A} = \: \figbox{1.0}{fig-Pi}
\end{align}
The rank $r$ of this operator is at most the rank of $P_{D}$, which is $d_{R}d_{C}$. The following identity can be derived graphically by rotating the $U^T$ and $U^{*}$ boxes in the middle in opposite directions and moving them to the left:
\begin{align}
\Tr_{RC}(|\Psi_{\text{in}}\rangle \langle \Psi_{\text{in}}|) = \: \figbox{1.0}{fig-in-trace}\: =\: \frac{d_{A}}{d_{C}} \Pi. \label{eq:trace}
\end{align}
Equation~\eqref{eq:trace} helps to decompose the initial vector $|\Psi_{\text{in}}\rangle$ into eigenvectors of $I_{RC}\otimes \Pi$. To this end, let us consider the eigendecomposition of $\Pi$:
\begin{align}
\Pi = \sum_{j=1}^{r}\alpha_{j} |\psi_{j}\rangle\langle \psi_{j}|, \qquad \alpha_{j}>0.
\end{align}
The vectors $|\psi_{j}\rangle$ are also eigenvectors of $\Tr_{RC}(|\Psi_{\text{in}}\rangle \langle\Psi_{\text{in}}|)$. Together with some orthonormal vectors $|\eta_{j}\rangle$ on the complementary subsystem $RC$, they make a Schmidt decomposition of $|\Psi_{\text{in}}\rangle$:
\begin{align}\label{Psi_j}
|\Psi_{\text{in}}\rangle
= \sum_{j=1}^{r} \sqrt{\frac{d_{A}}{d_{C}}}\,
\sqrt{\alpha_{j}}\, |\Psi_{j}\rangle,\qquad\quad
|\Psi_{j}\rangle = |\eta_{j}\rangle \otimes |\psi_{j}\rangle.
\end{align}
Since $\langle \Psi_{\text{in}} | \Psi_{\text{in}} \rangle=1$ and $\langle \Psi_{\text{in}} | \Pi | \Psi_{\text{in}} \rangle = \Delta$, we have
\begin{align}
\sum_{j=1}^{r}\alpha_{j}=\frac{d_{C}}{d_{A}}, \qquad\quad \sum_{j=1}^{r}\alpha_{j}^2= \frac{d_{C}}{d_{A}}\,\Delta, \qquad
\text{where}\quad r\le d_Rd_C. \label{eq:average}
\end{align}
In the ideal case of $\Delta=1/(d_{A}d_{R})$, these conditions imply that $\alpha_{j}=1/(d_{A}d_{R})$ for all $j$ and that $r=d_Rd_C$.
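The identities in Eq.~(\ref{eq:average}) are exact, so they make a clean numerical check. The sketch below reuses the hypothetical toy setup ($d_A=d_R=2$, $d_B=16$, $d_C=2$, $d_D=16$, $\Xi$ the identity, Haar-random $U$) and extracts the $\alpha_j$ as eigenvalues of the reduced density matrix of $|\Psi_{\text{in}}\rangle$ on $DD'C'R'$, rescaled by $d_C/d_A$ in accordance with Eq.~(\ref{eq:trace}).

```python
import numpy as np

rng = np.random.default_rng(7)
dA, dB, dC, dD = 2, 16, 2, 16   # hypothetical toy dimensions
d = dA * dB

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U4 = haar_unitary(d, rng).reshape(dC, dD, dA, dB)
Psi_in = np.einsum('EFqb,cdrb->rcdFEq', U4.conj(), U4) / (dA * np.sqrt(dB))

# Reduced density matrix on (D, D', C', R') and Pi = (dC/dA) * rho.
m = dD * dD * dC * dA
rho = np.einsum('rcdFEq,rcgGHs->dFEqgGHs', Psi_in, Psi_in.conj()).reshape(m, m)
Pi = (dC / dA) * rho
alphas = np.linalg.eigvalsh(Pi)

# Delta = <Psi_in| P_D |Psi_in>, since P_A-tilde fixes Psi_in.
v = np.einsum('rcddEq->rcEq', Psi_in) / np.sqrt(dD)
Delta = np.vdot(v, v).real
```

The eigenvalue sums should reproduce $\sum_j\alpha_j=d_C/d_A$ and $\sum_j\alpha_j^2=(d_C/d_A)\Delta$ to machine precision, and the number of nonzero $\alpha_j$ should not exceed $d_Rd_C$.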
Now we examine how the iterated application of $I_{RC}\otimes W$ acts on $|\Psi_{\text{in}}\rangle$. Let us define the following unit vectors:
\begin{equation}
|\Phi_{j}\rangle
= \frac{I_{RC}\otimes P_{D}}{\sqrt{\alpha_{j}}}
|\Psi_{j}\rangle.
\end{equation}
It follows from the equations below that the two-dimensional subspace $\mathcal{L}_j=\text{linear span}\{|\Psi_{j}\rangle, |\Phi_{j}\rangle\}$ is invariant under both $I_{RC}\otimes\widetilde{P}_{A}$ and $I_{RC}\otimes P_{D}$. Furthermore, $\mathcal{L}_j\perp\mathcal{L}_k$ if $j\not=k$. Thus, for each individual $j$, our procedure works as the standard Grover algorithm. We will reproduce its analysis for the reader's convenience and then apply it to a superposition of $j$'s.
\begin{figure}
\centering\(\displaystyle
\begin{array}{@{}c@{\hspace{2cm}}c@{}}
\figbox{1.0}{fig-iteration} &
\figbox{1.0}{fig-rotation}
\vspace{5pt}\\
\text{a)} & \text{b)}
\end{array}
\)
\caption{The deterministic decoder (a) and the Grover rotation (b).}
\label{Figure-Grover}
\end{figure}
Inside $\mathcal{L}_{j}$, the algorithm induces Grover rotations by some angle that depends on $\alpha_{j}$. To see this, we will use the following relations:
\begin{align}
\label{PsiPhi1}
I_{RC}\otimes P_{D}|\Psi_{j}\rangle &=\sqrt{\alpha_{j}}|\Phi_{j}\rangle, \qquad
&I_{RC}\otimes P_{D}|\Phi_{j}\rangle &= |\Phi_{j}\rangle, \\[3pt]
\label{PsiPhi2}
I_{RC}\otimes \widetilde{P}_{A}|\Psi_{j}\rangle &= |\Psi_{j}\rangle, \qquad
&I_{RC}\otimes \widetilde{P}_{A} |\Phi_{j}\rangle
&= \sqrt{\alpha_{j}}|\Psi_{j}\rangle.
\end{align}
The first pair of equations follows from the definition of $|\Phi_{j}\rangle$ and the second from the fact that $|\Psi_{j}\rangle$ is an eigenvector of $I_{RC}\otimes \Pi$ with eigenvalue $\alpha_{j}$. The vector $|\Psi_{j}\rangle$ can be represented as $\sin(\theta_j/2)\,|\Phi_{j}\rangle +\cos(\theta_j/2)\,|\Phi_{j}^{\perp}\rangle$, where $|\Phi_{j}^{\perp}\rangle\in\mathcal{L}_{j}$ is a unit vector orthogonal to $|\Phi_{j}\rangle$, see Figure~\ref{Figure-Grover}. The value of $\theta_j$ can be obtained from Eq.~(\ref{PsiPhi2}):
\begin{align}
\sin(\theta_{j}/2) = \langle\Phi_j|\Psi_j\rangle = \sqrt{\alpha_{j}}.
\end{align}
Notice that $I_{RC}\otimes W_{D}$ is a reflection across $|\Phi_{j}^{\perp}\rangle$ and $I_{RC}\otimes \widetilde{W}_{A}$ is a reflection across $|\Psi_j\rangle$. Therefore, the operator $I_{RC}\otimes W=(I_{RC}\otimes \widetilde{W}_{A})(I_{RC}\otimes W_{D})$ restricted to $\mathcal{L}_{j}$ is a rotation by angle $\theta_{j}$; its action on the vector $|\Psi_j\rangle$ is shown in the figure. Now recall that the initial vector $|\Psi_{\text{in}}\rangle$ is a superposition of $|\Psi_j\rangle$ with different $j$, which is given by Eq.~(\ref{Psi_j}). After $m$ iterations, the quantum state becomes
\begin{align}
|\Psi(m)\rangle
= \sum_{j=1}^{r} \sqrt{\frac{d_{A}}{d_{C}}}\, \sqrt{\alpha_{j}}\,
\Bigl(\sin\bigl(\bigl(m+\tfrac{1}{2}\bigr)\theta_{j}\bigr)\,|\Phi_{j}\rangle
+\cos\bigl(\bigl(m+\tfrac{1}{2}\bigr)\theta_{j}\bigr)\,|\Phi_{j}^{\perp}\rangle
\Bigr). \label{eq:rotated}
\end{align}
In the ideal case, we would have $\theta_j=\theta_*=2\arcsin\bigl((d_Ad_R)^{-1/2}\bigr)$ for all $j$. Setting $m$ to
\begin{equation}
m_*=\frac{\pi}{2\theta_*}-\frac{1}{2} \approx\frac{\pi}{4}\,\sqrt{d_Ad_R}
\end{equation}
would give the state $|\Psi(m_*)\rangle= \sum_{j=1}^{r}\sqrt{\frac{d_{A}}{d_{C}}}\,
\sqrt{\alpha_{j}}\,|\Phi_{j}\rangle$, which can be shown to satisfy the decoding requirement. Let us bound the fidelity of the reconstructed state $|\Psi(m_*)\rangle$ under more general circumstances. To simplify the analysis, we assume that $d_{A}d_{R}\gg 1$ so that the error due to the rounding of $m_*$ to an integer may be neglected.\footnote{Instead of rounding, one can alter the last Grover step so as to rotate the vector by a suitable angle less than $\theta_j$. For example, one can use an operator of the form $\bigl(e^{i\beta}(1-\widetilde{P}_A)+\widetilde{P}_A\bigr) \bigl((1-P_D)+e^{i\gamma}P_D\bigr)$. The algorithm modified in this way should be accurate even if $d_{A}d_{R}\sim 1$.} We will also approximate $\theta_j =2\arcsin\sqrt{\alpha_j}$ by $2\sqrt{\alpha_j}$; the resulting error is also suppressed when $d_{A}d_{R}\gg 1$.
Let us upper bound the distance between $|\Psi(m_*)\rangle$ and a state that is known to have $1-O(\delta)$ decoding fidelity, namely
\begin{align}
|\Psi_{\text{out}}\rangle
= \frac{1}{\sqrt{\Delta}} (I_{RC}\otimes P_{D}) |\Psi_{\text{in}}\rangle
= \frac{1}{\sqrt{\Delta}} \sum_{j=1}^{r}
\sqrt{\frac{d_{A}}{d_{C}}}\,\alpha_{j}|\Phi_{j}\rangle.
\end{align}
Consider the following auxiliary vector:
\begin{equation}
|\Psi_{\text{out}}'\rangle
= \sum_{j=1}^{r}\sqrt{\frac{d_{A}}{d_{C}}}\,
\sqrt{\alpha_{j}}\,|\Phi_{j}\rangle.
\end{equation}
It is sufficiently close to $|\Psi_{\text{out}}\rangle$. Indeed,
\begin{equation}
\begin{alignedat}{2}
\bigl\| |\Psi_{\text{out}}\rangle-|\Psi_{\text{out}}'\rangle \bigr\|^2
&= \frac{d_{A}}{d_{C}} \sum_{j=1}^{r} \alpha_{j} \left(\frac{\sqrt{\alpha_{j}}}{\sqrt{\Delta}} -1 \right)^2
&& \\
&\leq
\frac{d_{A}}{d_{C}} \sum_{j=1}^{r} \alpha_{j} \left(\frac{\sqrt{\alpha_{j}}}{\sqrt{\Delta}} - \frac{\sqrt{\Delta}}{\sqrt{\alpha_{j}}} \right)^2
\qquad && \text{since } |x-1|\le|x-x^{-1}| \text{ for } x>0 \\
&= \frac{d_{A}}{d_{C}} \sum_{j=1}^{r}
\left( \frac{\alpha_{j}^2}{\Delta} - 2 \alpha_{j} + \Delta\right)
&& \\
&\le d_{A}d_{R}\Delta - 1 = \delta
\qquad && \text{using Eq.~(\ref{eq:average})}.
\end{alignedat}
\end{equation}
Let $x_{j}=\sqrt{\alpha_{j}d_{A}d_{R}}$ so that $\bigl(m_*+\frac{1}{2}\bigr)\theta_j\approx\frac{\pi}{2}x_j$ and thus,
\begin{align}
|\Psi(m_*) \rangle \approx \sum_{j=1}^{r}
\sqrt{\frac{d_{A}}{d_{C}}} \sqrt{\alpha_{j}}
\Bigl(\sin \bigl(\tfrac{\pi}{2}x_{j}\bigr) |\Phi_{j}\rangle + \cos\bigl(\tfrac{\pi}{2}x_{j}\bigr)|\Phi_{j}^{\perp}\rangle \Bigr).
\end{align}
The distance between $|\Psi(m_*)\rangle$ and $|\Psi_{\text{out}}'\rangle$ can be bounded as follows:
\begin{equation}
\begin{split}
\bigl\| |\Psi(m_*)\rangle-|\Psi_{\text{out}}'\rangle \bigr\|^2
&\approx \frac{d_{A}}{d_{C}} \sum_{j=1}^{r} 2\alpha_{j}
\biggl( 1 - \cos\Bigl(\frac{\pi}{2} (x_{j}-1) \Bigr) \biggr)\\
&\leq \frac{d_{A}}{d_{C}}\Big( \frac{\pi}{2} \Big)^2 \sum_{j=1}^{r} \alpha_{j} (x_{j}-1)^2 \\
&\leq \frac{d_{A}}{d_{C}}\Big( \frac{\pi}{2} \Big)^2 \sum_{j=1}^{r} \alpha_{j} \Big(x_{j}-\frac{1}{x_{j}}\Big)^2
\le \Big( \frac{\pi}{2} \Big)^2 \delta,
\end{split}
\end{equation}
where Eq.~(\ref{eq:average}) was used at the last step. Therefore,
\begin{equation}
\bigl\| |\Psi(m_*)\rangle-|\Psi_{\text{out}}\rangle \bigr\|
\le \left(1+\frac{\pi}{2}\right)\sqrt{\delta}\,,
\end{equation}
which shows that $|\Psi(m_*)\rangle$ satisfies the decoding condition with $1-O(\delta)$ fidelity.\medskip
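To close the loop, here is a sketch of the full deterministic decoder on the same hypothetical toy setup ($d_A=d_R=2$, $d_B=16$, $d_C=2$, $d_D=16$, $\Xi$ the identity, Haar-random $U$). With $d_Ad_R=4$ the Grover angle is $\theta_*=\pi/3$, so $m_*=1$ and a single iteration of $W=\widetilde{W}_AW_D$ suffices; none of these sizes come from the text.

```python
import numpy as np

rng = np.random.default_rng(7)
dA, dB, dC, dD = 2, 16, 2, 16
d = dA * dB

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U4 = haar_unitary(d, rng).reshape(dC, dD, dA, dB)
Psi_in = np.einsum('EFqb,cdrb->rcdFEq', U4.conj(), U4) / (dA * np.sqrt(dB))

epr_D = np.eye(dD) / np.sqrt(dD)      # components of |EPR>_{DD'}
epr_A = np.eye(dA) / np.sqrt(dA)      # components of |EPR>_{A'R'}

def P_D(psi):                          # project DD' onto the EPR pair
    v = np.einsum('dD,rcdDEq->rcEq', epr_D, psi)
    return np.einsum('dD,rcEq->rcdDEq', epr_D, v)

def P_A_tilde(psi):                    # conjugate the A'R' projector by U^T / U^*
    y = np.einsum('EFab,rcdFEq->rcdbaq', U4, psi)          # (C',D') -> (A',B')
    v = np.einsum('aq,rcdbaq->rcdb', epr_A, y)
    y = np.einsum('aq,rcdb->rcdbaq', epr_A, v)             # project A'R' onto EPR
    return np.einsum('EFab,rcdbaq->rcdFEq', U4.conj(), y)  # back to (C',D')

theta = 2 * np.arcsin(1 / np.sqrt(dA * dA))
m_star = round(np.pi / (2 * theta) - 0.5)   # = 1 for dA*dR = 4

psi = Psi_in
for _ in range(m_star):
    psi = psi - 2 * P_D(psi)          # W_D
    psi = 2 * P_A_tilde(psi) - psi    # W_A tilde

# Fidelity with |EPR>_{RR'} after the Grover rotation.
w = np.einsum('rcdDEr->cdDE', psi) / np.sqrt(dA)
F = np.vdot(w, w).real
```

Since $W$ is unitary, the norm of the state is preserved exactly, and $\widetilde{P}_A$ fixes $|\Psi_{\text{in}}\rangle$; for these sizes the final fidelity should land close to $1$, in line with the $1-O(\delta)$ estimate above.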
The above argument generalizes to factorizable inputs and outputs if we also assume that the image of $\Xi$ is an invariant subspace of $\rho_A$. The parameter $\delta$ in the fidelity bound should be replaced with
\begin{equation}
\widehat{\delta}=\widehat{d_{A}}d_R\Delta-1,\qquad \text{where}\quad
\widehat{d_{A}} = d_{R}^{-1} \Tr\bigl(\Xi^{\dagger}\rho_{A}^{-1}\Xi\bigr).
\end{equation}
We leave this generalization as an exercise for the reader.
\section{Discussion}\label{sec:discussion}
Our decoding procedures bear some similarity to the Gao-Jafferis-Wall traversable wormhole, but differ in some aspects. First, the Gao-Jafferis-Wall system achieves deterministic decoding in one go, without Grover-like iterations. However, the signal can only cross the wormhole if it is sent within a certain time window; therefore, the simplicity of decoding relies on some properties of the operator $U$ at early times. It would be interesting to find exactly what these properties are. Second, the wormhole is just a variant of an eternal black hole. As such, it is characterized by a thermal state or its thermofield double rather than the maximally mixed/entangled state. Our algorithms can be adapted to a setting where $\rho_{AB}$ and $\rho_{CD}$ need not be maximally mixed but factor as $\rho_{A}\otimes \rho_{B}$ and $\rho_{C}\otimes\rho_{D}$. This assumption is unrealistic though, and finding a good generalization to thermal states is still an open problem.
The analysis of the deterministic decoding algorithm with Grover iterations can be phrased in terms of higher-order OTOCs of the form $\langle X_{1}(0)Y_{1}(t)X_{2}(0)Y_{2}(t) \cdots \rangle$. For instance, the expectation value of $\Pi^m = \tilde{P}_{A} (P_{D} \tilde{P}_{A} )^m $ with respect to the initial state $|\Psi_{\text{in}}\rangle$ can be expressed in terms of $4m$-point OTOCs. Interestingly, this expectation value, $\langle \Psi_{\text{in}} | \Pi^m|\Psi_{\text{in}}\rangle$, can be computed from the R\'{e}nyi-$2m$ mutual information $I^{(2m)}(R,DB')$. The types of higher-order OTOCs associated with our deterministic decoding algorithm are similar to those previously considered by Shenker and Stanford~\cite{Shenker:2013yza} in the context of multiple shockwave geometries. It would be interesting to develop holographic and geometric interpretations of the deterministic decoding protocol.
Another question concerns the optimality of our scheme. The Grover algorithm is known to be optimal for the black box search problem in the sense of query complexity~\cite{BBBV}. It would be interesting to see whether our deterministic decoder uses the operator $U$, considered as a black box, as few times as possible.
\section*{Acknowledgments}
We thank John Preskill, Daniel Roberts and Douglas Stanford for useful discussions. We gratefully acknowledge the support by the Simons Foundation through the ``It from Qubit'' program. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. A.K.\ is supported by the Simons Foundation under grant~376205 and by the Institute for Quantum Information and Matter, an NSF Frontier center funded in part by the Gordon and Betty Moore Foundation.
\section{Introduction}
Stochastic volatility and covariance estimation are of key importance in many fields. Motivated in particular by financial applications, a lot of research has been devoted to constructing suitable (co-) volatility estimators and to deriving their asymptotic limit theory in the setting where discrete, high-frequency observations are available. Initially, the main interest was in (continuous-time) stochastic models based on (It\^{o}) semimartingales, where the so-called realised variance and covariance estimators (and their extensions) proved to be powerful tools. Relevant articles include the works by \cite{BNS2002, BNS2003, BNS2004, ABDL2003} and \cite{Jacod2008}, amongst many others,
and the textbooks by \cite{JacodProtter2012} and \cite{Ait-SahaliaJacod2014}.
Subsequently, the theory was extended to cover non-semimartingale models, see, for instance, \cite{CNW2006}, \cite{BNCP11}, \cite{BNCP10b}, \cite{CorcueraHPP2013}, \cite{CorcueraNualartPodolskij2015} and the survey by \cite{Podolskij2014}, where the proofs of the asymptotic theory rely on Malliavin calculus and the famous fourth-moment theorem, see \cite{NP2005}. The multivariate theory has been studied in \cite{VG2019, PV2019}.
Common to these earlier lines of investigation is the fact that the stochastic processes considered have finite dimensions.
In this article, we extend the concept of realised covariation to an infinite-dimensional framework.
The estimation of covariance operators is fundamental in the field of functional data analysis and has mainly been developed for discrete-time series of functional data (see e.g. \cite{Ramsay2005}, \cite{Ferraty2006}, \cite{Yao2005}, \cite{Bosq2012}, \cite{Horvath2012}, \cite{Panaretos2013}).
However, spatio-temporal data that can be considered as functional might also be sampled densely in time, like forward curves for interest rates or commodities and data from geophysical and environmental applications.
In this paper, we consider a
separable Hilbert space $H$ and
study
$H$-valued stochastic processes $Y$ of the form
\begin{equation}\label{Intro Mild SPDE}
Y_t=\mathcal{S}(t)h+\int_0^t \mathcal{S}(t-s)\alpha_sds+\int_0^t\mathcal S(t-s)\sigma_s dW_s,\quad t\in[0,T],
\end{equation}
for some $T>0$.
Here $(\mathcal S (t))_{t\geq 0}$ is a strongly continuous semigroup, $\alpha:=(\alpha_t)_{t\in[0,T]}$ is a predictable and almost surely integrable $H$-valued stochastic process, $\sigma:=(\sigma_t)_{t\in [0,T]}$ is a predictable operator-valued process, $h\in H$ is some initial condition and $W$ is a so-called $Q$-Wiener process on $H$ (see Section \ref{sec: Preliminaries} below for details).
Our aim is to construct an estimator
for the integrated covariance process
\begin{equation*}
\left(\int_0^t\sigma_sQ\sigma_s^*ds\right)_{t\in[0,T]}.
\end{equation*}
More precisely, we denote by
\begin{equation}\label{Intro Realised Volatility}
\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(Y_{t_i}-\mathcal S(\Delta_n)Y_{t_{i-1}})^{\otimes 2},
\end{equation}
the \emph{
semigroup-adjusted realised covariation (SARCV)}
for an equally spaced grid $t_i:=i\Delta_n$ for $\Delta_n=1/n$, $i=1,\dots,\lfloor t/\Delta_n\rfloor$.
We prove uniform convergence in probability (ucp) with respect to the Hilbert-Schmidt norm of the SARCV\
to the integrated covariance process under mild conditions on the volatility.
This framework differs from common high-frequency settings mainly due to peculiarities that arise from infinite dimensions. First, observe that the main motivation for considering processes of this form is that a vast amount of parabolic stochastic partial differential equations
possess only mild (as opposed to analytically strong) solutions, which are of the form \eqref{Intro Mild SPDE}. That is, $Y$ is (under weak conditions) the mild solution of a stochastic partial differential equation
\begin{align*}
(\text{SPDE}) \quad dX_t= (AX_t +\alpha_t) dt+ \sigma_t dW_t, \quad X_0=h,\quad t\in [0,T].
\end{align*}
(cf. \cite{DPZ2014}, \cite{PZ2007} or \cite{GM2011}).
In contrast to finite-dimensional stochastic diffusions, this is a priori not an $H$-valued semimartingale, but rather an $H$-valued Volterra process.
Various recent developments related to statistical inference for (parabolic) SPDEs based on discrete observations in time and space have emerged, see e.g.~\cite{Cialenco2020}, \cite{Bibinger2020}, \cite{Chong2020}, \cite{ChongDalang2020}.
To
the best of our knowledge, our paper is the first one considering high-frequency estimation of (co-) volatility of infinite-dimensional stochastic evolution equations in an operator setting.
This is of interest for various reasons. For instance, a simple and important application might be the parameter estimation for $H$-valued Ornstein-Uhlenbeck process (that is, $\sigma_s=\sigma$ is a constant operator).
Elementary techniques such as functional principal component analysis might then be considered on the level of volatility. In a multivariate setting, dynamical dimension reduction was conducted for instance in \cite{AIT-Sahalia2019}.
Furthermore, it can be used as a tool for inference of infinite-dimensional stochastic volatility models as in \cite{BenthRudigerSuss2018} or \cite{BenthSimonsen2018}.
In the special case of a semigroup that is continuous with respect to the operator norm, the framework also covers the estimation of volatility for $H$-valued semimartingales.
We organize the paper as follows:
First, we recall the main technical preliminaries of our framework in Section \ref{sec: Preliminaries}. In Section \ref{sec: Weak Law of large numbers}, we establish the weak law of large numbers. For that, we discuss the conditions imposed on the volatility process in Section \ref{sec: Technical assumptions} and state our main result, given by Theorem \ref{T: LLN for semigroup case}, in Section \ref{sec: Main result}. Afterwards, we show how to weaken the assumptions on the volatility by a localization argument in Section \ref{sec: Extension by localization}. In Section \ref{sec: Applications}, we study the behaviour of the estimator in special cases of semigroups and volatility.
We discuss conditions for particular examples of semigroups to determine the speed of convergence of the estimator in Section \ref{sec: Convergence rates and semigroups}. In Section \ref{sec: Stochastic volatility models}, we validate our assumptions for some stochastic volatility models in Hilbert spaces. Section \ref{sec: Proofs} is devoted to the proofs of our main results, while in
Section \ref{Sec:Conclusion} we discuss our results and methods in relation to some existing literature and provide some outlook into further developments.
Some technical proofs are relegated to the Appendix.
\section{Notation and some preliminary results}\label{sec: Preliminaries}
Let
$(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq 0}, \ensuremath{\mathbb{P}})$ denote a filtered probability space satisfying the usual conditions.
Consider two separable Hilbert spaces $U, H$ with scalar products denoted by $\langle \cdot, \cdot \rangle_U$, $\langle \cdot, \cdot \rangle_H$ and norms $\Vert\cdot\Vert_U$, $\Vert\cdot\Vert_H$, respectively. We denote by $L(U,H)$ the space of all linear bounded operators $K:U\rightarrow H$, and use the shorthand notation $L(U)$ for $L(U,U)$. Equipped with the operator norm, $L(U,H)$ becomes a Banach space. The adjoint operator of a $K\in L(U,H)$ is denoted by $K^*$, and is an element of $L(H,U)$.
Following \citet[Appendix A]{PZ2007} we use the following notations:
An operator $K\in L(U,H)$ is called \emph{nuclear} or \emph{trace class} if the following representation holds
\begin{align*}
K u = \sum_k b_k\langle u, a_k \rangle_U, \text{ for } u \in U,
\end{align*}
where $\{a_k\} \subset U$ and $\{b_k\}\subset H$ such that $\sum_k\Vert a_k\Vert_U\Vert b_k\Vert_H<\infty$. The space of all nuclear operators is denoted by $L_1(U,H)$; it is a separable Banach space and its norm is denoted by
\begin{align*}
\Vert K\Vert_1 :=\inf\left\{\sum_k\Vert a_k\Vert_U\Vert b_k\Vert_H: K u = \sum_k b_k\langle u, a_k \rangle_U\right\}.
\end{align*}
We denote by $L^+_1(U,H)$ the class of all symmetric, non-negative-definite nuclear operators from $U$ to $H$.
We write $L_1(U)$ and $L_1^+(U)$ for $L_1(U,U)$ and $L_1^+(U,U)$, respectively.
For $x\in U$ and $y\in H$, we define the tensor product $x\otimes y$ as the linear operator in $L(U,H)$ defined as $x\otimes y(z):=\langle x, z\rangle_U y$ for $z\in U$. We note that $x \otimes y \in L_1(U,H)$ and $\Vert x \otimes y \Vert_1 =\Vert x\Vert_U \Vert y\Vert_H$, see \citet[p.~107]{PZ2007}.
The operator $K\in L(U,H)$ is said to be a \emph{Hilbert-Schmidt operator} if
\begin{align*}
\sum_k \Vert K e_k\Vert_H^2 < \infty,
\end{align*}
for any orthonormal basis (ONB) $(e_k)_{k\in\mathbb N}$ of $U$. The space of all Hilbert-Schmidt operators is denoted by $L_{\text{HS}}(U,H)$. We can introduce an inner product by
\begin{align*}
\langle K, L \rangle_{\text{HS}}:=\sum_{k}\langle Ke_k, Le_k\rangle_H, \text{ for } K,L \in L_{\text{HS}}(U,H).
\end{align*}
The induced norm is denoted $\Vert\cdot\Vert_{\text{HS}}$.
As usual, we write $L_{\text{HS}}(U)$ in the case $L_{\text{HS}}(U,U)$.
We have the following convenient result for the space of Hilbert-Schmidt operators. Although it is well known, we include the proof of this result in Appendix \ref{App:proofs} for the convenience of the reader:
\begin{lemma}
\label{lem:HS-banachalg}
Let $U,V,H$ be separable Hilbert spaces. Then $L_{\text{HS}}(U,H)$ is a separable
Hilbert space. Moreover, if $K\in L_{\text{HS}}(U,V), L\in L_{\text{HS}}(V,H)$, then
$LK\in L_{\text{HS}}(U,H)$ and
\begin{equation}
\Vert LK\Vert_{\text{HS}}\leq \Vert L \Vert_{\text{op}}\Vert K\Vert_{\text{HS}}\leq \Vert L \Vert_{\text{HS}}\Vert K\Vert_{\text{HS}},
\end{equation}
where the HS-norms are for the spaces in question.
\end{lemma}
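In finite dimensions the lemma can be checked directly: matrices play the role of Hilbert-Schmidt operators, the Hilbert-Schmidt norm becomes the Frobenius norm and the operator norm the largest singular value. The following Python sketch is a minimal numerical illustration, not part of the formal development; the matrix sizes and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Finite-dimensional sanity check of the lemma: matrices stand in for
# Hilbert-Schmidt operators, the HS norm is the Frobenius norm and the
# operator norm is the largest singular value.
K = rng.standard_normal((5, 4))   # plays the role of K in L_HS(U, V)
L = rng.standard_normal((3, 5))   # plays the role of L in L_HS(V, H)

hs_LK = np.linalg.norm(L @ K, "fro")
bound_op = np.linalg.norm(L, 2) * np.linalg.norm(K, "fro")   # ||L||_op ||K||_HS
bound_hs = np.linalg.norm(L, "fro") * np.linalg.norm(K, "fro")  # ||L||_HS ||K||_HS
print(hs_LK <= bound_op <= bound_hs)
```

The chain of inequalities holds for every choice of matrices, mirroring the statement of the lemma.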
\subsection{Hilbert-space-valued stochastic integrals}
Fix $T>0$ and assume that $0\leq t \leq T$ throughout.
Let $W$ denote a Wiener process taking values in $U$ with covariance operator $Q\in L^+_1(U)$.
\begin{definition}
A stochastic process $(W_t)_{t\geq 0}$ with values in $U$ is called Wiener process with covariance operator $Q \in L_1^+(U)$, if $W_0=0$ almost surely, $W$ has independent and stationary increments, and for $0\leq s \leq t$, we have $W_t-W_s\sim N(0,(t-s)Q)$.
\end{definition}
\begin{remark}
Recall that a $U$-valued random variable $X$ is normal with mean $a\in U$ and covariance operator $Q\in L_1^+(U)$ if
$\langle X,f\rangle_U$ is a real-valued normally distributed random variable for each $f\in U$, with mean $\langle a,f\rangle_U$ and
$$
\mathbb{E}[\langle X-a,f\rangle_U\langle X-a,g\rangle_U]=\langle Qf,g\rangle_U,\quad \forall f,g\in U.
$$
\end{remark}
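The covariance identity in the remark above can be illustrated by simulating a centred Gaussian variable through a truncated Karhunen-Lo\`eve expansion. The following Python sketch is purely illustrative: the dimension, the (diagonal) eigenvalues of $Q$ and the seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated Karhunen-Loeve sample of a centred N(0, Q) variable:
# X = sum_k sqrt(lambda_k) xi_k e_k with e_k the standard basis of R^d.
d, nsamples = 8, 200000
lam = 1.0 / np.arange(1, d + 1) ** 2          # eigenvalues of Q (illustrative choice)
X = rng.standard_normal((nsamples, d)) * np.sqrt(lam)

f = rng.standard_normal(d)
g = rng.standard_normal(d)
mc = np.mean((X @ f) * (X @ g))               # Monte Carlo estimate of E[<X, f><X, g>]
exact = f @ (lam * g)                          # <Q f, g> for diagonal Q
print(mc, exact)
```

The Monte Carlo average agrees with $\langle Qf,g\rangle_U$ up to sampling error.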
We introduce the space $\mathcal L_{2,T}(U,H)$ of predictable $L(U,H)$-valued stochastic processes $Z=(Z_t)_{t\geq 0}$
such that
\begin{equation}
\mathbb E\left[\int_0^T\Vert Z_sQ^{1/2}\Vert_{\text{HS}}^2ds\right]<\infty,
\end{equation}
for $T<\infty$.
Then $\mathcal L_{2,T}(U,H)$ will be the space of integrable processes with respect to the $Q$-Wiener
process $W$ on $[0,T]$.
Let $\sigma=(\sigma_t)_{t\geq 0}$ denote a stochastic volatility process
such that $\sigma\in \mathcal L_{2,T}(U,H)$ for some fixed $T<\infty$.
The stochastic integral
\begin{align*}
Y_t:= \int_0^t \sigma_s dW_s
\end{align*}
can then be defined as in \cite[Chapter 8]{PZ2007} and takes values in the Hilbert space $H$.
We denote the tensor product of the stochastic integral $Y$ by
$ \left(Y_t\right)^{\otimes 2}=Y_t\otimes Y_t$,
and define the corresponding stochastic variance term as the \emph{operator angle bracket} (not to be confused with the inner products introduced above!) given by
\begin{align*}
\langle\langle Y\rangle\rangle_t=\int_0^t\sigma_sQ\sigma^*_sds=\int_0^t
(\sigma_sQ^{1/2}) (\sigma_sQ^{1/2})^*ds,
\end{align*}
see \citet[Theorem 8.7, p.~114]{PZ2007}.
\begin{remark}
As in \citet[p.~104]{DPZ2014}, we note that $(\sigma_sQ^{1/2})\in L_{HS}(U,H)$ and $(\sigma_sQ^{1/2})^*\in L_{HS}(H,U)$. Hence the process $(\sigma_sQ^{1/2}) (\sigma_sQ^{1/2})^*$ for $s\in [0,T]$ takes values in $L_1(H,H)$.
\end{remark}
\begin{remark}
The integral
$
\int_0^t\sigma_sQ\sigma_s^* ds
$
is interpreted as a Bochner integral in the space of Hilbert-Schmidt operators
$L_{\text{HS}}(H)$. Indeed, $\sigma_sQ\sigma_s^*$ is a linear operator on $H$, and we have
\begin{align*}
\int_0^t\ensuremath{\mathbb{E}}[\Vert\sigma_s Q\sigma_s^*\Vert_{\text{HS}}]ds&=\int_0^t\ensuremath{\mathbb{E}}[\Vert\sigma_sQ^{1/2}(\sigma_sQ^{1/2})^*\Vert_{\text{HS}}]ds
\\
&\leq\int_0^t\ensuremath{\mathbb{E}}[\Vert\sigma_sQ^{1/2}\Vert_{\text{HS}}^2]ds<\infty,
\end{align*}
by appealing to Lemma \ref{lem:HS-banachalg} and the
assumption on $\sigma$ being an integrable process with respect
to $W$.
This means that the Bochner integral is almost surely defined. If we relax integrability to go beyond $L^2$, this argument fails, but we still have a well-defined Bochner integral, as we can argue pathwise.
\end{remark}
\begin{remark}
\label{rem:martingale}
From \citet[Theorem 8.2, p.~109]{PZ2007} we deduce that
the process $(M_t)_{t\geq 0}$ with
\begin{align*}
M_t= \left(Y_t\right)^{\otimes 2}-\langle\langle Y\rangle\rangle_t
\end{align*}
is an $L_1(H)$-valued martingale w.r.t.~$(\mathcal{F}_t)_{t\geq 0}$. Thus, the operator angle bracket process can be called the {\it quadratic covariation process} of $Y_t$, which we shall do from now on.
\end{remark}
We end this section with a general expression for the even moments of an increment of the Wiener process. Later we will need the fourth moment in our analysis.
First, we introduce the $p$-trace of an operator $K\in L(U)$: We denote by $\text{Tr}_p(K)$ the
{\it $p$-trace} of $K$, $p\in\mathbb N$,
defined as
$$
\text{Tr}_p(K)=\sum_{i=1}^{\infty}\langle K e_i,e_i\rangle_U^p,
$$
whenever this converges. Here, $(e_i)_{i\in\mathbb N}$ is an ONB in $U$. We denote by $\text{Tr}$ the classical trace, given by $\text{Tr}=\text{Tr}_1$.
Consider now the positive definite symmetric trace class operator $Q$. If we organize the eigenvalues $(\lambda_i)_{i=1}^{\infty}\subset\mathbb R_+$ of $Q$ in decreasing order, letting $(e_i)_{i\in\mathbb N}$ be the ONB of eigenvectors, we have
$$
\text{Tr}_p(Q)\leq \lambda_1^{p-1}\sum_{i=1}^{\infty}\lambda_i=\text{Tr}(Q),
$$
and hence the $p$-trace is bounded by the trace for any $p>1$, and therefore also finite. The proof of the following result is relegated to Appendix \ref{App:proofs}:
\begin{lemma}
\label{lemma:4thmoment}
Let $W$ be a $Q$-Wiener process on $U$ and $q\in\mathbb N$, and define a generic increment as
$\Delta W_t:=W_{t+\Delta}-W_t$ for $\Delta>0$. Furthermore, let $(e_k)_{k\in\ensuremath{\mathbb{N}}}$ be the ONB in $U$ of eigenvectors of $Q$ with associated eigenvalues $(\lambda_k)_{k\in\ensuremath{\mathbb{N}}}$. Then, for any $t\geq 0$ it holds that
$$
\mathbb E\left[\Vert\Delta W_t\Vert_U^{2q}\right]=(-i)^q\lim_{m\rightarrow\infty}\Phi_m^{(q)}(0),
$$
where
$$
\Phi_m(x)=\exp\left(-\frac12\sum_{k=1}^m\ln(1-2ix\Delta\lambda_k)\right),
$$
for $x\in\ensuremath{\mathbb{R}}$.
In particular,
$$
\mathbb{E}[\Vert \Delta W_t\Vert_U^4]=\Delta^2\left(\text{Tr}(Q)^2+2\text{Tr}_2(Q)\right).
$$
\end{lemma}
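The fourth-moment formula of the lemma can be verified by Monte Carlo simulation in a finite-dimensional truncation. The following Python sketch is only a numerical illustration; the number of retained modes, the eigenvalues of $Q$ and the window length $\Delta$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# A Q-Wiener increment over a window of length delta, truncated to d modes:
# <Delta W, e_k> ~ N(0, delta * lambda_k), independently across k.
d, nsamples, delta = 10, 400000, 0.3
lam = 1.0 / np.arange(1, d + 1) ** 2          # eigenvalues of Q
inc = rng.standard_normal((nsamples, d)) * np.sqrt(delta * lam)

mc = np.mean(np.sum(inc ** 2, axis=1) ** 2)   # Monte Carlo E[||Delta W||^4]
# Exact value Delta^2 (Tr(Q)^2 + 2 Tr_2(Q)) from the lemma:
exact = delta ** 2 * (lam.sum() ** 2 + 2.0 * np.sum(lam ** 2))
print(mc, exact)
```

The simulated fourth moment matches $\Delta^2(\text{Tr}(Q)^2+2\text{Tr}_2(Q))$ up to Monte Carlo error.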
This finishes our review of preliminary results.
\section{The weak law of large numbers}\label{sec: Weak Law of large numbers}
In this section, we show our main result on the law of large numbers for Volterra-type stochastic integrals in Hilbert space with
operator-valued volatility processes.
Consider
\begin{equation}\label{Volatility Integral}
Y_t:=\int_0^t\mathcal S(t-s)\sigma_sdW_s,
\end{equation}
where $W$ is a $Q$-Wiener process on the separable Hilbert space $U$, $\sigma$ is an element of $\mathcal L_{2,T}(U,H)$ and $\mathcal S$ is a $C_0$-semigroup on $H$.
We assume that we observe $Y$ at times $t_i:=i\Delta_n$ for $\Delta_n=1/n$, $i=1,\dots,\lfloor t/\Delta_n\rfloor$ and define the
semigroup-adjusted increment
\begin{equation}\label{Adjusted Increment}
\widetilde{\Delta}_n^iY:=Y_{t_i}-\mathcal S(\Delta_n)Y_{t_{i-1}}=\int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\sigma_sdW_s.
\end{equation}
We define the process of the semigroup-adjusted realised covariation (SARCV) as
\begin{align*}
t\mapsto\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2}.
\end{align*}
The aim is to prove the following weak law of large numbers for the SARCV
\begin{align*}
\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2}
\stackrel{ucp}{\rightarrow}
\int_0^t\sigma_sQ\sigma_s^*ds, \qquad \text{ as } n \to \infty,
\end{align*}
in the ucp-topology,
that is, for all $\epsilon>0$ and $T>0$
\begin{align}\label{ucp convergence with semigroup}
\lim_{n\to\infty}\mathbb{P}\left(\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2} - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}>\epsilon\right)=0.
\end{align}
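Before introducing the assumptions, it may help to see the estimator at work in a toy example. The following Python sketch is a minimal finite-dimensional truncation in which $A$, $Q$ and $\sigma$ are simultaneously diagonal and $\sigma=I$, so that each mode of the mild solution can be simulated exactly; all numerical choices are illustrative assumptions, not part of the formal development.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 6                                        # number of retained eigenmodes
a = np.arange(1, d + 1, dtype=float)         # eigenvalues of -A, so S(t) = diag(e^{-a t})
q = 1.0 / np.arange(1, d + 1) ** 2           # eigenvalues of the trace-class operator Q
T, n = 1.0, 4000
dt = T / n

# With sigma = I, each semigroup-adjusted increment is exactly Gaussian per mode:
# tilde-Delta_i Y_k ~ N(0, q_k (1 - e^{-2 a_k dt}) / (2 a_k)), independent over i, k.
var_inc = q * (1.0 - np.exp(-2.0 * a * dt)) / (2.0 * a)
incs = rng.standard_normal((n, d)) * np.sqrt(var_inc)

# SARCV at time T: the sum of the tensor squares of the adjusted increments.
sarcv = incs.T @ incs

# Target: the integrated covariance int_0^T sigma Q sigma^* ds = T * diag(q).
target = np.diag(T * q)
rel_err = np.linalg.norm(sarcv - target) / np.linalg.norm(target)
print(rel_err)
```

As $n$ grows, the relative Hilbert-Schmidt error shrinks, in line with \eqref{ucp convergence with semigroup}.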
\subsection{Technical assumptions}\label{sec: Technical assumptions}
We need some technical assumptions on the stochastic volatility process $\sigma$.
\begin{assumption}\label{as:smoothvol}
Assume that the volatility process satisfies the following H\"older continuity property: For all $T>0$ and $s, t \in [0,T]$ we have
$$\mathbb E\left[\Vert(\sigma_t-\sigma_s)Q^{\frac 12}\Vert^2_{\text{HS}}\right]^{\frac 12} \leq C_1(T)|t-s|^{\alpha},
$$
for some $\alpha>0$
and a constant $C_1(T)>0$ (depending on $T$).
\end{assumption}
Notice that we assume only local mean-square-H{\"o}lder continuity for the paths of the volatility process. This allows for including volatility processes with c{\`a}dl{\`a}g paths in our considerations, as we will see later.
We shall also need a moment condition to hold for the volatility process:
\begin{assumption}
\label{as:fourthmomentvol}
Assume that the volatility process satisfies for all $T>0$ the following moment conditions:
\begin{equation}
\ensuremath{\mathbb{E}}\left[\Vert\sigma_sQ^{\frac 12}\Vert^
4_{\text{HS}}\right]\leq C_2(T)\quad \forall s \in [0,T],
\end{equation}
for some constant $C_2(T)>0$ (depending on $T$).
\end{assumption}
\begin{remark}
Using the Cauchy-Schwarz inequality, we can deduce under Assumption \ref{as:fourthmomentvol} for each $T>0$
\begin{align*}
\sup_{s \in [0,T]} \ensuremath{\mathbb{E}}\left[\Vert\sigma_sQ^{\frac 12}\Vert^
2_{\text{HS}}\right]
& \leq \sup_{s \in [0,T]} \sqrt{\ensuremath{\mathbb{E}}\left[\Vert\sigma_sQ^{\frac 12}\Vert_{\text{HS}}^4\right]}
\leq \sqrt{C_2(T)}.
\end{align*}
Moreover, we find that for all $t\in [0,T]$, also
\begin{align*}
\ensuremath{\mathbb{E}}\left[\int_0^t\Vert\sigma_sQ^{1/2}\Vert^2_{\text{HS}}ds\right] \leq t \sqrt{C_2(T)}<\infty.
\end{align*}
Thus, the integrability condition on $(\sigma_t)_{t\in[0,T]}$ holds for adapted processes satisfying Assumption \ref{as:fourthmomentvol}.
\end{remark}
The semigroup is in general not continuous with respect to time in the operator norm, but only strongly continuous. This makes it more involved to verify convergence in Hilbert-Schmidt norms, like \eqref{ucp convergence with semigroup}, since the semigroup component $\mathcal S(\Delta_n)$ in the adjusted increment \eqref{Adjusted Increment} converges only strongly to the identity.
However, we can make use of compactness of the closure of the image of the operators $\sigma_sQ^{\frac 12}$ for each $s\in [0,T]$,
and show the convergence of the semigroup to the identity operator on compacts by the subsequent argument in Proposition \ref{C: Application of Arzela Ascoli}.
This line of argument necessitates one of the following two alternative assumptions:
\begin{assumption}
\label{as:Q is more than Hilbert Schmidt}
\begin{itemize}
\item[(a)] Assume we can find a mean-square continuous process $ (\mathcal{K}_s)_{s\in \mathbb{R}_+}\in L^2(\Omega\times\mathbb{R}_+;L(U,H))$ of compact operators and a Hilbert-Schmidt operator $\mathcal{T}\in L_{\text{HS}}(U)$ such that almost surely $\sigma_sQ^{\frac 12}=\mathcal{K}_s\mathcal{T}$ for each $s\in [0,T]$.
\item[(b)] The semigroup $(\mathcal S(t))_{t\geq 0}$ is uniformly continuous, that is, $\mathcal S(t)=e^{At}$ for some bounded operator $A\in L(H)$.
\end{itemize}
\end{assumption}
Observe that Assumption \ref{as:Q is more than Hilbert Schmidt}(a) is fulfilled in the following cases:
\begin{itemize}
\item[(i)] $\sigma$ satisfies Assumption \ref{as:smoothvol} and $\sigma_t$ is almost surely compact (for instance itself a Hilbert-Schmidt operator) for each $t\in[0,T]$. In this case we can choose $\mathcal{K}_s:=\sigma_s$ and $\mathcal{T}:=Q^{\frac 12}$.
\item[(ii)] $\sigma$ satisfies Assumption \ref{as:smoothvol} and there exists an $\epsilon>0$ such that $Q^{(1-\epsilon)}$ is still a nuclear operator, that is, the eigenvalues of $Q$ satisfy $\sum_{n\in\mathbb{N}}\lambda_n^{1-\epsilon}<\infty$. In this case we can choose $\mathcal{K}_s:=\sigma_sQ^{\frac{\epsilon}{2}}$ and $\mathcal{T} :=Q^{\frac{1-\epsilon}{2}}$. Notice that this eigenvalue property on $Q$ is not always fulfilled: we could, for example, have an operator with eigenvalues $\lambda_n:=\frac{1}{n(\log n)^2}$ for $n\geq 2$, which are summable, while $\sum_{n}\lambda_n^{1-\epsilon}=\infty$ for every $\epsilon>0$.
\end{itemize}
\begin{remark}
The semigroup given by $\mathcal S(t)=I$ for all $t\geq 0$, where $I$ is the identity operator, is uniformly continuous and therefore satisfies Assumption \ref{as:Q is more than Hilbert Schmidt}(b).
\end{remark}
\subsection{The main result}\label{sec: Main result}
In order to prove the ucp-convergence \eqref{ucp convergence with semigroup} we will first show the following stronger result:
\begin{theorem}\label{T: LLN for semigroup case}
Assume that Assumptions \ref{as:smoothvol}, \ref{as:fourthmomentvol} and either \ref{as:Q is more than Hilbert Schmidt}(a) or \ref{as:Q is more than Hilbert Schmidt}(b) hold.
For each $T>0$ there is a constant $L(T)>0$ such that
\begin{align}\label{L:Convergence speed}
\mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2} - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]
\leq L(T) (\Delta_n^{\alpha}+ b_n^{\frac 12}(T)),
\end{align}
where
\begin{align}\label{Convergence Rate sequence}
b_n(T):= \sup_{r\in [0,T]}\mathbb E\left[ \sup_{x\in [0,\Delta_n]}\Vert
(I-\mathcal S(x))\sigma_rQ^{\frac 12}\Vert_{\text{op}}^2\right].
\end{align}
In particular, for all $T>0$
$$\lim_{n\to\infty}\mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2} - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]=0.$$
\end{theorem}
Before we prove this result in Section \ref{sec: Proofs}, we will make a couple of remarks and discuss uniform continuity of semigroups on compact sets.
\begin{remark}
The factor $L(T)$ in the theorem above does not only depend on $T$; it also shrinks as $n$ grows. Effectively, the constant can be precisely computed by careful inspection of the estimates \eqref{convergence inequality for first summand}, \eqref{convergence inequality for second summand}, \eqref{convergence inequality for third summand} and \eqref{convergence inequality for fourth summand} in the proof of Theorem \ref{T: LLN for semigroup case}. However, the expression becomes rather extensive and we refrain from stating it here.
That $(b_n)_{n\in\mathbb N}$ converges to $0$ is an implication of the following Proposition \ref{C: Application of Arzela Ascoli}. The magnitude of this sequence essentially determines the rate of convergence of the realised covariation by virtue of
inequality \eqref{L:Convergence speed}. We will come back to the magnitude of the $b_n$'s in specific cases in Section \ref{sec: Convergence rates and semigroups}.
\end{remark}
Denote for $t\geq 0$
\begin{equation}\label{Global Bound for the semigroup}
M(t):=\sup_{x\in[0,t]}\|\mathcal S(x)\|_{\text{op}},
\end{equation}
which is finite by the Hille-Yosida bound on the semigroup.
Often in stochastic modelling one also has a drift present. The following remark shows that our results are not altered by this:
\begin{remark}\label{R:Drift extension}
Observe that we could easily extend $Y$ to possess a drift and an ``initial condition'', that is
$$Y_t=\mathcal{S}(t)h+\int_0^t\mathcal{S}(t-s)\alpha_s ds +\int_0^t \mathcal S (t-s) \sigma_s dW_s,$$
for a predictable and almost surely Bochner-integrable stochastic process $(\alpha_t)_{t\in [0,T]}$, such that
\begin{equation}\label{Finite moment condition for the drift}
\sup_{t\in[0,T]}\mathbb E [\|\alpha_t\|^2]<\infty,
\end{equation}
and for an initial value $h\in H$.
In this case
\begin{align*}
\widetilde{\Delta}_n^iY:= Y_{t_i}-\mathcal S(\Delta_n)Y_{t_{i-1}}
= \int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\alpha_s ds+\int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\sigma_sdW_s.
\end{align*}
We can then argue that
\begin{align*}
& \mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2} - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]\\
&\qquad\leq \mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\left(\int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\alpha_s ds\right)^{\otimes 2}\right\|_{\text{HS}}\right]\\
&\qquad\qquad + \mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\left(\int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\sigma_sdW_s\right)^{\otimes 2} - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]\\
&\qquad=:(1)+(2).
\end{align*}
Summand $(2)$ can be estimated with Theorem \ref{T: LLN for semigroup case}. For Summand $(1)$
we find
\begin{align*}
& \mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\left(\int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\alpha_s ds\right)^{\otimes 2}\right\|_{\text{HS}}\right]\\
&\qquad\leq \mathbb{E}\left[ \sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\left\| \int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\alpha_s ds\right\|_{H}^2\right]\\
&\qquad\leq \sum_{i=1}^{\lfloor T/\Delta_n\rfloor} \Delta_n^2 M^2(T) \sup_{r\in[0,T]}\mathbb{E}\left[\|\alpha_r\|_H^2\right]\\
&\qquad\leq M^2(T) T \sup_{r\in[0,T]}\mathbb{E}\left[ \|\alpha_r\|_H^2\right]\Delta_n,
\end{align*}
where we appealed to the bound \eqref{Global Bound for the semigroup} on the semigroup and to the moment condition \eqref{Finite moment condition for the drift} on the drift.
Hence, Summand $(1)$ is $\mathcal{O}(\Delta_n)$ and will not impact the estimation of the covariation (in the limit).
\end{remark}
\subsection{Extension by localisation}\label{sec: Extension by localization}
In general, we have the following result:
\begin{theorem}
\label{T: Extension by localization}
Let $(\Omega_m)_{m\in \mathbb N}$ be a sequence of measurable subsets such that $\Omega_m\uparrow \Omega$. Suppose Assumptions \ref{as:smoothvol}, \ref{as:fourthmomentvol} and \ref{as:Q is more than Hilbert Schmidt} hold for $\sigma^{(m)}:=\sigma \mathbf{1}_{\Omega_m}$ for all $m\in \mathbb N$. Then
\begin{equation}
\label{eq:final-result}
\lim_{n\rightarrow\infty}\ensuremath{\mathbb{P}}\left(\sup_{0\leq s\leq t}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor s/\Delta_n\rfloor}}(\tilde{\Delta}_n^i Y)^{\otimes 2}-\int_0^s\sigma_u Q\sigma_u^*du\right\Vert_{\text{HS}}>\epsilon\right)=0,
\end{equation}
for any $\epsilon>0$; that is, the realised covariation converges in the ucp sense.
\end{theorem}
We can apply the localization on volatility processes $\sigma$ with almost sure H\"older-continuous paths:
\begin{corollary}\label{C: Localization for almost surely Holder continuous functions}
Assume $\sigma$ is almost surely $\alpha$-H{\"o}lder-continuous on $[0,T]$ with respect to the operator norm, satisfies Assumption \ref{as:Q is more than Hilbert Schmidt} and that the initial value has a finite fourth moment, i.e.
\begin{align}\label{Fourth moment of initial state is finite}
\mathbb E[\|\sigma_0\|_{\text{op}}^4]<\infty.
\end{align} Then the ucp convergence in Eq. \eqref{eq:final-result} holds.
\end{corollary}
\begin{proof}
We know that
\begin{align}
C(T):= \sup_{s\neq t\in [0,T]} \frac{\Vert \sigma_t-\sigma_s\Vert_{\text{op}}}{|t-s|^{\alpha}}<\infty, \qquad \mathrm{a.~s.}
\end{align}
Then $C(T)$ is a random variable and the set $\Omega_m:= \lbrace \omega \in \Omega: C(T)\leq m\rbrace$ is measurable and $\Omega_m\uparrow \Omega$\footnote{At least convergence to a set of full measure.}. We have to verify that $\sigma^{(m)}=\sigma \mathbf{1}_{\Omega_m}$ fulfills Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol}, since Assumption \ref{as:Q is more than Hilbert Schmidt} is satisfied automatically. The $\alpha$-H{\"o}lder continuity is obtained since
\begin{align*}
\mathbb{E}[\|(\sigma_t^{(m)}-\sigma_s^{(m)})Q^{\frac 12}\|_{\text{HS}}^2]\leq m^2 |t-s|^{2\alpha} \text{Tr}(Q),
\end{align*}
and the fourth moment is finite since
\begin{align*}
\mathbb{E}[\|\sigma_t^{(m)} \|_{\text{op}}^4]\leq
8\left(\mathbb{E}[\|\sigma_t^{(m)} -\sigma_0^{(m)}\|_{\text{op}}^4]+\mathbb{E}[\|\sigma_0^{(m)} \|_{\text{op}}^4]\right)
\leq 8\left(m^4 t^{4\alpha}+\mathbb{E}[\|\sigma_0 \|_{\text{op}}^4]\right)<\infty,
\end{align*}
where we used that $(x+y)^4\leq 8(x^4+y^4)$ for $x,y\geq 0$.
The proof is complete.
\end{proof}
\section{Applications}\label{sec: Applications}
In this section, we give an overview of potential settings and scenarios for which we can use the techniques described above to infer volatility.
Stochastic integrals of the form (\ref{Volatility Integral}) arise naturally in correspondence to mild or strong solutions to stochastic partial differential equations. Take as a simple example a process given by
\begin{equation}\label{SPDE}
(\text{SPDE})\begin{cases}
dY_t=AY_t dt + \sigma_tdW_t , \qquad t \geq 0\\
Y_0=h_0\in H,
\end{cases}
\end{equation}
where $A$ is the generator of a $C_0$-semigroup $(\mathcal S(t))_{t\geq 0}$ on the separable Hilbert space $H$, $W$ is a $Q$-Wiener process on a separable Hilbert space $U$ for some positive semidefinite and symmetric trace class operator $Q:U\to U$ and $\sigma \in \mathcal{L}_{2,T}(U,H)$.
There are three components in this model, which need to be estimated in practice: the covariance operator $Q$ of the Wiener process, the generator $A$ (or the semigroup $(\mathcal S(t))_{t\geq 0}$ respectively) and the stochastic volatility process $\sigma$.
\subsection{Semigroups}\label{sec: Convergence rates and semigroups}
The essence of the convergence result in Theorem \ref{T: LLN for semigroup case} is that we can draw inference on $Q$ and $\sigma$ based on observing the path of $Y$, given that we {\it know} the semigroup $(\mathcal S(t))_{t\geq 0}$.
Certainly, this is not always the case, since we may just have knowledge of the infinitesimal generator $A$. However, if we know the precise form of the semigroup, it is sometimes possible to estimate the speed of convergence, that is, a bound on the $b_n(T)$'s given in \eqref{Convergence Rate sequence}.
\subsubsection{Martingale case}\label{sec: martingale case}
For $A=0$, we have $\mathcal S(t)=I$ for all $t\geq 0$, and the solution of the stochastic partial differential equation \eqref{SPDE} is
$$Y_t=\int_0^t \sigma_s dW_s.$$
Clearly in this case we have
$$b_n(T)=0.$$
\subsubsection{Uniformly continuous semigroups}
Assume that $(\mathcal S(t))_{t\geq0}$ is continuous with respect to the operator norm. This is equivalent to $A\in L(H)$ and $\mathcal S(t)=e^{t A}$.
\begin{lemma}
If the semigroup $(\mathcal S (t))_{t\geq 0}$ is uniformly continuous, we have, for $b_n$ given in \eqref{Convergence Rate sequence}, that
$$b_n(T)\leq \Delta_n^2 \|A\|_{\text{op}}^2 e^{2\|A\|_{\text{op}}\Delta_n}\sup_{r\in[0,T]}\mathbb E [\|\sigma_rQ^{\frac 12}\|^2_{\text{HS}}].$$
In particular, if Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol} are valid, we have
$$b_n(T)\leq \Delta_n^2 \|A\|_{\text{op}}^2 e^{2\|A\|_{\text{op}}\Delta_n}\sqrt{C_2(T)}.$$
\end{lemma}
\begin{proof}
Recall the following fundamental equality from semigroup theory (cf. \citet[Lemma 1.3]{Engel1999}):
\begin{align}\label{Fundamental Theorem of Semigroup Theory}
(\mathcal S(x)-I)h= & \int_0^x A \mathcal S (s) h ds,\quad\quad \forall h\in H\\
= & \int_0^x \mathcal S (s)A h ds,\quad\quad \forall h\in D(A).\label{Fundamental Theorem of Semigroup Theory II}
\end{align}
Using \eqref{Fundamental Theorem of Semigroup Theory}, we get
\begin{align*}
\sup_{x\in [0,\Delta_n]}\left\| (I-\mathcal S (x))\right\|_{\text{op}}= & \sup_{x\in [0,\Delta_n]}\sup_{\|h\|=1}\left\| \int_0^x A \mathcal S (s) hds\right\|_{H}\\
\leq & \sup_{x\in [0,\Delta_n]} x \|A\|_{\text{op}} e^{\|A\|_{\text{op}}x}=\Delta_n \|A\|_{\text{op}} e^{\|A\|_{\text{op}}\Delta_n}.
\end{align*}
It follows that
\begin{align*}
b_n^2(T)=&\sup_{r\in [0,T]}\mathbb E [\sup_{x\in [0,\Delta_n]}\| (I-\mathcal S (x))\sigma_rQ^{\frac 12}\|^2_{\text{op}}] \\
\leq & \sup_{x\in [0,\Delta_n]}\| (I-\mathcal S (x))\|_{\text{op}}^2\sup_{r\in [0,T]}\mathbb E [\|\sigma_rQ^{\frac 12}\|^2_{\text{HS}}]\\
\leq &\Delta_n^2 \|A\|_{\text{op}}^2 e^{2\|A\|_{\text{op}}\Delta_n}\sup_{r\in [0,T]}\mathbb E [\|\sigma_rQ^{\frac 12}\|^2_{\text{HS}}].
\end{align*}
\end{proof}
For uniformly continuous semigroups, we thus obtain a rate of order $\min(\Delta_n^{\frac 12},\Delta_n^{\alpha})$ for the convergence of the realized covariation to the quadratic covariation in Theorem \ref{T: LLN for semigroup case}.
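As a numerical sanity check of the bound in the lemma, one can compare $\sup_{x\in[0,\Delta_n]}\|I-e^{xA}\|_{\text{op}}$ with $\Delta_n\|A\|_{\text{op}}e^{\|A\|_{\text{op}}\Delta_n}$ for a randomly generated bounded generator (a finite-dimensional illustration; the matrix size, meshes and the Taylor truncation are arbitrary choices):

```python
import numpy as np

# Numerical check, in a finite-dimensional truncation, of the bound
# sup_{0<=x<=D} ||I - e^{xA}||_op <= D * ||A||_op * exp(||A||_op * D)
# for a randomly generated bounded generator A.
rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((d, d))
norm_A = np.linalg.norm(A, 2)              # spectral (operator) norm

def expm(M, terms=30):
    # truncated Taylor series of the matrix exponential (accurate for small M)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

for delta in [0.1, 0.01, 0.001]:
    lhs = max(np.linalg.norm(np.eye(d) - expm(x * A), 2)
              for x in np.linspace(0.0, delta, 50))
    rhs = delta * norm_A * np.exp(norm_A * delta)
    assert lhs <= rhs                      # the bound of the lemma holds
    print(f"Delta={delta}: {lhs:.5f} <= {rhs:.5f}")
```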
\begin{remark}
Note that, if the semigroup is uniformly continuous and under Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol}, we can get back to the martingale case of Section \ref{sec: martingale case} if we operate on the values of $Y_t$ in either of the following two ways:
\begin{itemize}
\item[(i)] We continue as in the martingale case for the realised covariation of $\widetilde{Y}_t:=\mathcal S(-t)Y_t$: This can be done since $\mathcal S(t)=\exp(A t)$ is invertible and we have
$$
Y_t=\mathcal S(t)h_0+\int_0^t \mathcal S (t-s) \sigma_s dW_s=\mathcal S(t)\Big(h_0+\int_0^t\mathcal S(-s)\sigma_sdW_s\Big).
$$
Thus $\widetilde{Y}_t-h_0$ is a martingale.
\item[(ii)] We continue as in the martingale case for the realised covariation of $\widetilde{Y}_t:= Y_t-\int_0^t AY_s\,ds$: This can be done
since the process $(Y_t)_{t\in [0,T]}$
is the strong solution to \eqref{SPDE} with the continuous linear generator $A$ and $h_0\equiv 0\in H$, as $D(A)=H$ (see for instance Theorem 3.2 in \cite{GM2011}). That is, in particular,
$$Y_t=\int_0^t AY_s\,ds+\int_0^t \sigma_s dW_s,\quad\forall t\in [0,T].$$
\end{itemize}
\end{remark}
Let us turn our attention to a case of practical interest coming from financial mathematics applied to commodity markets.
\subsubsection{Forward prices in commodity markets: the Heath-Jarrow-Morton approach}
\label{subsect:hjmm}
A case of relevance for our analysis is inference on the volatility for forward prices in commodity markets as well as for forward rates in fixed-income markets.
The Heath-Jarrow-Morton-Musiela equation (HJMM-equation) describes the term structure dynamics in both of these settings (see \cite{Filipovic2000} for a detailed motivation for the use in interest rate modelling and \cite{BenthKruhner2014} for its use in commodity markets) and is given by
\begin{equation}\label{HJMM}
(\text{HJMM})\begin{cases}
dX_t=(\frac{d}{dx}X_t+\alpha_t )dt + \sigma_t dW_t , \qquad t \geq 0\\
X_0=h_0\in H,
\end{cases}
\end{equation}
where $H$ is a Hilbert space of functions $f:\mathbb{R}_+\to\mathbb R$ (the \textit{forward curve space}), $(\alpha_t)_{t\geq 0}$ is a predictable and almost surely locally Bochner-integrable stochastic process and $\sigma$ and $W$ are as before.
Conveniently, the states of this {\it forward curve dynamics} are realized
on the separable Hilbert space
\begin{align}\label{FCS}
H=H_{\beta} &= \left\{ h:\mathbb{R}_+\to\mathbb{R}: h\text{ is absolutely continuous
and } \| h \|_{\beta} < \infty \right\},
\end{align}
for fixed $\beta>0$, where the inner product is given by
\begin{align*}
\langle h,g \rangle_{\beta} &= h(0)g(0) + \int_0^{\infty} h'(x)g'(x)\mathrm{e}^{\beta x}dx,
\end{align*}
and norm $\| h \|_{\beta}^2 = \langle h, h \rangle_{\beta}$. This space was introduced and analysed in \cite{Filipovic2000}. As in \cite{Filipovic2000}, one may consider more general scaling functions in the inner product than the exponential $\exp(\beta x)$. However, for our purposes here this choice suffices.
The suitability of this space is partially due to the following result:
\begin{lemma}
The differential operator $A=\frac{d}{dx}$ is the generator of the strongly continuous semigroup $(\mathcal{S}(t))_{t\geq 0}$ of shifts on $H_{\beta}$, given by $\mathcal S(t)h(x)=h(x+t)$, for $h\in H_{\beta}$.
\end{lemma}
\begin{proof}
See for example \cite{Filipovic2000}.
\end{proof}
The HJMM-equation \eqref{HJMM} possesses a mild solution (see e.g. \cite{PZ2007})
\begin{align}\label{HJM-mild solution}
X_t=\mathcal S(t)h_0+\int_0^t \mathcal{S}(t-s) \alpha_s ds+\int_0^t \mathcal{S}(t-s) \sigma_s dW_s.
\end{align}
Since forward prices and rates are often modelled under a risk neutral probability measure, the drift has in both cases (commodities and interest rates) a special form. In the case of forward prices in commodity markets, it is zero under the risk neutral probability, whereas in interest rate theory it is completely determined by the volatility via the no-arbitrage drift condition
\begin{equation}\label{HJM Drift}\alpha_t= \sum_{j\in \mathbb{N}}\sigma_t^j \Sigma_t^j, \quad \forall t\in [0,T],
\end{equation}
where $\sigma_t^j=\sqrt{\lambda_j}\sigma_t(e_j)$ and $\Sigma^j_t(x)=\int_0^x\sigma^j_t(y)\,dy$ for eigenvalues $(\lambda_j)_{j\in\mathbb N}$ and a corresponding orthonormal basis of eigenvectors $(e_j)_{j\in\mathbb N}$ of the covariance operator $Q$ of $W$ (cf. Lemma 4.3.3 in \cite{Filipovic2000}).
\begin{lemma}
Assume that the volatility process $(\sigma_t)_{t\in[0,T]}$ satisfies Assumption \ref{as:fourthmomentvol} and that for each $t\in[0,T]$ the operator $\sigma_t$
maps into $$H_{\beta}^0=\lbrace h\in H_{\beta}: \lim_{x\to\infty} h(x)=0\rbrace.$$ Then the drift given by \eqref{HJM Drift} has values in $H_{\beta}$, is predictable, satisfies \eqref{Finite moment condition for the drift} and is almost surely Bochner integrable. Thus, the conditions of Remark \ref{R:Drift extension} are satisfied.
\end{lemma}
\begin{proof}
That the drift is well defined follows from Lemma 5.2.1 in \cite{Filipovic2000}. Predictability follows immediately from the predictability of the volatility.
We have
by Theorem 5.1.1 from \cite{Filipovic2000} that there is a constant $K$ depending only on $\beta$ such that
$$\|\sigma^j_t \Sigma^j_t\|_{\beta}\leq K \|\sigma_t^j\|_{\beta}^2.$$
Therefore, we get by the triangle inequality that
\begin{align*}
\|\alpha_t\|_{\beta}\leq & K \sum_{j\in\mathbb N}\|\sigma_t^j\|_{\beta}^2=K\|\sigma_t Q^{\frac 12}\|_{\text{HS}}^2.
\end{align*}
Squaring this bound and taking expectations, we obtain
\begin{align*}
\sup_{t\in[0,T]}\mathbb E[\|\alpha_t\|_{\beta}^2]\leq K^2\sup_{t\in[0,T]}\mathbb E [\|\sigma_t Q^{\frac 12}\|_{\text{HS}}^4],
\end{align*}
which is finite by Assumption \ref{as:fourthmomentvol}. This shows \eqref{Finite moment condition for the drift}. Moreover, the Bochner integrability follows, since we have the stronger
$$\mathbb E \left[\int_0^T \|\alpha_t\|_{\beta}dt\right]
\leq\int_0^T \mathbb E [\|\alpha_t\|_{\beta}^2]^{\frac 12}dt \leq T K \sup_{t\in[0,T]}\mathbb E [\|\sigma_tQ^{\frac 12}\|_{\text{HS}}^4]^{\frac 12}<\infty.$$
The result follows.
\end{proof}
\begin{remark}
Since we know the exact form of the semigroup $(\mathcal S(t))_{t\geq 0}$, we can recover the adjusted increments $\tilde{\Delta}_n^if$ efficiently from forward curve data by a simple shift in the spatial (that is, time-to-maturity) variable of these curves.
Theorem \ref{T: LLN for semigroup case} (and Remark \ref{R:Drift extension} in case of a nonzero drift in interest rate theory) can therefore be applied in practice to make inference on $\sigma$ under Assumptions \ref{as:smoothvol}, \ref{as:fourthmomentvol} and \ref{as:Q is more than Hilbert Schmidt}, in which case the ucp-convergence \eqref{ucp convergence with semigroup} holds.
\end{remark}
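As a sketch of the shifting procedure mentioned in the remark, suppose forward curves are observed on a time-to-maturity grid whose mesh equals the observation mesh $\Delta_n$; then $\mathcal S(\Delta_n)$ acts as an index shift and the adjusted increments $\tilde{\Delta}_n^if(x)=f_{t_i}(x)-f_{t_{i-1}}(x+\Delta_n)$ can be computed by array slicing (the grid sizes and toy curve data below are illustrative, not from the paper):

```python
import numpy as np

# Semigroup-adjusted increments for the shift semigroup S(t)h(x) = h(x+t):
# with maturity-grid mesh equal to the observation mesh dt, S(dt) is an
# index shift, so tilde(Delta)_n^i f(x) = f_{t_i}(x) - f_{t_{i-1}}(x + dt).
rng = np.random.default_rng(2)
dt = 0.01
x = np.linspace(0.0, 5.0, 500, endpoint=False)   # time to maturity
t = np.linspace(0.0, 1.0, 100, endpoint=False)   # observation times
# toy forward-curve observations f(t_i, x_j)
f = np.sin(x[None, :] + t[:, None]) + 0.01 * rng.standard_normal((100, 500))

def adjusted_increments(curves):
    # subtract the previous curve shifted one grid point to the left in x
    return curves[1:, :-1] - curves[:-1, 1:]

inc = adjusted_increments(f)
print(inc.shape)   # (99, 499)
```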
The shift semigroup is strongly, but not uniformly, continuous, which leaves open the question of the convergence speed of the estimator established in Corollary \ref{L:Convergence speed}.
We close this subsection by deriving a convergence bound under a regularity condition on the volatility in the space variable (that is, time to maturity).
Observe that by Theorem 4.11 in \cite{BenthKruhner2014}
we know that for all $r\in[0,T]$ there exist random variables $c_r$ with values in $\mathbb R$, $f_r$ and $g_r$ with values in $H$ satisfying $g_r(0)=0=f_r(0)$, and $p_r$ with values in $L^2(\mathbb{R}^2_+)$ such that
$$\sigma_r Q^{\frac 12} h(x)= c_r h(0)+\langle g_r, h\rangle_{\beta}+h(0)f_r(x)+ \int_0^{\infty} q_r(x,z)h'(z)dz,$$
where $q_r(x,z)=\int_0^x p_r(y,z) e^{\frac{\beta}{2}z-y}dy$.
We denote by $C_{\text{loc}}^{1,\gamma}:=C_{\text{loc}}^{1,\gamma}(\mathbb{R}_+)$ the space of continuously differentiable functions with locally $\gamma$-H{\"o}lder continuous derivative for $\gamma \in(0,1]$.
\begin{theorem}\label{T: Convergence rate for forward curves}
Assume that $f_r,q_r(\cdot,z)\in C^{1,\gamma}_{\text{loc}}$ for all $z\geq 0$, $r\in[0,T]$ and that the corresponding local H{\"o}lder constants $L_r^1(x)$ of $e^{\frac{\beta}{2}\cdot}f_r'(\cdot)$ and $L^2_r(x,z)$ of $p_r$ are square integrable in $x$ and in $(x,z)$ respectively such that $$\hat L: =\sup_{r\in [0,T]}\mathbb E
\left[\left(|f_r'(\zeta)|+\|L_r^1\|_{L^2(\mathbb{R}_+)}+ \|L_r^2\|_{L^2(\mathbb{R}_+^2)}+ \frac{\beta}{2} \| p_r\|_{L^2(\mathbb{R}_+^2)}\right)^2\right]<\infty.$$
Then for $b_n(T)$ as given in \eqref{Convergence Rate sequence},
we can estimate
$$b_n(T)\leq \hat L \Delta_n^{2\gamma}.$$
\end{theorem}
In the next subsection, we investigate the validity of our assumptions for various stochastic volatility models.
\subsection{Stochastic volatility models}\label{sec: Stochastic volatility models}
In this section, we discuss different models for stochastic volatility in Hilbert spaces.
So far, infinite-dimensional stochastic volatility models have been specified by stochastic partial differential equations on the positive cone of Hilbert-Schmidt operators (see \cite{BenthRudigerSuss2018}, \cite{BenthSimonsen2018}). As such, Assumption \ref{as:Q is more than Hilbert Schmidt} is trivially fulfilled. We therefore check which models satisfy Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol}.
Throughout this section, we take $H=U$ for simplicity. The volatility is oftentimes given as the unique positive square-root of a process
$\Sigma_t$, that is,
\begin{equation}
\sigma_t:=\Sigma^{\frac 12}_t,
\end{equation}
where $\Sigma$ takes values in the set of positive
Hilbert-Schmidt operators on $H$.
Before we proceed with the particular models, we state the following result:
\begin{lemma}\label{L:Squared Volatility Lemma}
Assume for some constants $\alpha, C_1(T)$ and $C_2(T)$ that for all $s,t\in[0,T]$ we have
\begin{equation}\label{Holder condition for squared Volatility}
\mathbb E \left[\| (\Sigma_t-\Sigma_s)\|_{\text{op}}^2\right]^{\frac 12}
\leq \frac{C_1(T)^2}{\text{Tr}(Q)^2} (t-s)^{2\alpha}
\end{equation}
and
\begin{equation}
\sup_{s\in [0,T]} \mathbb E [\| \Sigma_s\|^2_{\text{op}}]\leq C_2(T).
\end{equation}
Then $\sigma$ satisfies Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol} with corresponding constants $\alpha, C_1(T)$ and $C_2(T)$.
\end{lemma}
\begin{proof}
By the inequality in Lemma 2.5.1 of \cite{Bogachev2018}, the H{\"o}lder inequality and (\ref{Holder condition for squared Volatility})
\begin{align*}
\mathbb{E}[\| (\sigma_t-\sigma_s)Q^{\frac 12}\|_{\text{HS}}^2]\leq & \mathbb{E}[\| (\Sigma^{\frac 12}_t-\Sigma^{\frac 12}_s)\|_{\text{op}}^2] \text{Tr}(Q)\\
\leq & \mathbb{E}[\| (\Sigma_t-\Sigma_s)\|_{\text{op}}] \text{Tr}(Q)\\
\leq & \mathbb{E}[\| (\Sigma_t-\Sigma_s)\|_{\text{op}}^2]^{\frac 12} \text{Tr}(Q)\\
\leq & \frac{C_1(T)^2}{\text{Tr}(Q)} (t-s)^{2\alpha}.
\end{align*}
Moreover, Assumption \ref{as:fourthmomentvol} is satisfied, since
\begin{align*}
\sup_{s \in [0,T]}\ensuremath{\mathbb{E}}[\| \sigma_s\|^
4_{\text{op}}]= \sup_{s \in [0,T]}\ensuremath{\mathbb{E}}[\|\Sigma^{\frac 12}_s\|^4_{\text{op}}]= \sup_{s \in [0,T]}\ensuremath{\mathbb{E}}[\|\Sigma_s\|^2_{\text{op}}]\leq C_2(T).
\end{align*}
The proof is complete.
\end{proof}
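The key estimate in the proof, the operator square-root inequality $\|\Sigma_t^{1/2}-\Sigma_s^{1/2}\|_{\text{op}}^2\leq\|\Sigma_t-\Sigma_s\|_{\text{op}}$ for positive semidefinite operators, can be checked numerically on random positive semidefinite matrices (a finite-dimensional illustration; the dimension and sample count are arbitrary choices):

```python
import numpy as np

# Numerical check of the square-root estimate used in the proof above:
# ||A^{1/2} - B^{1/2}||_op^2 <= ||A - B||_op for positive semidefinite A, B.
rng = np.random.default_rng(3)

def psd_sqrt(M):
    # symmetric positive semidefinite square root via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

for _ in range(100):
    X = rng.standard_normal((4, 4)); A = X @ X.T
    Y = rng.standard_normal((4, 4)); B = Y @ Y.T
    lhs = np.linalg.norm(psd_sqrt(A) - psd_sqrt(B), 2) ** 2
    rhs = np.linalg.norm(A - B, 2)
    assert lhs <= rhs + 1e-9
print("square-root inequality verified on 100 random PSD pairs")
```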
\subsubsection{Barndorff-Nielsen \& Shephard (BNS) model}
We assume $\Sigma$ is given by the Ornstein-Uhlenbeck dynamics
\begin{align*}
(BNS)\begin{cases}
d\Sigma_t=\mathbb{B} \Sigma_tdt+ d\mathcal{L}_t,\\
\Sigma_0= \Sigma\in L_{\text{HS}}(H),
\end{cases}
\end{align*}
where $\mathbb B$ is a positive bounded linear operator on the space of Hilbert-Schmidt operators $L_{\text{HS}}(H)$ and $\mathcal{L}$ is a square integrable L{\'e}vy subordinator on the same space. $\mathbb{B}$ is then the generator of the uniformly continuous semigroup given by $\mathbb{S}(t)=\exp(\mathbb{B}t)$ and the equation has a mild solution given by
\begin{align*}
\Sigma_t=\mathbb{S}(t)\Sigma_0+\int_0^t \mathbb{S}(t-s)d\mathcal{L}_s,
\end{align*}
which defines a process in $\mathcal{L}_{T,2}(H,H)$
(see \cite{BenthRudigerSuss2018}).
Stochastic volatility models with OU-dynamics were suggested in \cite{BenthRudigerSuss2018}, extending the BNS-model introduced in \cite{Barndorff-Nielsen2001}
to infinite dimensions.
\begin{lemma}\label{L: Mean Square Lipschitz continuity of OU-Processes} For all $s,t\in [0,T]$ such that $t-s\leq 1$ we have
\begin{align*}
\mathbb E [\| (\Sigma_t-\Sigma_s)\|_{\text{HS}}^2]^{\frac 12}
\leq \tilde{L}(T) (t-s)^{\frac 12},
\end{align*}
where we denote $$\tilde{L}(T):= \sqrt{3} \left(\|\mathbb B\|_{\text{op}} e^{\|\mathbb B \|_{\text{op}}T} \|\Sigma_0\|_{\text{HS}} +e^{\|\mathbb B\|_{\text{op}}T} \text{Tr}(Q_{\mathcal L})^{\frac 12} \big(1+\|\mathbb B\|_{\text{op}} e^{\|\mathbb B \|_{\text{op}}T}\big) \right)\text{Tr}(Q). $$
In particular, $\sigma$ satisfies Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol} with corresponding constants $\alpha= \frac 14 $, $C_1(T)= \sqrt{\tilde{L}(T)}\text{Tr}(Q)$ and $C_2(T)= e^{\|\mathbb B\|_{\text{op}} T}(\|\Sigma_0\|_{\text{HS}}+ \text{Tr}(Q_{\mathcal L})^{\frac 12} T^{\frac 12})$.
\end{lemma}
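A minimal finite-dimensional sketch of such an OU volatility (with the illustrative choices $\mathbb B(S)=bS$ for a scalar $b>0$ and a compound Poisson subordinator with positive semidefinite matrix jumps; these are our assumptions, not specifications from the paper) shows that an Euler scheme preserves positivity, so that $\sigma_t=\Sigma_t^{1/2}$ is well defined along the path:

```python
import numpy as np

# Euler sketch of a BNS-type OU volatility in dimension 3, with the
# illustrative choices B(S) = b*S (b > 0 scalar) and a compound Poisson
# subordinator with positive semidefinite matrix jumps.
rng = np.random.default_rng(4)
d, n, T, b, rate = 3, 1000, 1.0, 0.5, 5.0
dt = T / n
Sigma = np.eye(d)                        # Sigma_0: positive definite
min_eigs = []
for _ in range(n):
    Sigma = Sigma + b * Sigma * dt       # drift step of the OU dynamics
    if rng.random() < rate * dt:         # jump time of the subordinator
        Z = rng.standard_normal((d, d))
        Sigma = Sigma + 0.1 * (Z @ Z.T)  # PSD jump keeps Sigma positive
    min_eigs.append(np.linalg.eigvalsh(Sigma).min())

# positivity is preserved along the path, so sigma_t = Sigma_t^{1/2} exists
print(min(min_eigs) > 0)   # True
```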
It is also possible to derive ucp convergence for rough volatility models, which we consider in the following subsection.
\subsubsection{Rough volatility models}
In \cite{BenthHarang2020}, pathwise constructions of Volterra processes are established and suggested for use in stochastic volatility models.
In this setting, a process is typically only known to be almost surely H{\"o}lder continuous of some particular order.
We therefore fix an almost surely H{\"o}lder continuous process $(Y_t)_{t\in [0,T]}$ of order $\alpha$ with values in $H$.
Without further knowledge of the process, we do not know whether the corresponding H{\"o}lder constant, that is, the random variable
\begin{align}\label{local Holder constant}
C(T):=\sup_{s,t\in [0,T], s\neq t}\frac{\|Y_t-Y_s\|_H}{|t-s|^{\alpha}},
\end{align}
is square-integrable, and therefore we cannot verify Assumptions \ref{as:smoothvol} or \ref{as:fourthmomentvol} without additional assumptions. However, for various models we can use Corollary \ref{C: Localization for almost surely Holder continuous functions}.
If $H$ is a Banach algebra (like the forward curve space defined by \eqref{FCS}), we can define the volatility process by
\begin{equation}\label{Rough exponential volatility}
\sigma_t h:=\exp(Y_t) h.
\end{equation}
This is a direct extension of the volatility models proposed in \cite{Gatheral2018}.
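In a discretized picture, $\exp$ and the multiplication in \eqref{Rough exponential volatility} act pointwise on a spatial grid. The following sketch uses a crude rough driver (a cumulative sum of scaled Gaussian increments; a stand-in for illustration, not the Volterra construction of \cite{BenthHarang2020}) to produce such a multiplication operator:

```python
import numpy as np

# Discretized sketch of the exponential volatility sigma_t h = exp(Y_t) h:
# Y is a crude rough driver on a grid, and exp and the product act
# pointwise, mimicking the commutative Banach-algebra multiplication.
rng = np.random.default_rng(5)
n_t, n_x = 200, 50
dt = 1.0 / n_t
roughness = 0.1                          # illustrative low time-regularity

incs = rng.standard_normal((n_t, n_x)) * dt ** roughness
Y = np.cumsum(incs, axis=0)              # Y[t, x], irregular in t

h = np.ones(n_x)                         # a test function h in H
sigma_h = np.exp(Y) * h[None, :]         # (sigma_t h)(x) = exp(Y_t(x)) h(x)
print(sigma_h.shape)   # (200, 50)
```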
\begin{lemma}
Assume that $H$ is a commutative Banach algebra and that $\sigma$ is defined by \eqref{Rough exponential volatility}. Moreover, assume that $$\mathbb E [\exp(4\|Y_0\|_H)]<\infty.$$ Then the ucp-convergence in \eqref{ucp convergence with semigroup} holds.
\end{lemma}
\begin{proof}
Since in commutative Banach algebras $\exp(f+g)=\exp(f)\exp(g)$ holds for all $f,g\in H$, we have
\begin{align*}
\|\exp(f)-\exp(g)\|_{\text{op}}\leq & \|\exp(g)\|_H\|\exp(f-g)-1\|_H\\
\leq & \exp(\|g\|_H)\exp(\|f-g\|_H)\|f-g\|_H\\
\leq & 2\exp(2\|f\|_H+2\|g\|_H) \|f-g\|_H.
\end{align*}
This implies the local $\alpha$-H{\"o}lder continuity of $\sigma$. Due to Corollary \ref{C: Localization for almost surely Holder continuous functions} the assertion holds.
\end{proof}
\section{Proofs}\label{sec: Proofs}
In this section, we will present the proofs of our previously stated results.
\subsection{Proofs of results in Section \ref{sec: Weak Law of large numbers}}
\subsubsection{Uniform continuity of semigroups on compact sets}
In order to verify that $b_n(T)$ defined in \eqref{Convergence Rate sequence} converges to $0$ and to prove Theorem \ref{T: LLN for semigroup case},
we need to establish some convergence properties of semigroups on compacts.
Let $X$ be a compact Hausdorff space. Recall that a subset $F\subset C(X;\mathbb{R})$ is equicontinuous, if
for each $x\in X$ and $\epsilon>0$ there is a neighbourhood $U_x$ of $x$ in $X$ such that for all $y\in U_x$ and for all $f\in F$ we have
$$| f(x)-f(y)|\leq \epsilon.$$
$F$ is called pointwise bounded, if for each $x\in X$ the set $\lbrace |f(x)|: f\in F\rbrace$ is bounded in $\mathbb{R}$. $F$ is called relatively compact (or conditionally compact), if its closure is compact.
For convenience, we recall the Arzel\'{a}-Ascoli Theorem (see for example Theorem IV.6.7 in \cite{Dunford1958}):
\begin{theorem}
Let $X$ be a compact Hausdorff space. A subset $F\subset C(X;\mathbb{R})$ is relatively compact in the topology induced by uniform convergence, if and only if it is equicontinuous and pointwise bounded.
\end{theorem}
The next proposition follows from the Arzel\'{a}-Ascoli Theorem and will be important for our analysis:
\begin{proposition}\label{C: Application of Arzela Ascoli}
The following holds:
\begin{itemize}
\item[(i)] Let $\mathcal C \subset H$ be a compact set. Then
\begin{equation}\label{Arzela Ascoli deterministic convergence}
\sup_{h\in \mathcal C}\sup_{x\in [0,\Delta_n]}\|(I-S(x))h\|_H\to 0, \quad \text{ as } n\to \infty.
\end{equation}
\item[(ii)] If $\sigma\in L^p(\Omega;L(U,H))$ for some $p\in[1,\infty)$ is an almost surely compact random operator, we get that
\begin{equation}\label{Arzela Ascoli random operator convergence}
\sup_{x\in [0,\Delta_n]}\|(I-S(x))\sigma\|_{op}\to 0, \quad \text{ as } n\to \infty,
\end{equation}
where the convergence holds almost surely and in $L^p(\Omega;\mathbb{R})$.
\item[(iii)] Let $(\sigma_s)_{s\in [0,T]}$ in $L^p( \Omega\times[0,T];L(U,H))$ for some $p\in[1,\infty)$ be a stochastic process, such that $\sigma_s$ is almost surely compact for all $s\in [0,t]$. If in addition the volatility process is continuous in the $p$'th mean, we obtain
\begin{equation}\label{Arzela Ascoli random operator process L2 convergence}
\sup_{r\in [0,t]}\mathbb{E}[\sup_{x\in [0,\Delta_n]}\|(I-S(x))\sigma_r\|_{op}^p]\to 0 \quad \text{ as } n\to \infty.
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
We want to apply the Arzel\'{a}-Ascoli Theorem for the subset $$F:=\lbrace h\mapsto \sup_{x\in [0,\Delta_n]}\| (I-S(x))h\|_H: n\in \mathbb{N}\rbrace\subset C(\mathcal{C};\mathbb{R}).$$
It is clear that $F$ is pointwise bounded and the equicontinuity holds, since there is a common Lipschitz-constant (independent of $n$):
\begin{align*}
& | \sup_{x\in [0,\Delta_n]}\| (I-S(x))h\|_H- \sup_{x\in [0,\Delta_n]}\| (I-S(x))g\|_H|\\
&\qquad\qquad\leq \sup_{x\in [0,\Delta_n]} \| (I-S(x))(h-g)\|_H\\
&\qquad\qquad\leq \sup_{x\in [0,\Delta_1]} \| I-S(x)\|_{\text{op}} \|h-g\|_{H},
\end{align*}
for all $g,h\in \mathcal C$.
This implies the relative compactness of $F$ with respect to the sup-norm on $C(\mathcal C;\mathbb{R})$. Therefore, there exists a subsequence $(n_k)_{k\in\mathbb N}$ such that, as $k\to\infty$, we have
\begin{align*}
\sup_{h\in \mathcal C}\sup_{x\in [0,\Delta_{n_k}]}\|(I-S(x))h\|\to 0.
\end{align*}
Since the sequence $\sup_{x\in [0,\Delta_{n}]}\|(I-S(x))\cdot\|$ is monotone in $n$, we obtain convergence for the whole sequence. This shows (\ref{Arzela Ascoli deterministic convergence}).
Let $B_0(1):=\lbrace h\in H: \| h\|_H=1\rbrace$ be the unit sphere in $H$ and fix $\omega\in \Omega$, such that $\sigma(\omega)$ is compact. Since $\sigma(\omega)$ is compact, $\mathcal C:=\overline{\sigma(\omega)(B_0(1))}$ is compact in $H$.
The set $F(\omega)$ of functionals of the form
\begin{align*}
f_n:=\sup_{x\in [0,\Delta_n]}\|(I-\mathcal S (x))\cdot \|_{H}:
\mathcal C\to \mathbb{R}
\end{align*}
forms an equicontinuous and pointwise bounded subset of $C(\mathcal C
;\mathbb{R})$.
Thus, by (\ref{Arzela Ascoli deterministic convergence})
\begin{align*}
\sup_{x\in [0,\Delta_n]}\|(I-\mathcal S (x))\sigma(\omega)\|_{op}=& \sup_{x\in [0,\Delta_n]} \sup_{\|h\|=1}\|(I-\mathcal S (x))\sigma(\omega)h\|_{H}\\
\leq& \sup_{g\in \mathcal C
} f_n(g)\\
\to & 0, \quad\text{ as } n\to \infty.
\end{align*}
This gives almost sure convergence.
Since the sequence is uniformly bounded by
$
(1+M(T)) \| \sigma\|_{op},
$
which has finite $p$th moment, we obtain $L^p(\Omega;\mathbb R)$-convergence by the dominated convergence theorem, and therefore (\ref{Arzela Ascoli random operator convergence}) holds.
To verify the convergence (\ref{Arzela Ascoli random operator process L2 convergence}) we argue as follows:
Defining
$$
g_n(s):=\left(\mathbb E[\sup_{x\in [0,\Delta_n]}\|(I-\mathcal S (x))\sigma_s\|_{op}^p]\right)^{\frac 1p},
$$
we obtain pointwise boundedness with the bound
$(1+M(T)) \mathbb E [\|\sigma_s\|_{op}^p]^{\frac 1p}$ and equicontinuity of $\lbrace g_n:n\in\mathbb N\rbrace\subset C([0,t];\mathbb{R})$ by the continuity in the $p$th mean of the process $(\sigma_s)_{s\in[0,T]}$, since by the Minkowski inequality
\begin{align*}
\vert g_n(t)-g_n(s)\vert &\leq \left(\mathbb E \left[\sup_{x\in[0,\Delta_n]}\|(I-\mathcal S (x))(\sigma_t-\sigma_s)\|_{op}^p\right]\right)^{\frac 1p}\\
&\leq (1+M(T)) \left(\mathbb E \left[\|\sigma_t-\sigma_s\|_{op}^p\right]\right)^{\frac 1p}.
\end{align*}
By the Arzel\'{a}-Ascoli Theorem, a subsequence of $(g_n)_{n\in\mathbb N}$ converges in the $\sup$-norm on $[0,t]$, and since $g_n$ decreases pointwise in $n$, the whole sequence converges uniformly.
Moreover, for each $s\in [0,t]$ we have by (\ref{Arzela Ascoli random operator convergence}) that $\sup_{x\in [0,\Delta_n]}\|(I-\mathcal S (x))\sigma_s\|_{op}$ converges to zero in $L^p(\Omega;\mathbb R)$ as $n\to \infty$, so that $g_n(s)\to 0$ pointwise. Hence the uniform limit is zero, that is, $\sup_{s\in[0,t]}g_n(s)\to 0$ as $n\to\infty$, which shows (\ref{Arzela Ascoli random operator process L2 convergence}).
\end{proof}
Recall also the following fact:
\begin{lemma}
The family $(\mathcal{S}(t)^*)_{t\geq0}$ of adjoint operators of the $C_0$-semigroup $(\mathcal{S}(t))_{t\geq0}$ forms again a $C_0$-semigroup on $H$.
\end{lemma}
\begin{proof}
See Section 5.14 in \cite{Engel1999}.
\end{proof}
Now we can proceed with the proof of our main theorem in the next subsection.
\subsubsection{Proof of Theorem \ref{T: LLN for semigroup case}}
The operator bracket process for the semigroup-adjusted increment takes the form
\begin{equation}
\label{eq:variation-increments}
\langle\langle \widetilde{\Delta}_n^iY\rangle\rangle=\int_{t_{i-1}}^{t_i}\mathcal S(t_i-s)\sigma_sQ\sigma_s^*\mathcal S(t_i-s)^*ds.
\end{equation}
For $i\in \{1, \dots, \ensuremath{\lfloor t/\Delta_n\rfloor}\}$ we write $\Delta_n^iW:=W_{t_i}-W_{t_{i-1}}$ and define:
\begin{align*}
\tilde{\beta}_i^n&:=\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}\Delta_n^i W,\\
\tilde{\chi}_i^n&:=\int_{t_{i-1}}^{t_i}[\mathcal S (t_i-s)\sigma_s-\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}]dW_s.
\end{align*}
Then
\begin{align*}
\tilde{\Delta}_n^iY&= \tilde{\beta}_i^n +\tilde{\chi}_i^n.
\end{align*}
Now fix some $T>0$. Using the triangle inequality, we can estimate
\begin{align}\nonumber
&\sup_{t\in[0,T]}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2}
-\int_0^t\sigma_sQ\sigma_s^*ds \right \Vert_{\text{HS}}\\\label{eq:component1}
&\leq \sup_{t\in[0,T]}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\Delta}_n^iY)^{\otimes 2}
-(\tilde{\beta}_i^n)^{\otimes 2}\right \Vert_{\text{HS}}
\\\label{eq:component2}
&\qquad +\sup_{t\in[0,T]} \left \Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}(\tilde{\beta}_i^n)^{\otimes 2}
-\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^* \Delta_n \right\Vert_{\text{HS}}\\\label{eq:component3}
&\qquad + \sup_{t\in[0,T]}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \int_{(i-1)\Delta_n}^{i\Delta_n}\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^*\notag \right.\\
&\qquad\qquad\left. -\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S(t_i-s)^*ds \right\Vert_{\text{HS}}\\
&\qquad+\sup_{t\in[0,T]}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}
\langle\langle \tilde{\Delta}_n^iY\rangle\rangle-\int_0^t\sigma_sQ\sigma_s^*ds \right \Vert_{\text{HS}}\label{eq:component4}.
\end{align}
Before we proceed, we need the following result:
\begin{lemma}\label{le:bounds}
Under Assumption \ref{as:fourthmomentvol}, we have
\begin{align}\label{eq:boundbeta}
&\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H\right]\leq M(\Delta_n)\sqrt{\text{Tr}(Q) \sqrt{C_2(T)}} \Delta_n^{1/2}, \\
&\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^2\right]\leq M(\Delta_n)^2\text{Tr}(Q) \sqrt{C_2(T)} \Delta_n,\\
&\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^4\right]\leq M(\Delta_n)^4(\text{Tr}(Q)^2+2\text{Tr}_2(Q)) C_2(T) \Delta_n^2.
\end{align}
Under Assumptions \ref{as:smoothvol}, \ref{as:fourthmomentvol} and either \ref{as:Q is more than Hilbert Schmidt}(a) or \ref{as:Q is more than Hilbert Schmidt}(b), we have
\begin{align}\label{eq:boundchi}
& \ensuremath{\mathbb{E}}\left[\Vert\tilde{\chi}_i^n\Vert_H^2\right]
\leq \Delta_na_n(T),
\end{align}
for a sequence $(a_n(T))_{n\in\mathbb{N}}$ of real numbers converging to zero.
\end{lemma}
\begin{proof}
First notice that the trace class property of $Q$ yields $\|Q^{1/2}\Vert_{\text{HS}}^2=\text{Tr}(Q)<\infty$. Using the It\^{o} isometry, see
\citet[Corollary 8.7, p.~123]{PZ2007}, we deduce from Assumption \ref{as:fourthmomentvol} that
\begin{align*}
\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^2\right] &= \Delta_n \ensuremath{\mathbb{E}}\left[\left\Vert S(t_i-t_{i-1})\sigma_{t_{i-1}}Q^{1/2}\right\Vert_{\text{HS}}^2\right]\\
&\leq M(\Delta_n)^2\Delta_n\ensuremath{\mathbb{E}}\left[\Vert \sigma_{t_{i-1}}\Vert_{\text{op}}^2\right]\Vert Q^{1/2}\Vert_{\text{HS}}^2 \\
&\leq M(\Delta_n)^2\text{Tr}(Q) \sqrt{C_2(T)} \Delta_n,
\end{align*}
where $M(\Delta_n)$ is given by \eqref{Global Bound for the semigroup}.
An application of the Cauchy-Schwarz inequality gives
$$
\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H\right]\leq\sqrt{\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^2\right]},
$$
which gives the first bound in \eqref{eq:boundbeta}.
For the fourth moment, we argue as follows: By the independent increment property of $W$, we have that
$\Delta_n^iW$ is independent of the $\mathcal F_{t_{i-1}}$-measurable random variable $\sigma_{t_{i-1}}$. Thus, using the bound \eqref{Global Bound for the semigroup} on the semigroup again gives
\begin{align*}
\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^4\right]&\leq M(\Delta_n)^4 \ensuremath{\mathbb{E}}\left[\Vert\sigma_{t_{i-1}}\Vert_{\text{op}}^4\Vert\Delta_n^iW\Vert_H^4\right]
\\
&=M(\Delta_n)^4\ensuremath{\mathbb{E}}\left[\Vert\sigma_{t_{i-1}}\Vert_{\text{op}}^4\right]\ensuremath{\mathbb{E}}\left[\Vert\Delta_n^iW\Vert_H^4\right] \\
&\leq M(\Delta_n)^4C_2(T) \left(\text{Tr}(Q)^2+2\text{Tr}_2(Q)\right)\Delta_n^2,
\end{align*}
after appealing to Lemma \ref{lemma:4thmoment} and Assumption \ref{as:fourthmomentvol}.
We have, by Assumption \ref{as:smoothvol}, that
\begin{align*}
& \sup_{s\in (t_{i-1},t_i]} \ensuremath{\mathbb{E}}\left[\Vert(\sigma_s-\sigma_{t_{i-1}})Q^{1/2} \Vert_{\text{HS}}^2\right]
\leq C_1^2(T) \Delta_n^{2\alpha}.
\end{align*}
Hence, for all $i\in\{1, \dots, \ensuremath{\lfloor t/\Delta_n\rfloor}\}$
\begin{align}\label{Zweigstelle1}
\int_{t_{i-1}}^{t_i} \ensuremath{\mathbb{E}}\left[\Vert(\sigma_s-\sigma_{(i-1)\Delta_n})Q^{1/2} \Vert_{\text{HS}}^2\right] ds \leq C_1^2(T)\Delta_n^{1+2\alpha}.
\end{align}
By the It\^{o} isometry
\begin{align}\label{Zweigstelle2}
\ensuremath{\mathbb{E}} \left[\| \tilde{\chi}_i^n\|_H^2\right]=& \int_{t_{i-1}}^{t_i} \mathbb{E}\left[\|(\mathcal S (t_i-s) \sigma_s-\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}})Q^{\frac 12}\|^2_{HS}\right]ds\\\notag
\leq & \int_{t_{i-1}}^{t_i} \mathbb{E}\left[M(\Delta_n)^2\|(\sigma_s-\mathcal S (s-t_{i-1})\sigma_{t_{i-1}})Q^{\frac{1}{2}}\|_{\text{HS}}^2\right]ds\\\notag
\leq & 2M(\Delta_n)^2\int_{t_{i-1}}^{t_i} \mathbb{E}\left[\|(\sigma_s-\sigma_{t_{i-1}})Q^{\frac{1}{2}}\|_{\text{HS}}^2+\|(\mathcal S (s-t_{i-1})\sigma_{t_{i-1}}-\sigma_{t_{i-1}})Q^{\frac{1}{2}}\|_{\text{HS}}^2\right]ds,
\end{align}
where we used the fact that $ \mathcal S (t_i-t_{i-1})=\mathcal S (t_i-s)\mathcal S (s-t_{i-1})$ in the first inequality.
Assume now Assumption \ref{as:Q is more than Hilbert Schmidt}(a) holds and
denote by $\sigma_sQ^{\frac 12}=\mathcal{K}_s\mathcal T$ the corresponding decomposition. We obtain
\begin{align*}
\ensuremath{\mathbb{E}} \left[\| \tilde{\chi}_i^n\|_H^2\right]&\leq 2M(\Delta_n)^2\int_{t_{i-1}}^{t_i} \mathbb{E}\left[\|(\mathcal S (s-t_{i-1})-I)\mathcal{K}_{t_{i-1}}\|_{op}^2\right]\|\mathcal{T} \|_{\text{HS}}^2\\
&\qquad +\ensuremath{\mathbb{E}}\left[\|(\sigma_s-\sigma_{t_{i-1}} )Q^{\frac{1}{2}}\|_{HS}^2\right]ds\\
&\leq 2M(\Delta_n)^2\left( \Delta_n \mathbb{E}\left[\sup_{x\in [0,\Delta_n]}\|(I-\mathcal S (x))\mathcal{K}_{t_{i-1}}\|_{op}^2\right] \|\mathcal{T} \|_{\text{HS}}^2
+C_1^2(T) \Delta_n^{1+2\alpha}\right).
\end{align*}
The assertion follows
with $$a_n(T)=2M(\Delta_n)^2\left(\sup_{s\in[0,T]}\mathbb{E}\left[\sup_{x\in [0,\Delta_n]}\|(I-\mathcal S (x))\mathcal{K}_{s}\|_{op}^2\right] \|\mathcal{T} \|_{\text{HS}}^2
+C_1^2(T) \Delta_n^{2\alpha}\right),$$
by \eqref{Arzela Ascoli random operator process L2 convergence} in Corollary \ref{C: Application of Arzela Ascoli}, since $(\mathcal{K}_{s})_{s\in[0,T]}$ is mean square continuous and $\mathcal{K}_s$ is almost surely a compact operator for all $s\in [0,T]$.
Assume now Assumption \ref{as:Q is more than Hilbert Schmidt}(b) holds.
By (\ref{Zweigstelle2}) and (\ref{Zweigstelle1}) and Assumption \ref{as:fourthmomentvol} we obtain
\begin{align*}
&\ensuremath{\mathbb{E}} \left[\| \tilde{\chi}_i^n\|_H^2\right] \\
&\leq 2M(\Delta_n)^2\int_{t_{i-1}}^{t_i} \mathbb{E}\left[\|(\sigma_s-\sigma_{t_{i-1}})Q^{\frac{1}{2}}\|_{\text{HS}}^2+\|(\mathcal S (s-t_{i-1})\sigma_{t_{i-1}}-\sigma_{t_{i-1}})Q^{\frac{1}{2}}\|_{\text{HS}}^2\right]ds\\
&\leq 2M(\Delta_n)^2 \left(\int_{t_{i-1}}^{t_i}\mathbb{E}\left[ \|(\sigma_s-\sigma_{t_{i-1}})Q^{\frac{1}{2}}\|_{\text{HS}}^2 +\sup_{r\in[0,\Delta_n]} \|\mathcal S (r)-I\|_{op}^2\| \sigma_{t_{i-1}}Q^{\frac{1}{2}}\|_{\text{HS}}^2\right]ds\right)\\
&\leq 2M(\Delta_n)^2 \left(C_1^2(T) \Delta_n^{1+2\alpha}+\Delta_n \sup_{r\in[0,\Delta_n]} \|\mathcal S (r)-I\|_{op}^2 \sqrt{C_2(T)} \text{Tr}(Q)\right).
\end{align*}
This shows the assertion with
$$a_n(T)=2M(\Delta_n)^2 \left( \sup_{r\in[0,\Delta_n]} \|\mathcal S (r)-I\|_{op}^2 \sqrt{C_2(T)} \text{Tr}(Q)+C_1^2(T) \Delta_n^{2\alpha}\right),$$
since, by the uniform continuity of the semigroup, $\sup_{r\in[0,\Delta_n]} \|\mathcal S (r)-I\|_{op}$ converges to zero as $n\to\infty$.
\end{proof}
\begin{remark}
In the following, we need Assumption \ref{as:Q is more than Hilbert Schmidt} only if we want to apply Lemma \ref{le:bounds}, where we needed it to verify that the sequence $a_n$ converges to zero.
The convergence rate of $a_n$ is determined both by the path regularity of the volatility process and by the rate of convergence of the semigroup (on compacts) as $t\to 0$. The speed of convergence of this sequence essentially determines the rate of convergence of the sequence $b_n$ from Theorem \ref{T: LLN for semigroup case}.
\end{remark}
\begin{remark}We notice that for the first and second moment estimates of $\Vert\tilde{\beta}_i^n\Vert_H$, we could relax the assumption on $\sigma$ slightly by only requiring $\Vert\sigma_s Q^{1/2}\Vert_{\text{HS}}$ to have finite second moment. However, the fourth moment of $\Vert\tilde{\beta}_i^n\Vert_H$ is most conveniently estimated based on a fourth moment condition on the operator norm of $\sigma$.
\end{remark}
With the results in Lemma \ref{le:bounds} at hand, we prove convergence of the four components \eqref{eq:component1}-\eqref{eq:component4}.
First, we show the convergence of \eqref{eq:component1}.
\begin{proposition}
Under Assumptions \ref{as:smoothvol}, \ref{as:fourthmomentvol} and \ref{as:Q is more than Hilbert Schmidt}, we have
\begin{align*} \lim_{n\rightarrow\infty}\ensuremath{\mathbb{E}}\left[\sup_{t\in[0,T]}
\left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\left[(\tilde{\Delta}_n^iY)^{\otimes 2}
-(\tilde{\beta}_i^n)^{\otimes 2}\right] \right\Vert_{\text{HS}}\right]=0.
\end{align*}
\end{proposition}
\begin{proof}
Define
\begin{align*}
\tilde{\xi}_i^n&:=(\tilde{\Delta}_n^i Y)^{\otimes 2}-(\tilde{\beta}_i^n)^{\otimes 2} =(\tilde{\beta}_i^n +\tilde{\chi}_i^n)^{\otimes 2}-(\tilde{\beta}_i^n)^{\otimes 2}
\\
&=(\tilde{\chi}_i^n)^{\otimes 2}+ \tilde{ \beta}_i^n\otimes \tilde{\chi}_i^n + \tilde{\chi}_i^n \otimes \tilde{\beta}_i^n.
\end{align*}
By the triangle inequality, we note that
\begin{align}\nonumber
\Vert \tilde{\xi}_i^n \Vert_{\text{HS}}
&\leq \Vert (\tilde{\chi}_i^n)^{\otimes 2}\Vert_{\text{HS}}
+\Vert \tilde{\beta}_i^n\otimes \tilde{\chi}_i^n\Vert_{\text{HS}}
+\Vert\tilde{\chi}_i^n \otimes \tilde{\beta}_i^n\Vert_{\text{HS}}
\\ \label{eq:xis}
&= \Vert \tilde{\chi}_i^n\Vert_{H}^2
+2\Vert \tilde{\beta}_i^n\Vert_H\Vert \tilde{\chi}_i^n\Vert_{H}.
\end{align}
Again appealing to the triangle inequality, it follows
\begin{align*}
\sup_{t\in[0,T]} \left \Vert \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \tilde{\xi}_i^n\right \Vert_{\text{HS}} \leq \sup_{t\in[0,T]}\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \Vert \tilde{\xi}_i^n\Vert_{\text{HS}}\leq\sum_{i=1}^{\lfloor T/\Delta_n\rfloor} \Vert \tilde{\xi}_i^n\Vert_{\text{HS}}.
\end{align*}
Applying \eqref{eq:boundchi} in Lemma \ref{le:bounds} leads to
\begin{align*}
\ensuremath{\mathbb{E}}\left[\Vert \tilde{\chi}_i^n\Vert^2_H\right]
\leq \Delta_n a_n(T).
\end{align*}
We next apply the Cauchy-Schwarz inequality to obtain, using the notation $K_n(T)= M(\Delta_n)^2\text{Tr}(Q) \sqrt{C_2(T)}$,
\begin{align*}
\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n \Vert_{H}
\Vert\tilde{\chi}_i^n\Vert_{H}\right] ^2
&\leq \ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_{H}^2\right] \ensuremath{\mathbb{E}}\left[\Vert\tilde{\chi}_i^n
\Vert_{H}^2\right] \leq K_n(T) \Delta_n^{2} a_n(T),
\end{align*}
by \eqref{eq:boundbeta} and \eqref{eq:boundchi} in Lemma \ref{le:bounds}.
Altogether we have
\begin{align}\label{convergence inequality for first summand}
\ensuremath{\mathbb{E}} \left[ \sup_{t\in[0,T]} \left\Vert \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \tilde{\xi}_i^n\right\Vert_{\text{HS}}\right]
\leq & \lfloor T/\Delta_n\rfloor \left( \Delta_n a_n(T)+2 \sqrt{K_n(T)a_n(T)}\Delta_n\right),
\end{align}
which converges to zero as $n\to \infty$, since $a_n(T)\to 0$ by Lemma \ref{le:bounds}.
\end{proof}
Now we prove the convergence of \eqref{eq:component2}.
\begin{proposition}
Under Assumptions \ref{as:smoothvol}, \ref{as:fourthmomentvol} and \ref{as:Q is more than Hilbert Schmidt} we have,
\begin{align*}
\lim_{n\rightarrow\infty}\ensuremath{\mathbb{E}} \left[\sup_{t\in[0,T]} \left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\left\{(\tilde{\beta}_i^n)^{\otimes 2}- \mathcal S(t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal S(t_i-t_{i-1})^* \Delta_n\right\} \right\Vert_{\text{HS}}^2\right]=0.
\end{align*}
\end{proposition}
\begin{proof}
We define
\begin{align*}
\tilde{\zeta}_i^n :=(\tilde{\beta}_i^n)^{\otimes 2}- \mathcal S(t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^* \Delta_n.
\end{align*}
First we show that $\sup_{t\in[0,T]}\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\tilde{\zeta}_i^n\Vert_{\text{HS}}$ has finite second moment. By the triangle inequality and
Lemma \ref{lem:HS-banachalg}
\begin{align*}
\sup_{t\in[0,T]} \left \Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\tilde{\zeta}_i^n\right \Vert_{\text{HS}}&\leq
\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert\tilde{\zeta}_i^n\Vert_{\text{HS}} \\
&\leq\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert(\tilde{\beta}_i^n)^{\otimes 2}\Vert_{\text{HS}}\\
&\qquad +
\Delta_n\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert \mathcal S(t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^*\Vert_{\text{HS}} \\
&\leq \sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert\tilde{\beta}_i^n\Vert_H^2+\Delta_n\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q^{1/2}\Vert^2_{\text{HS}} \\
&\leq \sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert\tilde{\beta}_i^n\Vert_H^2+\Delta_n\text{Tr}(Q)M(\Delta_n)^2\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\Vert\sigma_{t_{i-1}}\Vert^2_{\text{op}}.
\end{align*}
Considering $\ensuremath{\mathbb{E}}\left[\sup_{t\in[0,T]}\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\tilde{\zeta}_i^n\Vert_{\text{HS}}^2\right]$, we get a finite sum of terms of the type
$\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^4\right]$, $\ensuremath{\mathbb{E}}\left[\Vert\sigma_{t_{i-1}}\Vert_{\text{op}}^4\right]$
and $\ensuremath{\mathbb{E}}\left[\Vert\tilde{\beta}_i^n\Vert_H^2\Vert\sigma_{t_{i-1}}\Vert^2_{\text{op}}\right]$. The first is finite due to Lemma
\ref{le:bounds}, while the second is finite by the imposed Assumption \ref{as:fourthmomentvol}. For the third, we apply the Cauchy-Schwarz inequality and argue as for the first two. In conclusion, we obtain a finite second moment as desired.
Note that $R_t=\int_0^t h_s dW(s)$, where $h_s=\sum_{i=1}^n \mathcal S(t_i-t_{i-1})\sigma_{t_{i-1}} \mathbf{1}_{(t_{i-1},t_i]}(s)$, defines a martingale such that $R_{t_m}=\sum_{j=1}^{m}\tilde{\beta}_j^n$. Then the squared process is
\begin{align*}
\left(\int_0^{t_m} h_s dW(s)\right)^{\otimes 2} =\sum_{i,j=1}^m\langle \tilde{\beta}_i^n,\cdot\rangle \tilde{\beta}_j^n
\end{align*}
and
\begin{align*}
\langle\langle &\int_0^{\cdot} h_s dW(s)\rangle\rangle_{t_m} \\
&\qquad=\int_0^{t_m} \sum_{i,j=1}^m\mathcal{S}(t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{j-1}}^*\mathcal{S}(t_j-t_{j-1})^*\mathbf{1}_{[t_{i-1},t_i)}(s)\mathbf{1}_{[t_{j-1},t_j)}(s)ds\\
&\qquad=\int_0^{t_m} \sum_{i=1}^m\mathcal{S}(t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal{S}(t_i-t_{i-1})^* \mathbf{1}_{[t_{i-1},t_i)}(s)ds.
\end{align*}
We obtain that
\begin{align*}
\tilde{\zeta}_m^n&=\left(\int_0^{t_m} h_s dW(s)\right)^{\otimes 2}-\langle\langle \int_0^{\cdot} h_s dW(s)\rangle\rangle_{t_m}\\
&\qquad\qquad-\left(\int_0^{t_{m-1}} h_s dW(s)\right)^{\otimes 2}+\langle\langle \int_0^{\cdot} h_s dW(s)\rangle\rangle_{t_{m-1}}
\end{align*}
forms a sequence of martingale differences with respect to $(\mathcal{F}_{t_{i-1}})_{i\in \ensuremath{\mathbb{N}}}$, by Remark \ref{rem:martingale}.
This implies in particular, after double conditioning, that for $1\leq i\neq j\leq \ensuremath{\lfloor t/\Delta_n\rfloor} $,
$$
\mathbb{E}\left[\langle\tilde{\zeta}_i^n, \tilde{\zeta}_j^n\rangle_{\text{HS}}\right]=0.
$$
By Doob's martingale inequality we obtain
\begin{align*}
\ensuremath{\mathbb{E}} \left[\sup_{t\in[0,T]}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\tilde{\zeta}_i^n \right\Vert_{\text{HS}}^2 \right]\leq 4 \ensuremath{\mathbb{E}} \left[\left\Vert\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\tilde{\zeta}_i^n \right\Vert_{\text{HS}}^2 \right]= 4 \sum_{i=1}^{\lfloor T/\Delta_n\rfloor} \ensuremath{\mathbb{E}}\left[\Vert\tilde{\zeta}_i^n\Vert_{\text{HS}}^2\right].
\end{align*}
Applying the triangle inequality and the basic inequality $(a+b)^2\leq 2(a^2+b^2)$, we find
\begin{align*}
\Vert \tilde{\zeta}_i^n\Vert_{\text{HS}}^2&\leq 2\left(\Vert(\tilde{\beta}_i^n)^{\otimes 2}\Vert_{\text{HS}}^2 + \Vert \mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^* \Vert_{\text{HS}}^2 \Delta_n^2 \right)\\
&\leq2\left(\Vert \tilde{\beta}_i^n\Vert_{H}^4 + \Vert \sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^* \Vert_{\text{HS}}^2 M(\Delta_n)^4 \Delta_n^2 \right).
\end{align*}
Denoting again $K_n(T)= M(\Delta_n)^2\text{Tr}(Q) \sqrt{C_2(T)}$, we can now apply
Lemma \ref{le:bounds} to conclude that
\begin{align}\label{convergence inequality for second summand}
\sum_{i=1}^{\lfloor T/\Delta_n\rfloor} \ensuremath{\mathbb{E}}\left[\Vert\tilde{\zeta}_i^n\Vert_{\text{HS}}^2\right]
&\leq 2\left(K_n(T)\lfloor T/\Delta_n\rfloor \Delta_n^2+M(\Delta_n)^4\Delta_n\ensuremath{\mathbb{E}}\left[ \sum_{i=1}^{\lfloor T/\Delta_n\rfloor} \Vert \sigma_{t_{i-1}}Q\sigma_{t_{i-1}}^* \Vert_{\text{HS}}^2 \Delta_n \right]\right)\\
&\to 0, \text{ as } n \to \infty,\notag
\end{align}
since the expectation on the right-hand side of the inequality above converges to
$$
\ensuremath{\mathbb{E}}\left[ \int_0^T\Vert \sigma_{s}Q\sigma_{s}^* \Vert_{\text{HS}}^2ds\right]<\infty.
$$
Hence, the proposition follows.
\end{proof}
Next, we prove the convergence of \eqref{eq:component3}.
\begin{proposition}
Assume that Assumptions \ref{as:smoothvol} and \ref{as:fourthmomentvol} hold.
Then
\begin{align*}
\lim_{n\rightarrow\infty}\ensuremath{\mathbb{E}} \Big[ \sup_{t\in[0,T]}\Big\Vert \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \int_{t_{i-1}}^{t_i} & \big(\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^* \\
&-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\big)ds \Big\Vert_{\text{HS}}\Big]=0.
\end{align*}
\end{proposition}
\begin{proof}
From the triangle and Bochner inequalities, we get
\begin{align*}
&\Big\Vert \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \int_{t_{i-1}}^{t_i}\big(\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*\\
&\qquad-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\big)ds\Big\Vert_{\text{HS}}
\\
&\qquad\qquad\leq
\sum_{i=1}^{\lfloor T/\Delta_n \rfloor} \int_{t_{i-1}}^{t_i}
\Vert\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*\\
&\qquad\qquad\qquad-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\Vert_{\text{HS}}ds.
\end{align*}
Note that for $s\in (t_{i-1},t_i]$, we have
\begin{align*}
\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}&Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*
\\
=&(\mathcal S (t_i- t_{i-1})\sigma_{t_{i-1}}-\mathcal S (t_i - s) \sigma_s)Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^*\\
& +\mathcal S (t_i -s) \sigma_sQ(\sigma_{t_{i-1}}^*\mathcal{S}(t_i-t_{i-1})^*-\sigma_s^*\mathcal S (t_i-s)^*).
\end{align*}
Hence, using the triangle inequality and then the Cauchy-Schwarz inequality, we have
\begin{align*}
&\ensuremath{\mathbb{E}} \left[\Vert \mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\Vert_{\text{HS}}\right]^2 \\
&\qquad= \ensuremath{\mathbb{E}}\left[\Vert
(\mathcal S (t_i- t_{i-1})\sigma_{t_{i-1}}-\mathcal S (t_i - s) \sigma_s)Q\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^*\right.\\
& \qquad\qquad\left.+\mathcal S (t_i -s) \sigma_sQ(\sigma_{t_{i-1}}^*\mathcal{S}(t_i-t_{i-1})^*-\sigma_s^*\mathcal S (t_i-s)^*)
\Vert_{\text{HS}}\right]^2 \\
&\qquad\leq 2 \ensuremath{\mathbb{E}} \left[\Vert
(\mathcal S (t_i- t_{i-1})\sigma_{t_{i-1}}-\mathcal S (t_i - s) \sigma_s)Q^{\frac 12}\|_{op}\|Q^{\frac 12}\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^*\|_{\text{HS}}\right]^2\\
& \qquad\qquad+2\mathbb E \left[\| \mathcal S (t_i -s) \sigma_sQ^{\frac 12}\|_{\text{HS}}\|Q^{\frac 12}(\sigma_{t_{i-1}}^*\mathcal{S}(t_i-t_{i-1})^*-\sigma_s^*\mathcal S (t_i-s)^*)
\Vert_{\text{op}}\right]^2\\
&\qquad\leq 2\ensuremath{\mathbb{E}} \left[\Vert
(\mathcal S (t_i- t_{i-1})\sigma_{t_{i-1}}-\mathcal S (t_i - s) \sigma_s)Q^{\frac 12}\|_{op}^2\right]\ensuremath{\mathbb{E}}\left[\|Q^{\frac 12}\sigma_{t_{i-1}}^*\mathcal S (t_i-t_{i-1})^*\|_{\text{HS}}^2\right]\\
& \qquad\qquad+2\mathbb E \left[\| \mathcal S (t_i -s) \sigma_sQ^{\frac 12}\|_{\text{HS}}^2\right]\ensuremath{\mathbb{E}}\left[\|Q^{\frac 12}(\sigma_{t_{i-1}}^*\mathcal{S}(t_i-t_{i-1})^*-\sigma_s^*\mathcal S (t_i-s)^*)
\Vert_{\text{op}}^2\right] .
\end{align*}
Thus, using the identity $S(t_i-t_{i-1})=S(t_i-s)S(s-t_{i-1})$,
we get
\begin{align*}
& \ensuremath{\mathbb{E}} \left[\Vert \mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\Vert_{\text{HS}}\right]^2 \\
&\qquad\leq 2M(\Delta_n)^4 \ensuremath{\mathbb{E}}
\left[\Vert(\mathcal S(s-t_{i-1}) \sigma_{t_{i-1}}- \sigma_s)Q^{1/2} \Vert_{\text{op}}^2\right] \ensuremath{\mathbb{E}}\left[\Vert Q^{1/2}\sigma_{t_{i-1}}^*\Vert_{\text{HS}}^2\right]
\\
&\qquad\qquad+2M(\Delta_n)^4
\ensuremath{\mathbb{E}}\left[\Vert \sigma_s Q^{1/2} \Vert_{\text{HS}}^2\right]
\ensuremath{\mathbb{E}} \left[\Vert Q^{1/2}(\sigma_{t_{i-1}}^*\mathcal S (s-t_{i-1})^*-\sigma_s^*) \Vert_{\text{op}}^2\right]
\\
&\qquad\leq 4 M(\Delta_n)^4 \sup_{r\in[0,T]} \ensuremath{\mathbb{E}}\left[\Vert\sigma_rQ^{\frac 12} \Vert_{\text{HS}}^2\right] \ensuremath{\mathbb{E}}
\left[\sup_{x\in[0,\Delta_n]}\Vert(\mathcal S (x)\sigma_{t_{i-1}}- \sigma_s)Q^{\frac 12} \Vert_{\text{op}}^2\right].
\end{align*}
By Assumption \ref{as:fourthmomentvol} we know that
$$ A_n(T):=4 M(\Delta_n)^4 \sqrt{C_2(T)}\,\text{Tr}(Q)\geq 4 M(\Delta_n)^4 \sup_{r\in[0,T]} \ensuremath{\mathbb{E}}[\Vert\sigma_rQ^{\frac 12} \Vert_{\text{HS}}^2].$$
Using Assumption \ref{as:smoothvol}, this gives the following estimate:
\begin{align*}
&\ensuremath{\mathbb{E}} \left[\Vert(\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\Vert_{\text{HS}}\right]^2 \\
&\qquad\leq A_n(T) \ensuremath{\mathbb{E}}
\left[\sup_{x\in[0,\Delta_n]}\Vert(\mathcal S (x)\sigma_{t_{i-1}}-\sigma_{t_{i-1}}+\sigma_{t_{i-1}}- \sigma_s)Q^{\frac 12} \Vert_{\text{op}}^2\right]\\
& \qquad\leq 2A_n(T)\left( \ensuremath{\mathbb{E}}
\left[\sup_{x\in[0,\Delta_n]}\left\Vert(\mathcal S (x)-I) \sigma_{t_{i-1}} Q^{\frac 12} \right\Vert_{\text{op}}^2\right]+\ensuremath{\mathbb{E}}
\left[\left\Vert(\sigma_{t_{i-1}}- \sigma_s)Q^{\frac 12} \right\Vert_{\text{op}}^2\right]\right) \\
&\qquad \leq 2A_n(T) \left(b_n(T)+C_1^2(T)\Delta_n^{2\alpha}\right),
\end{align*}
where $b_n(T):=\sup_{s\in[0,T]}\ensuremath{\mathbb{E}}
[\sup_{x\in[0,\Delta_n]}\Vert(I-\mathcal S (x)) \sigma_{s} Q^{\frac 12} \Vert_{\text{op}}^2]$ as before. We have that $(b_n(T))_{n\in\mathbb N}$ is a real sequence converging to 0 by \eqref{Arzela Ascoli random operator process L2 convergence} in Corollary \ref{C: Application of Arzela Ascoli}, since for each $s\in [0,T]$ the operator $\sigma_sQ^{\frac 12}$ is almost surely compact as a Hilbert-Schmidt operator and the process $(\sigma_sQ^{\frac 12})_{s\in [0,T]}$ is mean square continuous by Assumption \ref{as:smoothvol}.
Summing up, we obtain
\begin{align}\label{convergence inequality for third summand}
&\ensuremath{\mathbb{E}} \Big[ \sup_{t\in[0,T]}\Big\Vert \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}} \int_{t_{i-1}}^{t_i} \big(\mathcal S (t_i-t_{i-1})\sigma_{t_{i-1}}Q \sigma_{t_{i-1}}^* \mathcal S(t_i-t_{i-1})^*
\\
&\qquad-\mathcal S (t_i-s)\sigma_sQ\sigma_s^*\mathcal S (t_i-s)^*\big)ds\Big\Vert_{\text{HS}}\Big]\notag
\\
&\qquad\qquad\qquad\leq \sum_{i=1}^{\lfloor T/\Delta_n\rfloor} \int_{t_{i-1}}^{t_i} \left( 2A_n(T) (C_1^2(T)\Delta_n^{2\alpha}+ b_n(T))\right)^{\frac 12} ds\notag
\\
&\qquad\qquad\qquad= \lfloor T/\Delta_n\rfloor \Delta_n \left( 2A_n(T) (C_1^2(T)\Delta_n^{2\alpha}+ b_n(T))\right)^{\frac 12} \to 0, \text{ as } n \to \infty,\notag
\end{align}
and the proof is complete.
\end{proof}
Finally, we prove the convergence of (\ref{eq:component4}).
\begin{proposition}
Suppose that Assumption \ref{as:smoothvol} and \ref{as:fourthmomentvol} hold.
Then
$$\lim_{n\to\infty}\mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\langle\langle \tilde{\Delta}_n^iY\rangle\rangle - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]=0.$$
\end{proposition}
\begin{proof}
Recall the expression for $\langle\langle \tilde{\Delta}_n^iY\rangle\rangle$ in \eqref{eq:variation-increments}. By the triangle and Bochner inequalities, we find,
\begin{align*}
&\sup_{t\in[0,T]}\left \Vert \int_0^{t_{\ensuremath{\lfloor t/\Delta_n\rfloor}}}\sigma_sQ\sigma_s^*ds-\sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\int_{t_{i-1}}^{t_i}
\mathcal S(t_i-s)\sigma_sQ\sigma_s^*\mathcal S(t_i-s)^*ds\right \Vert_{\text{HS}} \\
&\leq\sup_{t\in[0,T]} \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\int_{t_{i-1}}^{t_i}\Vert
\sigma_sQ\sigma_s^*-\mathcal S(t_i-s)\sigma_sQ\sigma_s^*\mathcal S(t_i-s)^*\Vert_{\text{HS}}ds \\
&\leq\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\int_{t_{i-1}}^{t_i}\Vert
\sigma_sQ\sigma_s^*-\mathcal S(t_i-s)\sigma_sQ\sigma_s^*\mathcal S(t_i-s)^*\Vert_{\text{HS}}ds.
\end{align*}
By Lemma \ref{lem:HS-banachalg} and the Cauchy-Schwarz inequality we obtain
\begin{align*}
&\mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\langle\langle \tilde{\Delta}_n^iY\rangle\rangle - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]\\
\leq & \sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\int_{t_{i-1}}^{t_i}\mathbb E [\Vert
(I-\mathcal S(t_i-s))\sigma_sQ\sigma_s^*\Vert_{\text{HS}}]\\
&+\mathbb{E}[\Vert
\mathcal S(t_i-s)\sigma_sQ\sigma_s^*(I-S(t_i-s)^*)\Vert_{\text{HS}}]ds\\
&+ \int_{t_{\lfloor T/\Delta_n\rfloor}}^{T}\mathbb{E}[\Vert
\sigma_sQ\sigma_s^*\Vert_{\text{HS}}]ds\\
\leq &\sum_{i=1}^{\lfloor T/\Delta_n\rfloor}\int_{t_{i-1}}^{t_i}\mathbb E[\Vert
(I-\mathcal S(t_i-s))\sigma_sQ^{\frac 12}\|_{\text{op}}\|Q^{\frac 12}\sigma_s^*\Vert_{\text{HS}} ]\\
&+M(\Delta_n)\mathbb E[\Vert
\sigma_sQ^{\frac 12}\|_{\text{HS}}\|Q^{\frac 12}\sigma_s^*(I-S(t_i-s)^*)\Vert_{\text{op}}]ds\\
&+ \int_{t_{\lfloor T/\Delta_n\rfloor}}^{T}\mathbb{E}[\Vert
\sigma_s Q^{\frac 12}\Vert_{\text{HS}}^2]ds\\
\leq & \sup_{r\in [0,T]}\mathbb E[ \sup_{x\in [0,\Delta_n]}\Vert
(I-\mathcal S(x))\sigma_rQ^{\frac 12}\|_{op}^2]^{\frac 12}(1+M(\Delta_n)) \int_{0}^{T}\mathbb E[\|Q^{\frac 12}\sigma_s^*\Vert_{\text{HS}}^2]^{\frac 12} ds\\
&+ \int_{t_{\lfloor T/\Delta_n\rfloor}}^{T}\mathbb{E}[\Vert
\sigma_sQ^{\frac 12}\Vert_{\text{HS}}^2]ds.
\end{align*}
Using Assumption \ref{as:fourthmomentvol}, we can estimate
\begin{align}\label{convergence inequality for fourth summand}
&\mathbb{E}\left[\sup_{0\leq t\leq T}\left\| \sum_{i=1}^{\ensuremath{\lfloor t/\Delta_n\rfloor}}\langle\langle \tilde{\Delta}_n^iY\rangle\rangle - \int_0^t\sigma_sQ\sigma_s^*ds\right\|_{\text{HS}}\right]\\
\leq & (b_n(T))^{\frac 12}(1+M(\Delta_n)) T( \sqrt{C_2(T)}\text{Tr}(Q))^{\frac 12}+ (T-t_{\lfloor T/\Delta_n\rfloor})\sqrt{C_2(T)}\text{Tr}(Q)\notag\\
\to & 0 \quad \text{ as }n\to \infty. \notag
\end{align}
Here again $b_n(T):=\sup_{s\in[0,T]}\ensuremath{\mathbb{E}}
[\sup_{x\in[0,\Delta_n]}\left\Vert(I-\mathcal S (x)) \sigma_{s} Q^{\frac 12} \Vert_{\text{op}}^2\right]$, which is a real sequence converging to 0 by \eqref{Arzela Ascoli random operator process L2 convergence} in Corollary \ref{C: Application of Arzela Ascoli}, since for each $s\in [0,T]$ the operator $\sigma_sQ^{\frac 12}$ is almost surely compact as a Hilbert-Schmidt operator and the process $(\sigma_sQ^{\frac 12})_{s\in [0,T]}$ is mean square continuous by Assumption \ref{as:smoothvol}.
\end{proof}
\subsubsection{Proof of Theorem \ref{T: Extension by localization}}
\begin{proof}[Proof of Theorem \ref{T: Extension by localization}]
Define
\begin{equation}
Y^{(m)}_t:=\int_0^t\mathcal S (t-s)\sigma_s^{(m)}dW_s,
\end{equation}
and
\begin{align*}
\mathcal{Z}^n_m:= &\sup_{0\leq s\leq t}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor s/\Delta_n\rfloor}}(\tilde{\Delta}_n^i Y^{(m)})^{\otimes 2}-\int_0^s\sigma_u^{(m)} Q\sigma_u^{(m)*}du\right\Vert_{\text{HS}},\\
\mathcal Z^n:= & \sup_{0\leq s\leq t}\left\Vert\sum_{i=1}^{\ensuremath{\lfloor s/\Delta_n\rfloor}}(\tilde{\Delta}_n^i Y)^{\otimes 2}-\int_0^s\sigma_uQ\sigma_u^*du\right\Vert_{\text{HS}}.
\end{align*}
Since $\sigma^{(m)}$ satisfies the conditions of Theorem \ref{T: LLN for semigroup case}, we obtain that for all $m\in\mathbb N$ and $\epsilon>0$
\begin{equation}\label{Localization Convergence of Zmn}
\lim_{n\to\infty}\mathbb{P}[\mathcal{Z}^n_m>\epsilon]=0.
\end{equation}
We have $\mathcal{Z}_m^n=\mathcal{Z}^n$ on $\Omega_m$ and hence
\begin{align*}
\mathbb{P}[\mathcal{Z}^n>\epsilon]&= \int_{\Omega_m} \mathbf{1}(\mathcal{Z}^n>\epsilon)d\mathbb P+\int_{\Omega_m^c} \mathbf{1}(\mathcal{Z}^n>\epsilon)d\mathbb P \\
&=\int_{\Omega_m} \mathbf{1}(\mathcal{Z}_m^n>\epsilon)d\mathbb P+\int_{\Omega_m^c} \mathbf{1}(\mathcal{Z}^n>\epsilon)d\mathbb P\\
&\leq \mathbb{P}[\mathcal{Z}_m^n>\epsilon] + \mathbb P[\Omega_m^c],
\end{align*}
which holds for all $n,m\in\mathbb N$.
Now, by virtue of \eqref{Localization Convergence of Zmn} we obtain for all $m\in\mathbb N$ that
$$
\limsup_{n\rightarrow\infty} \mathbb{P}[\mathcal{Z}^n>\epsilon]\leq \mathbb P[\Omega_m^c].
$$
By the continuity of $\mathbb P$ from below, $\mathbb P[\Omega^c_m]$ converges to $0$ as $m\to \infty$ and therefore
$$\lim_{n\rightarrow\infty} \mathbb{P}[\mathcal{Z}^n>\epsilon]=
\limsup_{n\rightarrow\infty} \mathbb{P}[\mathcal{Z}^n>\epsilon]= 0.
$$
\end{proof}
\subsection{Proofs of Section \ref{sec: Applications}}
We will now present the longer proofs of the results presented in Section \ref{sec: Applications}.
\subsubsection{Proof of Theorem \ref{T: Convergence rate for forward curves}}
\begin{proof}[Proof of Theorem \ref{T: Convergence rate for forward curves}]
Since $|h(0)|\leq \|h\|_{\beta}$ for all $h\in H_{\beta}$, we have for $\|h\|_{\beta}=1$ that
\begin{align*}
\| (I-\mathcal S (x)) \sigma_r Q^{\frac 12} h\|_{\beta}\leq & \|(I-\mathcal S (x)) f_r\|_{\beta}+ \left\|(I-\mathcal S (x))\int_0^{\infty} q_r(\cdot,z)h'(z)dz\right\|_{\beta}\\
=: & (1)+(2).
\end{align*}
The first summand can be estimated as follows, for some $\zeta\in (0,x)$ and $x<1$:
\begin{align}\label{First step in HJM-convergence speed}
(1)= & \left(|f_r(x)|^2+\int_0^{\infty} (f_r'(y+x)-f_r'(y))^2e^{\beta y}dy\right)^{\frac 12}\notag\\
&\leq (|f_r'(\zeta)|^2 x^2 + x^{2\gamma} \|L_r^1\|_{L^2(\mathbb{R}_+)}^2)^{\frac 12}\leq x^{\gamma} (|f_r'(\zeta)|+\|L_r^1\|_{L^2(\mathbb{R}_+)}).
\end{align}
We can show, using H{\"o}lder inequality, for all $h\in H_{\beta}$ such that $\|h\|_{\beta}=1$, that
\begin{align*}
(2)=&\left(\int_0^{\infty}\left[\partial_y \int_0^{\infty}(q_r(y+x,z)-q_r(y,z))h'(z)dz\right]^2 e^{\beta y}dy\right)^{\frac 12}\\
=&\left(\int_0^{\infty}\left[ \int_0^{\infty}\left(e^{-\frac{\beta}{2}x}p_r(y+x,z)-p_r(y,z)\right) e^{\frac{\beta}{2}(z-y)}h'(z)dz\right]^2 e^{\beta y}dy\right)^{\frac 12}\\
=&\left(\int_0^{\infty}\left[ \int_0^{\infty}(e^{-\frac{\beta}{2}x}p_r(y+x,z)-p_r(y,z)) e^{\frac{\beta}{2}z}h'(z)dz\right]^2 dy\right)^{\frac 12}\\
\leq &\left(\int_0^{\infty} \int_0^{\infty}(e^{-\frac{\beta}{2}x}p_r(y+x,z)-p_r(y,z))^2 dz \,\|h\|_{\beta}^2\, dy\right)^{\frac 12}.
\end{align*}
Now we can estimate, for $x<1$,
\begin{align}\label{Second step in HJM-convergence speed}
(2)\leq &\left(\int_0^{\infty}\int_0^{\infty}(e^{-\frac{\beta}{2}x}(p_r(y+x,z)-p_r(y,z)))^2 dz dy\right)^{\frac 12}\notag\\
&+\left(\int_0^{\infty}\int_0^{\infty} (e^{-\frac{\beta}{2}x}-1)^2 p_r(y,z)^2dzdy\right)^{\frac 12}\notag\\
\leq & x^{\gamma}\|L_r^2\|_{L^2(\mathbb{R}_+^2)}+|e^{-\frac{\beta}{2}x}-1| \| p_r\|_{L^2(\mathbb{R}_+^2)}\notag\\
\leq & x^{\gamma}\|L_r^2\|_{L^2(\mathbb{R}_+^2)}+ \frac{\beta}{2} x \| p_r\|_{L^2(\mathbb{R}_+^2)}\leq x^{\gamma}(\|L_r^2\|_{L^2(\mathbb{R}_+^2)}+ \frac{\beta}{2} \| p_r\|_{L^2(\mathbb{R}_+^2)}).
\end{align}
Combining \eqref{First step in HJM-convergence speed} and \eqref{Second step in HJM-convergence speed}, we obtain, for $\|h\|_{\beta}= 1$,
\begin{equation}
\|(I-\mathcal S (x)) \sigma_r Q^{\frac 12} h\|_{\beta}\leq x^{\gamma} [|f_r'(\zeta)|+\|L_r^1\|_{L^2(\mathbb{R}_+)}+ \|L_r^2\|_{L^2(\mathbb{R}_+^2)}+ \frac{\beta}{2} \| p_r\|_{L^2(\mathbb{R}_+^2)}].
\end{equation}
Now we can conclude that
\begin{align*}
b_n(T)= & \sup_{r\in[0,T]}\mathbb E[\sup_{x\in [0,\Delta_n]}\sup_{\|h\|_{\beta}=1}\|(I-\mathcal S (x)) \sigma_r Q^{\frac 12} h\|_{\beta}^2]\\
\leq & \Delta_n^{2\gamma} \sup_{r\in [0,T]}\mathbb E
[(|f_r'(\zeta)|+\|L_r^1\|_{L^2(\mathbb{R}_+)}+ \|L_r^2\|_{L^2(\mathbb{R}_+^2)}+ \frac{\beta}{2} \| p_r\|_{L^2(\mathbb{R}_+^2)})^2].
\end{align*}
\end{proof}
\subsubsection{Proof of Lemma \ref{L: Mean Square Lipschitz continuity of OU-Processes}}
\begin{proof}[Proof of Lemma \ref{L: Mean Square Lipschitz continuity of OU-Processes}]
We have
\begin{align*}
\Sigma_t-\Sigma_s= & (\mathbb S(t)-\mathbb S(s)) \Sigma_0+\int_s^t \mathbb S(t-u)d\mathcal L_u+ \int_0^s (\mathbb S(t-u)-\mathbb S(s-u))d\mathcal L_u\\ =: & (1)+(2)+(3).
\end{align*}
As the semigroup $(\mathbb S(t))_{t\geq 0}$ is uniformly continuous, we can again use the fundamental equality \eqref{Fundamental Theorem of Semigroup Theory II} and the triangle inequality for Bochner integrals to deduce, for $s,t\in [0,T]$ and $t\geq s$, that
$$\|\mathbb{S}(t)-\mathbb S(s)\|_{\text{op}}=\left\|e^{\mathbb{B}s}\int_0^{t-s} e^{\mathbb B x}\mathbb Bdx\right\|_{\text{op}}=\left\|\int_s^{t} e^{\mathbb B x}\mathbb Bdx\right\|_{\text{op}}\leq e^{\|\mathbb{B}\|_{\text{op}}T}\|\mathbb B\|_{\text{op}}(t-s).$$
Denoting $U:=e^{\|\mathbb{B}\|_{\text{op}}T}\|\mathbb B\|_{\text{op}}$, this gives
$$\|(1)\|_{HS}\leq \|\mathbb S(t)-\mathbb S(s)\|_{\text{op}} \|\Sigma_0\|_{\text{HS}}\leq U \|\Sigma_0\|_{\text{HS}} (t-s).$$
This induces
$\mathbb{E}[\|(1)\|_{\text{HS}}^2]^{\frac 12}\leq U \|\Sigma_0\|_{\text{HS}} (t-s)$.
Moreover, by the It\^{o} isometry
\begin{align*}
\mathbb{E}[\|(2)\|_{\text{HS}}^2]^{\frac 12}=\left(\int_s^t \|\mathbb S(t-u)Q_{\mathcal L}^{\frac 12}\|_{\text{HS}}^2du\right)^{\frac 12}\leq e^{\|\mathbb B\|_{\text{op}}T} \text{Tr}(Q_{\mathcal L})^{\frac 12} (t-s)^{\frac 12},
\end{align*}
where $Q_{\mathcal L}$ denotes the covariance operator of $\mathcal L$.
Finally, we can show again, by the It\^{o} isometry and the mean value inequality, that
\begin{align*}
\mathbb{E}[\|(3)\|_{\text{HS}}^2]^{\frac 12}= & \left(\int_0^s \|(\mathbb S(t-u)-\mathbb S(s-u))Q_{\mathcal L}^{\frac 12}\|_{\text{HS}}^2du\right)^{\frac 12}\\
\leq & \left(\int_0^s \|\mathbb S(t-s)-\mathcal{I}\|_{\text{op}}^2\|\mathbb S(s-u)Q_{\mathcal L}^{\frac 12}\|_{\text{HS}}^2du\right)^{\frac 12}\\
\leq & \left( U^2(t-s)^2 \int_0^s\|\mathbb S(s-u)Q_{\mathcal L}^{\frac 12}\|_{\text{HS}}^2du\right)^{\frac 12}\\
\leq & U (t-s) e^{\|\mathbb B\|_{\text{op}}T} \text{Tr}(Q_{\mathcal L})^{\frac 12}.
\end{align*}
Summing up, we obtain, for $t-s\leq 1$,
\begin{align*}
\mathbb{E}[\| (\Sigma_t-\Sigma_s)\|_{\text{HS}}^2]^{\frac 12}
\leq & (\mathbb{E}[\|(1)\|_{\text{HS}}^2]^{\frac 12}+\mathbb{E}[\|(2)\|_{\text{HS}}^2]^{\frac 12}+\mathbb{E}[\|(3)\|_{\text{HS}}^2]^{\frac 12})\\
\leq & (U \|\Sigma_0\|_{\text{HS}} +e^{\|\mathbb B\|_{\text{op}}T} \text{Tr}(Q_{\mathcal L})^{\frac 12} (1+U) ) (t-s)^{\frac 12}.
\end{align*}
Since, in addition, by the triangle inequality and the It\^{o} isometry,
\begin{align*}
\sup_{t \in [0,T] }\mathbb E [\| \Sigma_t\|_{\text{HS}}^2]^{\frac 12}
\leq & \sup_{t \in [0,T] }\left(\|\mathbb S (t) \Sigma_0\|_{\text{HS}}+\mathbb E \left[\left \|\int_0^t \mathbb S(t-u)d\mathcal L_u \right\|_{\text{HS}}^2\right]^{\frac 12}\right) \\
\leq &\sup_{t \in [0,T] }\left(\|\mathbb S (t) \Sigma_0\|_{\text{HS}}+ \left(\int_0^T \|\mathbb S(t-u)Q_{\mathcal L}^{\frac 12}\|_{\text{HS}}^2du\right)^{\frac 12}\right)\\
\leq & e^{\|\mathbb B\|_{\text{op}} T}\|\Sigma_0\|_{\text{HS}}+ e^{\|\mathbb B\|_{\text{op}} T} \text{Tr}(Q_{\mathcal L})^{\frac 12} T^{\frac 12},
\end{align*}
the additional assertion follows by Lemma \ref{L:Squared Volatility Lemma}.
\end{proof}
\section{Discussion and outlook}\label{Sec:Conclusion}
Our paper develops a new asymptotic theory for high-frequency estimation of the volatility of infinite-dimensional stochastic evolution equations in an operator setting. We have defined the so-called semigroup-adjusted realised covariation (SARCV) and derived a weak law of large numbers based on uniform convergence in probability with respect to the Hilbert-Schmidt norm. Moreover, we have presented various examples where our new method is applicable.
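To give a rough feeling for how the SARCV behaves numerically, the following purely illustrative Python sketch computes it in a finite-dimensional proxy setting. All numerical choices (a scaled discrete Laplacian as generator, a constant diagonal volatility, $Q=I$, grid sizes) are hypothetical and not part of the framework above; the sketch merely mimics the mild solution by semigroup stepping on a fine grid and compares the SARCV on the coarse grid with the integrated volatility $T\,\sigma Q\sigma^*$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 20                    # spatial grid size: finite-dimensional proxy for H
T, n, m = 1.0, 200, 20    # horizon, coarse observation steps, fine substeps
dt = T / (n * m)          # fine simulation step

# toy generator: scaled discrete Laplacian (hypothetical choice)
A = 5.0 * (-2.0 * np.eye(d) + np.eye(d, k=1) + np.eye(d, k=-1))
S_fine = expm(A * dt)                          # semigroup over one fine step
sigma = np.diag(1.0 / (1.0 + np.arange(d)))    # constant volatility, Q = I

# simulate the mild solution by semigroup stepping on the fine grid
Y = np.zeros((n * m + 1, d))
for k in range(n * m):
    Y[k + 1] = S_fine @ Y[k] + sigma @ rng.normal(scale=np.sqrt(dt), size=d)

# semigroup-adjusted realised covariation on the coarse grid
S_coarse = np.linalg.matrix_power(S_fine, m)   # semigroup over one coarse step
sarcv = np.zeros((d, d))
for i in range(n):
    incr = Y[(i + 1) * m] - S_coarse @ Y[i * m]   # adjusted increment
    sarcv += np.outer(incr, incr)

target = T * sigma @ sigma.T                   # integrated volatility
rel_err = np.linalg.norm(sarcv - target) / np.linalg.norm(target)
print(rel_err)  # relative error in the Frobenius (Hilbert-Schmidt) norm
```

Refining the time grid (larger $n$) shrinks both the discretisation bias and the sampling error, in line with the law of large numbers established above.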
Many articles on (high-frequency) estimation for stochastic partial differential equations rely on the so-called spectral approach and therefore assume the applicability of spectral theorems to the generator $A$ (cf.\ the survey article \cite{Cialenco2018}). This makes it difficult to apply these results to differential operators that do not fall into the symmetric and positive definite scheme, such as $A=\frac{d}{dx}$ on the space of forward curves presented in Section \ref{subsect:hjmm}, a case of relevance in financial applications that is covered by our framework.
Moreover, much of the related work assumes that the volatility, as the parameter to be estimated, is real-valued (cf.~the setting in \cite{Cialenco2018}). An exception is the spatio-temporal volatility estimation in the recent paper \cite{Chong2020}
(see also \cite{ChongDalang2020} for limit laws for the power variation of fractional stochastic parabolic equations). There, the stochastic integrals are considered in the sense of \cite{Walsh1986} and the generator is the Laplacian. In our analysis, we operate in the general Hilbert space framework of Peszat and Zabczyk for stochastic integration and semigroups.
In our framework, we work with high-frequency observations of Hilbert-space valued random elements; hence we have observations that are discrete in time but not necessarily in space.
Recent research on inference for parabolic stochastic partial differential equations has considered observation schemes that allow for discreteness in time and space, cf. \cite{Cialenco2020}, \cite{Bibinger2020}, \cite{Chong2020}, \cite{ChongDalang2020}. However, as our approach falls conveniently into the realm of functional data analysis, we might reconstruct data in several cases by means of well-known techniques for interpolation or smoothing.
Indeed, in practice, a typical situation is that the Hilbert space consists of real-valued functions (curves) on $\mathbb R^d$ (or some subspace thereof), but we only have access to
discrete observations of the curves. We may have data for $Y_{t_i}(x_j)$ at locations $x_j, j=1,\ldots, m$, or possibly some aggregation of these (or, in more generality, a finite set of linear functionals of $Y_{t_i}$).
For example, in commodity forward markets, we have only a finite number of forward contracts traded at all times, or, like in power forward markets, we have contracts with a delivery period (see e.g. \cite{BSBK}) and hence observations of the average of $Y_{t_i}$ over intervals on $\mathbb R_+$. In other applications, like observations of temperature and wind fields in space and time, we may have accessible measurements at geographical locations where meteorological stations are situated, or, from atmospheric reanalysis where we have observations in grid cells regularly distributed in space. From such discrete observations, one must recover the Hilbert-space elements $Y_{t_i}$. This is a fundamental issue in
functional data analysis, and several smoothing techniques have been suggested and studied. We refer to \cite{Ramsay2005} for an extensive discussion of this. However, smoothing introduces another layer of approximation, as we do not recover $Y_{t_i}$ but some approximate version $Y^m_{t_i}$, where the superscript $m$ indicates that we have smoothed based on the $m$ available observations. The construction of a curve from discrete observations is not a unique operation as this is an inverse problem.
In future research, it will be interesting to extend our theory to the case when (spatial) smoothing has been applied to the discrete observations.
Interestingly, when we compare our work to recent developments on high-fre\-quen\-cy estimation for volatility modulated Gaussian processes in finite dimensions, see e.g.~\cite{Podolskij2014} for a survey, it appears that a scaling factor is needed in the realised (co)variation so that an asymptotic theory for Volterra processes can be derived. This scaling factor is given by the
variogram of the associated so-called Gaussian core process, and depends on the corresponding kernel function.
However, in our case, due to the semigroup property, we are in a better situation than for general Volterra equations, since we actually have (or can reconstruct) the data in order to compute the semigroup-adjusted increments. We can then develop our analysis based on extending the techniques and ideas that are used in the semimartingale case. In this way, the estimator becomes independent of further assumptions on the remaining parameters of the equation.
However, the price to pay for this universality is that the convergence speed cannot generally be determined.
The semigroup-adjustment of the increments effectively forces the estimator to converge at most at the same rate as the semigroup converges to the identity on the range of the volatility as $t$ goes to $0$.
At first glance, it seems that the strong continuity of the semigroup suggests that we can obtain convergence just with respect to the strong topology. This would make it significantly harder to apply methods from functional data analysis, even for constant volatility processes.
Fortunately, the compactness of the operators $\sigma_tQ^{\frac 12}$ for $t\in[0,T]$ comes to the rescue and enables us to prove that convergence holds with respect to the Hilbert-Schmidt norm. In this case, we obtain reasonable convergence rates for the estimator.
\section*{Introduction}
Learning is the process of gaining or improving knowledge or behavior by observing or interacting with the environment \cite{marton2013learning,gross2015psychology,rogers2010teaching}.
In the brain, learning is dependent on the synaptic modifications between neurons \cite{hebb2005organization,markram1997regulation,bi1998synaptic,payeur2021burst}. However, the realization of learning in the brain is not completely understood \cite{humeau2019next}.
Existing theories mainly focus on neural
electrochemical signals and study their capabilities to be the brain’s information carriers \cite{dayan2001theoretical}.
Backpropagation is an important part of our current understanding of learning in artificial neural networks and is most often used to train deep neural networks \cite{goodfellow2016deep}. Inspired by the fact that the brain learns by modifying the synaptic connections between neurons, the error signals are fed back to inner layers to update synaptic weights \cite{rumelhart1986learning,hecht1992theory}.
The broad applications and success of backpropagation and backpropagation-like algorithms as well as its core idea of using feedback connections to adjust synapses encouraged us to investigate if the brain's learning process is based on the principle of the backward flow of information \cite{zipser1988back,lillicrap2013preference,cadieu2014deep,khaligh2014deep,lillicrap2020backpropagation,payeur2021burst,sacramento2018dendritic,theories}.
However, it is not clear if and how backpropagation is implemented by the brain. It has been argued that some of its main assumptions such as having exactly the same weight for each feedback connection and its feedforward counterpart as well as the need for separate distinct forward and backward pathways of information are biologically unrealistic \cite{guerguiev2017towards}.
Recent works suggest that symmetric weights are not necessary for effective learning \cite{kovsvcak2010stochastic,lillicrap2016random,lee2015joint,liao2016important,samadi2017deep,moskovitz2019feedback}; however, they are implicitly assuming a separate feedback pathway \cite{guerguiev2017towards,lillicrap2020backpropagation}.
In this paper, we suggest a new potential photonic mechanism for the backward flow of information that avoids the above mentioned assumptions.
Biophotons are spontaneously emitted by living cells in the range of near-IR to near-UV frequency (350 nm–1300 nm wavelength) with low rates and low intensity, on the order of $1-10^3$ photons/(s.cm$^{2}$) \cite{cifra2014ultra}. These photons have been observed from microorganisms including yeast cells and bacteria \cite{konev1966very,vogel1999weak}, plants and animals \cite{prasad2013towards}, and different biological tissues \cite{kobayashi2009imaging,prasad2011two} including brain slices \cite{kobayashi1999vivo,tang2014,wang2011spontaneous}, yet it is unknown whether they have a biological function. In 1999, Kobayashi et al.\, performed in vivo imaging of biophotons from a rat's brain for the first time \cite{kobayashi1999vivo}. They demonstrated the correlation between biophoton emission intensity and neuronal activities of the brain with electroencephalographic techniques and suggested that biophoton emission from the brain originates from mitochondrial activities through the production of reactive oxygen species \cite{kobayashi1999vivo}. Moreover, several experiments studied the response of neurons and generally the brain to the external light \cite{leszkiewicz,wade1988mammalian,vandewalle2009light, starck2012stimulating,zhang2020violet} and showed that the brain has photosensitive properties.
The existence of these biophotons as well as the evidence that opsin molecules deep in the brain respond to light \cite{zhang2020violet} prompt the question of whether biophotons could serve as communication signals guided through the brain\cite{zarkeshian2018there}.
Axons have been proposed to be potential photonic waveguides for such optical communication \cite{kumar, sun2010biophotons, zangari2018node}.
The detailed theoretical modeling of myelinated axons shows that optical propagation is possible in either direction along the axon \cite{kumar}.
Recent experimental evidence for light guidance by the myelin sheath supports the theoretical model \cite{depaoli2020anisotropic}.
Also, there is some older indirect experimental evidence in supporting light conduction by axons \cite{tang2014,sun2010biophotons}.
Given the advantage that optical communication provides in terms of precision and speed in a technical context and the growing evidence that photons are practical carriers of information, one may wonder whether biological systems also exploit this modality.
If any backpropagation-like algorithm is employed by the brain, biophotons guided through axons
are a plausible choice for carrying the backward information in the brain in addition to the well-known electrochemical feed-forward signaling.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{Fig1-channel.png}
\caption[Schematic of a network, and distribution of ATP consumption]{Schematic of a simplified network of neurons trained by backpropagation with stochastic photonic feedback. Three sets of neurons are represented as input, hidden, and output layers. For clarity, three neurons are shown in the hidden layer here. The connection between the dendrite of the post-synaptic neuron (blue) and the axonal terminal of the pre-synaptic neuron (yellow) at the synaptic cleft is enlarged (top-right).
The strength of the synapse (or synaptic weight) is greater as the result of more working ion channels (orange oval-shaped gates) in the post-synaptic neuron \cite{voglis2006role}. This is in accordance with the greater amount of ATP (adenosine triphosphate, the energy carrier molecules) usage in the post-synaptic neuron~\cite{harris2012}. That results in more biophoton production by the post-synaptic mitochondrion which can transfer backward information to the pre-synaptic neuron.
The myelinated axon (top-left, enlarged) can guide the received backward photons along the axon.
}
\label{Fig1}
\end{figure}
We model the backward path of information as a communication channel (see Fig~\ref{Fig1}) in which photons are produced stochastically at fairly low rates, as expected from experimental observations of biophotons in the brain \cite{kobayashi1999vivo,tang2014}.
The stochastically emitted biophotons update a random subset of synaptic weights in each training trial, meaning that only a percentage of the neurons transmit backward information at any given time.
We consider realistic conditions and evaluate the learning efficacy of the mechanism.
We demonstrate that even with a small proportion (a few percent) of neurons sending stochastic biophotons backward to the upstream neuron, networks with one hidden layer and photonic emission can still learn a complex task.
We examine our model for the case that photons carry only one bit of information.
We further incorporate noise (e.g.\ due to ambient light) in our model.
Our results show that the network can still learn the task of MNIST digit recognition considering these realistic imperfections.
\section*{Results}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{Figure2.pdf}
\caption[Training of a 3-layered artificial neural network with 500 neurons in the hidden layer with the stochastic photonic updates]{Training of a 3-layered artificial neural network with 500 neurons in the hidden layer by the stochastic photonic updates. \textbf{a,} The network is trained for an MNIST hand-written recognition task. For each training trial, the network receives sets of handwritten digits as input and their corresponding digit classes as the target output.
\textbf{b,} The network computes the feedforward weights and updates them with stochastic photonic feedback. The parameter \textit{q} is the probability of transmitting one photon per neuron, and as it gets larger (closer to 1), the network sends more backward photons and behaves closer to the conventional backpropagation algorithm. \textbf{c,} As the trial number grows, the error rate (that is the moving average of the past 100 trial errors, see Eq~\eqref{eq:error_train_n}) converges to a small value and the training completes. This convergence happens even for small values of \textit{q} but after greater numbers of trials. Here, the learning rate, $\epsilon=0.01$, has been kept small for the stability of the network.
\textbf{d,} The test error, which measures the distance between the target and the output of the trained model, is averaged over 10 repetitions of the test experiment for each different values of \textit{q} and $\epsilon$ (see Methods for details.)}
\label{Fig2}
\end{figure}
In an artificial neural network, the backpropagation learning algorithm calculates the gradient of an error function with respect to each individual synaptic weight and propagates these gradients backward through the network to the upstream neurons.
The forward flow of information is due to action potentials and action potentials go one way through the neural paths \cite{purves2008neuroscience}.
Our suggested mechanism for backward communication that determines the error signals does not interfere with the forward flow of information, as the electrochemical signaling pathway is not likely manipulated by biophotons on short time scales.
We do not require a separate network of neurons for feedback, addressing one of the main biologically problematic assumptions of backpropagation.
In our experiments, we consider a network with three sets of layers that are categorized into three classes input, hidden, and output layers (Fig~\ref{Fig1}). The hidden layer consists of 500 units (neurons) and the number of neurons in the other two layers depends on the task to be trained. The goal of the training is to reduce the loss function (Eq~\eqref{err}) which is the distance between the target output and the calculated output of the network. The synaptic weights are updated stochastically to mimic the random emission and propagation of biophotons that carry the information backward.
We train the network for the
MNIST digit recognition task \cite{lecun1998mnist,grother1995nist} in an online fashion.
The mathematical details of the model are described in the Methods section.
\subsection*{Neurons are trained with stochastically backpropagated photons.}
Here we provide numerical evidence that our described model is trainable, even with partial backpropagation
of errors (teaching signals),
by testing it on
the classification task of MNIST digits. The MNIST dataset of handwritten digits consists of 60,000 training examples and 10,000 test examples. Each training example in the dataset is a grey-scale image of 28 by 28 pixels showing a single handwritten digit between 0 and 9. The task is to classify a given image of a handwritten digit into one of the 10 classes (see Fig~\ref{Fig2}a). After evaluating the network and calculating the weights of connections, a random proportion $q$ of the neurons releases photons. The photons travel backward to transmit the error signal to their pre-synaptic neurons (see Fig~\ref{Fig1}) and update the pre-synaptic weights (see Equations~\eqref{wt1}-\eqref{wt2} and the discussion of Stochastic photonic updates afterward in Methods). For small values of $q$, the photon emission is sparser, and as $q$ gets closer to 1, more photons travel backward (see Fig~\ref{Fig2}b) and the model performs closer to the original backpropagation algorithm, where all weights get updated (see Methods, Stochastic photonic updates discussion). For stability, we keep the learning rate $\epsilon$ small (the learning rate is a tunable parameter between 0 and 1 that controls the step size of each iteration of weight updates in the training of the neural network) and of the order of $0.001$ in most of the simulations. We show that the training error converges to a small constant value after $6\times 10^4$ trials for reasonable values of $q$. Here, the training error is the average error of the last 100 samples. While the training is in progress, we use a validation dataset
to estimate how accurately the model performs and avoid overfitting. The validation error is computed every 500 steps. Fig~\ref{Fig2}c shows that we can still train a network for small values of $q$ of the order of $10^{-4}$. This shows that with even low emission rates of biophotons, the backpropagation-like channel can still learn the task.
We also compare the output of the trained model with the expected output (test dataset, independent from the training dataset) and calculate the test error to check the performance of the trained model, shown in Fig~\ref{Fig2}d.
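As a minimal illustration of the stochastic photonic update described above, only a random fraction $q$ of post-synaptic neurons transmits its error signal back in a given trial; the function and variable names below are our own sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def photonic_update(W, delta, a_pre, q, eps):
    """Stochastic photonic weight update.

    W     : weight matrix (post-synaptic x pre-synaptic)
    delta : backpropagated error signals, one per post-synaptic neuron
    a_pre : pre-synaptic activities
    q     : probability that a given post-synaptic neuron emits a photon
    eps   : learning rate
    Only rows belonging to photon-emitting neurons are updated.
    """
    emits = rng.random(W.shape[0]) < q  # which neurons send a photon back
    grad = np.outer(delta, a_pre)       # full backpropagation gradient
    W[emits] -= eps * grad[emits]       # partial (stochastic) update
    return W
```

For $q=1$ every neuron emits and the update coincides with ordinary backpropagation; for $q=0$ no weights change.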
Next, we restrict the amount of information that each photon transmits back in the model.
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{Figure3.pdf}
\caption[Training of a 3-layered artificial neural network with stochastic photonic updates and binary signal limit]{Training of a 3-layered artificial neural network by the stochastic photonic updates with carrying one bit of information. \textbf{a,} Stochastic weight updates transmit binary information and each update either increases or decreases the weight by a fixed amount, $\epsilon$. For more details see Eq~\eqref{wt1_sgn} and Eq~\eqref{wt2_sgn} in Methods. \textbf{b,}
Here, $q=0.1$ and $\epsilon=0.1$. With stochastic binary updates, the training of MNIST classification task is still successful. As the number of hidden units increases the error rate converges to smaller values. \textbf{c,} To obtain mean test error, for each values of $\epsilon$ and $q$, 10 different networks with 500 hidden units were evaluated and the test error was averaged. \textbf{d,} Standard deviation of the test error is calculated per mesh point.}
\label{Fig3}
\end{figure}
\subsection*{Neurons can learn even if each stochastic backpropagated photon carries only one bit of information.}\label{sec:sign}
As it may not be realistic to assume that a single photon can carry unlimited detailed information, we investigated if photons with more limited information could still lead to learning. In order to limit the amount of information carried by each photon, we discretize the gradient information into binary increases ($+\epsilon$) and decreases ($-\epsilon$) as shown in Fig~\ref{Fig3}a with blue and red signals, see Methods Eq~\eqref{wt1_sgn}-\ref{wt2_sgn} for the implementation of this specific weight update.
Depending on their type (e.g.\ two different polarizations or frequencies), they might increase or decrease the weight by a fixed amount of $\epsilon$.
Fig~\ref{Fig3} summarizes the result of implementing this limitation of information transmission and that the network can still learn. In Fig~\ref{Fig3}b, we have increased the size of the network from 500 units in the hidden layer to 1000 and 5000 units. As we increase the size of the network, the error rate converges to smaller amounts. In Fig~\ref{Fig3}c, the test error demonstrated with respect to each mesh point has been averaged over 10 trials of training the different networks. The standard deviation of the trials is shown in Fig~\ref{Fig3}d.
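The one-bit variant can be sketched under the assumption that each photon conveys only whether a weight should go up or down by the fixed step $\epsilon$; the names below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_photonic_update(W, delta, a_pre, q, eps):
    """One-bit photonic update: each emitted photon only signals whether a
    weight should increase or decrease, so the change is a fixed step +/- eps."""
    emits = rng.random(W.shape[0]) < q            # photon-emitting neurons
    step = eps * np.sign(np.outer(delta, a_pre))  # only the sign survives
    W[emits] -= step[emits]
    return W
```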
\subsection*{The network can be trained even in presence of uncorrelated random photonic updates.}
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{Figure4.pdf}
\caption[Training of a 3-layered artificial neural network with the noisy stochastic photonic updates and binary signal limit]{Training of a 3-layered artificial neural network by the noisy stochastic photonic updates with binary signal limit. \textbf{a,} After evaluation of the network, binary photonic signals are transmitted back to update weights, however, in the presence of uncorrelated noisy photons. \textbf{b,} Here, the deterministic photon emission rate, $q=0.1$, and the learning rate, $\epsilon=0.1$, are given for a network with 500 units in the hidden layer. The error rate depends on the noisy photon rate, $n_p$ which defines the probability of emitting noisy photon update. \textbf{c,} The average error rate after 10 trials has increased due to the noise in the system. Here, $n_p=0.01$. There are still areas in the graph where learning is happening. This figure is generated after training 10 different networks for each mesh point and taking the average on the test error. \textbf{d}) The standard deviation of the test error after running 10 different networks.}
\label{Fig4}
\end{figure}
As the biophoton emission rates are low, the impact of ambient light as noise should be considered \cite{kuvcera2013cell}. Although waveguiding of biophotons by the axons would mitigate the effect of such noise \cite{kumar}, it is still important to consider. Other possible sources of noise could be some uncorrelated photon emissions and dark counts by detectors.
We investigated biophoton-assisted learning in the presence of noise. We model the noise as uncorrelated random photons that impair the photonic updates.
As shown in Fig~\ref{Fig4}a, noise photons (dashed ones) are also backpropagated. They disturb the process of weight adjustment and increase the error rate (see Fig~\ref{Fig4}b). The model was simulated for a 3-layered network with 500 units in the hidden layer and $q=0.1$ for different values of $n_p$, where $n_p$ is the proportion of neurons that emit noise (meaningless) photons. As long as $n_p$ is smaller than or equal to $q$ (i.e., the signal-to-noise ratio is greater than or equal to one), the training error converges according to Fig~\ref{Fig4}b.
The comparison between Fig~\ref{Fig3}c and Fig~\ref{Fig4}c shows that for some areas of the parameter space learning still works even in the presence of noise with very low standard deviation of test error (see Fig~\ref{Fig4}d).
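The noise model can be sketched by adding, with probability $n_p$ per neuron, an update whose up/down direction is random and uncorrelated with the error signal. This is a minimal illustration; the names and the exact form of the noise term are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_binary_update(W, delta, a_pre, q, eps, n_p):
    """Binary photonic update in the presence of uncorrelated noise photons.

    With probability q a neuron emits a meaningful (sign-carrying) photon;
    independently, with probability n_p it emits a noise photon whose
    up/down content is random and unrelated to the error signal."""
    n_post, n_pre = W.shape
    emits = rng.random(n_post) < q                # signal photons
    step = eps * np.sign(np.outer(delta, a_pre))
    W[emits] -= step[emits]
    noisy = rng.random(n_post) < n_p              # noise photons
    W[noisy] -= eps * rng.choice([-1.0, 1.0], size=(noisy.sum(), n_pre))
    return W
```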
\section*{Discussion}
We have shown that backpropagation-like learning is possible with stochastic photonic feedback, inspired by the idea that axons can serve as photonic waveguides, and taking into account the stochastic nature of biophoton emission in the brain. Considering realistic imperfections in the biophoton emission, we trained the network when each photon carried only one bit of information and showed that the network learned. We also examined the learning in presence of background noise and our results demonstrated its success. Here we discuss the biological inspiration for our suggested mechanism, address a few related questions, and propose experiments to test our hypothesis.
Synaptic weights are considered as the amount of influence that firing a pre-synaptic neuron has on the post-synaptic one \cite{hebb2005organization,markram1997regulation,FROHLICH201647}. This is directly related to the number of ion channels affected in the post-synaptic neuron \cite{debanne2003brain,meriney2019synaptic}.
The greater the synaptic weight is, the more ion channels are working in the post-synaptic neuron, which requires more metabolic activity \cite{harris2012,voglis2006role}.
That, in turn, escalates ATP usage resulting in more active post-synaptic mitochondria \cite{Stoler2021}.
As mitochondria in the post-synaptic neuron work harder and consume more energy, more reactive oxygen species (ROS) are produced \cite{pospivsil2014role,turrens2003mitochondrial,murphy2009mitochondria,lambert2009reactive}.
The emission of biophotons has been linked to the production of reactive species such as ROS and carbonyl in mitochondria \cite{pospivsil2014role,kobayashi1999vivo,miyamoto2014singlet,pospivsil2019mechanism}.
Thus, a higher production rate of ROS leads to a higher production rate of biophotons in the post-synaptic neuron.
That directly relates the production of biophotons in the post-synaptic neuron to the synaptic weight changes. Proportionality of the photon emission to the weights is part of what is required for backpropagation (see Eq~\eqref{eq:wt_err2} in Methods).
In addition, neurons may have evolved to encode the error signals in the photonic flux, e.g.\ by modulating biophoton emission as a function of incoming biophotons received.
An important question is how photonic information could be relayed across multiple network layers.
Opsins are well-known for their ability of light detection in retina \cite{buhr2015neuropsin} and skin \cite{buhr2019neuropsin} of mammals. But they also exist in the deep brain tissues of mammals \cite{yamashita2014evolution} and are highly conserved over evolution. The existence of such light absorbent proteins in deep brain tissues suggests that they might serve as biophoton detectors.
Moreover, a biological effect of external light mediated by opsins deep in the brain has recently been demonstrated, namely the opsin-mediated suppression of thermogenesis (heat production) in response to light \cite{zhang2020violet}.
On the other hand, mitochondria always balance ATP production versus thermogenesis \cite{li2020mfsd7c}.
Such suppression of thermogenesis via opsin-mediated photon detection could lead to more production of ATP by mitochondria which results in more photon production. Thus, it could constitute a relay across the neuron in the photonic backpropagation channel.
In our modeling, we have considered the fact that the amount of information carried by each photon may be limited, for example to one bit. This information could be encoded in the polarization of the photons, or in their wavelengths \cite{senior2009optical, hui2019introduction, nielsen_chuang_2010}. The amount of information that can be successfully transmitted also depends on the detection mechanism, e.g.\ there could be different opsins responding to different wavelengths.
Although low rates of biophoton emission might be a concern\cite{kuvcera2013cell}, guiding them by axons could be part of the solution because it will limit the loss of signal photons and reduce the impact of background light \cite{kumar}. Biophoton emission rate from a slice of a mouse brain was measured at one photon per neuron per minute \cite{tang2014} at rough estimation. This reported rate is one to two orders of magnitude lower than the electro-chemical signaling rate in the brain \cite{buzsaki2014log}.
If biophotons are guided through the axons, it should be noted that the measured rates of brain emission only reflect the scattered photons and there could be more light propagating in a guided way than the experimental observations from the outside.
To verify the role of biophotons in learning in the brain, we propose some in-vivo experimental approaches. One type of tests is to genetically modify possible photon detectors in the brain, such as opsins, using well-studied optogenetics methods \cite{deisseroth2015optogenetics,adamantidis2014optogenetics,beyer2015optogenetic} in order to impair biophoton reception by the network and observe the effects on the learning process. Another type of test could be using the RNA interference process \cite{hannon2002rna,summerton2007morpholino,gao2021active} in non-genetically engineered animals to target the silencing of specific sequences in genes that involve the generation or reception of biophotons, which could affect learning.
Also, one could introduce background light into the neural network in vivo or add noise into the axon to see if that affects the learning. We have modeled noise by adding uncorrelated photons into the network. One could implement the idea of extra uncorrelated photons by introducing luciferase and luciferin (whose reaction produces bioluminescence without requiring an external light source) into the brain of the living animal by using optogenetic tools \cite{land2014optogenetic,park2020novel}.
It has been suggested that biophotons in the brain could transmit not only classical but also quantum information \cite{kumar,simon2019can,smith2021radical}, however, this still requires experimental confirmation. In the context of the present work, which is focused on a potential role for photons in learning, the possibility of transmitting quantum information by biophotons could be connected to the field of quantum machine learning \cite{paparo2014quantum,crawford2016reinforcement,xia2021quantum}, which studies potential advantages of quantum approaches to learning.
If the brain's biophotons are involved in learning by transmitting information backward through the axons, then it would reveal a new feature of the brain and can answer some fundamental questions about the learning process. It is also worth noting that
our stochastic backpropagation-like algorithm might be of interest beyond the biophotonic context and could have applications in other fields such as neuromorphic computing \cite{esser2015backpropagation,torrejon2017neuromorphic,markovic2020physics} and photonic reservoir computing\cite{paquot2012optoelectronic,duport2012all,tanaka2019recent,argyris2022photonic,davies2018loihi}.
\section*{Methods}
\subsection*{Network Equations}
We consider a basic 3-layer artificial neural network, with $N_i$ input nodes, $N_h$ hidden layers, and $N_o$ output nodes. We label the output or activity of each node with the variable $a^\mu_k$, where $\mu = i,h,o$ stands for the input, hidden, and output layers, respectively.
Neurons of the hidden and output layers perform some non-linearity, $\sigma(x)$, on their inputs.
We introduce non-linearity into the network with the help of the logistic function, which is a differentiable activation function $\sigma(x)=\frac{1}{1+e^{-x}}$
and has a convenient derivative of ${\frac {d\sigma(x)}{dx}}=\sigma(x)(1-\sigma(x))$. In an artificial neural network, the non-linear activation function produces a new representation of the original data that ultimately allows the non-linear decision boundaries for the network. The network equations then are
\begin{eqnarray}
a^i_j&=& x_j, \quad j=1,2,\ldots N_i, \label{eq1}\\
a^h_k &=& \sigma\left(\sum_{j=1}^{N_i} w^{h,i}_{k,j}a^i_j +b^h_k\right), \quad k = 1,2,\ldots N_h, \label{eq2}\\
a^o_l &=& \sigma\left(\sum_{k=1}^{N_h} w^{o,h}_{l,k} a^h_k\right),\quad l=1,2,\ldots N_o, \label{eq3}
\end{eqnarray}
where $b^h_k$ is a commonly included ``bias term''.
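For concreteness, Eqs.~\eqref{eq1}-\eqref{eq3} correspond to the following forward pass (a minimal sketch; function and variable names are ours):

```python
import numpy as np

def sigma(x):
    """Logistic activation, sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_hi, b_h, W_oh):
    """Forward pass of the 3-layer network, Eqs. (1)-(3).

    x    : input activities a^i_j             (shape N_i)
    W_hi : hidden-from-input weights w^{h,i}  (shape N_h x N_i)
    b_h  : hidden bias terms b^h_k            (shape N_h)
    W_oh : output-from-hidden weights w^{o,h} (shape N_o x N_h)
    """
    a_h = sigma(W_hi @ x + b_h)  # hidden activities a^h_k
    a_o = sigma(W_oh @ a_h)      # output activities a^o_l
    return a_h, a_o
```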
\subsection*{Standard Backpropagation}
Suppose we consider a finite sequence of inputs $\{x_{[1]},\ldots x_{[m]} \}$ with a matched sequence of outputs $\{y_{[1]},\ldots y_{[m]} \}$ as the training data set and we want to train the network such that the network output $a^0_{[n]}$ approximates the target output $y_{[n]}$, as $n$ grows. Note that subscript $[n]$ denotes the corresponding vector values at iteration $n=1,2,\ldots$. In the online learning approach, the backpropagation algorithm iteratively updates the weights $w^{h,i}, w^{o,h}$ to minimize the loss (error function) at each time. For each trial, the error function $L_n$ will be
\begin{eqnarray}
L_n &=& \frac{1}{2}\sum_{l=1}^{N_o} ((\delta_l^o)_{[n]})^2 \nonumber\\
&=& \frac{1}{2}\sum_{l=1}^{N_o}\left((a^0_l)_{[n]} - (y_l)_{[n]}\right)^2. \label{err}
\end{eqnarray}
\noindent After each forward pass of information, the weights should be updated such that the network output gets closer to the target output. Thus, for the next training trial ($[n+1]$), the weights $w^{h,i}$ and $w^{o,h}$ are updated as
\begin{align}\label{wt0}
\left(w^{o,h}_{l,k}\right)_{[n+1]} &= \left(w^{o,h}_{l,k}\right)_{[n]} - \epsilon \frac{\partial L_n}{\partial w^{o,h}_{l,k}}\bigg|_{w^{o,h}_{l,k}= \left(w^{o,h}_{l,k}\right)_{[n]}},\\
\left(w^{h,i}_{k,j} \right)_{[n+1]} &= \left(w^{h,i}_{k,j} \right)_{[n]} - \epsilon \frac{\partial L_n}{\partial w^{h,i}_{k,j} }\bigg|_{w^{h,i}_{k,j} = \left(w^{h,i}_{k,j} \right)_{[n]}},
\end{align}
where $\epsilon$ is the learning rate. After evaluating the requisite derivatives, we have the following:
\begin{eqnarray}
\left(w^{o,h}_{l,k}\right)_{[n+1]} &=& \left(w^{o,h}_{l,k}\right)_{[n]} - \epsilon \cdot (\delta_l^o)_{[n]} \cdot (a_k^h)_{[n]}, \label{wt1}\\
\left(w^{h,i}_{k,j} \right)_{[n+1]} &=& \left(w^{h,i}_{k,j} \right)_{[n]} - \epsilon \cdot (\delta_k^h)_{[n]} \cdot (x_j)_{[n]}, \label{wt2}
\end{eqnarray}
where $(\delta_l^o)_{[n]}$ denotes the error signal of the output layer and $(\delta_k^h)_{[n]}$ denotes the error signal of the hidden layer, given by:
\begin{align}\label{eq:wt_err1}
(\delta_l^o)_{[n]} &= (a^o_l)_{[n]} - (y_l)_{[n]}, \\
\label{eq:wt_err2}
(\delta_k^h)_{[n]} &= \left( \sum_{l=1}^{N_o} (\delta_l^o)_{[n]} \cdot w^{o,h}_{l,k} \right) \cdot \sigma'\left( (a_k^h)_{[n]} \right).
\end{align}
In order to update the weights, the error signals $(\delta_l^o)_{[n]}$ are transmitted back to the hidden layer and $(\delta_k^h)_{[n]}$ are transmitted back to the input layer.
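In code, one online update following the weight-update and error-signal equations above reads as follows (a minimal sketch; the variable names are ours, and $\sigma'$ is evaluated as $a(1-a)$ at the stored activation):

```python
import numpy as np

def backprop_step(x, y, W_hi, b_h, W_oh, eps):
    """One online backpropagation update of the two weight matrices.

    Follows the update rules stated in the text; the hidden bias b_h is
    left fixed here for brevity.
    """
    # forward pass with logistic activations
    a_h = 1.0 / (1.0 + np.exp(-(W_hi @ x + b_h)))
    a_o = 1.0 / (1.0 + np.exp(-(W_oh @ a_h)))
    delta_o = a_o - y                                  # output error signal
    # hidden error signal; sigma'(a) = a * (1 - a) for the logistic function
    delta_h = (W_oh.T @ delta_o) * a_h * (1.0 - a_h)
    W_oh_new = W_oh - eps * np.outer(delta_o, a_h)     # output-layer update
    W_hi_new = W_hi - eps * np.outer(delta_h, x)       # hidden-layer update
    return W_hi_new, W_oh_new
```

Iterating this step over the training sequence implements the online learning loop described above.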
\subsection*{Training error rate and Test error}
To evaluate the performance of the training trials we calculate
the error of each trial, which is a function of the error signal of the output layer, given by
\begin{equation}\label{eq:error_train_n}
e_{[n]} = \mathds{1}\left(\sum_{l=1}^{N_o} \left((\delta_l^o)_{[n]}\right)^2\right),
\end{equation}
where
\begin{equation*}
\mathds{1}(\alpha) = \left\{\begin{array}{lc}
0 & \text{if~} \alpha = 0 \\
1 & \text{o.w.}
\end{array} \right.
\end{equation*}
The training error simply indicates whether the network output matches the target data.
The training error rate is calculated as the moving average of the training errors over the past 100 trials.
If the training is successful the error rate converges to a negligible value.
To make sure the network has truly learned the task,
we evaluate the performance of the network by using a new set of data called the test data set.
The test error for each test experiment is the fraction of test trials in which the network output does not match the target output of the test data set.
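The running training-error rate described above, a trailing average over the past 100 trials, can be sketched as follows (our own helper, not from the text):

```python
import numpy as np

def error_rate(errors, window=100):
    """Moving average of the per-trial 0/1 training errors.

    Returns an array the same length as `errors`; early entries average
    over however many trials are available so far.
    """
    errors = np.asarray(errors, dtype=float)
    rates = np.empty_like(errors)
    for n in range(len(errors)):
        lo = max(0, n - window + 1)
        rates[n] = errors[lo:n + 1].mean()
    return rates
```

If training succeeds, the tail of this array converges to a negligible value.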
\subsection*{Proposed photonic feedback}
We propose
photonic backward propagation of error signals
that is modeled under three main realistic limitations.
\begin{enumerate}
\item \textbf{Stochastic photonic updates.}
To model the stochasticity of biophoton emissions in the brain,
in our proposed system, instead of updating all the weights, we adjust only a random subset of $q(N_h N_i)$ of the weights $w^{h,i}$ and $q(N_h N_o)$ of the weights $w^{o,h}$, where $q$ is the proportion of neurons that release photons. As $q$ approaches 1, more photons are transmitted backward; for $q=1$ we recover the original backpropagation algorithm.
\item \textbf{Stochastic photonic updates carrying only one bit of information.}
In our model, when photons only transmit one bit of backward information, instead of Eq~\eqref{wt1} and Eq~\eqref{wt2}, the weight updates for the randomly selected $q(N_h N_i)$ or $q(N_h N_o)$ weights are given by:
\begin{eqnarray}
\left(w^{o,h}_{l,k}\right)_{[n+1]} &=& \left(w^{o,h}_{l,k}\right)_{[n]} - \epsilon \cdot \mathsf{Sgn}\left( (\delta_l^o)_{[n]} \cdot (a_k^h)_{[n]} \right), \label{wt1_sgn}\\
\left(w^{h,i}_{k,j} \right)_{[n+1]} &=& \left(w^{h,i}_{k,j} \right)_{[n]} - \epsilon \cdot \mathsf{Sgn}\left( \left( \sum_{l=1}^{N_o} (\delta_l^o)_{[n]} \cdot w^{o,h}_{l,k} \right) \cdot \sigma'\left( (a_k^h)_{[n]} \right) \cdot (x_j)_{[n]} \right), \label{wt2_sgn}
\end{eqnarray}
where $\mathsf{Sgn}$ is the sign function defined as:
\begin{equation*}
\mathsf{Sgn}(x) = \left\{ \begin{array}{cr}
1 & \text{if~} x>0 \\
0 & \text{if~} x=0 \\
-1 & \text{if~} x<0
\end{array} \right. .
\end{equation*}
\item \textbf{Stochastic photonic updates carrying only one bit of information in the presence of noise.} To model the noise in feedback updates, the weights are first updated according to Eq~\eqref{wt1_sgn} and Eq~\eqref{wt2_sgn}. Then we select a random subset of $n_p(N_h N_i)$ of the $w^{h,i}$ weights and $n_p(N_h N_o)$ of the $w^{o,h}$ weights, where $n_p$ is the proportion of neurons that release uncorrelated noise photons. For these weights the new updates are given by
\begin{eqnarray}
\left(w^{o,h}_{l,k}\right)_{[n+1]} &=& \left(w^{o,h}_{l,k}\right)_{[n]} - \epsilon \cdot (\eta^o_{l,k})_{[n]}, \label{wt1_noisy}\\
\left(w^{h,i}_{k,j} \right)_{[n+1]} &=& \left(w^{h,i}_{k,j} \right)_{[n]} - \epsilon \cdot (\eta^h_{k,j})_{[n]}, \label{wt2_noisy}
\end{eqnarray}
where $\eta^o_{l,k}$ and $\eta^h_{k,j}$ are independent random variables taking values over $\{-1, +1\}$.
\end{enumerate}
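The three modifications above can be sketched for a single weight matrix as follows (a schematic implementation with our own names; `grad` holds the raw gradient terms that appear inside $\mathsf{Sgn}$):

```python
import numpy as np

def photonic_update(W, grad, eps, q, n_p, rng):
    """One stochastic one-bit ('photonic') update of a weight matrix.

    q   : proportion of weights that receive a backward feedback photon
    n_p : proportion of weights whose update is replaced by a noise photon
    """
    old = W.ravel()
    new = old.copy()
    n = W.size
    # 1. stochastic updates: only a random fraction q of the weights change
    idx = rng.choice(n, size=int(q * n), replace=False)
    # 2. one-bit feedback: only the sign of the gradient term is transmitted
    new[idx] = old[idx] - eps * np.sign(grad.ravel()[idx])
    # 3. noise photons: a random fraction n_p of the updates is overwritten
    #    by an uncorrelated +/-1 step
    nidx = rng.choice(n, size=int(n_p * n), replace=False)
    new[nidx] = old[nidx] - eps * rng.choice([-1.0, 1.0], size=nidx.size)
    return new.reshape(W.shape)
```

For $q=1$ and $n_p=0$ this reduces to the full sign-only update; the noisy case first applies the sign update and then overwrites the selected entries, matching the description above.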
\section{Introduction}
A secure multi-party quantum computation (MPQC) protocol~\cite{CGS02}
allows $n$ players to compute an agreed quantum circuit
where each player has access only to his own arbitrary quantum input.
A MPQC protocol has two phases: In the \emph{sharing} phase, players dubbed
dealers provide the other players with their initial state. In the
\emph{reconstruction} phase, the honest players help a designated player
reconstruct the final state of the protocol. During the latter phase, only local
operations and classical computation are available.
In this paper we view \emph{controlled teleportation} as a special case of
MPQC. A dealer named Carol hands Alice and Bob an (entangled) initial state $\ket{\psi_{ABC}}$.
A second dealer, named David, provides Alice with an unknown $m$-qubit state
$\rho$.
The task of Alice and Bob is to reconstruct (teleport) $\rho$
into Bob's hands, when Carol allows it.
$\ket{\psi_{ABC}}$ is such that Carol controls whether the teleportation
can take place.
Carol and David are honest dealers.
We call a controlled teleportation protocol \emph{secure} if it is impossible
for malicious Alice and Bob to teleport any part of $\rho$ before the
reconstruction phase. Namely, if Alice and Bob can build a state $\rho'$ at
Bob's hands which has non-trivial fidelity with $\rho$ before the
reconstruction phase, the protocol is insecure.
The straightforward solution to this problem, suggested in~\cite{YangChuHan04},
is to use a procedure described
in~\cite{KB98,HBB99}.
First, Carol prepares the following $3m$-qubit state and gives Alice and Bob
their respective qubits
\begin{eqnarray}
&&\otimes_{i=1}^m\ket{GHZ}_{ABC(i)}=\nonumber\\
&&\otimes_{i=1}^m\left(\ket{\phi^+}_{AB(i)}\ket+_{C(i)}+
\ket{\phi^-}_{AB(i)}\ket-_{C(i)}\right).
\end{eqnarray}
(Here, and throughout the paper, we drop normalization factors for
readability.) $\{\ket{\psi^\pm},\ket{\phi^\pm}\}$ are the four
Bell-BMR\cite{BMR92} states,
$\ket{\pm}=\frac{\ket0\pm\ket1}{\sqrt2}$, and
$\ket{GHZ}_{ABC(i)}=\frac{\ket{000}+\ket{111}}{\sqrt2}$ is the $i$th
$GHZ$ state shared among Alice, Bob and Carol.
Later, if Carol wishes to allow the teleportation, she measures her $m$ qubits
in
the Hadamard basis, and publishes her results $c_i\in\{+,-\}$.
Now, the state shared by Alice and Bob is $\otimes_{i=1}^m\ket{\phi^{c_i}}$
which can be freely used by them for teleportation.
On the other hand, if Carol abstains from participation, the state shared by
Alice and Bob can be calculated by tracing over Carol's qubits
\begin{eqnarray}
&&\mathrm{tr}\,_C\otimes_{i=1}^m\ket{GHZ}\bra{GHZ}_{ABC(i)}=\nonumber\\
&&\otimes_{i=1}^m\left(\ket{00}\bra{00}_{AB(i)}+\ket{11}\bra{11}_{AB(i)}\right).
\end{eqnarray}
We note here that without the participation of Carol,
the state shared by Alice and Bob becomes a classical correlation,
which cannot facilitate quantum teleportation.
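This partial trace is easy to check numerically. A minimal sketch for a single GHZ triple, with the qubit ordering (Alice, Bob, Carol) as our own convention:

```python
import numpy as np

def partial_trace_last(rho, d_keep, d_last):
    """Trace out the last subsystem of a (d_keep * d_last)-dim density matrix."""
    rho = rho.reshape(d_keep, d_last, d_keep, d_last)
    return np.einsum('ikjk->ij', rho)

# |GHZ> = (|000> + |111>)/sqrt(2), qubits ordered (Alice, Bob, Carol)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho_abc = np.outer(ghz, ghz)
rho_ab = partial_trace_last(rho_abc, d_keep=4, d_last=2)
# rho_ab = (|00><00| + |11><11|)/2: a diagonal, hence separable, state
```

The resulting $\rho_{AB}$ is diagonal in the computational basis, i.e.\ a purely classical correlation.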
Ref.~\cite{YangChuHan04} provides a second protocol, in which Carol holds only
one entangled qubit, aiming at the
same task. This entanglement-efficient protocol can be stated as follows:
Carol creates the following $2(m+1)$-qubit state and
gives Alice and Bob their respective qubits
\begin{eqnarray}
\otimes_{i=1}^m\ket{\phi^+}_{AB(i)}\otimes\ket{\phi^+}_{AC}+\nonumber\\
\otimes_{i=1}^m\ket{\phi^-}_{AB(i)}\otimes\ket{\psi^+}_{AC}.
\end{eqnarray}
Later, if Carol wishes to allow the teleportation, she measures her single qubit
in the computational basis and publishes her result.
Alice measures her own rightmost qubit in the computational basis. If it is
equal to Carol's outcome, then Alice and Bob share
$\otimes_{i=1}^m\ket{\phi^+}_{AB(i)}$. Otherwise, they share
$\otimes_{i=1}^m\ket{\phi^-}_{AB(i)}$. Either way, they can safely teleport $m$
qubits.
On the other hand, if Carol abstains from participation, and
if Alice and Bob continue the protocol exactly as planned, they can no longer teleport
Alice's $m$-qubit message reliably, since they will create a mixed
state~\cite[Eq. (11)]{YangChuHan04} instead.
\section{Alice and Bob can cheat Carol}
In fact, in the second protocol,
even if Carol does not participate,
malicious Alice and Bob can let Alice teleport any $(m-1)$-qubit state to Bob.
The abstention of Carol mixes the shared state to create
\begin{eqnarray}
\otimes_{i=1}^m\ket{\phi^+}\bra{\phi^+}_{AB(i)}\otimes\mathds1_{A}+\nonumber\\
\otimes_{i=1}^m\ket{\phi^-}\bra{\phi^-}_{AB(i)}\otimes\mathds1_{A},
\label{prot2noCarol}
\end{eqnarray}
where $\mathds1$ is the totally mixed state in one qubit.
Yet Alice and Bob
can easily distill it~\cite{BBPSSW96}.
Each of them has to relinquish his or her $m$th qubit
and measure it in the Hadamard basis.
If their results coincide, they share $\otimes_{i=1}^{m-1}\ket{\phi^+}_{AB(i)}$;
otherwise they share $\otimes_{i=1}^{m-1}\ket{\phi^-}_{AB(i)}$. Either way, they can
safely teleport any $(m-1)$-qubit state and thus reconstruct any $m$-qubit
state with high fidelity.
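The branch-revealing measurement can be checked directly: measured in the $X$ (Hadamard) basis, a $\ket{\phi^+}$ pair always yields equal outcomes while a $\ket{\phi^-}$ pair yields opposite ones, whereas in the computational basis both give equal outcomes and reveal nothing. A short numerical check (helper names are ours):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)    # |phi+> = (|00> + |11>)/sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)   # |phi-> = (|00> - |11>)/sqrt(2)

def equal_outcome_prob(state, basis_change=np.eye(2)):
    """Probability that both qubits give the same measurement outcome."""
    amps = np.kron(basis_change, basis_change) @ state
    p = np.abs(amps) ** 2          # outcome probabilities for 00, 01, 10, 11
    return p[0] + p[3]

# X basis: phi+ perfectly correlated, phi- perfectly anticorrelated.
# Computational basis: both perfectly correlated, so it cannot tell them apart.
```

Hence sacrificing a single pair suffices for Alice and Bob to learn the branch of the mixture in Eq.~\eqref{prot2noCarol}.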
It is important to note that~\cite{YangChuHan04}
never claimed that Bob can learn \emph{nothing} about Alice's state,
and they stated openly that they did not ``attempt a comprehensive study of the
security against all possible forms of eavesdropping and/or cheating''.
However, it is equally important to note that their efficient protocol for
multiqubit quantum information teleportation via the control of an agent, is
insecure.
The same malady affects their protocol for multiple controllers (see
Section~\ref{multicontrol}).
\section{Carol needs $m$ entangled qubits}
When Carol held a single qubit entangled to Alice
and Bob, she could control only one of their qubits, and not the rest.
We believe that this is not an accident. We \emph{conjecture} that
Carol must have at least $m$ entangled qubits with Alice and Bob if she wants
to completely control their ability to teleport an $m$-qubit state.
We prove a special case of this conjecture.
We define a limited form of secure controlled
teleportation. In the limited form,
we assume three additional limitations.
(a) The initial state shared by Alice, Bob and Carol is pure.\footnote{In
general, this state could have been mixed.}
(b) If Carol abstains, the remaining state
$\rho_{AB}=\mathrm{tr}\,_C\ket{\psi_{ABC}}\bra{\psi_{ABC}}$ is separable.\footnote{In
general, it is probably
enough to assume that the state without Carol is not distillable, namely
either separable or bound-entangled.}
(c) In the reconstruction phase, Carol performs her measurement on the shared
state without obtaining any prior information from Alice and Bob; Alice and Bob
do not help Carol to assist them.\footnote{In general, Carol's measurements can
depend on the outcome of Alice and Bob's measurements; The reconstruction phase
can be more complex.}
These limitations are not true in general, since the initially-shared state may
be mixed, since a bound-entangled state is probably equally
unhelpful for Alice and Bob if they want to perform teleportation,
and since the reconstruction phase can be more complex.
Note that these limitations leave enough room for interesting protocols.
Specifically, the protocols of~\cite{YangChuHan04} satisfy limitations (a)
and (c), and the ones not satisfying (b) can be cheated because of that.
Limitation (c)
means that the highest value of entanglement that Alice
and Bob can create between them with the help of Carol is $EoA^1(\rho_{AB})$,
the entanglement of assistance, which in turn is limited by $EoA^\infty(\rho_{AB})$.
Recently, Smolin, Verstraete, and
Winter~\cite{SVW05} showed that
\begin{equation}
EoA^\infty(\rho_{AB})\leq \min(S(A),S(B)).
\end{equation}
Let us now assume that a secure limited controlled teleportation protocol exists,
i.e.\ there exists $\ket{\psi_{ABC}}$ so that
$\rho_{AB}=\mathrm{tr}\,_C{\ket{\psi_{ABC}}\bra{\psi_{ABC}}}$ is separable, while
$EoA^1(\rho_{AB})\geq m$.
Since $\rho_{AB}$ is separable,
$S(\rho_{AB})\geq \max(S(A),S(B))$.
We conclude that
\begin{eqnarray}
S(\rho_{AB})&\stackrel{\hbox{(b)}}{\geq} &\max(S(A),S(B))\nonumber\\
&\geq& \min(S(A),S(B)) \stackrel{\hbox{\cite{SVW05}}}\geq
EoA^\infty(\rho_{AB})\\
&\geq&EoA^1(\rho_{AB})\stackrel{\hbox{(c)}}\geq m.\nonumber
\end{eqnarray}
But since $\ket{\psi_{ABC}}$ is pure, $S(\rho_{AB})$ is exactly the initial
entanglement of Carol with Alice and Bob.
Thus, a limited controlled teleportation protocol requires Carol to hold no less
than $m$ entangled bits---just as in the straightforward protocol.
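As a sanity check on this chain of inequalities, one can verify numerically that in the straightforward protocol Carol's entanglement with Alice and Bob is exactly $m$ ebits: each GHZ triple contributes $S(\rho_{AB}) = 1$ bit. A sketch for one triple (helper names are ours):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# one GHZ triple shared by Alice, Bob and Carol (Carol is the last qubit)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz).reshape(4, 2, 4, 2)
rho_ab = np.einsum('ikjk->ij', rho)   # trace out Carol
# S(rho_AB) = 1 bit, so m triples give Carol exactly m ebits of entanglement
```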
\section{Multiple controllers}
\label{multicontrol}
In an extended problem presented in~\cite{YangChuHan04}, Carol is replaced by
$n$ controllers: If all the controllers participate,
Alice can teleport an $m$-qubit
message to Bob. But even if a single controller abstains, the teleportation has
to be impossible.
In the straightforward protocol achieving this,
the controllers prepare the following $(n+2)m$-qubit state and give
each participant his or her respective qubits
\begin{eqnarray}
&&\otimes_{i=1}^m\ket{GHZ}_{ABC^n(i)}=\nonumber\\
&&\otimes_{i=1}^m\left(\ket{\phi^+}_{AB(i)}
H^{\otimes n}\sum_{\mathrm{even}\,|x|}\ket x_{C^n(i)}+\right. \nonumber\\
&&~~~~~~~~\left.\ket{\phi^-}_{AB(i)}H^{\otimes n}\sum_{\mathrm{odd}\,|x|}\ket x_{C^n(i)}\right)
\end{eqnarray}
where $H^{\otimes n}$ is the Hadamard transform on $n$ qubits, and
the summations are over the
$n$-bit strings $x$ whose Hamming weight $|x|$ is even (odd).
Later, when all the controllers wish to allow the teleportation, each of them
applies the Hadamard transform to her first qubit,
measures it in the computational basis,
and publishes her result.
If the number of ``1''s published by all
controllers is even (odd), Alice and Bob share $\ket{\phi^+}$ ($\ket{\phi^-}$).
Either way, they can safely teleport one qubit.
This is repeated on the $m-1$ sets of remaining qubits.
On the other hand, if even a single controller abstains, the complete state
becomes a classical correlation, useless for teleportation.
In this protocol each controller initially holds $m$ entangled bits with the
rest of the system. Much like as in the case of a single controller, the
protocol cannot be improved by another protocol of the limited form.
If all $n$ controllers participate,
they can be thought of as one, and $m\leq EoA(\rho_{AB})\leq \min(S(A),S(B))$.
If one of the controllers $(C')$ abstains, we require that
$\rho_{ABC^{n-1}}$ would become separable. Again, this means that each
controller's entanglement with the rest of the system
$S(C')=S(\rho_{ABC^{n-1}})$ has to be more than $\max(S(A),S(B))$
and certainly more than $\min(S(A),S(B))\geq m$.
Any limited controlled teleportation protocol that tries to
be more entanglement-efficient than that, is insecure.
For example, in the shared state suggested in~\cite[Eq.~(21)]{YangChuHan04}
\begin{eqnarray}
\otimes_{i=1}^m\ket{\phi^+}_{AB(i)}\otimes\ket{GHZ_+}_{AC^n}+\nonumber\\
\otimes_{i=1}^m\ket{\phi^-}_{AB(i)}\otimes\ket{GHZ_-}_{AC^n}
\end{eqnarray}
(where $\ket{GHZ_\pm}=\ket{0...0}\pm\ket{1...1}$ is an $(n+1)$-qubit state shared
by Alice and the $n$ controllers) there is only one bit of entanglement
between the group of all controllers and the Alice and Bob pair.
Therefore, Alice and Bob can again ignore the controllers and remain
with a state useful for teleportation.
\section{Acknowledgments}
We thank Amir Kalev and Gili Bisker for discussing this paper with us
and for their valuable comments. We acknowledge the support of
the Israeli MOD Research and Technology Unit.
\section{Introduction}
Weyl semimetals (WSMs) are materials whose low-energy excitations are Weyl fermions \cite{RevModPhys.90.015001,Shen2012,weylcoming}. While these particles have their roots in high-energy physics as solutions to the massless three-dimensional Dirac equation in a chiral basis, WSMs present an elegant way of accessing their properties in the condensed matter regime.
A growing interest in these materials culminated with their physical realization in TaAs \cite{taasweylexperiment} and TaNb\cite{tanbweylexperiment}, with additional predictions of type-II WSMs in $\text{WTe}_2$ \cite{typeiiwsmwte2} and $\text{MoTe}_2$ \cite{type11wsmmote2}. On the theoretical side, the WSM's classification as a gapless topological phase makes it an appealing object of study with deep connections to topological Chern insulators \cite{reviewoftopologicalphasesthinfilm} and novel properties in the presence of superconductivity \cite{weylsuperconductor,weylmajoranaflatband} and external magnetic fields \cite{burkovweyl}, to name but a few.
The Weyl Hamiltonian describes a linear crossing of two non-degenerate bands. For a pair of such bands to touch, one must in general tune three independent parameters, one for each Pauli matrix. In three spatial dimensions with three independent momenta Weyl points are therefore robust against weak perturbations. Near these points/nodes the bulk energy disperses linearly and the physics are governed by the Weyl Hamiltonian:
\begin{equation}
H = \hbar \mathbf{v}_0 \cdot \mathbf{k} \pm \hbar v \mathbf{k} \cdot \bm{\sigma},
\end{equation}
where $\pm$ denotes the node's chirality, $v$ is the effective Fermi velocity, $\mathbf{k}$ is the momentum and $\bm{\sigma}$ is the vector of Pauli matrices acting in spin space. The first term, proportional to the unit matrix, breaks Lorentz invariance and tilts the dispersion. For type-I WSMs, it can be ignored, leaving only the second term. The latter has a linear dispersion that, while strongly reminiscent of two-dimensional graphene, will not open a gap in the presence of small perturbations. Each Weyl node is also a monopole of Berry curvature, leading to a chiral anomaly which manifests itself in many exotic properties such as the Quantum anomalous Hall effect, negative magnetoresistance \cite{negativemagnetoresistance}, the chiral magnetic effect \cite{obrienchiralmagneticeffect}, and high carrier mobility \cite{highmobility}.
In lattice systems, a Weyl semimetal hosts pairs of Weyl nodes along a given nodal direction \cite{Burkovnodalsemimetals,burkovwsmmultilayer,RevModPhys.90.015001}. This is required by either time-reversal ($\mathcal{T}$) or inversion ($\mathcal{I}$) symmetry and the fact that the total Berry flux in the first Brillouin zone (BZ) must vanish. One can slice the system along the nodal direction and assign a Chern number to each two dimensional slice of momentum space: if a plane is pierced by Berry flux it will be topologically non-trivial, and vice-versa. Therefore, the bulk-boundary correspondence implies the presence of topologically protected surface states in between the Weyl nodes only. At the Fermi level, then, an open system will host a \textit{Fermi arc} -- a projection of zero-energy chiral surface states connecting pairs of opposite chirality Weyl nodes and dispersing linearly away from the Fermi level. In this sense, gapless topological phases are intermediaries between genuine trivial and topological phases of matter and can even be realized by a repeated stacking of the two \cite{burkovwsmmultilayer}.
While a growing number of their properties are known, such as the effect of impurities and defects \cite{impuritiesintypeii,impuritiesintypeii2,Silva_2022,rkkyinteractioninwsms}, the manoeuvrability and theoretical richness of these materials further motivates the analytical study of tunnelling in WSMs. In what follows, we investigate the effect of tunnelling by constructing a tight-binding model of a time reversal symmetry ($\mathcal{T}$) broken WSM coupled to a non-magnetic band. In describing a single Fermi arc, the $\mathcal{T}$-broken WSM displays all aforementioned properties while providing a minimal model to serve as a building block for setups with more pairs of nodes. Likewise, our choice of a simple tunnelling potential and featureless band are intentional: we seek to draw out the bare properties of a WSM in contact with a non-topological material.
The remaining sections are structured as follows. In Sec.~\ref{sec:model}, we present the WSM and non-magnetic band models along with the specific form of surface tunnelling. The numerical results of a finite lattice model are then presented in Sec.~\ref{sec:finitelatticemodel}. In Sec.~\ref{sec:discretetheory} we derive an infinite lattice theory with an interface to model the spectra, spin canting and interface arcs in a lattice framework, while Sec.~\ref{sec:interfacetheory} presents a simpler continuum model. We finish in Sec.~\ref{sec:transport} by investigating the novel transport properties of the coupled system both along and across the interface in the Landauer-Buttiker and electron tunnelling formalism, respectively. Directions for further study are briefly touched upon in the conclusion, Sec.~\ref{sec:conclusion}, and relevant technical details are included in the appendices.
\section{Model}
\label{sec:model}
\subsection{Weyl semimetal}
We consider a minimal Hamiltonian which captures the Fermi arc feature. This can be achieved either by breaking $\mathcal{T}$ while preserving $\mathcal{I}$ or vice-versa. In order to work with smaller matrices, we choose the former. Explicitly, then, our Hamiltonian must satisfy $ H\left(\mathbf{k}\right) = \sigma_z H\left(-\mathbf{k}\right) \sigma_z$ and $H\left(\mathbf{k}\right) \neq \sigma_y H^*\left(-\mathbf{k}\right) \sigma_y $. A simple tight-binding Hamiltonian which abides by these symmetries is ($\hbar = \text{lattice constant} = 1$) \cite{RevModPhys.90.015001}
\begin{subequations}
\label{eq:wsmhamiltonian}
\begin{gather}
{H}_w = \sum_{\mathbf{k}} \mathbf{c}^{\dagger}_{\mathbf{k}} \mathcal{H}^{\mathrm{bulk}}_w \left(\mathbf{k}\right) \mathbf{c}_{\mathbf{k}}, \\
\mathcal{H}^{\mathrm{bulk}}_w \left(\mathbf{k}\right) = t_x \sin{k_x} \sigma_x + t_y \sin{k_y} \sigma_y +t_z m\left(\mathbf{k}\right) \sigma_z, \\
m\left(\mathbf{k}\right) = \left(2 + \gamma - \cos{k_x} - \cos{k_y} - \cos{k_z}\right).
\end{gather}
\end{subequations}
Here, $\mathbf{c}_{\mathbf{k}} = \left(c_{\mathbf{k},\uparrow}, c_{\mathbf{k},\downarrow}\right)^{\top}$ is an annihilation operator in momentum space, $t_{s}$ ($s=x,y,z$) is the strength of hopping in the $s$-direction and ${\sigma}$ are the Pauli spin matrices. We further set $t_x=t_y=t_z=t>0$ for simplicity.
The Hamiltonian \eqref{eq:wsmhamiltonian} admits the bulk energies
\begin{equation}
E_{\pm} = \pm t \left[\sin^2{k_x}+\sin^2{k_y} + m^2\left(\mathbf{k}\right) \right]^{\frac{1}{2}}.
\end{equation}
These vanish at $\mathbf{k}^{\pm}_w = \left(0,0,\pm \arccos{\gamma}\right) \equiv \left(0,0,\pm k_w\right)$ -- the aforementioned Weyl nodes. We emphasize the importance of the $\cos{k_{x,y}}$ terms, without which there would be more than two nodes in the BZ for a given $\gamma$.
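The node positions are easy to verify numerically (a minimal sketch; the function name is ours):

```python
import numpy as np

t, gamma = 1.0, 0.0   # parameters used throughout the paper

def bulk_energy(kx, ky, kz):
    """Upper bulk band of the WSM: E_+ = t * sqrt(sin^2 kx + sin^2 ky + m^2)."""
    m = 2 + gamma - np.cos(kx) - np.cos(ky) - np.cos(kz)
    return t * np.sqrt(np.sin(kx) ** 2 + np.sin(ky) ** 2 + m ** 2)

kw = np.arccos(gamma)   # Weyl nodes sit at (0, 0, +/- kw); kw = pi/2 for gamma = 0
# the gap closes only at the two Weyl nodes and is finite elsewhere
```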
These gapless bulk momenta $\mathbf{k}^{\pm}_w$ suggest that $H_w$ exhibits different phases that depend solely on the arc length parameter $\gamma$. For $\gamma > 1$, $m\left(\mathbf{k}\right) > 0$ for all $\mathbf{k}$ and the system is trivially gapped. As $\gamma$ decreases to $1$, a pair of Weyl nodes appear at the origin and move outward along $k_z$ as $\gamma$ decreases further. This defines a gapless topological phase whereby a nonzero Berry flux flows within the momentum range $|k_z| < k_w$ from the node of negative chirality to the one of positive chirality. Consequently, the Chern number -- defined for a fixed $k_z$ -- is nonzero between the nodes, and zero beyond them. When $\gamma \leq -1$, the Weyl nodes reach the BZ boundaries and disappear, leaving the bulk dispersion with an inverted band gap. Between $-5<\gamma<-1$, the same process occurs for Weyl nodes with $(k_x,k_y) = (0,\pi)$, $(\pi,0)$, and $(\pi,\pi)$, until $\gamma < -5$ where the system is again gapped and trivial for all $\mathbf{k}$. In all numerical results that follow, we take $\gamma = 0$ ($k_w = \pi / 2$), well within the gapless topological regime and with a Fermi arc length $k_{\mathrm{arc}} = \pi$. The bare WSM's surface spectrum, Fermi arc, and topological phases are shown in Fig.~\ref{fig:barewsm}.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figures/barewsm.png}
\caption{The minimal Weyl semimetal model. (a) Spectral function at $y=L_y-1$ of a WSM open in $y$ plotted in the $E$-$k_x$ plane for fixed $k_z = 0$ and (b) $k_z = \pi / 2$. (c) WSM spectral function plotted in the $k_x$-$k_z$ plane for fixed $E = 0$ showing the Fermi arc. (d) Phase diagram of Eq.~\eqref{eq:wsmhamiltonian} with the lower band's Chern numbers. The phase boundaries $\gamma = \cos{k_z}$, $\gamma = \cos{k_z} - 2$ and $\gamma = \cos{k_z} - 4$ are plotted in blue. We will work at $\gamma=0$ (dashed orange line).
}
\label{fig:barewsm}
\end{figure}
\subsection{Tunnelling}
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figures/schematic.png}
\caption{(a) Schematic of the WSM-metal system. Only the rightmost surface, or \textit{interface} (with the Fermi arc shown as a white line and the nodes as white crosses) is linked to the metal via tunnelling $\Delta$. (b) Physical representation of the system as a chain ($\mathbf{R}_w$, $\mathbf{h}_w$, etc. defined in App.~\ref{sec:hamiltonianfullform}). (c) By integrating out the metal degrees of freedom, the chain is simplified into a single semi-infinite chain with a single edge site of energy $h_{\Delta} = T^{\dagger} G_m T$ (shaded with grey line).}
\label{fig:schematic}
\end{figure}
To draw out the tunnelling properties of the Weyl semimetal, we couple it to a simple parabolic band via non-magnetic surface tunnelling. The band's Hamiltonian is spin-independent and reads
\begin{subequations}
\label{eq:metalhamiltonian}
\begin{gather}
H_m = \sum_{\mathbf{k}} \mathbf{d}^{\dagger}_{\mathbf{k}} \mathcal{H}^{\mathrm{bulk}}_m \left(\mathbf{k}\right) \mathbf{d}_{\mathbf{k}}, \\
\mathcal{H}^{\mathrm{bulk}}_m \left(\mathbf{k}\right) = -2 t_m \left( \cos{k_x} + \cos{k_y} + \cos{k_z}\right) - \mu,
\end{gather}
\end{subequations}
where $t_m$ is the hopping amplitude, $\mu$ the chemical potential and $\mathbf{d}_{\mathbf{k}} = \left(d_{\mathbf{k},\uparrow}, d_{\mathbf{k},\downarrow}\right)^{\top}$ is an annihilation operator in momentum space.
For brevity, we equivalently refer to this non-magnetic parabolic band as ``metal'', though one may of course tune $\mu$ to achieve a semi-conductor or an insulator, as discussed in App.~\ref{sec:varymetal}, where we also consider two parabolic bands.
We now introduce a tunnelling Hamiltonian which couples the surface of the WSM to the surface of the metal. We proceed with open boundary conditions in the $y$-direction and keep the well-defined momenta perpendicular to the open $y$-direction, $\mathbf{k}_{\perp} = \left(k_x,k_z \right)$. The WSM (metal) side runs from $y=-L_y + 1$ to $0$ ($y=1$ to $L_y$), defining an interface between the WSM's $y=0$ and metal's $y=1$ sites.
The Hamiltonian for the full (finite-sized) system is therefore
\begin{subequations}
\label{eq:mainhamiltonian}
\begin{gather}
{H} = \sum_{\mathbf{k_{\perp}}} \sum_{y,y'=-L_y+1}^{L_y} \mathbf{f}_{\mathbf{k}_{\perp},y}^{\dagger} \mathcal{H}\left(\mathbf{k}_{\perp}\right)_{y,y'} \mathbf{f}_{\mathbf{k}_{\perp},y'}, \\
\mathcal{H}\left(\mathbf{k}_{\perp}\right) = \begin{pmatrix}
\mathcal{H}^{\mathrm{open}}_w \left(\mathbf{k}_{\perp}\right) & T^{\dagger} \\
T & \mathcal{H}^{\mathrm{open}}_m \left(\mathbf{k}_{\perp}\right)
\end{pmatrix},
\end{gather}
\end{subequations}
where
\begin{equation}
\mathbf{f}_{\mathbf{k}_{\perp},y} = \begin{cases}
\mathbf{c}_{\mathbf{k}_{\perp},y} & -L_y+1 \leq y \leq 0 \\
\mathbf{d}_{\mathbf{k}_{\perp},y} & 1 \leq y \leq L_y
\end{cases}
\end{equation}
and $\mathcal{H}^{\mathrm{open}}$ is the partial-in-$y$ Fourier transform of $\mathcal{H}^{\mathrm{bulk}}$.
The full form of Eq.~\eqref{eq:mainhamiltonian} is shown in App.~\ref{sec:hamiltonianfullform}.
The surface tunnelling term is also non-magnetic and takes the form
$\left(T\right)_{y,y'} = \Delta\, \delta_{y,1}\delta_{y',0}$, or
\begin{equation}
\label{eq:tunnelling}
T = \begin{pmatrix}
0 & \dots & \Delta \\
\vdots & \ddots & \vdots \\
0 & \dots & 0
\end{pmatrix},
\end{equation}
where, for simplicity, we have assumed that $\Delta$ is a real constant that modulates the tunnelling strength between interface sites $y=0$ and $y=1$. Physically, the tunnelling strength can be modified either by varying the metal bandwidth $t_m$ or changing the interface thickness, as suggested by Fig.~\ref{fig:schematic}. There are therefore two competing energy scales at the WSM's interface: the interlayer hopping $t$ pulling the electron towards the bulk and the tunnelling strength $\Delta$ pulling the electron towards the metal.
Before moving on to the finite lattice simulations, we note that the metal dynamics can be exactly integrated out to make way for a modified WSM propagator \cite{borchmannbarnea,marchandfranz}. More precisely, the effective Green's function becomes
\begin{equation}
\label{eq:effectivegreen}
G_{\mathrm{eff}} \left(i\omega_n\right) = \left[G_w^{-1}\left(i\omega_n\right) - T^{\dagger} G_m \left(i\omega_n\right)T \right]^{-1}
\end{equation}
where $\omega_n$ is the Matsubara frequency and $G_{w,m} = \left(i\omega_n - \mathcal{H}^{\mathrm{open}}_{w,m}\right)^{-1}$ are the bare Green's functions. Substituting in Eqs.~\eqref{eq:metalhamiltonian} and \eqref{eq:tunnelling} yields, after some algebra,
\begin{equation}
\label{eq:effectivesamesitepotential}
T^{\dagger} G_m \left(i\omega_n\right) T = - \frac{\Delta^2 }{\sqrt{\left(i\omega_n - h_m \right)^2-4t_m^2}}\, \delta_{y,0}\delta_{y,y'},
\end{equation}
where $h_m = -2 t_m \left(\cos{k_x} + \cos{k_z}\right) - \mu$. Thus, surface tunnelling simply shifts the on-site energy of the WSM's interface site $y=0$ (Fig.~\ref{fig:schematic}c).
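The step behind Eq.~\eqref{eq:effectivegreen} is the standard Schur-complement (decimation) identity and can be verified with arbitrary Hermitian blocks; the matrices below are random stand-ins, not the actual Hamiltonians:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Hw = rng.normal(size=(n, n)); Hw = Hw + Hw.T   # stand-in for the open WSM block
Hm = rng.normal(size=(n, n)); Hm = Hm + Hm.T   # stand-in for the open metal block
T = np.zeros((n, n)); T[0, -1] = 0.8           # corner tunnelling block

z = 0.3 + 1e-3j   # frequency with a small imaginary part (broadening)
H = np.block([[Hw, T.conj().T], [T, Hm]])
G_full = np.linalg.inv(z * np.eye(2 * n) - H)

Gw = np.linalg.inv(z * np.eye(n) - Hw)
Gm = np.linalg.inv(z * np.eye(n) - Hm)
G_eff = np.linalg.inv(np.linalg.inv(Gw) - T.conj().T @ Gm @ T)
# G_eff reproduces the WSM block of the full Green's function exactly
```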
\section{Finite lattice model}
\label{sec:finitelatticemodel}
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{figures/discreteplots.png}
\caption{Interface density of states for the coupled WSM-metal system for both spins. The numerical results (simulated on an $L_y = 30$ chain sampled at $100$ momentum points) are shown in warm colours whereas the chiral state's infinite lattice model (Sec.~\ref{sec:discretetheory}) is plotted in blue. In all plots where the infinite lattice theory obstructs the numerical results (e.g. the top left), the agreement is near exact. The columns correspond to (a) the spectrum along $k_x$ at $k_z=0$, (b) the spectrum along $k_x$ at the Weyl point $k_z = + \pi / 2$, and the emergent interface arcs at (c) $E = 0$ and (d) $E=0.5$. The rows are set in increasing order of $\Delta = 0$, $1$, $2.3$ going down. The bulk energy edges $E_{\mathrm{bulk}} = \pm t \left[\sin^2{k_x} + \left(1 + \gamma - \cos{k_x} - \cos{k_z}\right)^2\right]^{\frac{1}{2}}$ are denoted by dashed white lines, as is the Fermi surface in the $E=0.5$ interface plots. The bare Weyl nodes $\mathbf{k}^{\pm}_{\perp,w} = (0,\pm \pi / 2)$ are white crosses. The fixed parameters used for these and all other plots are $t=1$, $\gamma=0$, $t_m = 0.5$, $\mu = -4$, unless otherwise specified.}
\label{fig:discreteplots}
\end{figure*}
We now turn to the numerical results of Eq.~\eqref{eq:mainhamiltonian} on a finite lattice. Keeping the system open in $y$ with the quantum numbers $k_x$ and $k_z$, the spectral function is obtained by evaluating $A\left(E,\mathbf{k}_{\perp}\right) = - {\pi}^{-1} \mathrm{Im} \left[\mathrm{Tr} \left(G\right)\right]$ with the Green's function
\begin{equation}
G\left(E,\mathbf{k}_{\perp}\right) =\left[{E + i0^{+} - \mathcal{H}\left(\mathbf{k}_{\perp}\right)}\right]^{-1}.
\end{equation}
The WSM's interface density of states (IDOS), displayed in Fig.~\ref{fig:discreteplots}, is found by tracing over the $y = 0$ site only.
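As a concrete illustration of this procedure, the sketch below assembles a minimal version of the calculation: a WSM chain coupled to a metal chain through a single interface bond, written in the rotated spin basis introduced in Sec.~\ref{sec:discretetheory}. The chain lengths, broadening $\eta$, and momenta are illustrative choices, not those of the production numerics.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def hamiltonian(kx, kz, Lw=15, Lm=15, t=1.0, gamma=0.0,
                tm=0.5, mu=-4.0, delta=1.0):
    """WSM (sites 0..Lw-1) coupled to a metal (sites Lw..Lw+Lm-1), open in y."""
    g1 = t * np.sin(kx)
    g3 = t * (2 + gamma - np.cos(kx) - np.cos(kz))
    hm = -2 * tm * (np.cos(kx) + np.cos(kz)) - mu
    hw = g1 * sz - g3 * sx            # same-site WSM block (rotated basis)
    Rw = t * (sx + 1j * sy) / 2       # WSM hop; <y|H|y+1> = Rw^dagger
    L = Lw + Lm
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    for y in range(L):
        H[2*y:2*y+2, 2*y:2*y+2] = hw if y < Lw else hm * s0
    for y in range(L - 1):
        if y < Lw - 1:
            hop = Rw.conj().T         # WSM nearest-neighbour bond
        elif y == Lw - 1:
            hop = delta * s0          # surface tunnelling bond
        else:
            hop = -tm * s0            # metal nearest-neighbour bond
        H[2*y:2*y+2, 2*(y+1):2*(y+1)+2] = hop
        H[2*(y+1):2*(y+1)+2, 2*y:2*y+2] = hop.conj().T
    return H

def idos(E, kx, kz, eta=0.05, Lw=15, **kw):
    """IDOS: -Im Tr G / pi, traced over the WSM interface site only."""
    H = hamiltonian(kx, kz, Lw=Lw, **kw)
    G = np.linalg.inv((E + 1j * eta) * np.eye(H.shape[0]) - H)
    i = 2 * (Lw - 1)                  # the paper's y = 0 interface site
    return -np.imag(np.trace(G[i:i+2, i:i+2])) / np.pi
```

At $\Delta = 0$ and $k_z = 0$ this reproduces a sharp interface peak at the chiral-state energy $E = -t\sin{k_x}$ and essentially no weight elsewhere inside the bulk gap.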
At $k_z=0$ (Fig.~\ref{fig:discreteplots}a), we are exactly in between the Weyl nodes. Without tunnelling, only the so-called \textit{chiral state} is present and localized to the interface, residing on the Fermi arc and dispersing as $E = -t\sin{k_x}$ with a spin $\sigma_x = -1$. With tunnelling, there are two noticeable effects. First, the chiral state lowers its energy as it is now able to hop to the metal side, spreading its wavefunction. Indeed, this lowering of energy captured by Eq.~\eqref{eq:effectivesamesitepotential} is a prevailing effect throughout this work. By that same token, a previously extended state enters the bulk gap from the upper bulk band and localizes to the interface. Contrary to the chiral state, this so-called \textit{emergent interface state} does not have a uniform spin polarization.
At the Weyl nodes (Fig.~\ref{fig:discreteplots}b), the Fermi arc terminates and there are no interface states for $\Delta = 0$. As tunnelling is increased, however, the chiral state can be seen along the Weyl node's upper cone. When $\Delta$ increases beyond the interlayer hopping $t$, the chiral state detaches from the Weyl cone and forms, together with the previously discussed emergent interface state, a noticeable asymmetry in the interface density of states with respect to $k_x$ reflection (Fig.~\ref{fig:discreteplots}b.iii). This striking asymmetry is of particular interest. Physically, it suggests that tunnelling modifies the group velocity along the interface to produce additional left- and right-flowing current in an energy range between the chiral and emergent interface states' intersections with the bulk dispersion. Naively, this is surprising because one may not expect the breaking of translation symmetry in $y$ to induce an asymmetry in the $x$-direction. However, one must remember that the physics on a single surface is not in fact symmetric in $k_x$ to begin with, as evidenced by the linearly dispersing chiral state at the interface. Therefore, although the spectral function is symmetric in $k_x$ when traced over all sites, the localized tunnelling term in $y$ will explicitly break this symmetry.
By plotting $A\left(E,\mathbf{k}_{\perp}\right)$ in the surface BZ, we see that the zero-energy interface Fermi arc (Fig.~\ref{fig:discreteplots}c and d) will curve in the presence of tunnelling \cite{gorbararcs}. While still terminating at the Weyl nodes $\mathbf{k}^{\pm}_{\perp,w} = (0,\pm k_w)$, it extends beyond $k_z = \pm k_w$ at zero energy, signifying the existence of interface states in a region of parameters outside the bare Fermi arc. This is illustrated by the previously discussed chiral state's presence at $k_z = \pi / 2$ and will have important transport consequences in Sec.~\ref{sec:transportalong}.
These results are robust to changes in the metal's form. In fact, we find that equivalent behaviour may be obtained simply by coupling the WSM to a constant energy reservoir $t_m = 0$, $\mu = -M$. A more realistic setup in which the WSM is coupled to a two-band bulk insulator will yield two copies of the dispersions found in Fig.~\ref{fig:discreteplots}, one for positive and one for negative energy (see App.~\ref{sec:varymetal}).
As seen in the numerics above, a new closed orbit of low energy states appears on the interface. This closed orbit should be apparent in quantum oscillations experiments as it leads to oscillations with a frequency that matches the enclosed momentum-space area. These oscillations should be contrasted with the arc/node oscillations suggested by Ref.~\cite{potteroscillations} and studied in Ref.~\cite{borchmannbarnea}. The latter oscillations result from closed orbits which include both the surface (interface) and bulk states, meaning their frequency depends on the slab depth. By contrast, the new orbit seen in Fig.~\ref{fig:discreteplots}d.iii contains only interface states and its frequency is depth independent.
\section{Infinite lattice theory with an interface}
\label{sec:discretetheory}
The physics at the interface seen in the lattice model above can be described in an infinite model and treated analytically with the help of an ansatz. We take $L_y \rightarrow \infty$ and impose $\psi\rightarrow 0$ at $y\to \pm\infty$ on both sides of the interface. Therefore, this theory effectively consists of two semi-infinite slabs connected by surface tunnelling $\Delta$.
Seeking states $\bm{\varphi}$ exponentially localized to the interface, we make the ansatz
\begin{equation}
\label{eq:wsmdiscreteansatz}
\bm{\varphi} = \begin{cases}
\bm{\varphi}_w (y) = e^{ik_xx + ik_zz} \ell^{y} \bm{\phi}_w & y=-\infty,\dots,-1, 0\\
\bm{\varphi}_m (y) = e^{ik_xx + ik_zz} \ell_m^{-y+1} \bm{\phi}_m & y=1,2,\dots,\infty
\end{cases}
\end{equation}
where $\bm{\phi}_{w,m}$ are spinors carrying the overall normalization. Note that this ansatz assumes a constant spin direction and is therefore suitable for the chiral state found above but is not completely general. To simplify our problem slightly, we rotate our states by $\pi/2$ about the $y$ axis in spin space. Defining $g_1 \equiv t\sin{k_x}$ and $g_3 \equiv t \left(2 + \gamma - \cos{k_x} - \cos{k_z}\right)$ leads to the change
\begin{subequations}
\begin{align}
\mathbf{h}_w &= g_1 \sigma_z - g_3\sigma_x, \\
\mathbf{R}_w &= t(\sigma_x + i \sigma_y)/2,
\end{align}
\end{subequations}
for the same-site and nearest-neighbour hopping matrices, respectively. The metal and tunnelling components are unchanged.
\subsection{$\Delta = 0$}
As a first test of validity, we take the $\Delta=0$ case and recover the chiral state and Fermi arc of the finite lattice model.
We do not impose any boundary conditions at the $y$-termination but instead just look for exponentially localized states which solve the bulk difference equations. In a lattice formalism, $\mathcal{H}^{\mathrm{open}}_w \bm{\varphi}_w = E\bm{\varphi}_w$ produces a set of coupled difference equations relating $\bm{\varphi}_w(y)$ to its nearest neighbours $\bm{\varphi}_w(y \pm 1)$ \cite{andreevedgestates}:
\begin{equation}
E\bm{\varphi}_w(y) = \mathbf{h}_w \bm{\varphi}_w(y) + \mathbf{R}^{\dagger}_w \bm{\varphi}_w(y+1) + \mathbf{R}_w \bm{\varphi}_w(y-1) .
\end{equation}
Plugging in Eq.~\eqref{eq:wsmdiscreteansatz}, we obtain the matrix equation
\begin{align}
\label{eq:discretedifferenceequations}
\label{eq:bulkdifferenceequationWSM}
0 = \left(E - g_1\sigma_z + g_3\sigma_x -t\ell^{-1}\sigma_+ - t\ell\sigma_-\right)\bm{\phi}_w
\end{align}
where $\sigma_{\pm} = \left(\sigma_x \pm i \sigma_y \right)/2$. Setting the determinant of Eq.~\eqref{eq:bulkdifferenceequationWSM} to zero yields the ratio of spins and energy, respectively:
\begin{subequations}
\label{eq:discretebulkwsm}
\begin{gather}
\frac{\phi^{\uparrow}_w}{\phi^{\downarrow}_w} = \frac{E+g_1}{t \ell - g_3} = \frac{t\ell^{-1} - g_3}{E - g_1}, \\
\label{eq:discretebulkenergy}
E = \pm \left[g_1^2 +g_3^2 +t^2 -g_3 t \left(\ell + \ell^{-1}\right)\right]^{\frac{1}{2}}.
\end{gather}
\end{subequations}
Guided by the previous section, we make the assumption $\phi^{\uparrow}_w / \phi^{\downarrow}_w = 0$ (spin in the $-x$ direction) and find a solution with $\ell = t / g_3$ and $E = - g_1$. To satisfy the boundary condition at $-\infty$, we impose $\mathrm{Re}\left(\ell \right) > 1$, or $g_3 < t$. The familiar Fermi arc condition $\gamma < \cos{k_z}$ then follows naturally\footnote{For a bulk state of energy $E$, $\mathrm{Re}\left(\ell \right) > 1$ implies $-E_{\mathrm{bulk}}<E<E_{\mathrm{bulk}}$ where $E_{\mathrm{bulk}}(\mathbf{k}_{\perp}) = \left[g_1^2 + (g_3 - t)^2\right]^{\frac{1}{2}}$ is the bulk edge.}.
We have therefore recovered the aforementioned chiral state: a uni-directional interface state on the Fermi arc.
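This solution can also be checked mechanically. The snippet below, with illustrative parameter values, confirms that the spinor $(0,1)^{T}$ with $\ell = t/g_3$ and $E = -g_1$ annihilates the matrix of Eq.~\eqref{eq:bulkdifferenceequationWSM}, and that the decay condition holds on the arc.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = (sx + 1j * sy) / 2   # sigma_+
sm = (sx - 1j * sy) / 2   # sigma_-

def bulk_matrix(E, ell, kx, kz, t=1.0, gamma=0.0):
    """Matrix acting on phi_w in the bulk difference equation."""
    g1 = t * np.sin(kx)
    g3 = t * (2 + gamma - np.cos(kx) - np.cos(kz))
    return (E * np.eye(2) - g1 * sz + g3 * sx
            - t / ell * sp - t * ell * sm)

# candidate chiral solution at an arc momentum
kx, kz, t = 0.3, 0.0, 1.0
g1, g3 = t * np.sin(kx), t * (2 - np.cos(kx) - np.cos(kz))
phi = np.array([0.0, 1.0], dtype=complex)   # spin ratio 0
err = np.linalg.norm(bulk_matrix(-g1, t / g3, kx, kz) @ phi)
# err vanishes up to round-off; t/g3 > 1 confirms decay into the WSM bulk
```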
\subsection{$\Delta > 0$}
We now allow for tunnelling at the interface between the $y=0$ and $y=1$ sites. There are then four difference equations, one for each type of site: the Weyl bulk,
Weyl interface, metal interface, and metal bulk. Substituting in the supposed forms of $\bm{\varphi}_{w,m}(y)$, the difference equations are, respectively:
\begin{subequations}
\begin{align}
\label{eq:differenceequationWSM}
&0 = \left(
E - g_1 \sigma_z + g_3 \sigma_x - t\ell\sigma_- -t \ell^{-1}\sigma_+\right) \bm{\phi}_w,\\
\label{eq:differenceequationinterface1}
& 0 =
\left(
E - g_1 \sigma_z + g_3 \sigma_x - t\ell^{-1}\sigma_+\right) \bm{\phi}_w
-
\Delta \bm{\phi}_m, \\
\label{eq:differenceequationinterface2}
& 0 =
\left(E - h_m + t_m \ell_m^{-1}\right) \bm{\phi}_m
-
\Delta \bm{\phi}_w,\\
\label{eq:differenceequationmetal}
& 0 =
\left(E - h_m + t_m\ell_m^{-1}+t_m\ell_m\right) \bm{\phi}_m.
\end{align}
\end{subequations}
Eq.~\eqref{eq:differenceequationinterface2}, which has no matrix structure and is hence the same for both components of the spinor, requires the spinor direction to be the same on both sides of the interface. Moreover, it determines the magnitude ratio:
\begin{equation}
\label{eq:discretespinrelation}
\bm{\phi}_m
=\frac{\Delta }{E - h_m + t_m \ell_m^{-1}} \bm{\phi}_w.
\end{equation}
Together with Eq.~\eqref{eq:differenceequationinterface1}, a final relation is obtained:
\begin{widetext}
\begin{equation}
\label{eq:interfacehamiltoniandiscrete}
\left(E -\frac{\Delta^2}{E-h_m+t_m\ell_m^{-1}} - g_1 \sigma_z + g_3 \sigma_x - t\ell^{-1} \sigma_+ \right) \bm{\phi_w} = 0.
\end{equation}
\end{widetext}
Eq.~\eqref{eq:interfacehamiltoniandiscrete} is similar in form and purpose to the effective surface Green's function \eqref{eq:effectivegreen}, except that it is purely in spin space, since the ansätze and vanishing boundary conditions took care of the position dependencies. It can be interpreted as an eigenvalue problem for the matrix $g_1\sigma_z - g_3 \sigma_x + t\ell^{-1}\sigma_+$ whose eigenvalues are
\begin{equation}
E_\Delta \equiv E - \frac{\Delta^2}{ E - h_m + t_m \ell_m^{-1}} .
\end{equation}
With this, its energy bands are twofold and defined by the implicit equation
\begin{equation}
\label{eq:implicitenergy}
E_{\eta} - \frac{\Delta^2}{E_{\eta}-h_m+{t_m}\ell_m^{-1}} = \eta \left({g_1^2 + g_3^2 - tg_3\ell^{-1}}\right)^{\frac{1}{2}}
\end{equation}
where $\eta=\pm$ is the band index and $\ell$ ($\ell_m$) is itself a function of energy through Eq.~\eqref{eq:differenceequationWSM} [Eq.~\eqref{eq:differenceequationmetal}]:
\begin{subequations}
\begin{align}
\ell_{\pm} &= Q \pm \sqrt{Q^2 - 1}, \\
\ell_{m,\pm} &= P \pm \sqrt{P^2 - 1}.
\end{align}
\end{subequations}
Here, $Q = \frac{g_1^2+g_3^2+t^2-E^2}{2g_3t}$ and $P = \frac{h_m-E}{2t_m}$.
While it may seem at first glance that the energies are symmetric in $k_x$ due to the even parity of both $\ell$ and $\ell_m$ with respect to $k_x$, one must be careful in choosing the appropriate branch $\eta$, such that the state indeed decays away from the interface. In general, the branch may vary as a function of $\mathbf{k}_{\perp}$. The infinite lattice theory Eq.~\eqref{eq:implicitenergy} is compared to the finite model in Fig. \ref{fig:discreteplots}.
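In practice, Eq.~\eqref{eq:implicitenergy} can be solved by a numerical root search. The sketch below sets up the residual, with the decaying branches selected by magnitude (an assumption adequate for the real-$Q$, real-$P$ regime considered here), and checks the $\Delta = 0$ chiral root $E = -g_1$ on the $\eta = -1$ branch.

```python
import numpy as np

def ell_w(E, g1, g3, t=1.0):
    """WSM decay factor: the branch with |ell| > 1 decays into the WSM bulk."""
    Q = (g1**2 + g3**2 + t**2 - E**2) / (2 * g3 * t)
    r = np.sqrt(complex(Q**2 - 1))
    return Q + r if abs(Q + r) > abs(Q - r) else Q - r

def ell_m(E, hm, tm):
    """Metal decay factor: the branch with |ell_m| > 1."""
    P = (hm - E) / (2 * tm)
    r = np.sqrt(complex(P**2 - 1))
    return P + r if abs(P + r) > abs(P - r) else P - r

def residual(E, kx, kz, delta, eta=-1, t=1.0, gamma=0.0, tm=0.5, mu=-4.0):
    """Zero when E solves the implicit interface-band equation."""
    g1 = t * np.sin(kx)
    g3 = t * (2 + gamma - np.cos(kx) - np.cos(kz))
    hm = -2 * tm * (np.cos(kx) + np.cos(kz)) - mu
    lw, lm = ell_w(E, g1, g3, t), ell_m(E, hm, tm)
    lhs = E - delta**2 / (E - hm + tm / lm)
    rhs = eta * np.sqrt(complex(g1**2 + g3**2 - t * g3 / lw))
    return lhs - rhs

# at delta = 0, the chiral state E = -g1 must be a root
kx, kz = 0.3, 0.0
chk = abs(residual(-np.sin(kx), kx, kz, delta=0.0))
```

A standard bracketing root finder applied to the real part of this residual then traces out the tunnelling-shifted interface bands.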
Another remarkable consequence of tunnelling is spin canting. At first glance, one would not expect non-magnetic tunnelling to a non-magnetic metal to cause the polarized spins at the interface to cant. Indeed, this plain intuition is seemingly supported by Eq.~\eqref{eq:discretespinrelation} and agrees with the finite lattice model for $\Delta \lesssim t$. To paint a more complete picture, however, we must consider the ratio of spins of the interface state which, in light of Eq.~\eqref{eq:interfacehamiltoniandiscrete}, is
\begin{equation}
\label{eq:ratioofspinsinterfacediscrete}
\frac{\phi^{\uparrow}_w}{\phi^{\downarrow}_w} = \frac{E_{\Delta} + g_1}{-g_3}.
\end{equation}
If $\Delta > 0$ and $E < h_m + t_m \ell_m^{-1}$, we expect the interface spin to cant away from $ {\phi^{\uparrow}_w}/{\phi^{\downarrow}_w} = 0$ ($\sigma_x = -1$) towards ${\phi^{\uparrow}_w}/{\phi^{\downarrow}_w} = -1$ ($\sigma_z = +1$). Solving for ${\phi^{\uparrow}_w}/{\phi^{\downarrow}_w}$ together with Eq.~\eqref{eq:implicitenergy} yields the spins of the chiral state as they vary with tunnelling, shown in Fig. \ref{fig:spinstunnelling}. We therefore conclude that non-magnetic surface tunnelling to a non-magnetic metal can in fact induce a change in the spins of the WSM's chiral states. It is also of note that without $H_w$'s $\cos{k_y}$ term, spin canting is absent and the interface state will remain in a $\sigma_x = -1$ eigenstate independent of tunnelling.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figures/spinstunnelling.png}
\caption{The chiral state's spin at the interface, fixed at the Weyl node ${k}_z = \pi / 2$ and $k_x = - 0.7$ on a lattice of size $L_y = 30$. Varying $\Delta$ cants the spin from $\sigma_x = -1$ towards $\sigma_z = +1$ (black arrows), matching the prediction of Eq.~\eqref{eq:ratioofspinsinterfacediscrete}. The solid (dashed) lines correspond to numerical (infinite lattice theory) results. In the absence of the $\cos{k_y}$ term, the spins are unchanged with tunnelling (faded red and blue points). Note that $\expval{\sigma_y}$ is always zero (see App.~\ref{sec:sigmayzero}).}
\label{fig:spinstunnelling}
\end{figure}
\section{Continuum interface theory}
\label{sec:interfacetheory}
A simplified model that can capture the effect of tunnelling is a linearized continuum model which is valid at long distances. We note, however, that in order to satisfy the boundary conditions at the interface, it is important to keep the second derivative in the $y$-direction, as can be seen below.
\subsection{$\Delta = 0$}
We first consider the $\Delta = 0$ case, a semi-infinite WSM slab in the continuum limit. Keeping $\mathcal{O}\left(k_y^2\right)$ terms in the WSM Hamiltonian, letting $k_y = -i\partial_y$, and multiplying by $i\sigma_y$ throughout yields the differential equation ($t=1$):
\begin{equation}
\label{eq:surfacetise}
{\partial_y \bm{\psi} } + \frac{\sigma_x}{2}{\partial^2_y \bm{\psi}} = i\sigma_y(E - t\sin{k_x}\sigma_x)\bm{\psi} + h_z \sigma_x \bm{\psi}
\end{equation}
where $h_z \equiv g_3 - t$. To home in on the interface states of interest, we take the interface to be at $y=0$ and make the ansatz ${\bm{\psi}} \propto e^{\kappa y} \bm{\phi}$, where $\bm{\phi}$ is an unspecified spinor and $\mathrm{Re}\left(\kappa\right) > 0$ such that $\bm{\psi} \to 0$ as $y \to -\infty$. The differential equation \eqref{eq:surfacetise} admits four solutions for $\kappa$, of which two have a positive real part:
\begin{equation}
\label{eq:decayparameters}
\kappa_{\pm}^2 = 2\left(1 + h_z\right) \pm 2 \left[{1 + 2h_z + E^2 - \sin^2{k_x}}\right]^{\frac{1}{2}}.
\end{equation}
For a state of energy $E$, $\mathrm{Re}\left(\kappa\right) > 0$ translates to $E^2 < \sin^2{k_x} + h_z^2$, in agreement with the infinite lattice theory.
Eq.~\eqref{eq:decayparameters} sheds light on the fact that for a given eigenvector with energy $E$ satisfying Eq.~\eqref{eq:surfacetise}, there is a distinct eigenvector with equal and opposite energy $-E$ which is also a solution. Therefore, there are two $\kappa$ values per energy.
To determine which solution is correct, we impose the boundary condition $\bm{\psi}(0)=0$. In general, one has the superposition ${\bm{\psi}} \propto e^{\kappa_+ y} \bm{\phi}_{\kappa_+} + \alpha e^{\kappa_- y} \bm{\phi}_{\kappa_-}$. Therefore, $\alpha = -1$ and $\bm{\phi}_{\kappa_+} = \bm{\phi}_{\kappa_-}$. Equating the ratio of spins, the latter condition can be summarized as
\begin{equation}
\label{eq:ratioofspinorscontinuum}
\frac{E + h_z - \kappa_{+}^2 / 2}{\sin{k_x}+\kappa_{+}} = \frac{E + h_z - \kappa_{-}^2 / 2}{\sin{k_x}+\kappa_{-}}.
\end{equation}
After some algebra, we recover the aforementioned chiral state of energy $E = -t\sin{k_x}$, leading to the decay parameters $\kappa_{\pm} = 1 \pm \sqrt{1 + 2 h_z}$ and spin in the negative $x$-direction:
\begin{equation}
\bm{\psi}_{\mathrm{chiral}} \propto
e^{ik_x x+ik_z z}\left( e^{\kappa_+ y} - e^{\kappa_- y}\right) \begin{pmatrix}
1 \\
-1
\end{pmatrix}.
\end{equation}
The condition of $\mathrm{Re}\left(\kappa\right) > 0$ leads to $ \gamma < \cos{k_z}$, which is the familiar arc condition.
At the surface BZ origin $ \mathbf{k}_{\perp,0} = \left(0,0\right)$, the chiral state's decay length is on the order of the lattice constant, pointing to a strongly localized state which may therefore well be described by a continuum interface theory. At the surface Weyl points $\mathbf{k}^{\pm}_{\perp,w} = (0, \pm k_w)$,
however, $\kappa_{-} = 0$ and the chiral state's decay length diverges, as expected from the absence of such surface states at the Weyl node.
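As a quick consistency check, substituting the chiral energy $E = -t\sin{k_x}$ back into Eq.~\eqref{eq:decayparameters} (with $t=1$) indeed factorizes into the quoted decay parameters:

```latex
\begin{align*}
\kappa_{\pm}^{2} &= 2\left(1 + h_z\right) \pm 2\sqrt{1 + 2h_z + \sin^2{k_x} - \sin^2{k_x}} \\
&= 1 \pm 2\sqrt{1 + 2h_z} + \left(1 + 2h_z\right) = \left(1 \pm \sqrt{1 + 2h_z}\right)^{2}.
\end{align*}
```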
\subsection{$\Delta > 0$}
In order to get a simple analytical result, we imagine coupling the WSM to a quantum dot of energy $M$. Here, we model the metal as a flat band since it is well above the WSM and only states with the same energy are relevant. The continuum Hamiltonian reads
\begin{align}
\label{eq:interfacesubspace}
H_{\mathrm{cont}} = \begin{pmatrix}
\sin{k_x} \sigma_x + h_z \sigma_z - i \sigma_y \partial_y - \frac{1}{2}\sigma_z \partial^2_y & \Delta \\
\Delta & M
\end{pmatrix}.
\end{align}
Once again, we focus on solutions bound to the interface $\bm{\psi}_{w} \propto e^{\kappa y} \bm{\phi}_w$ ($\bm{\psi}_{m} \propto e^{-\kappa_m y} \bm{\phi}_m$), leading to four differential equations. The first two restrict the metal spins to be identical to the Weyl spins up to a scalar factor:
\begin{equation}
{\bm{\phi}_{m}} = \frac{\Delta}{E - M} {\bm{\phi}_{w}}.
\end{equation}
The remaining two equations reduce to a $2 \times 2$ matrix equation expressed in the basis of Weyl spins $\bm{\phi}_w$:
\begin{widetext}
\begin{align}
\label{eq:continuuminterfacetise}
\left(E - \frac{\Delta^2}{E - M} - \sin{k_x} \sigma_x - h_z \sigma_z + \frac{\kappa^2}{2}\sigma_z + i \kappa \sigma_y \right) {\bm{\phi}_w} = 0,
\end{align}
\end{widetext}
which is the continuum form of Eq.~\eqref{eq:interfacehamiltoniandiscrete}. When $\Delta = 0$ and $E \neq M$, it is not difficult to
see that the bare WSM surface chiral state is recovered. For $\Delta > 0$, the physics is identical to the $\Delta = 0$ case with the substitution $ E \to E - {\Delta^2 }/(E - M) = E_{\Delta}$
\footnote{In fact, the effective surface propagator Eq.~\eqref{eq:effectivesamesitepotential} exactly reduces to $-\Delta^2 / \left(E - M\right)$ when $t_m = 0$ and $\mu = -M$.}.
For instance, the decay parameters are now
\begin{equation}
\kappa_{\pm}^{2} a^2t = 2 \left(t + h_z\right) \pm 2 \left({t^2 + 2 t h_z - t^2\sin^{2}{k_x} + E_{\Delta}^2}\right)^{\frac{1}{2}},
\end{equation}
where we have re-inserted the energy scale $t$ and the lattice constant $a$.
The continuum interface theory therefore hints at a straightforward interpretation of the energy shift upon tunnelling. Indeed, seeing as the only effect of $\Delta$ was to shift the energies, the
chiral band's energy in the continuum theory is defined by $E_{\Delta} = - t\sin{k_x}$, or
\begin{equation}
\label{eq:chiralenergytunnelling}
E = \frac{M - t\sin{k_x}}{2} - \frac{1}{2} \left[{\left(M + t\sin{k_x} \right)^2 + 4 \Delta^2}\right]^{\frac{1}{2}}.
\end{equation}
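For completeness, Eq.~\eqref{eq:chiralenergytunnelling} is the lower root of the quadratic obtained by multiplying $E_{\Delta} = -t\sin{k_x}$ through by $(E - M)$:

```latex
\begin{gather*}
E^2 + \left(t\sin{k_x} - M\right)E - \left(Mt\sin{k_x} + \Delta^2\right) = 0, \\
\left(t\sin{k_x} - M\right)^2 + 4\left(Mt\sin{k_x} + \Delta^2\right) = \left(M + t\sin{k_x}\right)^2 + 4\Delta^2,
\end{gather*}
```

where the lower root is selected so that $E \to -t\sin{k_x}$ as $\Delta \to 0$ (for $M > -t\sin{k_x}$).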
In regimes where the decay lengths $\kappa^{-1}_{\pm} \sim a$, Eq.~\eqref{eq:chiralenergytunnelling} is in agreement with finite lattice simulations, as shown in Fig.~\ref{fig:gammaspectrum}. As for the chiral state's spin, it remains unchanged due to Eq.~\eqref{eq:ratioofspinorscontinuum} still being satisfied and equal to $-1$ when $E\to E_{\Delta} = -t \sin{k_x}$\footnote{In the infinite lattice theory, the replacement $$E \to E_{\Delta} = -t\sin{k_x} =-g_1$$ in Eq.\eqref{eq:ratioofspinsinterfacediscrete} also leads to a spin $\sigma_x = -1$ ($r_{\mathrm{interface}} = 0$).}. Therefore, the validity of Eq.~\eqref{eq:chiralenergytunnelling} will depend wholly on whether or not the state is in a $\sigma_x=-1$ eigenstate, and any deviations in the bandstructure must reflect a changing spin in the lattice model. Since the spins do in fact cant for $\Delta \gtrsim t$, this is the root of the continuum theory's inaccuracy in this regime.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figures/linearizedplots.png}
\caption{Interface density of states for the WSM-metal system ($L_y = 30$) at $k_z = 0$ for a tunnelling strength of (a) $\Delta = 0$ and (b) $\Delta = 1.5$. The blue line is the analytic chiral state dispersion $E_{\mathrm{chiral}}$ whereas the dashed white lines represent the bulk energy gap $E_{\mathrm{bulk}} = \pm 1$. The metal energy is $M = 4$.}
\label{fig:gammaspectrum}
\end{figure}
Another aspect captured by the continuum theory is the localization of bulk states at the interface to produce the emergent interface state, a typical feature of systems with boundary topologies \cite{andreevedgestates}. Simply put, the lowering of energy with tunnelling will give the bulk state's decay parameter a positive real part.
\section{Transport}
\label{sec:transport}
\subsection{Along the interface}
\label{sec:transportalong}
We will now turn to the transport consequences of the previously described theory and numerics. We begin by analyzing the current along the interface, travelling in the $x$-direction. We fix $k_z$ and analyze transport in 2D, summing over all momenta at the end.
At $\Delta = 0$, the conductance at the Weyl node should vanish due to the gap closure and subsequent absence of uni-directional current-carrying states. For $\Delta > 0$, however, the presence of interface states near the Weyl node and the resulting spectral asymmetry in $k_x$ (Fig.~\ref{fig:discreteplots}b.iii) suggests a jump in group velocity $\partial_{k_x} E$ across the Weyl point, leading to a nonzero conductance.
We verify our reasoning numerically via the Landauer-Büttiker formalism, where conductance along the interface $\mathcal{G}_{\parallel}$ is defined as \cite{datta_1995}
\begin{equation}
\mathcal{G}_{\parallel}(E) = \frac{e^2}{h} \mathrm{Tr}(G^{R} \Gamma_{l} G^A \Gamma_{r}).
\end{equation}
Here, $G^R$ is the usual retarded Green's function
\begin{equation}
\label{eq:advancedgreenfunction}
G^R = \left({E - H - \Sigma^R}\right)^{-1}
\end{equation}
with the lead self-energy $\Sigma^R = \Sigma^R_l + \Sigma^R_r$ giving the quasiparticles a finite lifetime. The $\Gamma_l$ and $\Gamma_r$ operators describe the loss of electrons into the left and right leads, respectively:
\begin{equation}
\Gamma_{l(r)} = i \left(\Sigma^R_{l(r)} - \Sigma^A_{l(r)}\right) = -2\, \mathrm{Im}(\Sigma^R_{l(r)}).
\end{equation}
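The recipe above can be sketched on a generic single-channel tight-binding chain with wide-band leads of scattering rate $1/\tau$ attached to its two ends; the chain and parameters below are illustrative stand-ins for the full WSM-metal sample.

```python
import numpy as np

def landauer_conductance(H, E=0.0, tau=1.0):
    """Two-terminal conductance, in units of e^2/h, of an open chain H.

    Leads are modelled by constant imaginary self-energies -i/(2 tau)
    on the first and last sites, as in the text.
    """
    n = H.shape[0]
    sig_l = np.zeros((n, n), dtype=complex)
    sig_r = np.zeros((n, n), dtype=complex)
    sig_l[0, 0] = -1j / (2 * tau)
    sig_r[n - 1, n - 1] = -1j / (2 * tau)
    GR = np.linalg.inv(E * np.eye(n) - H - sig_l - sig_r)
    GA = GR.conj().T
    gam_l = -2 * sig_l.imag        # Gamma = i (Sigma^R - Sigma^A)
    gam_r = -2 * sig_r.imag
    return np.trace(GR @ gam_l @ GA @ gam_r).real

# example: a uniform hopping chain probed at mid-band
L, t = 20, 1.0
H = -t * (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))
g = landauer_conductance(H, E=0.0)   # bounded by one conductance quantum
```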
In the simplest case, we place two leads, one on each of the $x$-boundaries, which span the entire sample in the $y$-direction. Since the leads are (the interface is) in the plane perpendicular to $x$ ($y$), our construction forces the sample to be open in both the $x$-direction and the $y$-direction while still remaining periodic in $z$. For any $k_z$, $\Sigma^R_l$ takes the form
\begin{equation}
\left(\Sigma^R_l\right)_{x,x';y,y'} = - \frac{i}{2\tau} \delta_{x0}\delta_{xx'} \delta_{yy'} ,
\end{equation}
where $\tau$ is the quasiparticle's lifetime. For its part, $\Sigma^R_r$ admits a similar form with $\delta_{x0}$ replaced by $\delta_{x,L_x-1}$. The tunnelling matrix $T$, while unchanged in the $y$-direction, now adopts a new diagonal sub-component in the $x$-direction:
\begin{equation}
T_{x,x';y,y'} = {\Delta} \delta_{xx'} \delta_{y0}\delta_{y'L_y} .
\end{equation}
For $\Delta = 0$ (Fig.~\ref{fig:conductancealong}, panels a and b, top row), the $e^2/h$ quantized conductance for $|k_z|<k_w$ can be understood in the context of the quantum anomalous Hall effect, treating each constant $k_z$ plane as a 2D Chern insulator with one-dimensional edge states carrying $\mathcal{G}_{\parallel} = e^2/h$ \cite{bhz}.
At $k_z = 0$ (Fig.~\ref{fig:conductancealong}a), the surface tunnelling localizes a bulk state to within the gap, allowing for both left- and right-moving carriers to produce a ``bump'' in the conductance. One can understand this by examining the juxtaposed spectrum. Above and below the bump energies (denoted by pink and green lines), there is only one left-moving state, whereas within it there are two left- and one right-mover. Without scattering between left and right movers, these states should contribute $2e^2/h$ to the conductance in one direction and $e^2/h$ in the other direction. On the other hand, scattering may reduce the conductance since a left and right mover can hybridize. In our case, the scattering is provided by the leads and therefore the resulting conductance is between one and two conductance quanta.
The effect of tunnelling is perhaps most pronounced at the Weyl node (Fig.~\ref{fig:conductancealong}b). As discussed, the bulk gap closes and the subsequent absence of interface states at $\Delta=0$ leads to zero conductance at zero energy. However, with tunnelling there are now two interface states in the spectrum: the left-moving chiral state and the right-moving emergent interface state. The former will terminate at an energy $E_{\mathrm{term}}$ (green line), the intersection of $E_{\mathrm{bulk}} = \left(t^2\sin^2{k_x}+h_z^2\right)^{\frac{1}{2}}$ with Eq.~\eqref{eq:implicitenergy}.
Below $E_{\mathrm{term}}$, there are no uni-directional carriers and the conductance is unchanged. For $E_{\mathrm{term}} < E < 0$, only the chiral state is present, and there is a conductance $e^2/h$. Note the deviation from $e^2/h$ due to the small number of bulk states present near zero energy. Above this range, both the chiral and the emergent interface state are present and move in opposite directions -- their net contribution vanishes (modulo scattering), and only bulk states contribute.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figures/conductancealong.png}
\caption{Conductance $\mathcal{G}_{\parallel}$ of the WSM-metal system along the interface at (a) $k_z = 0$ and (b) $k_z = \pi/2$ for $\Delta = 0$ (i, ii) and $\Delta = 2.3$ (iii, iv). To guide the physical intuition, the spectra are shown in the left panels (i, iii) and states are colored and shaded according to their $y$-position, with the relevant interface states in dark magenta and bulk states in faint colours. Energies relevant to the discussion in Sec.~\ref{sec:transportalong} are denoted by full horizontal lines.}
\label{fig:conductancealong}
\end{figure}
In experiments measuring transport between leads, the measured quantity is a sum over all $k_z$ momenta. We therefore define the total conductance along the interface,
\begin{equation}
\label{eq:summedconductancealong}
\mathcal{G}_{\parallel} (E) = \frac{1}{L_z} \sum_{k_z} \mathcal{G}_{\parallel}(E,k_z).
\end{equation}
Summing the quantized contributions over the $z$-projected Fermi arc of length $k_{\mathrm{arc}}^z$, the minimum of Eq.~\eqref{eq:summedconductancealong} is fixed (Fig.~\ref{fig:conductance_allkz}):
\begin{equation}
\label{eq:summedconductancealongmin}
\min{\mathcal{G}_{\parallel}} = \frac{e^2}{h} \frac{k_{\mathrm{arc}}^z}{2\pi}.
\end{equation}
To probe this signature, we vary the arc length along the $k_z$-direction, as shown in Fig.~\ref{fig:conductance_allkz}b. In the minimal model, this can be done by applying a strong Zeeman-like magnetic field $ b_z \mathbf{e}_{z}$ coupling to the spin degrees of freedom, bringing the arc length to $k_{\mathrm{arc}}^z \to 2 \arccos{(\gamma + b_z)}$, provided $b_z$ is small enough not to change the overall topological phase and that its orbital effects may be neglected.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/conductance_allkz.png}
\caption{(a) Conductance summed over all $k_z$ for $\Delta = 0$ (i) and $\Delta=2.3$ (ii). (b) Total conductance minimum \eqref{eq:summedconductancealongmin} as a function of bare Fermi arc length. For $\Delta = 0$ (black crosses), $k^z_{\mathrm{arc}} = 2\arccos{\gamma}$ and the minimum conductance scales with the Fermi arc length (modulo scattering). For $\Delta = 2.3$ (red triangles), $k_{\mathrm{arc}}^z > 2\arccos{\gamma}$ and the conductance minimum is therefore increased relative to $\Delta=0$.}
\label{fig:conductance_allkz}
\end{figure}
\subsection{Across the interface}
We complete our study with a simple analytical model for electron tunnelling across the interface. We set out to derive an expression for the conductance across the interface $\mathcal{G}_{\perp,\sigma} = d I_{\sigma} / d V$ of a particle polarized with spin $\sigma$. A detailed derivation is included in App.~\ref{sec:conductanceacross}.
We begin by expressing the current $I_{\sigma}$ of a particle with spin $\sigma$ in terms of the retarded correlation function $U_R^{\sigma\sigma'}$ \cite{PhysRevLett.8.316,Mahan2000,ryndyk_2016}:
\begin{equation}
\label{eq:currentmain}
I_{\sigma} = -{2e} \, \mathrm{Im} \sum_{\sigma'}U_R^{\sigma\sigma'}(-eV).
\end{equation}
${U}^{\sigma\sigma'}_R\left(-eV\right)$ is found by computing the Matsubara correlation function ${\mathcal{U}}^{\sigma\sigma'}(i \omega_n)$ and analytically continuing $i\omega_n \to -eV + i0^+$. At finite temperature $\beta^{-1}$, we have
\begin{equation}
\label{eq:matsubaracorrelation}
\mathcal{U}^{\sigma\sigma'}(i\omega_n) = \frac{1}{\beta}\sum_{\mathbf{k}\mathbf{q}} |T_{\mathbf{k}\mathbf{q}} |^2 \sum_{ip} g_w^{\sigma'\sigma}(\mathbf{k},ip-i\omega_n) g_m^{\sigma\sigma'}(\mathbf{q},ip),
\end{equation}
where $\mathbf{k}$ ($\mathbf{q}$) is the momentum in the WSM (metal), $T_{\mathbf{k}\mathbf{q}}$ is the tunnelling matrix element, $\omega_n$ ($p$) is a bosonic (fermionic) Matsubara frequency, and $\bm{g}_w$ ($\bm{g}_m$) is the Matsubara Green's function for the bare WSM (metal). Since states bound to the interface will not contribute to tunnelling across it, we may consider only bulk states. The bulk Green's functions $\bm{g}_{m,w}$ are therefore
\begin{subequations}
\begin{align}
\bm{g}_m(\mathbf{q},ip) &= \frac{1}{ip - \xi_{m}}, \\
\bm{g}_w(\mathbf{k},ip) &= \frac{ip + \mathcal{H}^{\mathrm{bulk}}_w}{(ip - \xi_{w})(ip + \xi_{w})},
\end{align}
\end{subequations}
with the WSM (metal) dispersion $\xi_w$ ($\xi_m$).
Setting $|T_{\mathbf{k}\mathbf{q}}|^2=\Delta^2 \delta(\mathbf{k}_{\perp}-\mathbf{q}_{\perp})$,
we perform the Matsubara frequency summation $\sum_{ip} \left({ip - \xi}\right)^{-1} = \beta n_F \left(\xi\right)$ \cite{Mahan2000}, where $n_F$ is the fermionic distribution, by splitting the denominator into partial fractions. Using $\mathrm{Im} \, (-eV + i0^{+} - \xi)^{-1} = -\delta(-eV-\xi)$,
Eq.~\eqref{eq:currentmain} becomes
\begin{align}
\label{eq:currentcleanmain}
I = { 2 e \Delta^2} &\sum_{\mathbf{k}_{\perp}, k_y,q_y} \Big\{{u}_{\mathbf{k}}^2 \left[n_F(\xi_{m}) - n_F(\xi_{w})\right] \delta(-eV-\xi_-) \nonumber\\&+ {v}_{\mathbf{k}}^2\left[n_F(\xi_{m}) - n_F(-\xi_{w})\right] \delta(-eV-\xi_+)\Big\},
\end{align}
where $\xi_{\pm} = \xi_m \pm \xi_w$ and
\begin{subequations}
\label{eq:coherencefactorsmain}
\begin{align}
{u}^2_{\mathbf{k}} &= \frac{1}{2}\left(1 + t\sin{k_x}/{\xi_w} \right),\\
{v}^2_{\mathbf{k}} &= \frac{1}{2}\left(1 - t\sin{k_x}/{\xi_w} \right).
\end{align}
\end{subequations}
Note that we have chosen the quantization axis in the $x$-direction for simplicity. More generally, the second term in Eqs.~\eqref{eq:coherencefactorsmain} is an odd function of $k_x$, $k_z$, and $\xi_w$ and will vanish when integrated over, leaving the current spin-independent.
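For reference, the frequency sum used above is the standard two-pole identity (with bosonic $\omega_n$, so that $n_F(\xi_w + i\omega_n) = n_F(\xi_w)$):

```latex
\begin{equation*}
\frac{1}{\beta}\sum_{ip} \frac{1}{\left(ip - i\omega_n - \xi_w\right)\left(ip - \xi_m\right)} = \frac{n_F(\xi_m) - n_F(\xi_w)}{\xi_m - \xi_w - i\omega_n},
\end{equation*}
```

whose imaginary part, after the continuation $i\omega_n \to -eV + i0^{+}$, supplies the energy-conserving delta functions $\delta(-eV - \xi_{\pm})$ in Eq.~\eqref{eq:currentcleanmain}.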
To proceed, we imagine placing the metal band's Fermi level $\mu_m$ in the WSM's upper band, well above the parabolic band minimum. At low energies,
\begin{subequations}
\begin{equation}
\xi_w = v\left(\mathbf{k}_{\perp}^2 + k_y^2\right)^{\frac{1}{2}}
\end{equation}
and
\begin{equation}
\xi_m = {\mu}_m + \frac{1}{m}\left(2m \tilde{\mu} - \mathbf{k}_{\perp}^2\right)^{\frac{1}{2}} q_y,
\end{equation}
\end{subequations}
where $m = 1/(2t_m)$ and $\tilde{\mu} = \mu + \mu_m + 6t_m$ (the lattice constant is still $a=1$).
The latter expression is found by expanding near $\xi_m$'s intercept with $\mu_m$ along $q_y$, the metal's $y$-momentum. We further consider a small positive applied voltage such that particles tunnel from the upper WSM band to the metal. Thus, only the first term of Eq.~\eqref{eq:currentcleanmain} contributes. Replacing the sums by integrals, changing variables from $k_y$ to $\xi_w$ and $q_y$ to $\xi_m$, and re-inserting $\hbar$, the current is now
\begin{align}
\label{eq:currentsimpleformmain}
I &= \frac{e}{h} \frac{m \Delta^2}{2\pi v^2} \int_{0}^{eV} d\xi_w \int \frac{d^2\mathbf{k}_{\perp}}{(2\pi)^2} \xi_w u^2_{\mathbf{k}} \nonumber \\
&\times \frac{\theta(\xi_w - v|\mathbf{k}_{\perp}|)\theta(2m\tilde{\mu} - \mathbf{k}_{\perp}^2)}{\sqrt{\xi_w^2/v^2 - \mathbf{k}_{\perp}^2}\sqrt{2m\tilde{\mu} - \mathbf{k}_{\perp}^2}} .
\end{align}
Note that we have applied the low-temperature limit $n_F(\xi) = \theta(-\xi)$.
The integral over $d^2\mathbf{k}_{\perp}$ can be done analytically, yielding the conductance across the interface:
\begin{equation}
\label{eq:conductanceacross}
\mathcal{G}_{\perp}\left(eV\right) = \frac{e^2}{h} \frac{m \Delta^2}{\left(2\pi\right)^2 v^2} eV \log\left|\frac{\varepsilon + eV }{\varepsilon - eV }\right|.
\end{equation}
For $eV \ll \varepsilon \equiv \sqrt{2 m v^2 \tilde{\mu} }$, the leading-order term is quadratic in $V$:
\begin{equation}
\label{eq:expandconductance}
\mathcal{G}_{\perp}\left(eV\right) \approx \frac{e^2}{h} \frac{2 m \Delta^2}{\left(2\pi\right)^2 v^2 \varepsilon} \left(eV\right)^2 .
\end{equation}
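The small-voltage expansion can be checked numerically; the sketch below (our own check, with the overall prefactor set to unity and $\varepsilon = 1$) confirms that the $V$-dependence of Eq.~\eqref{eq:conductanceacross} reduces to the quadratic form of Eq.~\eqref{eq:expandconductance} as $eV \to 0$:

```python
import math

eps = 1.0  # energy scale sqrt(2 m v^2 mu~), set to 1 here

def g_exact(ev):
    """eV * log|(eps+eV)/(eps-eV)|, the V-dependence of Eq. (conductanceacross)."""
    return ev * math.log(abs((eps + ev) / (eps - ev)))

def g_quad(ev):
    """Leading quadratic term, as in Eq. (expandconductance)."""
    return 2.0 * ev**2 / eps

for ev in [1e-1, 1e-2, 1e-3]:
    ratio = g_exact(ev) / g_quad(ev)
    # ratio -> 1 as eV -> 0; the next correction is O((eV/eps)^2)
    assert abs(ratio - 1.0) < (ev / eps)**2
print("quadratic expansion verified")
```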
Eq.~\eqref{eq:expandconductance} confirms that tunnelling measurements with featureless metals reveal the density of states at the tunnelling energy: the three-dimensional WSM's linear dispersion yields a density of states proportional to $E^2$.
\section{Conclusion and discussion}
\label{sec:conclusion}
Using both lattice and continuum frameworks, we have described the behaviour of a $\mathcal{T}$-broken WSM's interface in proximity to a non-magnetic band. When coupled to this band via non-magnetic surface tunnelling, the WSM's chiral state is lowered in energy and, together with a previously delocalized bulk state, produces a noticeable spin-dependent asymmetry in the interface spectrum across the Weyl nodes.
To model this phenomenon, we derived an infinite lattice theory of the interface and compared it to numerical results from a finite lattice model.
We found that the infinite lattice theory accurately described the behaviour of the chiral state in the entire Brillouin zone (BZ), from its energy asymmetry to its spin canting at the interface. The localization of bulk states and the curving of the Fermi arc were also captured by the infinite lattice theory.
To build intuition, we also derived a simpler continuum theory of interface states which captured the physics near $\mathbf{k}_{\perp,0}$.
Using the Landauer-Büttiker formalism, we calculated the transport of Weyl electrons travelling along the interface. Due to the asymmetry and increased Fermi arc length which allows for the presence of interface states beyond $\mathbf{k}^{\pm}_{\perp,w}$, we found a quantized increase in conductance per $k_z$ at the Weyl nodes due to tunnelling. We proposed a possible probe of this increase by relating the minimum in total conductance to the Fermi arc length.
Finally, across the interface, the conductance reproduces a simple electron tunnelling experiment, revealing the WSM's density of states.
The results obtained herein may also be understood in the context of a pseudo-magnetic theory whereby the Weyl node separation plays the role of a magnetic gauge field \cite{grushin,grushinlorentz}.
Alternatively, one may view tunnelling as creating a finite potential well. As tunnelling links more sites on either side of the interface, the well widens and the number of bound states increases.
Indeed, \citeauthor{guidingdirac} consider a Dirac cone (intuitively thought of as two Weyl cones of opposite chirality) under a confining potential well and find interface spectra qualitatively similar to those shown herein, albeit with symmetric $k_x$-spectra \cite{guidingdirac}.
Though this toy model described the minimal case of two Weyl nodes in a magnetic WSM, these nodes always come in pairs connected by Fermi arcs. It is therefore reasonable to expect that the results obtained herein will still manifest themselves in more complicated systems with, e.g., broken inversion symmetry and a greater number of Fermi arcs. Finally, the asymmetry is resolved if one also accounts for the Hamiltonian's $\mathcal{T}$-reversed partner $\sigma_y H_w^{*}\left(-\mathbf{k}\right)\sigma_y$, instead breaking inversion symmetry.
\begin{acknowledgments}
We would like to thank C.-T. Chen, A. Grushin, and B. Levitan for helpful discussions. LG acknowledges the hospitality of the Houches School of Physics and financial support from the NSERC CGS-M scholarship. TPB acknowledges funding from NSERC and FRQNT.
\end{acknowledgments}
\section{Introduction}
Kaon-nucleon collisions allow one to address many interesting problems
in nuclear and hadron physics \cite{DW}. (By ``kaons" we refer to the
K$^+=u\bar s$ and
K$^0=d\bar s$, generically K, as distinct from the $\bar {\rm K}$ antikaons
K$^-=s\bar u$ and
$\bar {\rm K}^0=s\bar d$.)
Three familiar
examples which we shall discuss below
are 1) the origins of nonresonant ``nuclear"
forces in a system distinct from NN,
2) nuclear structure physics, using kaons as
weakly scattered probes, and
3) searches for possible exotic Z$^*$ baryon resonances which couple
directly to KN. More recently it has become clear that an
understanding of KN scattering in nuclear matter is
important in other areas, such as the interpretation of strangeness production
in nuclear collisions and in two-kaon correlation measurements \cite{had91}.
Elastic KN scattering is a natural
system for the study of nonresonant
nuclear forces.
Since the valence kaon wavefunction contains an $\bar s$ antiquark which
cannot annihilate against the nonstrange nucleon state, direct
production of conventional baryon resonances is excluded.
KN scattering is further simplified by the absence of
one pion exchange, so one can study the nonresonant, non-OPE
part of hadron scattering in relative isolation. Theoretical studies of KN
nuclear forces are especially appropriate because there is already
considerable experimental information on the elastic amplitudes
and two-body inelastic reactions such as KN$\to$ K$^*$N and KN$\to$
K$\Delta$ \cite{DW,bland1,bland2,HARW}.
These experimental amplitudes provide stringent tests for
models of hadronic interactions.
The dominant S-wave elastic
phase shifts are moderately well established, and the higher partial waves
up to L=4 have been determined or estimated \cite{HARW}.
The basic features of the elastic
reaction are a strong repulsion in the I=1 S-wave, a weaker repulsion in the
I=0 S-wave, and an important spin-orbit interaction
which is evident in the P-waves. The important
low energy behavior of the I=0 S-wave,
in particular the scattering length, is unfortunately not yet very well known.
The experimental situation should improve considerably with the development of
new hadronic facilities such as DA$\Phi$NE and KAON
\cite{advert,lf}.
KN scattering also has applications in nuclear
physics; since the kaon-nucleon cross section is relatively small, kaon beams
can be used as probes of nuclear structure. It would obviously be useful to
understand the mechanism and properties of the kaon-nucleon interaction
for this application.
In view of this application one
topic in this paper will be the derivation of effective low energy
KN potentials from the nonrelativistic quark potential model.
Another reason for interest in KN collisions is the possibility of
producing flavor-exotic
Z$^*$ baryon resonances. If discovered, these might be resonances with
the quark valence structure $q^4\bar s$ \cite{mulders},
where $q=$ $u$ or $d$.
Such multiquark hadrons were widely predicted
in the early days of the quark model \cite{multiq},
but it now appears that
multiquark basis states
usually do not support resonances, due to the ``fall apart" effect
\cite{fall,nirev}.
The known exceptions are deuteronlike ``molecule" states of hadron pairs,
which should perhaps be classified as unusual nuclear species.
(Nuclei themselves are excellent examples of the
tendency of multiquark systems to separate into hadronic molecules.)
In the meson-meson
sector two K$\bar {\rm K}$ molecule states are reasonably
well established \cite{WI}, and there are
several other meson-meson candidates \cite{molec}.
In the antikaon-nucleon sector
the $\Lambda(1405)$ is an obvious candidate $\bar {\rm K}$N molecule,
and there presumably are other molecule states in channels with attractive
interactions.
Both the elastic reaction KN$\to $KN and inelastic processes such as KN$\to $
K$^*$N and KN$\to $ K$\Delta$ can be studied for evidence of
exotic Z$^*$ baryon
resonances. With a realistic model
of hadronic interactions we might
reasonably expect to predict the quantum numbers of
exotic meson-baryon molecular
bound states, should these exist.
In this paper we apply the ``quark Born diagram" formalism to KN scattering. In
this approach we assume conventional nonrelativistic quark model wavefunctions
for the asymptotic hadrons, and calculate the Hamiltonian matrix element for
scattering due to a single interaction between constituents in different
incident hadrons.
To form color singlet final states at lowest order one must
then exchange constituents. The full Born amplitude is obtained by summing over
all such processes coherently.
(Similar constituent exchange mechanisms
have been proposed for high energy hadron scattering \cite{CEX}, and there is
strong experimental evidence in favor of this mechanism from large-$t$
exclusive reactions \cite{Baller}.)
This nonrelativistic Hamiltonian matrix element
is then combined with relativistic phase space and kinematics to give results
for differential cross sections, partial wave amplitudes and other scattering
observables. In previous work we derived the elastic scattering amplitudes for
I=2 $\pi\pi$ \cite{BS},
I=3/2 K$\pi$ \cite{BSW} and I=1 KK \cite{BS}.
(These cases were chosen because they are free of valence $q\bar q$
annihilation processes, which are known to be important if allowed.) We found
good agreement with experimental $\pi\pi$ and K$\pi$ S-wave phase shifts given
conventional quark model parameters. We have also applied similar techniques to
pseudoscalar-vector and vector-vector meson channels \cite{Swan}, and the
results may have important implications for meson spectroscopy \cite{molec}. In
Appendix C of \cite{BS} we presented a diagrammatic representation of these
techniques, with associated ``Feynman rules" for the scattering diagrams. KN
elastic scattering is also annihilation free and affords a nontrivial test of
the quark Born formalism.
KN elastic scattering has previously been the subject of numerous theoretical
investigations. Meson exchange models have been applied in several studies
\cite{mesons}, but these are difficult to justify fundamentally because the
range of heavier meson exchange forces ($\approx 0.2$ fm) is much smaller than
the minimum possible interhadron distance for two distinct hadrons ($\approx 1$
fm) \cite{nirev}.
These models typically have many free parameters, which are not
well established experimentally and are fitted to the data. Thus one is in
effect simply parametrizing experiment. This type of model may be of
theoretical interest as a parametrization of more fundamental scattering
mechanisms which operate at the quark and gluon level, as it may be possible to
relate the predictions of these different approaches.
A quark and gluon approach to scattering using the P-matrix and bag model
wavefunctions was proposed by Jaffe and Low \cite{JL}. They suggested
interpreting the multiquark clusters of the bag model not as resonances, but
instead as the short distance parts of hadron-hadron scattering states. In
principal this approach can be used to predict phase shifts, but in practice it
has mainly been used to interpret experimental phase shifts in terms of
P-matrix poles. This approach has been followed for KN by Roiesnal \cite{Roie},
who concluded that the KN data could indeed be interpreted in terms of poles
approximately at the energies predicted by the bag model, but that the pole
residues (coupling strengths to asymptotic KN channels) did not agree well with
predictions. A more recent bag model calculation of KN scattering by Veit,
Thomas and Jennings \cite{VTJ} used the cloudy bag model, which combines
quark fields (in the baryon) with fundamental pseudoscalar meson fields
in an effective lagrangian. This composite model leads to an I=1
S-wave phase shift and a scattering length
which are very similar to our result, but their I=0 phase shift is much smaller
than experiment. Although this cloudy bag
approach gives promising numerical
results, it does not provide us with an understanding of the scattering
mechanism at the quark and gluon level.
Studies of the dominant S-wave
KN scattering amplitudes in terms of quark model wavefunctions
and quark-gluon interactions
have been published by Bender and Dosch \cite{BD} (adiabatic approach),
Bender, Dosch, Pirner and Kruse \cite{BDPK} (variational generator
coordinate method, GCM) and Campbell and Robson \cite{CR} (resonating group
method, RGM).
The large spin-orbit forces evident in the KN
P-wave data have
also been studied using similar quark model techniques,
first qualitatively by
Pirner and Povh \cite{PP} and later in detail by Mukhopadhyay and Pirner
\cite{MP} (using GCM).
The assumptions regarding dynamics,
the scattering mechanism, quark model wavefunctions
and the parameters used in these calculations are very similar to
our assumptions in this paper.
The most important differences are that 1) our techniques
are perturbative and allow analytic solution, and
2) we disagree about
the size of the OGE contribution to KN scattering. Specifically,
we find that OGE alone suffices to explain the
observed I=1 KN scattering length, whereas Bender {\it et al.} \cite{BDPK}
conclude that OGE is
too small, and that a Pauli blocking effect is dominant in I=1.
Campbell and Robson \cite{CR}
similarly found that the experimental I=1 phase shift
was larger than their predictions, which were based on generalizations of
Gaussian wavefunctions and a full OGE and confining interaction.
\section{Calculation of KN and related scattering amplitudes}
\noindent
{\it a) Hamiltonian and hadron states}
\vskip 0.2cm
Our technique involves a Born order calculation of the matrix element
of the Hamiltonian
between asymptotic hadron states in the nonrelativistic quark model. In the
KN case the dominant interaction was previously found by
Bender {\it et al.} \cite{BDPK} to be the
spin-spin ``color hyperfine" term. A similar conclusion has been
reached for the NN interaction \cite{nirev,hyperf}.
Here we shall adopt
this approximation and neglect the
other OGE and confining terms. Thus, our scattering amplitude is proportional
to the matrix element of
\begin{equation}
H_{scat} = \sum_{a,i<j} \;
\bigg{[}
-{8 \pi \alpha_s\over 3 m_i m_j}\; \delta(\vec r_{ij})
\bigg{]}
\;
\bigg{[}
\vec S_i \cdot \vec S_j
\bigg{]}
\;
\bigg{[}
{\cal F}^a_i \cdot {\cal F}^a_j
\bigg{]}
\end{equation}
\noindent
between asymptotic KN states.
(${\cal F}^a_i$ is the color matrix for quark or antiquark $i$, which is
$\lambda^a/2$ for quarks and $-(\lambda^T)^a/2$ for antiquarks.)
Although we shall quote results for arbitrary
asymptotic hadron wavefunctions, we shall specialize to Gaussian wavefunctions
for our numerical results, as these allow closed form derivation of scattering
observables. Our momentum space Gaussian wavefunctions for the kaon and
nucleon are conventional quark model forms,
\begin{equation}
\phi_{kaon}(\vec p_{rel})
= {1\over \pi^{3/4} \beta^{3/2} }
\exp
\bigg\{
-{\vec p_{rel}^{\; 2} \over 8\beta^2}\; \;
\bigg\}
\end{equation}
where
\begin{equation}
\vec p_{rel} \equiv
{(m_{\bar q}\vec p_q - m_q\vec p_{\bar q})
\over
(m_q + m_{\bar q})/2
}
\ ,
\end{equation}
and
\begin{equation}
\phi_{nucleon}(\vec p_1,\vec p_2,\vec p_3) = {3^{3/4} \over \pi^{3/2}
\alpha^3 }
\exp
\bigg\{
-
{
(
\vec p_1^{\, 2}
+\vec p_2^{\, 2}
+\vec p_3^{\, 2}
- \vec p_1 \cdot \vec p_2
- \vec p_2 \cdot \vec p_3
- \vec p_3 \cdot \vec p_1
)
\over
3 \alpha^2
}
\bigg\} \ .
\end{equation}
The parameters $\alpha$ and $\beta$ are typically
found to be $\approx 0.3$--$0.4$ GeV in hadron phenomenology.
These are relative momentum wavefunctions, and have an implicit constraint that
the constituent momenta add to the hadron momentum. In the
full momentum space
wavefunction there is an overall delta function that
imposes this constraint;
\begin{equation}
\Phi_{kaon}(\vec p_q,\vec p_{\bar q};\vec P_{tot}) =
\phi_{kaon}(\vec p_{rel}) \;
\delta(\vec P_{tot} - \vec p_q - \vec p_{\bar q} ) \ ,
\end{equation}
\begin{equation}
\Phi_{nucleon}(\vec p_1,\vec p_2,\vec p_3;\vec P_{tot}) =
\phi_{nucleon}(\vec p_1,\vec p_2,\vec p_3) \;
\delta(\vec P_{tot} - \vec p_1 - \vec p_2 - \vec p_3 ) \ .
\end{equation}
The normalizations are
\begin{displaymath}
\langle \Phi_{kaon}(\vec P'_{tot}) | \Phi_{kaon}(\vec P_{tot}) \rangle
\phantom{yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy}
\end{displaymath}
\begin{equation}
= \int \!\! \int \!\! \int \!\! \int \,
d\vec p \, d\vec{\bar p} \,
d\vec p\, ' \, d\vec{\bar p }\, ' \,
\Phi^*_{kaon}(\vec p\, ',\vec {\bar p}\, ';\vec P'_{tot})
\Phi_{kaon}(\vec p,\vec {\bar p};\vec P_{tot})
=
\delta(\vec P_{tot} - \vec P'_{tot})
\end{equation}
and
\begin{displaymath}
\langle \Phi_{nucleon}(\vec P'_{tot}) | \Phi_{nucleon}(\vec P_{tot}) \rangle
\phantom{yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy}
\end{displaymath}
\begin{displaymath}
= \int \!\! \int \!\! \int \!\! \int \!\! \int \!\! \int \,
d\vec p_1 \, d\vec p_2 \, d\vec p_3 \,
d\vec p_1\, ' \, d\vec p_2\, ' \, d\vec p_3\, ' \,
\Phi^*_{nucleon}(\vec p_1\, ',\vec p_2\, ',\vec p_3\, ';\vec P'_{tot})
\Phi_{nucleon}(\vec p_1,\vec p_2,\vec p_3;\vec P_{tot})
\end{displaymath}
\begin{equation}
= \delta(\vec P_{tot} - \vec P'_{tot}) \ .
\end{equation}
Since these state normalizations are identical to those used in our previous
study of K$\pi$ scattering \cite{BSW} we can use the relations between
amplitudes and scattering observables given in that reference.
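As a cross-check of the normalization (7) (our own sketch, with an illustrative value of $\beta$): integrating the delta functions leaves a single integral over $\vec p_q$, with $d^3p_q = d^3p_{rel}/8$ at fixed $\vec P_{tot}$, so Eq.~(7) requires $\int |\phi_{kaon}|^2\, d^3p_{rel} = 8$. A simple radial integration of the Gaussian (2) confirms this:

```python
import math

beta = 0.35  # GeV; a typical quark-model value (illustrative)

def phi_kaon_sq(p):
    """|phi_kaon(p_rel)|^2 for the Gaussian wavefunction, Eq. (2)."""
    norm = 1.0 / (math.pi**0.75 * beta**1.5)
    return (norm * math.exp(-p * p / (8.0 * beta**2)))**2

# int |phi|^2 d^3 p_rel via a midpoint radial integration
n, pmax = 50000, 20.0 * beta
dp = pmax / n
total = 0.0
for i in range(n):
    p = (i + 0.5) * dp
    total += 4.0 * math.pi * p * p * phi_kaon_sq(p) * dp

# with the Jacobian d^3 p_q = d^3 p_rel / 8, <kaon|kaon> = total / 8 = 1
assert abs(total - 8.0) < 1e-4
print("kaon normalization:", total / 8.0)
```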
The color wavefunctions for the asymptotic hadrons
are the familiar color singlet states
\begin{equation}
|meson\rangle = \sum_{\imath, \bar \imath=1,3}{1\over \sqrt{3}} \;
\delta_{\imath \bar \imath} \
|\imath \bar \imath\rangle
\end{equation}
and
\begin{equation}
|baryon\rangle = \sum_{i,j,k=1,3}{1\over \sqrt{6}} \;
\epsilon_{ijk} \
|ijk\rangle \ .
\end{equation}
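The color factors implied by these singlet wavefunctions and the convention ${\cal F}^a = \lambda^a/2$ for quarks, $-(\lambda^T)^a/2$ for antiquarks, can be evaluated explicitly; the standard results are $\langle {\cal F}_q \cdot {\cal F}_{\bar q}\rangle = -4/3$ in the meson and $\langle {\cal F}_i \cdot {\cal F}_j\rangle = -2/3$ for a quark pair in the baryon. A brute-force check of our own with the Gell-Mann matrices:

```python
import math

s = 1.0 / math.sqrt(3.0)
# the eight Gell-Mann matrices lambda^a
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[s, 0, 0], [0, s, 0], [0, 0, -2 * s]],
]

def meson_color_factor():
    """<F_q . F_qbar> in the singlet (1/sqrt3) sum_i |i ibar>."""
    val = 0.0
    for a in range(8):
        for i in range(3):
            for i2 in range(3):
                # singlet amplitudes force antiquark indices m = i, m2 = i2;
                # F^a for the antiquark is -(lambda^a)^T / 2
                val += ((1.0 / 3.0) * 0.5 * lam[a][i2][i]
                        * (-0.5) * lam[a][i][i2]).real
    return val

def eps(i, j, k):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return ((i - j) * (j - k) * (k - i)) // 2

def baryon_pair_color_factor():
    """<F_1 . F_2> for a quark pair in the singlet eps_{ijk}/sqrt6 |ijk>."""
    val = 0.0
    for a in range(8):
        for k in range(3):
            for i in range(3):
                for j in range(3):
                    for i2 in range(3):
                        for j2 in range(3):
                            amp = eps(i, j, k) * eps(i2, j2, k) / 6.0
                            val += (amp * 0.25 * lam[a][i2][i]
                                    * lam[a][j2][j]).real
    return val

assert abs(meson_color_factor() + 4.0 / 3.0) < 1e-12
assert abs(baryon_pair_color_factor() + 2.0 / 3.0) < 1e-12
print("meson: -4/3, baryon pair: -2/3 confirmed")
```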
Our spin-flavor states for the meson and baryon are standard
SU(6) states, but we have found it convenient to write the baryon states
in an unconventional manner, as the usual quark model conventions are
unwieldy for our purposes.
First, to establish our notation, the spin-flavor
K$^+$ state is
\begin{equation}
|{\rm K}^+\rangle =
{1\over \sqrt{2}}
\bigg(
|u_+ \bar s_- \rangle
-
|u_- \bar s_+ \rangle
\bigg) \ .
\end{equation}
For quark model baryon states it is conventional to assign each
quark a fixed location in the state vector, as though identical quarks were
distinguishable fermions. One then explicitly symmetrizes this state.
Thus for example one writes the normalized $\Delta^+(S_z=+3/2)$ state as
\begin{equation}
|\Delta^+(+3/2)\rangle = {1\over \sqrt{3} }
\bigg(
|u_+ u_+ d_+ \rangle
+ |u_+ d_+ u_+ \rangle
+ |d_+ u_+ u_+ \rangle
\bigg)
\end{equation}
and treats each basis state as orthogonal. Note however that this
is not the usual
way to represent multifermion states. In standard
field theoretic usage each of these basis states is identical to the
others, to within an overall phase. In this language the
normalized $\Delta^+(+3/2)$
state is simply
\begin{equation}
|\Delta^+(+3/2)\rangle = {1\over \sqrt{2} } \;
|u_+ u_+ d_+ \rangle \ ,
\end{equation}
which we could equally well write as
$ |u_+ d_+ u_+ \rangle /\sqrt{2} $ or
$ |d_+ u_+ u_+ \rangle /\sqrt{2} $. The advantage of using field theory
conventions becomes clear in calculating nucleon matrix elements. For example,
the usual quark model proton state is
\begin{eqnarray}
| P(+1/2) \rangle =&
\bigg( &2|u_+ u_+ d_- \rangle
- |u_+ u_- d_+ \rangle
- |u_- u_+ d_+ \rangle
\nonumber \\
&+
&2|u_+ d_- u_+ \rangle - |u_+ d_+ u_- \rangle - |u_- d_+ u_+ \rangle
\nonumber \\
&+
&2|d_- u_+ u_+ \rangle - |d_+ u_+ u_- \rangle - |d_+ u_- u_+ \rangle
\bigg) \bigg{/} \sqrt{18} \ ,
\end{eqnarray}
and in comparison this state in field theory conventions is
\begin{equation}
| P(+1/2) \rangle =
\sqrt{2\over 3}\ \bigg\{ {|u_+ u_+ d_- \rangle \over \sqrt{2}} \bigg\}
-
\sqrt{1\over 3}\ |u_+ u_- d_+ \rangle \ .
\end{equation}
Use of the latter form, with all permutations of quark entries allowed in
matrix elements, reduces the number of P$\to $P terms from 81 (many of which
are zero) to 4. Of course the results are identical, as these are just
different conventions for the same state.
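The equivalence of the two conventions can be illustrated with a short script of our own: treating the distinguishable-quark basis states of Eq.~(14) as orthonormal and collecting the nine terms by quark content reproduces the probabilities $2/3$ and $1/3$ carried by the two terms of Eq.~(15).

```python
import math
from collections import Counter

r18 = math.sqrt(18.0)
# the nine terms of the conventional proton state, Eq. (14)
terms = [
    (2/r18, ("u+","u+","d-")), (-1/r18, ("u+","u-","d+")), (-1/r18, ("u-","u+","d+")),
    (2/r18, ("u+","d-","u+")), (-1/r18, ("u+","d+","u-")), (-1/r18, ("u-","d+","u+")),
    (2/r18, ("d-","u+","u+")), (-1/r18, ("d+","u+","u-")), (-1/r18, ("d+","u-","u+")),
]

# distinguishable-quark basis states are orthonormal
assert abs(sum(c * c for c, _ in terms) - 1.0) < 1e-12

# group permutations of the same quark content together
prob = Counter()
for c, labels in terms:
    prob[frozenset(Counter(labels).items())] += c * c
p_uud_minus = prob[frozenset(Counter(("u+", "u+", "d-")).items())]
p_uud_mixed = prob[frozenset(Counter(("u+", "u-", "d+")).items())]
# these match the squared coefficients sqrt(2/3), sqrt(1/3) of Eq. (15)
assert abs(p_uud_minus - 2.0/3.0) < 1e-12
assert abs(p_uud_mixed - 1.0/3.0) < 1e-12
print("proton state conventions agree")
```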
\vskip 0.2cm
\noindent
{\it b) Enumeration of quark line diagrams for KN}
\vskip 0.2cm
Now we consider KN scattering.
As explained in reference \cite{BS}, we begin by determining the matrix element
of the scattering Hamiltonian (1). First we factor out the overall
momentum conserving delta function and then
derive the remaining matrix element,
which we call
$h_{fi}$;
\begin{equation}
{}_f\langle KN | H_{scat} | KN \rangle_i \equiv h_{fi} \ \delta(\vec P_f -
\vec P_i) \ .
\end{equation}
We will discuss one
part of the calculation in detail to explain the techniques, and
then simply quote the full result. Specializing
to the spin up I=1 case K$^+$P(+1/2)$\to$K$^+$P(+1/2),
we require the matrix element
of the scattering hamiltonian (1) between initial and final K$^+$P states
with color and spin-flavor
wavefunctions given by (9,10) and (11,15) respectively.
Since the kaon and proton states (11) and (15)
are each the sum of two terms, the
full amplitude for K$^+$P(+1/2)$\to$K$^+$P(+1/2) is a weighted
sum of 16 subamplitudes.
We shall consider the subamplitude for
$|u_+\bar s_-\rangle \{|u_+ u_+ d_- \rangle / \sqrt{2} \}
\to
|u_+\bar s_-\rangle |u_+ u_- d_+ \rangle $, which we call
$h_{fi}^{e.g.}$, in detail for illustration.
We begin by constructing all allowed quark line diagrams and their
associated combinatoric
factors. First we arrange the initial and final states with their
normalizations on a generic scattering diagram,
\setlength{\unitlength}{2.2pt}
\begin{picture}(200,70)(0,-5)
\put(30,28) {\makebox(0,0)[1]{ $h_{fi}^{e.g.}$ } }
\put(45,28) {\makebox(0,0)[1]{ = } }
\put(180,28) {\makebox(0,0)[1]{(17)} }
\put(65,05) {\makebox(0,0)[1]{${1\over \sqrt{2}}$} }
\put(80,55) {\makebox(0,0)[1]{$u_+ $} }
\put(80,45) {\makebox(0,0)[1]{$\bar s_-$} }
\put(80,15) {\makebox(0,0)[1]{$u_+ $} }
\put(80,05) {\makebox(0,0)[1]{$u_+ $} }
\put(80,-5) {\makebox(0,0)[1]{$d_-$} }
\put(160,55) {\makebox(0,0)[1]{$u_+$} }
\put(160,45) {\makebox(0,0)[1]{$\bar s_-$} }
\put(160,15) {\makebox(0,0)[1]{$u_+$} }
\put(160,05) {\makebox(0,0)[1]{$u_-$} }
\put(160,-5) {\makebox(0,0)[1]{$d_+$} }
\put(170,-5) {\makebox(0,0)[1]{.} }
\put(78,0){
\begin{picture}(75,60)(0,0)
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(5,55){\line(1,0){10}}
\put(5,45){\line(1,0){10}}
\put(5,15){\line(1,0){10}}
\put(5,5){\line(1,0){10}}
\put(5,-5){\line(1,0){10}}
\put(65,55){\line(1,0){10}}
\put(65,45){\line(1,0){10}}
\put(65,15){\line(1,0){10}}
\put(65,05){\line(1,0){10}}
\put(65,-5){\line(1,0){10}}
\put(15,-10){\framebox(50,70){}}
\end{picture}
}
\end{picture}
\vskip 1cm
Now we connect the initial and final lines in all possible ways consistent
with flavor conservation.
For the $d$ quark and the $\bar s$ antiquark this choice
is unique. For the final meson's $u$ quark however there are two
choices for which initial baryon's quark it originates from.
Similarly the initial meson's $u$ quark can attach to either of two
final baryon $u$ quarks. Thus we have four quark line diagrams.
We may immediately simplify the diagrams;
since the baryon wavefunctions are symmetric,
we may permute any two initial or final baryon lines and obtain an equivalent
diagram. We use this symmetry to reduce all diagrams to a ``standard form"
in which only the meson's quark and the upper baryon quark are exchanged.
The two choices for the initial baryon's spin up
$u_+$ quark are thus equivalent,
and contribute an overall combinatoric factor of two. The final baryon's quarks
however give inequivalent diagrams, one being nonflip ($u_+(K)\to u_+(P)$)
and the other spin flip ($u_+(K)\to u_-(P)$).
(No polarization selection rules are being imposed yet,
only flavor conservation.)
Thus our amplitude leads to the line diagrams
\vskip 1cm
\setlength{\unitlength}{1.6pt}
\begin{picture}(320,70)(30,-5)
\put(22,28) {\makebox(0,0)[1]{ $h_{fi}^{e.g.}$ } }
\put(31,28) {\makebox(0,0)[1]{ = } }
\put(44,28) {\makebox(0,0)[1]{ ${1\over \sqrt{2}}\cdot 2 \cdot $ }}
\put(160,28) {\makebox(0,0)[1]{ $+$ }}
\put(280,28) {\makebox(0,0)[1]{(18)} }
\put(65,0){
\begin{picture}(75,60)(0,0)
\put(-10,-20){\line(0,1){90}}
\put(-10,-20){\line(1,0){10}}
\put(-10,70){\line(1,0){10}}
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(02,54) {\makebox(0,0)[1]{$u_+ $} }
\put(02,44) {\makebox(0,0)[1]{$\bar s_-$} }
\put(02,14) {\makebox(0,0)[1]{$u_+ $} }
\put(02,04) {\makebox(0,0)[1]{$u_+ $} }
\put(02,-6) {\makebox(0,0)[1]{$d_- $} }
\put(80,54) {\makebox(0,0)[1]{$u_+ $} }
\put(80,44) {\makebox(0,0)[1]{$\bar s_-$} }
\put(80,14) {\makebox(0,0)[1]{$u_+ $} }
\put(80,04) {\makebox(0,0)[1]{$u_- $} }
\put(80,-6) {\makebox(0,0)[1]{$d_+ $} }
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\end{picture}
}
\put(170,0){
\begin{picture}(75,60)(0,0)
\put(90,-20){\line(0,1){90}}
\put(90,-20){\line(-1,0){10}}
\put(90,70){\line(-1,0){10}}
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(02,54) {\makebox(0,0)[1]{$u_+ $} }
\put(02,44) {\makebox(0,0)[1]{$\bar s_-$} }
\put(02,14) {\makebox(0,0)[1]{$u_+ $} }
\put(02,04) {\makebox(0,0)[1]{$u_+ $} }
\put(02,-6) {\makebox(0,0)[1]{$d_- $} }
\put(80,54) {\makebox(0,0)[1]{$u_+ $} }
\put(80,44) {\makebox(0,0)[1]{$\bar s_-$} }
\put(80,14) {\makebox(0,0)[1]{$u_- $} }
\put(80,04) {\makebox(0,0)[1]{$u_+ $} }
\put(80,-6) {\makebox(0,0)[1]{$d_+ $} }
\put(95,-5) {\makebox(0,0)[1]{.} }
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\end{picture}
}
\end{picture}
\vskip 1.5cm
We next ``decorate" each of these line diagrams with all possible single
interactions (1) between a quark (or antiquark) in the initial meson and
a quark in the initial baryon. There are six of these per line
diagram (two choices in the meson times three in the baryon),
so we have a total of twelve scattering diagrams to evaluate.
However in this case all but one are trivially zero. Note that
in the first line
diagram we must flip the spins of $u$ and $d$ quarks in the initial
baryon to have a nonzero contribution. This however is not part of our
scattering interaction, which operates between pairs of constituents in
different initial hadrons. The $\vec S_i \cdot \vec S_j$
interaction either flips a pair of spins in different incident hadrons
(through $S_+ S_-$ or $S_-S_+$ terms) or leaves all spins unchanged
(through $S_z S_z$). Thus, the transition in the first line diagram cannot
occur through a single $\vec S_i \cdot \vec S_j$ interaction.
For the second diagram however there is a single nonvanishing transition,
in which the initial meson's $u_+$ quark and the baryon's $d_-$ quark interact
through the spin flip operator;
\vskip 1cm
\setlength{\unitlength}{1.6pt}
\begin{picture}(320,70)(30,-5)
\put(50,28) {\makebox(0,0)[1]{ $h_{fi}^{e.g.}$ } }
\put(70,28) {\makebox(0,0)[1]{ = } }
\put(90,28) {\makebox(0,0)[1]{ $\sqrt{2}\ \ \ \cdot $ }}
\put(280,28) {\makebox(0,0)[1]{(19)} }
\put(230,-5) {\makebox(0,0)[1]{.} }
\put(120,0){
\begin{picture}(75,60)(0,0)
\put(-10,-20){\line(0,1){90}}
\put(-10,-20){\line(1,0){10}}
\put(-10,70){\line(1,0){10}}
\put(90,-20){\line(0,1){90}}
\put(90,-20){\line(-1,0){10}}
\put(90,70){\line(-1,0){10}}
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(02,54) {\makebox(0,0)[1]{$u_+ $} }
\put(02,44) {\makebox(0,0)[1]{$\bar s_-$} }
\put(02,14) {\makebox(0,0)[1]{$u_+ $} }
\put(02,04) {\makebox(0,0)[1]{$u_+ $} }
\put(02,-6) {\makebox(0,0)[1]{$d_- $} }
\put(80,54) {\makebox(0,0)[1]{$u_+ $} }
\put(80,44) {\makebox(0,0)[1]{$\bar s_-$} }
\put(80,14) {\makebox(0,0)[1]{$u_- $} }
\put(80,04) {\makebox(0,0)[1]{$u_+ $} }
\put(80,-6) {\makebox(0,0)[1]{$d_+ $} }
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\put(20,-5){\dashbox{2}(0,60){}}
\multiput(20,-5)(0,60){2}{\circle*{2}}
\end{picture}
}
\end{picture}
\vskip 1.5cm
\noindent
{\it c) Independent quark and gluon diagrams and their
spin and color factors}
\vskip 0.2cm
Finally we require the spin, color, overall phase
and spatial factors associated with this
and the other independent diagrams. There are only four independent
quark and gluon diagrams, since all others can be obtained from these by
permutation of lines. These four diagrams are
\setlength{\unitlength}{1.6pt}
\begin{picture}(320,70)(30,-5)
\put(70,28) {\makebox(0,0)[1]{ $D_1$ } }
\put(100,28) {\makebox(0,0)[1]{ = } }
\put(280,28) {\makebox(0,0)[1]{(20)} }
\put(210,-5) {\makebox(0,0)[1]{,} }
\put(120,0){
\begin{picture}(75,60)(0,0)
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\put(20,15){\dashbox{2}(0,40){}}
\multiput(20,15)(0,40){2}{\circle*{2}}
\end{picture}
}
\end{picture}
\vskip 1cm
\setlength{\unitlength}{1.6pt}
\begin{picture}(320,70)(30,-5)
\put(70,28) {\makebox(0,0)[1]{ $D_2$ } }
\put(100,28) {\makebox(0,0)[1]{ = } }
\put(280,28) {\makebox(0,0)[1]{(21)} }
\put(210,-5) {\makebox(0,0)[1]{,} }
\put(120,0){
\begin{picture}(75,60)(0,0)
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\put(20,05){\dashbox{2}(0,50){}}
\multiput(20,05)(0,50){2}{\circle*{2}}
\end{picture}
}
\end{picture}
\vskip 1cm
\setlength{\unitlength}{1.6pt}
\begin{picture}(320,70)(30,-5)
\put(70,28) {\makebox(0,0)[l]{ $D_3$ } }
\put(100,28) {\makebox(0,0)[l]{ = } }
\put(280,28) {\makebox(0,0)[l]{(22)} }
\put(210,-5) {\makebox(0,0)[l]{,} }
\put(120,0){
\begin{picture}(75,60)(0,0)
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\put(20,15){\dashbox{2}(0,30){}}
\multiput(20,15)(0,30){2}{\circle*{2}}
\end{picture}
}
\end{picture}
\vskip 1cm
\setlength{\unitlength}{1.6pt}
\begin{picture}(320,70)(30,-5)
\put(70,28) {\makebox(0,0)[l]{ $D_4$ } }
\put(100,28) {\makebox(0,0)[l]{ = } }
\put(280,28) {\makebox(0,0)[l]{(23)} }
\put(210,-5) {\makebox(0,0)[l]{.} }
\put(120,0){
\begin{picture}(75,60)(0,0)
\multiput(5,55)(60,0){2}{\vector(1,0){5}}
\multiput(15,45)(60,0){2}{\vector(-1,0){5}}
\multiput(5,15)(60,0){2}{\vector(1,0){5}}
\multiput(5, 5)(60,0){2}{\vector(1,0){5}}
\multiput(5,-5)(60,0){2}{\vector(1,0){5}}
\put(5,55){\line(1,0){25}}
\put(5,45){\line(1,0){70}}
\put(5,15){\line(1,0){25}}
\put(5,5){\line(1,0){70}}
\put(5,-5){\line(1,0){70}}
\put(50,55){\line(1,0){25}}
\put(50,15){\line(1,0){25}}
\put(30,15){\line(1,2){20}}
\put(30,55){\line(1,-2){20}}
\put(20,05){\dashbox{2}(0,40){}}
\multiput(20,05)(0,40){2}{\circle*{2}}
\end{picture}
}
\end{picture}
\vskip 1.5cm
\setcounter{equation}{23}
The spin factor is simply the matrix element of $\vec
S_i \cdot \vec S_j$ for scattering constituents $i$ and $j$,
evaluated between the
initial and final $(q\bar q)(qqq)$ spin states.
This is $+1/2$
if spins $i$ and $j$
are antialigned and both flip, $+1/4$ if the spins are aligned
and neither flips, and $-1/4$ if they are antialigned and neither flips. All
other cases give zero. If any spectator spin flips, the overall spin
factor is trivially zero.
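These rules follow directly from $\vec S_i \cdot \vec S_j =
{1\over 4}\, \vec\sigma_i \cdot \vec\sigma_j$; the short pure-Python sketch
below (an illustrative check only, not part of the derivation) reproduces the
three nonzero matrix elements in the two-quark spin basis.

```python
# Check of the S_i.S_j spin factors quoted above, with S = sigma/2 and the
# two-quark basis ordered |++>, |+->, |-+>, |-->.  Pure Python, no libraries.

def kron(a, b):
    """Kronecker product of two 2x2 matrices, giving a 4x4 matrix."""
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# S_i . S_j = (1/4) (sx x sx + sy x sy + sz x sz)
SiSj = [[sum(kron(s, s)[i][j] for s in (sx, sy, sz)) / 4
         for j in range(4)] for i in range(4)]

aligned_no_flip     = SiSj[0][0]   # <++|S.S|++>  ->  +1/4
antialigned_no_flip = SiSj[1][1]   # <+-|S.S|+->  ->  -1/4
antialigned_flip    = SiSj[2][1]   # <-+|S.S|+->  ->  +1/2, both spins flip
```

All other off-diagonal elements vanish, as stated in the text.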
The color factor can be evaluated using
the states (9), (10) and standard trace techniques, as in (51) of reference
\cite{BS}. The result for each diagram is
\begin{equation}
I_{\rm color}(D_1) = +4/9 \ ,
\end{equation}
\begin{equation}
I_{\rm color}(D_2) = -2/9 \ ,
\end{equation}
\begin{equation}
I_{\rm color}(D_3) = -4/9 \ ,
\end{equation}
\begin{equation}
I_{\rm color}(D_4) = +2/9 \ .
\end{equation}
\vskip 0.2cm
\noindent
{\it d) ``Diagram weights" for KN scattering}
\vskip 0.2cm
We conventionally write the meson-baryon
$h_{fi}$ matrix elements
as row vectors which display the numerical coefficient of each
diagram's spatial overlap integral. Thus,
\begin{equation}
h_{fi} = \bigg{[} \ w_1 ,\ w_2 ,\ w_3 ,\ w_4 \ \bigg{]}
\end{equation}
represents
\begin{equation}
h_{fi} =
w_1 \, I_{\rm space}(D_1) \;
+ w_2 \, I_{\rm space}(D_2) \;
+ w_3 \, I_{\rm space}(D_3) \;
+ w_4 \, I_{\rm space}(D_4) \ .
\end{equation}
This notation is useful because the diagram weights $\{ w_i \}$
are group theoretic numbers that obey certain symmetries,
whereas the spatial overlap integrals are complicated functions that depend
on the specific spatial wavefunctions rather than the symmetries
of the problem.
As an illustration,
our practice subamplitude $h_{fi}^{e.g.}$ is
\begin{equation}
h_{fi}^{e.g.} = \sqrt{2}\cdot \Big(\, {1\over 2}\, \Big)
\cdot \Big(-{2\over 9}\Big) \cdot
I_{\rm space}(D_2) \ .
\end{equation}
(using the spin and color matrix elements given above), which
we abbreviate as
\begin{equation}
h_{fi}^{e.g.} = \bigg{[} \ 0 ,\ -{\sqrt{2}\over 9} ,\ 0 ,\ 0 \ \bigg{]} \ .
\end{equation}
This completes our detailed derivation
of $h_{fi}^{e.g.}$
for the subprocess
$|u_+\bar s_-\rangle \{|u_+ u_+ d_- \rangle / \sqrt{2} \}
\to
|u_+\bar s_-\rangle |u_+ u_- d_+ \rangle $.
Proceeding similarly, we have derived the weights for the
full KN elastic scattering amplitudes,
given the states (11) and (15) and
their isospin partners.
These are
\begin{equation}
h_{fi}^{\rm KN}({\rm I=0}) =
\bigg{[} \
0 ,\
{1\over 6} ,\
0 ,\
{1\over 6} \ \bigg{]}
\end{equation}
and
\begin{equation}
h_{fi}^{\rm KN}({\rm I=1}) =
\bigg{[} \
{1\over 3} ,\
{1\over 18} ,\
{1\over 3} ,\
{1\over 18} \
\bigg{]}
\ .
\end{equation}
For numerical estimates of these amplitudes
we require the spatial overlap integrals, which we shall evaluate
explicitly with Gaussian wavefunctions.
\vskip 0.2cm
\noindent
{\it e) Spatial overlap integrals}
\vskip 0.2cm
The spatial overlap integrals represented by the four diagrams
$D_1\dots D_4$
may be determined using the diagrammatic techniques
discussed in Appendix C of reference \cite{BS}.
These are formally 30-dimensional
overlap integrals (three dimensions times ten external lines), but twelve
integrations are eliminated by external momentum constraints and
an additional nine are eliminated by the unscattered spectator lines.
This leaves a nontrivial 9-dimensional overlap integral for each diagram.
We give the initial meson a label $A$, with three-momentum also called $A$
and quark three-momentum $a$ and
antiquark momentum $\bar a$, and we similarly label the initial baryon $B$, the
final meson $C$ and the final baryon $D$. Since we choose to evaluate these
integrals in the CM frame we use the momentum substitutions $B=-A$ and $D=-C$.
We also introduce a nonstrange to strange quark mass ratio $\rho = m_q / m_s$.
With these substitutions the four spatial overlap integrals are
\begin{displaymath}
I_{space}(D_1) = + {8 \pi \alpha_s \over 3 m_q^2} \; {1\over (2\pi)^3}
\int \!\! \int \!\! \int \, d\vec a \, d\vec b_1 \, d\vec b_2 \
\phi_A(2a-{2\rho A\over 1+\rho}) \;
\phi_C^*(2a+{2C\over 1+\rho} - 2A)
\end{displaymath}
\begin{equation}
\cdot \; \phi_B(b_1, b_2, -A - b_1 - b_2 ) \;
\phi_D^*(b_1+A-C, b_2, -A - b_1 - b_2 ) \ ,
\end{equation}
\begin{displaymath}
I_{space}(D_2) = + {8 \pi \alpha_s \over 3 m_q^2} \; {1\over (2\pi)^3}
\int \!\! \int \!\! \int \, d\vec b_1 \, d\vec c \, d\vec d_1 \
\phi_A(2c-{2A\over 1+\rho} - 2C) \;
\phi_C^*(2c-{2\rho C\over 1+\rho})
\end{displaymath}
\begin{equation}
\cdot \; \phi_B(b_1, c, -A - b_1 - c ) \;
\phi_D^*(d_1,A-C+b_1+c-d_1, -A - b_1 - c) \ ,
\end{equation}
\begin{displaymath}
I_{space}(D_3) = + {8 \pi \alpha_s \over 3 m_q^2}\rho \; {1\over (2\pi)^3}
\int \!\! \int \!\! \int \, d\vec a \, d\vec b_2 \, d\vec c \
\phi_A(2a-{2\rho A\over 1+\rho}) \;
\phi_C^*(2c-{2\rho C\over 1+\rho})
\end{displaymath}
\begin{equation}
\cdot \; \phi_B(a-A+C, b_2, -a - b_2 - C ) \;
\phi_D^*(a,b_2, -a - b_2 - C) \ ,
\end{equation}
\begin{displaymath}
I_{space}(D_4) = + {8 \pi \alpha_s \over 3 m_q^2}\rho \; {1\over (2\pi)^3}
\int \!\! \int \!\! \int \, d\vec a \, d\vec b_1 \, d\vec c \
\phi_A(2a-{2\rho A\over 1+\rho}) \;
\phi_C^*(2c-{2\rho C\over 1+\rho})
\end{displaymath}
\begin{equation}
\cdot \; \phi_B(b_1, c, -A - b_1 - c ) \;
\phi_D^*(A-C-a+b_1+c,a, -A - b_1 - c) \ .
\end{equation}
There are many equivalent ways to write these integrals which arise from
different choices of the variables eliminated by momentum constraints.
Note that the overall coefficients of these integrals are positive,
although the
coefficient of $H_{scat}$ (1) is negative. This is because there is an overall
phase factor of $(-1)$ for each diagram $D_1\dots D_4$, due to anticommutation
of quark creation and annihilation operators in the matrix element. Here we
incorporate this phase, which we call the ``signature" of the diagram
\cite{BS}, in the spatial overlap integrals. The signature is equal to
$(-1)^{N_x}$, where $N_x$ is the number of fermion line crossings.
For diagrams $D_1\dots D_4$ above $N_x=3$, so the signature is
\begin{equation}
I_{\rm signature} = (-1) \ .
\end{equation}
Note that a
diagram in nonstandard form, such as the kaon's quark line crossing to the
second baryon quark, can have a $(+1)$ signature;
in the full $h_{fi}$ matrix element this is compensated by a
change in sign of the color factor.
We explicitly evaluate these overlap integrals using the Gaussian wavefunctions
(2) and (4). For Gaussians the integrals
factor into products of three 3-dimensional
integrals, and the results are all of the form
\begin{equation}
I_{\rm space}(D_i) = {8\pi \alpha_s \over 3 m_q^2} {1\over (2\pi)^3}
\; \eta_i \exp \bigg\{ -(A_i - B_i \mu ) P_{cm}^2 \bigg\} \ ,
\end{equation}
where $P_{cm}$
is the modulus of each hadron's three-momentum in the CM frame,
$\mu = \cos ( \theta_{CM} ) $ where $\theta_{CM} $ is the
CM scattering angle, and the constants $\eta_i$, $A_i$ and $B_i$ are functions
of $\alpha$, $\beta$ and $\rho$.
$B_i>0$ implies forward-peaked scattering and $B_i<0$ implies backward peaking.
The pure exponential dependence in
$P_{cm}^2$ and $P_{cm}^2\mu$ is a consequence of the
Gaussian wavefunctions and the
contact interaction.
Introducing the ratio $g = (\alpha / \beta)^2$,
these constants are
\begin{equation}
\eta_1 = 1
\end{equation}
\begin{equation}
A_1 =
{
2\rho^2 + 4\rho + (3g+2)
\over
6 (1+\rho)^2 \alpha^2
}
\end{equation}
\begin{equation}
B_1 = A_1 \ ,
\end{equation}
\begin{equation}
\eta_2 = \bigg( { 12g \over 7g+6} \bigg)^{3/2}
\end{equation}
\begin{equation}
A_2 =
{
(40g+3)\rho^2 + (32g+6)\rho + (21g^2+28g+3)
\over
6 (7g+6)(1+\rho)^2 \alpha^2
}
\end{equation}
\begin{equation}
B_2 =
{
(-8g+1)\rho^2 + 2\rho + (7g^2+8g+1)
\over
2 (7g+6)(1+\rho)^2 \alpha^2
}
\ ,
\end{equation}
\begin{equation}
\eta_3 = \rho \bigg( { 6 \over g+3} \bigg)^{3/2}
\end{equation}
\begin{equation}
A_3 =
{
(10g+6)\rho^2 + (8g+12)\rho + (7g+6)
\over
6 (g+3)(1+\rho)^2 \alpha^2
}
\end{equation}
\begin{equation}
B_3 =
{
(-g+1)\rho^2 +2\rho + (g + 1)
\over
(g+3)(1+\rho)^2 \alpha^2
}
\ ,
\end{equation}
\begin{equation}
\eta_4 = \rho \bigg( { 12g \over (2g+3)(g+2)} \bigg)^{3/2}
\end{equation}
\begin{equation}
A_4 =
{
(20g^2+40g+3)\rho^2 + (4g^2+14g+6)\rho + (5g^2+10g+3)
\over
6 (2g+3)(g+2)(1+\rho)^2 \alpha^2
}
\ ,
\end{equation}
\begin{equation}
B_4 =
{
(-4g^2-8g+1)\rho^2 + (-4g^2-6g+2)\rho + (g^2+2g+1)
\over
2 (2g+3)(g+2)(1+\rho)^2 \alpha^2
}
\ .
\end{equation}
These results were derived at MIT and ORNL \cite{Mitch} independently using
MAPLE and MACSYMA algebra programs respectively.
Some important properties of these diagrams, specifically
which are forward peaked or backward peaked processes, and which
diagrams dominate at high energies, can be inferred by inspection.
The leading diagram in the high energy limit is $D_1$,
which is a forward peaked exponential in
$t$. The other diagrams are exponentially suppressed in $s$ and are also
forward peaked, with the single exception of $D_4$. Note that for plausible
values of $g\approx 1$ and $\rho\approx 0.6$ this diagram leads to a
{\it backwards} peak ($B_4<0$).
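This sign pattern is easy to verify numerically. The minimal sketch below
evaluates the angular slopes $B_1 \dots B_4$ of (42), (45), (48) and (51) at
$g=1$, $\rho=0.6$; the common factor $1/\alpha^2$ does not affect the signs,
so we set $\alpha=1$.

```python
# Angular slopes B_1..B_4 of eqs. (42), (45), (48), (51) at g = 1, rho = 0.6.
# The common factor 1/alpha^2 does not affect the signs, so alpha = 1 here.
g, rho, alpha = 1.0, 0.6, 1.0
den = (1 + rho) ** 2 * alpha ** 2

B1 = (2 * rho ** 2 + 4 * rho + 3 * g + 2) / (6 * den)          # B_1 = A_1
B2 = ((-8 * g + 1) * rho ** 2 + 2 * rho + 7 * g ** 2 + 8 * g + 1) \
     / (2 * (7 * g + 6) * den)
B3 = ((-g + 1) * rho ** 2 + 2 * rho + g + 1) / ((g + 3) * den)
B4 = ((-4 * g ** 2 - 8 * g + 1) * rho ** 2
      + (-4 * g ** 2 - 6 * g + 2) * rho + (1 + g) ** 2) \
     / (2 * (2 * g + 3) * (g + 2) * den)

# B1, B2, B3 > 0 (forward peaks); B4 < 0 (backward peak)
```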
These properties have a simple common origin;
since we are scattering through
a hard delta function interaction, the only angular dependence comes from
overlap suppression due to the spectator lines. A spectator line which is
required to cross into the other hadron gives an especially large suppression
at high energies and small angles. The amplitude for a crossing spectator line
is maximum for backscattered hadrons; in this case the crossing spectator is
actually continuing to move in a new hadron with the same momentum
vector as the
hadron it originally resided in.
The first diagram $D_1$ has no crossing spectators,
so it is not suppressed in $s$;
only the hard scattered constituents
are required to cross into different hadrons. In diagrams $D_2$ and $D_3$
one spectator line is required to cross to a different hadron, so there is
some suppression with increasing $s$. Since {\it two}
spectators do not cross, they dominate the angular dependence, and the
scattering is forward peaked.
Diagram $D_4$, the backward peaking process, is qualitatively different
because two spectator lines are required to change hadrons, and only
one spectator does not cross. In this case ``backwards" meson-baryon scattering
actually corresponds to forward scattering for the
two crossing spectator lines,
which is obviously preferred. This
description attributes
backward peaks, which might otherwise appear counterintuitive, to the
obvious mechanism of ``minimum spectator suppression"
at the quark level.
\vskip 0.2cm
\noindent
{\it f) KN phase shifts and scattering lengths}
\vskip 0.2cm
Given the diagram weights (32-33) and our
results (40-51) for the Gaussian overlap integrals, we have completed the
derivation of the Hamiltonian matrix element $h_{fi}$ for KN elastic
scattering.
Since we have used
the same normalization for KN states as in our previous discussion of K$\pi$
scattering \cite{BSW} we can use the same relations derived there to relate
$h_{fi}$ to scattering variables. First we consider the elastic phase shifts,
which are given by
\begin{equation}
\delta^{KN}_\ell = -{2\pi^2 P_{cm} E_K E_N \over ( E_K + E_N)}
\int_{-1}^1 h_{fi}^{KN}\, P_{\ell} (\mu ) d\mu \ .
\end{equation}
Using the integral
$\int_{-1}^1 e^{b\mu } P_{\ell} (\mu ) d\mu = 2 i_{\ell }(b)$, we find
\begin{equation}
\delta_{\ell }^{KN} = -{4\alpha_s\over 3 m_q^2}
{P_{cm} E_K E_N \over ( E_K + E_N)} \sum_{i=1}^4
\, w_i \, \eta_i \, \exp ( -A_i P_{cm}^2 ) \; i_\ell (B_i P_{cm}^2 ) \ ,
\end{equation}
where one specifies the isospin state I=0 or I=1 through the choice of
the diagram weights $\{ w_i \} $.
As we approach the KN threshold the
S-wave phase shift is asymptotically linear in $P_{cm}$,
and the coefficient is the scattering length $a_I$. Since the exponential and
the $i_0$ Bessel function are both unity in this limit, we recover a relatively
simple result for the KN scattering length,
\begin{equation}
a^{KN}_I = -{4\alpha_s\over 3 m_q^2}
{M_K M_N \over ( M_K + M_N)} \sum_{i=1}^4
\, w_i \, \eta_i \ .
\end{equation}
Since the coefficients $\{ \eta_i \}$ are relatively simple functions,
we can write these scattering lengths as simple functions of $\alpha_s/m_q^2$,
$\rho = m_q / m_s$, the meson-baryon relative scale parameter
$g = (\alpha / \beta )^2$ and the physical masses $M_K$ and $M_N$.
The results are
\begin{displaymath}
a^{KN}_{I=1} = -{4\alpha_s\over 3 m_q^2}
{M_K M_N \over ( M_K + M_N)}
\phantom{yyyyyyyyyyyyyyyyyyyyyyyyy}
\end{displaymath}
\begin{equation}
\cdot \bigg[
\; {1\over 3} \; + \;
{1\over 18} \bigg( { 12g \over 7g+6}\bigg)^{3/2} \, +
{1\over 3} \; \rho \bigg( {6 \over g+3} \bigg)^{3/2}\, + \,
{1\over 18} \; \rho \bigg( {12g \over (2g+3)(g+2)} \bigg)^{3/2}\
\bigg] \
\end{equation}
and
\begin{displaymath}
a^{KN}_{I=0} =
-{4\alpha_s\over 3 m_q^2}
{M_K M_N \over ( M_K + M_N)}
\phantom{yyyyyyyyyyyyyyyyyyyyyyyyy}
\end{displaymath}
\begin{equation}
\cdot \bigg[
\; {1\over 6} \bigg( { 12g \over 7g+6}\bigg)^{3/2} +
{1\over 6} \; \rho \bigg( {12g \over (2g+3)(g+2)} \bigg)^{3/2}\
\bigg] \ .
\end{equation}
The basic features of the low energy KN interaction, a repulsive I=1 S-wave
and a repulsive but less strong I=0 S-wave, are already evident in these
formulas. (The parameter $g = (\alpha/\beta)^2$ is
constrained by quark model phenomenology to be comparable to
unity.)
Detailed numerical results for the scattering lengths and phase shifts and a
comparison with experiment are presented in the next section.
\section{Comparison with experiment}
\noindent
{\it a) Scattering lengths}
\vskip 0.2cm
Before we discuss our numerical predictions we first review the status of the
experimental scattering lengths. Since there are unresolved disagreements
between analyses in the I=0 channel, we have compiled relatively recent
(since 1980) single-energy S-wave phase shifts for our discussion. These are in
chronological order Martin and Oades \cite{MO} (Aarhus and UC London, 1980);
Watts {\it et al.} \cite{Watts} (QMC and RAL, 1980); Hashimoto \cite{Hash}
(Kyoto and VPI, 1984); and Hyslop {\it et al.} \cite{HARW} (VPI, 1992). The I=1
data set analysed by Arndt and Roper \cite{AR} (VPI, 1985) was incorporated in
the 1992 VPI simultaneous analysis of I=0 and I=1 data, so we shall not
consider it separately. The energy dependent parametrizations of Corden {\it et
al.} \cite{Corden} and Nakajima {\it et al.} \cite{Nakajima} are not included
in our discussion.
In Fig.1 we show these experimental I=0 and I=1 S-wave phase shifts versus
$P_{cm}=|\vec P_{cm}|$. The linear low energy behavior which determines the
scattering length is evident in the I=1 data, and Hyslop {\it et al.} cite a
fitted value of $a^{KN}_{I=1} = -0.33$ fm. Previous analyses (summarized in
\cite{DW} and \cite{HARW}) have given values between $-0.28(6)$ fm
\cite{Cutkosky} and $-0.33$ fm \cite{HARW,Hyslop}. A more useful way to present
the S-wave phase shift data is to display $\delta_0^I / P_{cm}$ versus
$P_{cm}^2$; the intercept is the scattering length, and the slope at intercept
determines the effective range. In Fig.2 we show the
S-wave phase shifts
in this manner; an I=1 scattering length of about
$-0.31(1)$ fm is indeed evident, which we shall take as our estimated
experimental value.
Unfortunately the I=0 scattering length is much less well determined, as is
evident in Figs.1 and 2.
Previous (favored) solutions up to 1982 are summarized in Table
2.3 of \cite{DW}, and range between $+0.02$ fm and $-0.11^{+0.06}_{-0.04}$ fm.
There appear to be two sets of low energy values in the data of Fig.1, a
smaller phase shift from the Aarhus-UCL and QMC-RAL collaborations and a larger
one from the Kyoto-VPI and VPI analyses. Below $P_{cm}=0.4$ Gev the
Kyoto-VPI and VPI results are larger than Aarhus-UCL and QMC-RAL
by about a factor of two. The VPI group actually
cite a scattering length of $a^{KN}_{I=0} = 0.0$ fm, although this requires
rapid low energy variation below the first experimental point (compare their
Fig.1(a) with the I=1 phase shift in their Fig.2(a), which is constrained by
experiment at lower energy and shows the expected $\sqrt{T_{lab}}\propto
P_{cm}$ S-wave dependence). Since the I=1 phase shift is close to linear for
$P_{cm}< 0.4$ Gev ($k_{lab}< 0.7$ Gev), we will assume that the zero I=0
scattering length quoted in \cite{HARW} is an artifact of their fit, and that
the actual I=0 phase shift is approximately linear in $P_{cm}$ for $P_{cm}<
0.4$ Gev. We can then read the I=0 scattering length from the
intercept in Fig.2. From the figure we see that a naive extrapolation to
threshold leads to scattering lengths of about $-0.09(1)$ fm and $-0.17(2)$
fm respectively from the two sets of references. In summary, the experimental
phase shifts shown in Fig.2 suggest to us the scattering lengths
\begin{eqnarray}
&a^{KN}_{I=1}(expt.)\phantom{\bigg|_{Aarhus-QMC-RAL-UCL}}
&= -0.31(1) \ {\rm fm} \ ; \nonumber \\
&a^{KN}_{I=0}(expt.)\bigg|_{Aarhus-QMC-RAL-UCL} &= -0.09(1) \ {\rm fm} \ ,
\nonumber \\
&a^{KN}_{I=0}(expt.)\bigg|_{Kyoto-VPI}^{\phantom{Aarhus-QMC-RAL-UCL}}
&= -0.17(2) \ {\rm fm} \ .
\end{eqnarray}
We emphasize that the I=0 values are our
interpretation of the data from Fig.2, and the references cited
quote smaller scattering lengths that we believe the data does not support.
As the values of the I=0 scattering length and low energy phase shifts
are controversial, an accurate determination should be a first priority at a
kaon facility.
To compare our predictions with experiment we first use a ``reference parameter
set" with conventional quark model parameters. The hyperfine strength is taken
to be $\alpha_s / m_q^2 =0.6/(0.33)^2$ Gev$^{-2}$, and the nonstrange to
strange quark mass ratio is $\rho = m_q/m_s= 0.33\; {\rm Gev}\; / 0.55\; {\rm
Gev} = 0.6$. The remaining parameter in the scattering length formulas is
$g=(\alpha/\beta)^2$, the ratio of baryon to meson width parameters squared.
These parameters are rather less well determined phenomenologically. For
baryons, values in the range $\alpha=0.25-0.41$ Gev have been used in
nonrelativistic quark model studies \cite{Simon,Nathan,Roman}.
Isgur and Karl \cite{IKK} originally used $\alpha=0.32$ Gev for
spectroscopy, but Copley, Karl and Obryk \cite{CKO} had earlier found that the
photocouplings of baryon resonances required a somewhat larger value of
$\alpha=0.41$ Gev, which may be a more realistic estimate \cite{Simon,Nathan}
because it is less sensitive to short distance hyperfine matrix elements.
This larger value was also found
by Koniuk and Isgur \cite{KI} for baryon electromagnetic transition amplitudes.
Here
we take $\alpha=0.4$ Gev as our reference value. For mesons, studies of various
matrix elements have led to values of $\beta=0.2-0.4$ Gev \cite{Nathan}. In our
previous study of I=2 $\pi\pi$ scattering we found a best fit to the S-wave
phase shift data with $\beta=0.337$ Gev. Here we use a similar $\beta=0.35$ Gev
as our reference value; if the quark Born formalism is realistic we should use
essentially the same meson parameters in all reactions.
With our reference parameter set and physical masses $M_K=0.495$ Gev and
$M_N=0.940$ Gev, our formulas (55) and (56) give
\vskip 0.5cm
\begin{equation}
a^{KN}_{I=1}(ref.\; set) = -0.35 \ {\rm fm}
\end{equation}
and
\begin{equation}
a^{KN}_{I=0}(ref.\; set) = -0.12 \ {\rm fm} \ .
\end{equation}
\vskip 0.5cm
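These reference-set values can be reproduced directly from (55) and (56); the
minimal sketch below evaluates the scattering lengths with the reference
parameters, converting from Gev$^{-1}$ to fm with the standard value
$\hbar c = 0.1973$ Gev$\,$fm.

```python
# Reference-set KN scattering lengths from eqs. (54)-(56).
HBARC = 0.1973                       # GeV fm, converts GeV^-1 to fm
alpha_s, m_q = 0.6, 0.33             # hyperfine strength alpha_s / m_q^2
rho = 0.6                            # m_q / m_s
g = (0.40 / 0.35) ** 2               # (alpha / beta)^2
M_K, M_N = 0.495, 0.940              # physical masses, GeV

# diagram coefficients eta_1..eta_4, eqs. (40), (43), (46), (49)
eta = [1.0,
       (12 * g / (7 * g + 6)) ** 1.5,
       rho * (6 / (g + 3)) ** 1.5,
       rho * (12 * g / ((2 * g + 3) * (g + 2))) ** 1.5]
# diagram weights for the two isospin channels, eqs. (32)-(33)
w = {0: [0.0, 1 / 6, 0.0, 1 / 6],
     1: [1 / 3, 1 / 18, 1 / 3, 1 / 18]}

def a_KN(I):
    """KN scattering length, eq. (54), in fm."""
    pref = -(4 * alpha_s / (3 * m_q ** 2)) * M_K * M_N / (M_K + M_N)
    return pref * sum(wi * ei for wi, ei in zip(w[I], eta)) * HBARC

# a_KN(1) is approximately -0.35 fm and a_KN(0) approximately -0.12 fm
```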
In view of our approximations, the parameter uncertainties, and the
uncertainties in the I=0 data, these scattering lengths compare rather well
with experiment. Note that our conclusions differ from those of Bender {\it et
al.} \cite{BDPK}, who reported that the OGE contribution to I=1 scattering was
too small to explain the observed S-wave phase shift. We discuss this
disagreement further in the appendix.
Now suppose we attempt to fit our estimated
experimental values of the scattering lengths (57)
by varying our parameter set. It is useful to fit the ratio
$a^{KN}_{I=0}/a^{KN}_{I=1}$, since this involves only $\rho$ and the width
parameter $g$. We have fixed $\rho=0.6$, and in any case we find that
$a^{KN}_{I=0}/a^{KN}_{I=1}$ is insensitive to $\rho$, so only $g$ remains as an
important parameter. In Fig.3 we show the predicted ratio of KN scattering
lengths as a function of $\alpha/\beta$. The two experimental ratios
assuming the values in (57) are also indicated. The larger ratio
$a^{KN}_{I=0}/a^{KN}_{I=1} = 0.17/0.31$ requires $\alpha/\beta = 1.91$,
rather far from typical quark model values. Fitting the smaller ratio
$a^{KN}_{I=0}/a^{KN}_{I=1} = 0.09/0.31$ requires $\alpha/\beta = 1.02$, which
is more representative of quark model parameters. An accurate determination of
the I=0 KN scattering length through direct low energy measurements, rather
than by extrapolation, would be a very useful experimental contribution; this
would allow a more confident test of our results and those of other models
(as shown for example in Table 6-4 of Hyslop \cite{Hyslop}).
\vskip 0.2cm
\noindent
{\it b) S-wave phase shifts}
\vskip 0.2cm
The S-wave KN phase shifts
predicted by (53) with $\ell =0$ given the ``reference parameter set"
$\alpha_s/m_q^2 = 0.6/(0.33)^2$ Gev$^{-2}$, $\rho=m_q/m_s=0.6$,
$\alpha=0.4$ Gev and
$\beta=0.35$ Gev are shown as dashed lines
in Fig.4. As we noted in the previous section,
this parameter set gives reasonable scattering lengths, although
the I=0 scattering length is not yet very well established experimentally.
At higher energies the reference parameter set predicts an I=1 phase shift
that retreats more quickly with energy than is observed experimentally;
in Fig.4 we see a rapid departure of theory and experiment above
$P_{cm}=0.4$ Gev ($k_{lab}=0.7$ Gev).
This is near the opening of the inelastic channels
K$\Delta$ and K$^*$N,
as indicated in Fig.4.
Two possible reasons for
this discrepancy are 1) inelastic effects of the channels K$\Delta$, K$^*$N
and K$^*\Delta$, which should become important just where theory and
experiment part, and 2) short distance components in the meson and baryon
wavefunctions that are underestimated by the smooth Gaussian
wavefunctions (2) and (4).
Although inelastic effects are certainly important
experimentally \cite{bland1,bland2},
the most important low energy inelastic process
is P-wave K$\Delta$ production \cite{bland2}.
Hyslop {\it et al.} \cite{HARW} similarly find relatively small inelasticities
in the I=1 KN S-wave, with $\eta \geq 0.9$ for $P_{cm}\leq 0.68$ Gev. At the
end of this range our predicted phase shift
given the reference parameter set is only about half the observed
value, so it appears unlikely that the discrepancy is mainly due to inelastic
channels.
A second possible reason for the discrepancy is a departure of the hadron
wavefunctions from the assumed single Gaussian forms at short distances; both
the meson $q\bar q$ states and the baryon $qq$ substates experience attractive
short distance interactions from the color Coulomb and hyperfine terms (for
spin singlets), which will lead to enhancements of the short distance
components of their wavefunctions and increased high energy scattering
amplitudes. If this is the principal reason for the discrepancy, we would
expect a global
fit to the S-wave phase shifts to prefer a smaller
hadron length scale. In Fig.4 we show the result of a three-parameter fit to
the full 1992 VPI I=0,1 energy independent S-wave data set \cite{HARW}, letting
$\alpha_s/m_q^2, \alpha$ and $\beta$ vary and holding $\rho=0.6$ fixed. The fit
is shown as solid lines, and is evidently quite reasonable both near threshold
and at higher energies. The fitted parameters are
$\alpha_s/m_q^2=0.59/(0.33)^2$ Gev$^{-2}$, $\alpha=0.68$ Gev and $\beta=0.43$
Gev; the hyperfine strength is a typical quark model result but the width
parameters $\alpha$ and $\beta$ are about 1.5 times the usual quark model
values. Thus, a fit to the S-wave VPI data using single Gaussian wavefunctions
requires a hadron length scale about 0.7 times the usual scale. This result is
largely independent of the data set chosen, since it is driven by
the large I=1 phase shift, which shows little variation between
analyses. Evidently the predicted S-wave phase shifts at higher energies are
indeed very sensitive to the short distance parts of the wavefunction; this
supports our conjecture that the discrepancy at higher energies is
an artifact of our single Gaussian wavefunctions.
A calculation of these S-wave phase shifts using realistic Coulomb
plus linear plus hyperfine wavefunctions is planned \cite{Simon} and
should provide a very interesting test of
the quark Born formalism.
\vskip 0.2cm
\noindent
{\it c) Higher-L partial waves, spin-orbit and inelastic effects}
\vskip 0.2cm
In addition to the S-wave phase shifts, higher-L KN elastic phase shifts
and properties of the inelastic reactions KN$\to$K$^*$N, KN$\to$K$\Delta$ and
KN$\to$K$^*\Delta$ have been the subjects of experimental investigations.
These studies have found important effects in the L$>$0 partial waves
which are beyond the scope of the present paper.
One especially interesting effect is a remarkably large spin-orbit interaction
in the I=0 KN system; the L=1 states have a large, positive phase shift for
J=1/2 and a weaker, negative phase shift for J=3/2 (see \cite{HARW} and
references cited therein). This spin-orbit interaction cannot arise in our
quark Born amplitudes given the approximations we have made in this paper;
since we have incorporated only the spin-spin hyperfine interaction in single
hadronic channels, our phase shifts (53) are functions of the total hadronic L
and S but not J. Some but not all of this spin-orbit interaction may simply
require incorporation of the OGE spin-orbit term; Mukhopadhyay and Pirner
\cite{MP} found that the quark spin-orbit interaction was sufficient to
explain the sign and magnitude of some of the weaker KN spin-orbit forces, but
that the I=0, J=1/2 phase shift was much too large to be explained as an OGE
force. The strong KN spin-orbit forces might conceivably be due to couplings to
inelastic channels; since the available mixing states and their couplings to KN
are J-dependent, they might lead to effective spin-orbit forces at the hadronic
level, even if we do not include spin-orbit forces at the quark level. We hope
to treat this interesting possibility in a future study of coupled channel
effects using the quark Born formalism.
Since we do not have a model of the large spin-orbit effect it is not
appropriate to include a detailed discussion of our predicted amplitudes and
cross sections at higher energies, where higher partial waves are important.
In the interest of completeness however we will briefly discuss our
predicted differential cross section at high energy, since we previously noted
that we found an exponential in $t$ in I=2 $\pi\pi$ scattering, reminiscent of
diffraction in magnitude but not in phase \cite{BS}.
The differential cross section in this unequal mass case is related to the
$h_{fi}$ matrix element (29) by
\begin{equation}
{d\sigma \over dt} = 4\pi^5 \,
{ \Big[ \, s^2 - (M_N^2 - M_K^2)^2 \, \Big]^2 \over
s^2 \, \Big[ \, (s-(M_N+M_K)^2) (s-(M_N-M_K)^2) \, \Big] } \; | h_{fi} |^2 \ .
\end{equation}
For KN scattering in the
high energy limit only the contribution from diagram $D_1$ (20) survives, and
we find
\begin{equation}
\lim_{s\to\infty} {d\sigma \over dt} = {4\pi\alpha_s^2\over 9m_q^4} \,
w_1^2 \, \exp \{ A_1 t \} \ .
\end{equation}
Thus we again find an exponential in $t$ at high energy, with a slope parameter
(41) that is numerically equal to
\begin{equation}
b = A_1 = 3.7 \; {\rm Gev}^{-2}
\end{equation}
given our reference parameter set. This is similar to the observed
diffractive I=1 KN slope parameter \cite{Carnegie} of
\begin{equation}
b(expt.) \approx 5.5-5.9 \; {\rm Gev}^{-2} \ .
\end{equation}
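The quoted slope follows directly from (41); a one-line numerical check with
the reference parameters is

```python
# Slope parameter b = A_1, eq. (41), with the reference parameter set.
rho = 0.6                  # m_q / m_s
alpha = 0.40               # baryon width parameter, GeV
g = (0.40 / 0.35) ** 2     # (alpha / beta)^2

A1 = (2 * rho ** 2 + 4 * rho + 3 * g + 2) / (6 * (1 + rho) ** 2 * alpha ** 2)
# A1 is approximately 3.7 GeV^-2
```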
The normalizations of the theoretical and experimental I=1 high energy
differential cross sections however differ by about an order of magnitude, and
are 1.8 mb Gev$^{-2}$ (reference parameter set) versus $\approx 15$ mb
Gev$^{-2}$ (experiment \cite{Carnegie}). We noted a similar tendency for the
reference parameter set to underestimate high energy amplitudes in our
discussion of the S-wave phase shifts, which we attributed to the
single Gaussian wavefunction approximation. One interesting prediction is that
I=0 KN scattering should have no diffractive peak in the high energy limit,
since it has $w_1=0$; unfortunately there is no I=0 high energy data to
compare this prediction with. A serious comparison with high energy scattering
will presumably require the use of wavefunctions with more realistic
high momentum components as well as the incorporation of inelastic channels,
which may strongly affect the elastic amplitudes.
\section{KN equivalent potentials}
Sufficiently close to threshold our quark Born scattering amplitudes can be
approximated by local potentials. These potentials are useful in applications
such as multichannel scattering and investigations of possible bound states,
which are easiest to model using a Schr\"odinger equation formalism with local
potentials. There are many ways to define an equivalent low energy potential
from a scattering amplitude such as $h_{fi}$; several such procedures
are discussed in \cite{Swan,BG} and in Appendix E of \cite{BS}. Of course
effective potentials extracted using different definitions can appear to
be very different functions of $r$ although they lead to similar low energy
scattering amplitudes.
One approach to defining an equivalent potential is to derive a potential
operator $V_{op}(r)$ which gives the scattering amplitude $h_{fi}$ in Born
approximation. This ``Born-equivalent potential" technique is discussed in
reference \cite{BG} and in Appendix E of \cite{BS}; it has been tested on the
OGE interaction, from which one recovers the correct Breit-Fermi Hamiltonian at
$O(v^2/c^2)$ \cite{BG}. To derive the Born-equivalent potential we reexpress
our scattering amplitude in the CM frame as a function of the transferred
three-momentum $\vec q = \vec C - \vec A$ and an orthogonal variable $\vec
{\cal P} = (\vec A + \vec C)/2$. We then expand the scattering amplitude in a
power series in $\vec {\cal P}$ and equate the expansion to the Born expression
for nonrelativistic potential scattering through a general potential operator
$V_{op}(r)$, which may contain gradient operators. The leading term, of order
${\cal P}^0$, gives the Born-equivalent local potential $V(r)$.
In this meson-baryon scattering problem our Hamiltonian matrix elements are of
the form
\begin{equation}
h_{fi} = {8\pi \alpha_s \over 3 m_q^2} {1\over (2\pi)^3} \sum_{i=1}^4 w_i
\eta_i \exp \bigg\{ -(A_i - B_i \mu ) P_{cm}^2 \bigg\} \ .
\end{equation}
Making the required substitutions
$P_{cm}^2 = \vec {\cal P}^2 + \vec q^{\, 2}/4$ and
$P_{cm}^2\mu = \vec {\cal P}^2 - \vec q^{\, 2}/4$ and Fourier transforming with
respect to $\vec q$ as in \cite{BS} gives the equivalent low energy KN
potential
\begin{equation}
V_{KN}(r) = {8 \alpha_s \over 3 \sqrt{\pi} m_q^2}\ \sum_{i=1}^4
{w_i \eta_i \over ( A_i + B_i )^{3/2} } \;
\exp\bigg\{ - r^2/ (A_i + B_i) \bigg\} \ .
\end{equation}
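For completeness, the intermediate step can be made explicit. With the
substitutions above, the exponent of each term separates into a
${\cal P}$-dependent and a $q$-dependent Gaussian,
\begin{displaymath}
-(A_i - B_i \mu ) P_{cm}^2 = -(A_i - B_i)\, \vec {\cal P}^2
- (A_i + B_i)\, {\vec q^{\, 2} \over 4} \ ,
\end{displaymath}
and at order ${\cal P}^0$ the Fourier transform of the remaining Gaussian in
$\vec q$ is
\begin{displaymath}
{1\over (2\pi)^3} \int d^3q \; e^{i \vec q \cdot \vec r} \,
e^{ -(A_i + B_i)\, \vec q^{\, 2}/4 }
= {1 \over \pi^{3/2} (A_i + B_i)^{3/2} }\;
e^{ -r^2/(A_i + B_i) } \ ,
\end{displaymath}
which, combined with the prefactor $8\pi\alpha_s / 3 m_q^2$, reproduces the
normalization of $V_{KN}(r)$ above.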
Thus our Born-equivalent meson-baryon potentials are sums of four Gaussians,
one from each inequivalent quark Born diagram, weighted by the diagram weights
of that channel.
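As a concrete illustration, this functional form is straightforward to
evaluate numerically. The Python sketch below takes only the structure from
the text -- a weighted sum of four Gaussians -- with hypothetical values for
the products $w_i \eta_i$ and the ranges $A_i + B_i$, since the actual values
follow from wavefunction overlap integrals not reproduced here.

```python
import math

ALPHA_S, M_Q = 0.6, 0.33  # reference parameter set (Gev units)

# Hypothetical (w_i * eta_i, A_i + B_i) per diagram; the real values
# follow from the wavefunction overlap integrals of each channel.
# A_i + B_i carries units of Gev^-2, so r below is in Gev^-1.
DIAGRAMS = [(0.5, 0.8), (-0.1, 0.6), (0.5, 0.8), (-0.1, 0.3)]

def v_kn(r):
    """Born-equivalent KN potential: weighted sum of four Gaussians."""
    prefactor = 8 * ALPHA_S / (3 * math.sqrt(math.pi) * M_Q ** 2)
    return prefactor * sum(w / s ** 1.5 * math.exp(-r ** 2 / s)
                           for w, s in DIAGRAMS)
```

With these placeholder numbers the potential is repulsive at contact and
monotonically decreasing, qualitatively like the curves of Fig.5.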
The potentials for I=0 and I=1 with our reference parameter set $\alpha_s=0.6$,
$m_q=0.33$ Gev, $\rho=m_q/m_s=0.6$, $\alpha=0.40$ Gev and $\beta=0.35$ Gev are
shown in Fig.5. They are repulsive and have a range of about 0.3 fm, as one
would expect for a short range ``nuclear" core. The potentials at contact are
rather similar in this formalism, and the relative weakness of I=0 scattering
is a result of its shorter range. This is an effect of the backward peaking
diagram $D_4$, which leads to a very short range potential with a large value
at contact, and carries higher weight in I=0 scattering.
Although these Born-equivalent potentials are convenient for use in a
meson-baryon Schr\"odinger equation, the actual KN potentials are so strong
that they reproduce some features of the interaction only qualitatively. For
example, the Born diagrams give an I=1 scattering length of $-0.35$ fm, but the
Born-equivalent potential (65) for I=1 in the Schr\"odinger equation for KN
leads to a scattering length of only about $-0.22$ fm. The discrepancy is due
to higher order effects of $V_{KN}(r)$ in the Schr\"odinger equation; we have
confirmed that the ratio of $h_{fi}$ and $V_{KN}(r)$ scattering lengths
approaches unity in the small-$\alpha_s$ limit. In a multichannel study one
might modify $V_{KN}(r)$ (65) to give the input $h_{fi}$ scattering lengths,
perhaps through a change in the overall normalization, as a way of providing a
more realistic potential model of the quark Born amplitudes.
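The size of these higher order effects can be checked directly by integrating
the zero-energy radial Schr\"odinger equation and comparing with the Born
estimate. The Python sketch below is a simplified stand-in: it uses a single
repulsive Gaussian with hypothetical height and range rather than the actual
four-Gaussian $V_{KN}(r)$, and a sign convention in which repulsion gives a
positive scattering length (opposite to the convention used in the text). The
qualitative point -- that for a repulsive potential the full scattering length
is smaller in magnitude than its Born value -- is what the paragraph above
reports for KN.

```python
import math

HBARC = 0.1973                # Gev * fm
M_K, M_N = 0.495, 0.939       # Gev: kaon and nucleon masses
MU = M_K * M_N / (M_K + M_N)  # KN reduced mass
K2 = 2 * MU / HBARC ** 2      # 2*mu/hbar^2, in 1/(Gev fm^2)

V0, D = 0.35, 0.45  # hypothetical Gaussian height (Gev) and range (fm)

def V(r):
    return V0 * math.exp(-(r / D) ** 2)

def a_born(n=4000, rmax=6.0):
    # Born approximation: a = (2 mu / hbar^2) * int_0^inf V(r) r^2 dr;
    # positive a corresponds to repulsion in this convention.
    h = rmax / n
    return K2 * h * sum(V((i + 0.5) * h) * ((i + 0.5) * h) ** 2
                        for i in range(n))

def a_full(n=20000, rmax=8.0):
    # Zero-energy radial equation u'' = (2 mu / hbar^2) V(r) u with
    # u(0) = 0, u'(0) = 1, integrated by RK4; asymptotically
    # u ~ c (r - a), so a = rmax - u/u'.
    def f(r, u, up):
        return up, K2 * V(r) * u
    h = rmax / n
    u, up = 0.0, 1.0
    for i in range(n):
        r = i * h
        k1u, k1p = f(r, u, up)
        k2u, k2p = f(r + h / 2, u + h / 2 * k1u, up + h / 2 * k1p)
        k3u, k3p = f(r + h / 2, u + h / 2 * k2u, up + h / 2 * k2p)
        k4u, k4p = f(r + h, u + h * k3u, up + h * k3p)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        up += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return rmax - u / up
```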
\section{Results for K$\Delta$, K$^*$N and K$^*\Delta$; prospects for
Z$^*$-molecules}
The channels K$\Delta$, K$^*$N and K$^*\Delta$ are interesting in part because
they may support molecular bound states if the effective interaction is
sufficiently attractive. In contrast the low energy KN interaction is
repulsive in both isospin states. These ``Z$^*$-molecules" would appear
experimentally as resonances with masses somewhat below the thresholds of
$\approx 1.7$ Gev, $\approx 1.85$ Gev and $\approx 2.1$ Gev. Even if there
are no bound states, attractive interactions will lead to threshold
enhancements which might be misidentified as Z$^*$ resonances just above
threshold.
Plausible binding energies of hadronic molecules can be estimated from the
uncertainty principle and the minimum separation allowed for distinct hadrons
as $E_B \sim 1/ ( M_{had}\cdot 1\, {\rm fm}^2) \sim 50$ Mev. In comparison, the
best established molecules or molecule candidates have binding energies ranging
from 2.2 Mev (the deuteron, which has a repulsive core) through 10-30 Mev (the
$f_0(975)$, $a_0(980)$ and $\Lambda(1405)$). (The $f_0(1710)$, with a binding
energy relative to K$^*\bar {\rm K}^*$ of about 75 Mev, appears plausible but
is a more controversial candidate \cite{molec}.) Finally, the state $f_2(1520)$
seen by the Asterix \cite{Asterix}, Crystal Barrel \cite{CB} and Obelix
\cite{Obelix} collaborations in P$\bar {\rm P}$ annihilation is an obvious
candidate for a nonstrange vector-vector molecule, with a (poorly determined)
binding energy relative to $\rho\rho$ threshold of perhaps 20 Mev.
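The order-of-magnitude estimate quoted above is easy to verify; the snippet
below uses a hypothetical nucleon-scale hadron mass and the 1 fm minimum
separation, with $\hbar c \approx 197$ Mev$\,$fm.

```python
HBARC = 197.327  # Mev * fm
M_HAD = 940.0    # Mev: illustrative nucleon-scale hadron mass
R_MIN = 1.0      # fm: minimum separation of distinct hadrons

# E_B ~ 1/(M r^2) in natural units, i.e. (hbar c)^2 / (M r^2)
E_B = HBARC ** 2 / (M_HAD * R_MIN ** 2)  # ~ 41 Mev, of order 50 Mev
```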
Several candidate Z$^*$ resonances which might be meson-baryon molecule states
have been reported in KN partial wave analyses. The 1986 Particle Data Group
compilation \cite{PDG86} (the most recent to review the subject of Z$^*$
resonances) cited two I=0 candidates,
[$Z_0(1780), {1\over 2 }^+$] and
[$Z_0(1865), {3\over 2 }^-$] and four I=1 possibilities,
[$Z_1(1725), {1\over 2 }^+$];
[$Z_1(1900), {3\over 2 }^+$];
$Z_1(2150)$ and
$Z_1(2500)$. However the evidence for these states is not strong, and the
PDG argue that the standards of proof must be strict in this exotic channel.
For this reason these states were only given a one star ``Evidence weak; could
disappear." status. The 1986 PDG also noted that ``The general prejudice
against baryons not made of three quarks and the lack of any experimental
activity in this area make it likely that it will be another 15 years before
the issue is decided.". The 1992 PDG compilation \cite{PDG92} makes a similar
statement, with ``15 years" revised to ``20 years".
In their recent analysis of the data Hyslop {\it et al.} \cite{HARW} summarize
some previous claims and report evidence for ``resonancelike structures"
[$Z_0(1831),{1\over 2}^+$];
[$Z_0(1788),{3\over 2}^-$];
[$Z_1(1811),{3\over 2}^+$] and
[$Z_1(2074),{5\over 2}^-$]. The negative parity candidates
$Z_0(1788)$ and
$Z_1(2074)$ have
quantum numbers and masses consistent with S-wave K$^*$N and K$^*\Delta$
molecules respectively. We would not normally expect P-wave molecules; odd-L is
required to couple to positive parity KN channels, and the centrifugal barrier
suppresses binding due to these short range forces. However, threshold effects
which resemble resonances might arise in the full multichannel problem, and the
very strong spin-orbit force evident in the P$_{01}$ and P$_{03}$ KN partial
waves may be sufficient to induce binding in some channels.
A clarification of the status of Z$^*$ candidates
through the determination of experimental amplitudes for the processes KN $\to
$ K$^*$N, KN $\to $ K$\Delta$ and KN $\to $ K$^*\Delta$ in addition to the
elastic KN reaction will be an important goal of future studies at kaon
factories.
The S-wave (I,J$^{\rm P}$) quantum numbers in which
molecule bound states are {\it a priori} most likely are as follows:
\begin{displaymath}
{\rm K}\Delta (\approx 1.6-1.7 \ \hbox{Gev}): \ \ \
(2,{3\over 2}^-) \ ;
(1,{3\over 2}^-) \ ; \\
\end{displaymath}
\begin{displaymath}
{\rm K}^* {\rm N} (\approx 1.75-1.85 \ \hbox{Gev}): \ \ \
(1,{3\over 2}^-) \ ;
(1,{1\over 2}^-) \ ; \\
(0,{3\over 2}^-) \ ;
(0,{1\over 2}^-) \ ;
\end{displaymath}
\begin{displaymath}
{\rm K}^* \Delta (\approx 2.0-2.1 \ \hbox{Gev}): \ \ \
(2,{5\over 2}^-) \ ;
(2,{3\over 2}^-) \ ;
(2,{1\over 2}^-) \ ; \\
(1,{5\over 2}^-) \ ;
(1,{3\over 2}^-) \ ;
(1,{1\over 2}^-) \ .
\end{displaymath}
We can use our detailed model of meson-baryon scattering in the $(q\bar s)
(qqq)$ system $(q=u,d)$ to identify channels which experience attractive
interactions as a result of the color hyperfine term. These we again show as
weight factors which multiply each of the four diagrams $D_1\dots D_4$. Since
the overlap integrals these weights multiply are all positive and of comparable
magnitude, the summed weight can be used as an estimate of the sign and
relative strength of the interaction in each channel. Positive weights
correspond to a repulsive interaction. Our results for the $h_{fi}$ ``diagram
weights" for all K$\Delta$, K$^*$N and K$^*\Delta$ channels in (I,S$_{tot}$)
notation are given below. We also give the numerical values we find for the
scattering length in each channel given our reference parameter set and masses
M$_{K^*}=0.895$ Gev and M$_{\Delta}=1.210$ Gev.
\vskip 0.5cm
\begin{equation}
{{\rm K}\Delta}(2,{3\over 2}) =
{1\over 6} \ \bigg{[} \
+{3} ,\
-{1} ,\
+{3} ,\
-{1} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.38 \ \hbox{ fm} \ .
\end{equation}
\vskip 0.5cm
\begin{equation}
{{\rm K}\Delta}(1,{3\over 2}) = -{1\over 3} \
{{\rm K}\Delta}(2,{3\over 2}) =
{1\over 18} \ \bigg{[} \
-{3} ,\
+{1} ,\
-{3} ,\
+{1} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = +0.13 \ \hbox{ fm} \ .
\end{equation}
\vskip 0.5cm
\begin{equation}
{{\rm K}^*{\rm N} }(1,{3\over 2}) =
{1\over 27} \ \bigg{[} \
+{7} ,\
+{1} ,\
-{5} ,\
{0} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.08 \ \hbox{ fm} \ .
\end{equation}
\begin{equation}
{{\rm K}^*{\rm N} }(1,{1\over 2}) =
{1\over 54} \ \bigg{[} \
+26 ,\
+5 ,\
+2 ,\
-3 \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.39 \ \hbox{ fm} \ .
\end{equation}
\vskip 0.5cm
\begin{equation}
{{\rm K}^*{\rm N} }(0,{3\over 2}) =
{1\over 9} \ \bigg{[} \
+1 ,\
+1 ,\
+1 ,\
0 \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.22 \ \hbox{ fm} \ .
\end{equation}
\begin{equation}
{{\rm K}^*{\rm N} }(0,{1\over 2}) =
{1\over 18} \ \bigg{[} \
-4 ,\
+5 ,\
-4 ,\
-3 \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = +0.15 \ \hbox{ fm} \ .
\end{equation}
\vskip 0.5cm
\begin{equation}
{{\rm K}^*\Delta}(2,{5\over 2}) =
{1\over 3} \ \bigg{[} \
+{1} ,\
-{1} ,\
-1 ,\
+{1} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = +0.14 \ \hbox{ fm} \ .
\end{equation}
\begin{equation}
{{\rm K}^*\Delta}(2,{3\over 2}) =
{1\over 18} \ \bigg{[} \
+{11} ,\
-{1} ,\
-{1} ,\
-{9} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.20 \ \hbox{ fm} \ .
\end{equation}
\begin{equation}
{{\rm K}^*\Delta}(2,{1\over 2}) =
{1\over 9} \ \bigg{[} \
+{7} ,\
+{1} ,\
+{1} ,\
+{3} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.86 \ \hbox{ fm} \ .
\end{equation}
\vskip 0.5cm
\begin{equation}
{{\rm K}^*\Delta}(1,S_{tot}) = -{1\over 3} \
{{\rm K}^*\Delta}(2,S_{tot}) \ \ \ \ \ \forall \ S_{tot}
\ ;
\end{equation}
\begin{equation}
{{\rm K}^*\Delta}(1,{5\over 2}) =
{1\over 9} \ \bigg{[} \
-{1} ,\
+{1} ,\
+1 ,\
-{1} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = -0.05 \ \hbox{ fm} \ .
\end{equation}
\begin{equation}
{{\rm K}^*\Delta}(1,{3\over 2}) =
{1\over 54} \ \bigg{[} \
-{11} ,\
+{1} ,\
+{1} ,\
+{9} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = +0.07 \ \hbox{ fm} \ .
\end{equation}
\begin{equation}
{{\rm K}^*\Delta}(1,{1\over 2}) =
{1\over 27} \ \bigg{[} \
-{7} ,\
-{1} ,\
-{1} ,\
-{3} \
\bigg{]}
\ ;
\end{equation}
\begin{equation}
a = +0.29 \ \hbox{ fm} \ .
\end{equation}
Evidently attractive forces arise from the OGE spin-spin interaction in the
minimum-spin, minimum-isospin channels,
\begin{displaymath}
{\rm K}\Delta : \ \ \
(1,{3\over 2}) \ ;
\end{displaymath}
\begin{displaymath}
{\rm K}^* {\rm N}
: \ \ \
(0,{1\over 2}) \ ;
\end{displaymath}
\begin{displaymath}
{\rm K}^* \Delta
: \ \ \
(1,{1\over 2}) \ .
\end{displaymath}
The two exceptions to this rule are the K$^*\Delta$ channels
\begin{displaymath}
{\rm K}^* \Delta
: \ \ \
(2,{5\over 2})
\end{displaymath}
and
\begin{displaymath}
{\rm K}^* \Delta
: \ \ \
(1,{3\over 2}) \ ;
\end{displaymath}
although their weights sum to zero, variations in the detailed overlap
integrals lead to attractive OGE-hyperfine forces in these two channels as
well.
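These sign patterns can be verified mechanically from the weight factors
listed above. The Python sketch below tabulates the summed weights in exact
rational arithmetic; the channel keys and function name are our own labels.

```python
from fractions import Fraction as F

# Diagram weights [w1, w2, w3, w4] as listed above; the common
# prefactor of each channel is kept separate.
CHANNELS = {
    ("KDelta",  2, "3/2"): (F(1, 6),  [3, -1, 3, -1]),
    ("KDelta",  1, "3/2"): (F(1, 18), [-3, 1, -3, 1]),
    ("K*N",     1, "3/2"): (F(1, 27), [7, 1, -5, 0]),
    ("K*N",     1, "1/2"): (F(1, 54), [26, 5, 2, -3]),
    ("K*N",     0, "3/2"): (F(1, 9),  [1, 1, 1, 0]),
    ("K*N",     0, "1/2"): (F(1, 18), [-4, 5, -4, -3]),
    ("K*Delta", 2, "5/2"): (F(1, 3),  [1, -1, -1, 1]),
    ("K*Delta", 2, "3/2"): (F(1, 18), [11, -1, -1, -9]),
    ("K*Delta", 2, "1/2"): (F(1, 9),  [7, 1, 1, 3]),
    ("K*Delta", 1, "5/2"): (F(1, 9),  [-1, 1, 1, -1]),
    ("K*Delta", 1, "3/2"): (F(1, 54), [-11, 1, 1, 9]),
    ("K*Delta", 1, "1/2"): (F(1, 27), [-7, -1, -1, -3]),
}

def summed_weight(key):
    """Summed diagram weight: positive = repulsive, negative =
    attractive, zero = sign decided by the detailed overlap integrals."""
    prefactor, weights = CHANNELS[key]
    return prefactor * sum(weights)

# Channels with a negative summed weight (attractive by this criterion):
attractive = sorted(k for k in CHANNELS if summed_weight(k) < 0)
```

The three channels this picks out are exactly the minimum-spin,
minimum-isospin channels listed above; the four K$^*\Delta$ channels with
vanishing summed weight are left undecided, as discussed in the text.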
For our reference parameter set we find no molecular bound states; the
attractive forces are too weak to induce binding.
The experimental situation at present is rather confused; some references claim
evidence for resonances in several channels (see for example
\cite{HARW,Hash,Hyslop}), whereas other references
such as \cite{MO} and \cite{Watts} conclude that the same
phase shifts are nonresonant. Our results
do not support the most recent claims of resonances \cite{HARW}, since the
S-wave quantum numbers of our attractive channels do not correspond to those of
the negative parity candidates [$Z_0(1788),{3\over 2}^-$] and
[$Z_1(2074),{5\over 2}^-$].
However our negative result may be an
artifact of our approximations, including the neglect of spin-orbit effects and
couplings between channels. The spin-orbit effects are known from experiment to
be very important, and might be sufficient to lead to
Z$^*$-molecule bound states or strong
threshold enhancements in the attractive channels. Our negative result is
based on strong assumptions about the form of the interaction; these
assumptions should be relaxed in future theoretical work, and the result
should not be used to argue against experimental searches for possible Z$^*$
meson-baryon molecules.
\section{Summary and conclusions}
In this paper we have applied the quark Born diagram formalism to KN
scattering. In this approach one calculates hadron-hadron scattering amplitudes
in the nonrelativistic quark potential model assuming that the amplitude is the
coherent sum of all OGE interactions followed by all allowed quark line
exchanges; this is expected to be a useful description of reactions which are
free of $q\bar q$ annihilation. The model has few parameters, here
$\alpha_s/m_q^2$, $\rho=m_q/m_s$ and the hadron wavefunction parameters, and
with Gaussian wavefunctions the scattering amplitudes can be derived
analytically. The model was previously applied to I=2 $\pi\pi$ and I=3/2 K$\pi$
scattering with good results.
KN scattering is an important test of this approach because it is also
annihilation-free (at the valence quark level) and the meson and baryon
wavefunction parameters and the interaction strength are already reasonably
well established. Thus there is little freedom to adjust parameters. We find
good agreement with the experimental low energy I=0 and I=1 phase shifts given
standard quark model parameters. (The experimental I=0 scattering length is
usually claimed to be very small; we disagree with this interpretation of the
data and argue in support of a larger value.) A resolution of the disagreements
between different I=0 KN phase shift analyses, especially at very low energies,
is an important task for future experimental work. Hyslop \cite{Hyslop} also
suggests additional experimental work on the I=0 KN system. At higher energies
we find that the single Gaussian S-wave phase shifts fall with energy more
quickly than experiment given standard quark model parameters; we attribute
most of this effect to departures of the hadron wavefunction from single
Gaussians at short distances, perhaps in response to the attractive color
hyperfine interaction. We have confirmed that a smaller hadronic length scale
(about 0.7 times the usual nonrelativistic quark potential model
scale) gives S-wave phase shifts which are in
good agreement with experiment at all energies.
We have investigated the possibility of Z$^*$-molecule meson-baryon bound
states by extending our calculations to all channels allowed for
K$\Delta$, K$^*$N and K$^*\Delta$. Although we do find attractive interactions
in certain channels, in no case is the corresponding interhadron potential
sufficiently strong to form a bound state. Of course this result may be an
artifact of our approximations, in particular the assumption of keeping only
the spin-spin color hyperfine term and the single channel approximation. The
effect of relaxing these approximations would be a very interesting topic
for future study.
There are additional effects in the L$>$0 KN system which are known to be
important experimentally, which are not incorporated in our calculations of
single channel color hyperfine matrix elements. The most important of these is
a very large spin-orbit force, which it has not been possible to explain as an
OGE interaction \cite{VTJ,MP}.
Both this spin-orbit interaction and the Z$^*$ candidates may
be strongly affected by coupled channel effects, which we plan to investigate
in future work. Since much is already known experimentally about the reactions
KN$\to$K$^*$N and KN$\to$K$\Delta$, it should be possible to test
predictions of the quark Born diagrams for these channel couplings using
existing data sets. Although one might expect OPE forces to be important in
coupling KN to inelastic channels, such as in I=1 KN$\to$K$^*$N, the OPE
contribution to this process has been found experimentally to be small near
threshold \cite{bland2}. Thus experiment suggests that interquark forces such
as OGE and the confining interaction may be more important
than meson exchange in coupling KN to inelastic channels. We plan to evaluate
these offdiagonal couplings in detail in a future study.
\acknowledgements
We acknowledge useful contributions from R.Arndt, W.Bugg, S.Capstick,
F.E.Close, G.Condo, H.Feshbach, N.Isgur, R.Koniuk, M.D.Kovarik, K.Maltman,
B.R.Martin, D.Morgan, G.C.Oades, R.J.N.Phillips, B.Ratcliff, R.G.Roberts,
L.D.Roper, D.Ross,
S.Sorensen, J.Weinstein and R.Workman. This work was sponsored in
part by the United States Department of Energy under contracts
DE-AC02-76ER03069 with the Center for Theoretical Physics at the Massachusetts
Institute of Technology and DE-AC05-840R21400 with Martin Marietta Energy
Systems Inc.
\section{Introduction}
\label{sec:Introduction}
Machine translation (MT) quality has improved substantially over the past years, allegedly to the degree that it is no longer distinguishable from professional human translation (HT). The first claims of human--machine parity were based on MT systems geared to news translation \citep{Hassan2018,Popel2018}, and soon refuted due to weaknesses in the evaluation methodology. Reproductions with professional translators rather than crowd workers and full documents rather than single sentences likewise concluded that HT was superior to MT in terms of both accuracy and fluency \citep{Toral2018,Laeubli2018}.
Human--machine parity claims may not hold with MT systems for broad domains such as news articles, but systems geared to narrower domains have been shown to achieve far better quality \citep[e.g.\xspace,][]{Levin2017}, and it is unclear how they compare to specialised human professionals. In this paper, we propose an evaluation design that avoids the weaknesses identified in previous human--machine comparisons (\Section{Background}), and relies on metrics that are arguably better quantifiable and interpretable than adequacy and fluency judgments: error counts and edit distance (\Section{BackgroundEvaluationMT}). Evaluators are asked to flag errors in and post-edit full documents, where half of the sentences are MT and the other half are HT (\Section{Methodology}). We analyse data collected in a study involving three language pairs and ten professional translators, and find that professional translators post-edit professional HT almost as much as MT, and rate the two similarly in terms of issues with terminology, omission, and typography (\Section{Experiment}). We also contextualise our results within the ongoing discussion on human--machine parity, suggesting that further assessments will need to focus specifically on what professional translators can do better than MT systems -- and vice versa -- rather than comparing their \enquote{overall quality} (\Section{Discussion}). Our method should provide a means to assess the viability of MT in specific professional translation contexts, and may possibly help decrease resistance against the technology among professional translators.
\section{Background}
\label{sec:Background}
How to tell whether a translation is good or bad is one of the most important and one of the most difficult questions asked in connection with translation. Best practices for evaluating HT and MT differ, and assessments of human--machine parity have largely ignored the former.
\subsection{Evaluation of HT}
\label{sec:BackgroundEvaluationHT}
Quality assurance in professional translation workflows typically means manual identification of errors in (a sample of) translations. The error types depend on the quality standard. LISA, the first quality standard that gained widespread adoption in the translation industry, defines 20–123 error types and three severity levels: minor, major, and critical. SAE~J2450, originating from the automotive industry, uses fewer error types and only two severity levels: minor and major. In contrast to LISA, SAE~J2450 focusses exclusively on linguistic quality (i.e., no style and formatting, etc.). More recently, a joint academia-industry initiative has proposed the Multidimensional Quality Metrics (MQM) framework, which allows the definition of custom quality standards by choosing a subset of (weighted) error types.
The quality score of a given translation is computed as a linear combination of error counts and severity levels (i.e., weights). The error categories are defined in the quality standard; the number of errors per category and the severity of each error are determined by a single qualified rater. A translation is considered fit for purpose if its quality score does not exceed a given threshold.
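As an illustration, such a score might be computed as follows; the severity
weights, the per-1000-words normalisation, and the threshold are hypothetical,
since each quality standard defines its own.

```python
# Hypothetical severity weights; LISA-style standards use three levels.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, words):
    """Weighted error penalty per 1000 words (lower is better).

    errors: list of (category, severity) pairs flagged by the rater.
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return 1000 * penalty / words

def fit_for_purpose(errors, words, threshold=10.0):
    """A translation passes if its score stays within the threshold."""
    return quality_score(errors, words) <= threshold
```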
\subsection{Evaluation of MT}
\label{sec:BackgroundEvaluationMT}
While there are various automatic metrics such as BLEU \citep{Papineni2002} or TER \citep{Snover2006}, human evaluation is considered the only reliable method in MT quality evaluation.\footnote{At WMT 2019, human quality judgements for the strongest MT systems were negatively correlated with BLEU, the most widely used automatic metric \citep[p.~79]{Ma2019}.} Rather than specific error categories, human evaluation of MT quality has been focussed on two rather abstract dimensions: adequacy and fluency. Human raters judge the degree to which a translation adequately expresses the meaning of its source text or constitutes a fluent sentence in the target language, respectively, on either an absolute or relative scale. 5-point adjectival scales were used at the first large-scale MT evaluation campaigns, but soon replaced by relative ranking because categories such as \enquote{[the translation preserves] most meaning} and \enquote{[the translation preserves] much meaning} proved hard to distinguish \citep{KoehnMonz2006}. Relative rankings show better inter- and intra-rater agreement \citep{CallisonBurch2007}, but since they only tell whether, but not by how much, two or more translations differ -- raters choose between better, same (tie), or worse --, the research community has lately embraced continuous Likert-like scales \cite[referred to as direct assessment, see][]{Graham2013}.
The score of a given system output, typically a few hundred to a few thousand sentences, is computed by aggregating the adequacy and fluency judgements of multiple bi- and monolingual raters, respectively. Raters are typically MT researchers \citep[e.g.\xspace,][]{WMT2019} and/or crowd workers, but rarely qualified translators.
\subsection{Assessment of Human--Machine Parity}
\label{sec:BackgroundParity}
\begin{table}
\fontsize{10.1pt}{10.1pt}\selectfont
\renewcommand{\arraystretch}{1.5}
\input{tables/error.definitions}
\caption{Error types and definitions.}
\label{tab:ErrorTypes}
\end{table}
\begin{table*}
\renewcommand{\arraystretch}{1.3}
\fontsize{10.1pt}{10.1pt}\selectfont
\input{tables/materials.example}
\caption{Example of a pre-translated document in which HT and MT are interleaved, including a segment with wrong terminology (ID 1), an error in typography (3), and an omission (6). The errors in segments 3 and 6 have been fabricated for the purpose of illustration.}
\label{tab:MaterialsExample}
\end{table*}
In summary, the evaluation of HT focusses on quality: raters are qualified translators and give feedback on specific errors (such as the number of severe terminology problems). Because qualified feedback is expensive, few segments are evaluated by a single translator. The evaluation of MT, on the contrary, focusses on quantity: many segments are evaluated by multiple raters, but those raters are not qualified and give feedback on overall quality (such as how adequate a translation is on a 100-point scale).
Given the different evaluation traditions for HT and MT, it could be assumed that a comparison of HT and MT quality would aim at combining the two. However, the first evaluation that claimed MT had reached parity with HT -- in one language pair and domain, i.e.\xspace, Chinese to English news translation -- used an MT evaluation design: bilingual crowd workers rated a large number of translated sentences in terms of adequacy \citep{Hassan2018}. Two reproductions of \citeg{Hassan2018} evaluation showed that their evaluation design disadvantaged HT. Because the translated sentences were shown to raters in random order, they could not consider phenomena related to document-level cohesion, such as consistent translation of a product name throughout a news article. When raters compared full articles rather than single sentences, HT was rated significantly better than MT \citep{Laeubli2018}. Even with isolated sentences, HT was rated significantly better than MT when professional translators rather than crowd workers carried out the evaluation \citep{Toral2018}.
\section{Evaluation Design}
\label{sec:Methodology}
We propose an experimental design for combined evaluation of HT and MT that avoids the weaknesses of previous assessments on human--machine parity in translation (\Section{Background}).
\subsection{Materials}
\label{sec:MethodologyMaterials}
The evaluation is based on a source text (ST) that is segmented into either sentences or paragraphs. We obtain two translations of the entire source text: one created by a professional translator (HT), the other by the MT system (MT). The result is a segment-aligned text where each source segment (e.g.\xspace, ST-1) has two translations (HT-1 and MT-1). HT is translated from scratch, i.e.\xspace, without any MT system. The creator of HT has the same background as the raters (see below), but no further involvement in the experiment.
For each rater, we prepare a translation that combines ST with a mix of HT and MT. To this end, we split ST into sections of equal length. We then randomly pair each source segment with either its corresponding HT or MT, making sure to include an equal number of translations from both sources. An example is shown in \Table{MaterialsExample}. Note that the scrambling of HT and MT may introduce disfluencies, as further discussed in \Section{DiscussionExperimentalValidity}.
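A minimal sketch of the random pairing, assuming aligned segment lists for HT
and MT (the function name and fixed seed are ours):

```python
import random

def interleave(ht, mt, seed=0):
    """Mix aligned HT and MT segment lists into one pre-translated text.

    Half of the target segments (up to rounding) come from each source;
    the assignment to source segments is random but balanced.
    """
    assert len(ht) == len(mt)
    n = len(ht)
    origins = ["HT"] * (n // 2) + ["MT"] * (n - n // 2)
    random.Random(seed).shuffle(origins)
    return [(ht[i] if o == "HT" else mt[i], o) for i, o in enumerate(origins)]
```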
\subsection{Raters}
Since our evaluation involves post-editing (see below), and because translation quality is judged differently by professional translators and laypeople \citep{Toral2018}, we engage professional translators as raters. Their area of expertise matches the source text.
\subsection{Procedure}
The evaluation is organised as a task in which raters are instructed to evaluate the segments in their prepared translation (see above). Raters are told that the entire translation is MT. The primary motivation for this experimental manipulation is that we want raters to focus on evaluating segments rather than guessing if they are MT or HT. The latter would likely occur if they knew that both are present, not least because many professional translators fear \enquote{being replaced by a machine} \citep{Cadwell2018}. Translators might also be inclined to evaluate (what they believe is) MT more critically than HT because they have more negative perceptions about the former \citep{LaeubliOrregoCarmona2017}.
The evaluation of each segment involves three subtasks.
First, raters are asked to post-edit the segment. They are instructed to correct spelling and grammatical errors, but not style.
Second, raters are asked to flag the presence (but not count the number) of errors in the original target segment. We use a subset of MQM error types that has been shown to be particularly relevant for post-editing of domain-specific MT \citep{Castilho2018}, as listed in \Table{ErrorTypes}, but note that other subsets or quality standards (\Section{BackgroundEvaluationHT}) could be used instead.
Third, raters have the option to leave a comment for the segment if they wish to give more specific feedback.
Raters complete the experiment within a fixed time frame. While the practical consideration here is limiting experimental cost, time pressure is common in professional translation \citep{EhrensbergerDow2016} and has been shown to increase cognitive function in controlled translation experiments \citep{Campbell1999}.
\subsection{Analysis}
\label{sec:MethodologyAnalysis}
\begin{table}[]
\centering
\renewcommand{\arraystretch}{1.3}
\fontsize{10.1pt}{10.1pt}\selectfont
\input{tables/fisher.example}
\caption{Contingency table for two binary variables. Raters flagged omissions in 14 segments originating from HT, and in 12 segments originating from MT. Omission does not depend on segment origin (HT vs.\ MT) according to a two-tailed Fisher's exact test ($p=0.693$). Data corresponds to \Figure{PlotsFROmission}.}
\label{tab:ContingencyTableExample}
\end{table}
We calculate the minimum edit distance (MED) between each original and post-edited segment, as well as corpus-level HTER \citep{Snover2006} for all HT and MT segments in each target language. While HTER correlates better with human judgements of MT quality, MED is easier to interpret, particularly for individuals outside the MT research community. In reference to industry-focussed studies on post-editing \citep[e.g.\xspace,][]{Volk2010}, we group post-edited segments into exact matches (MED = 0), non-exact matches (MED >0), and high effort (MED >5).
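A minimal sketch of the MED computation and grouping; we use character-level
Levenshtein distance here, as the text does not fix the edit unit, and treat
the three groups as mutually exclusive with high effort taking precedence.

```python
def min_edit_distance(a, b):
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1,                            # deletion
                         cur[j - 1] + 1,                         # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitution
        prev = cur
    return prev[len(b)]

def effort_group(original, post_edited):
    """Group a post-edited segment by its edit distance."""
    med = min_edit_distance(original, post_edited)
    if med == 0:
        return "exact match"
    return "high effort" if med > 5 else "non-exact match"
```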
Besides descriptive statistics, we test if the presence of errors and post-editing effort depends on whether target segments originate from HT or MT. Target segment origin is our binary independent variable, and we test if its proportion varies among the proportion of a single binary dependent variable using a two-tailed Fisher's exact test as implemented in \textit{R} \citep{Bailey1995}. An example is shown in \Table{ContingencyTableExample}.
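For illustration, the test can be reproduced with a self-contained
hypergeometric implementation. The counts below take the 14 and 12 flagged
omissions from \Table{ContingencyTableExample} and assume hypothetical totals
of 237 HT and 238 MT segments; the function name is ours.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test on a 2x2 table [[a, b], [c, d]].

    p = sum of hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(x):  # P(top-left cell = x) under the null of independence
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Omission table: rows = HT/MT origin, columns = flagged/not flagged.
p = fisher_exact_two_sided([[14, 223], [12, 226]])  # non-significant
```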
\section{Experimental Results}
\label{sec:Experiment}
We use the evaluation design described in the previous section to compare HT to MT in an experiment with three language pairs and ten professional translators. The study is conducted within the language services department of a multinational insurance company.
\subsection{MT System}
\label{sec:ExperimentMTSystem}
\begin{table}
\centering
\renewcommand{\arraystretch}{1.3}
\fontsize{10.1pt}{10.1pt}\selectfont
\input{tables/data}
\caption{Training Data}
\label{tab:TrainingData}
\end{table}
\begin{table*}
\renewcommand{\arraystretch}{1.4}
\fontsize{10.1pt}{10.1pt}\selectfont
\centering
\begin{adjustwidth}{-2mm}{}
\input{tables/results}
\end{adjustwidth}
\caption{Results. Counts denote the number of segments for which a given variable holds true for HT or MT, respectively; relative numbers are shown in brackets. Pairs of significantly different proportions according to a two-tailed Fisher's exact test (at $p\leq0.05$) are marked with *. Example: In DE--FR, 5/237 HT segments and 3/238 MT segments contain a typographical error. The difference is not statistically significant. Visualisations and $p$ values are shown in Figures~\ref{fig:PlotsEN}--\ref{fig:PlotsIT}.}
\label{tab:Results}
\end{table*}
We train a Transformer (big) model \citep{Vaswani2017} as implemented in Sockeye \citep{Sockeye} with FFN size 2048 for each language pair. The training data is listed in \Table{TrainingData}. We combine publicly available out-of-domain data (OOD) from OPUS \citep{OPUS}, from which we discard the lowest-scoring 75\%\, by means of dual conditional cross-entropy filtering \citep{JunczysDowmunt2018}, with in-domain data (ID). We oversample ID to match OOD where possible, with a maximum oversampling factor of 10.
We also integrate domain-specific terminology by means of data augmentation \citep{Dinu2019}. We use two different sets of terms for training and testing (i.e.\xspace, use in production). For training, we automatically filter the insurance company's full terminology, removing terms with low frequencies in the training data for reasons of time efficiency, and using a stop word list to remove terms that occur frequently in regular text (\enquote{normal words}). In addition, we discard terms in 30\%\, of the training segments to increase robustness in constraint-free scenarios. For testing, we use a smaller terminology that was narrowed down by the company's professional terminologists.
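The augmentation can be illustrated with a simplified, single-token sketch in
the spirit of Dinu et al.\ \citep{Dinu2019}; the tag strings, function name,
and example terms are ours, not those of the production system.

```python
def augment_with_terms(src_tokens, term_map):
    """Annotate source tokens with their target terms inline.

    Simplified to single-token terms; the tag strings are placeholders.
    """
    out = []
    for tok in src_tokens:
        if tok in term_map:
            out.extend(["<term>", tok, "<trans>", term_map[tok], "</term>"])
        else:
            out.append(tok)
    return out
```

In training, such annotations would be dropped for a fraction (here, 30\%) of
the segments so that the model stays robust when no constraints are supplied.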
\subsection{Texts and Raters}
For each language pair, we select a document that contains terminology and language specific to the company's insurance sector: the description of business processes in a customer application (DE--EN) and a text on specialist training in sales (DE--FR, DE--IT).
We have all three documents translated by external translators who are regularly contracted by the company. We also translate the documents using the MT systems described above, and prepare a pre-translated version of each document in which half the target segments stem from the external translators and the other half from the MT system (\Section{MethodologyMaterials}).
The raters participating in the experiment are in-house translators at the company, and have not previously seen these documents. The number of raters differs between language pairs: four raters each for DE--FR and DE--IT, and two for DE--EN. Each rater is allocated 150 consecutive segments of the document, so the number of experimental items (segments) amounts to 600 for DE--FR and DE--IT, and to 300 for DE--EN.
The raters were given 90 minutes to complete the task. Two raters for DE--FR and one rater for DE--IT did not finish in time, reducing the number of items in our analysis to 475 and 492, respectively.
\subsection{Error Analysis}
\label{sec:ResultsErrorAnalysis}
Experimental results are listed in \Table{Results}. We first analyse, separately for segments originating from HT and MT, the proportion of segments that contain at least one terminology, omission, or typography error.
The number of segments with terminology errors is higher for MT than HT. While almost twice as many segments are affected in DE--EN, the difference is less marked in DE--FR, and very small in DE--IT.
Omissions are found in more segments originating from MT in DE--EN, and in more segments originating from HT in DE--FR and DE--IT. The number of segments containing omissions is considerably lower in DE--EN and DE--IT than in DE--FR.
In terms of typography, the number of affected segments is low for both HT and MT. HT is slightly better than MT in DE--EN, and slightly worse in DE--FR and DE--IT.
The proportion of erroneous segments is similar for HT and MT overall. A two-tailed Fisher's exact test shows no significant difference between HT and MT in any error category and language pair. $p$-values are shown in Figures~\ref{fig:PlotsEN}--\ref{fig:PlotsIT}.
\subsection{Post-editing Effort}
\label{sec:ResultsEditDistance}
We compute corpus-level HTER for all HT and MT segments in each language pair (\Table{Results}, last row). We observe very low scores overall, and small differences between HT and MT in DE--FR and DE--IT.
We also compute MED between each pre-translated and post-edited target segment. Descriptive statistics are listed in \Table{Results}. In all language pairs, raters post-edited fewer characters in HT on average (avg), but again, the differences are small, particularly for DE--IT. The segment that required most post-editing (max) stemmed from HT in DE--EN, and from MT in DE--FR and DE--IT.
We observe a low number of segments that required any post-editing at all. The proportion of these segments is referred to as >0 in \Table{Results}. For example, only 37 out of 150 MT segments in DE--EN were post-edited; raters decided that raw MT was good enough for the remaining segments. However, the proportion of segments that needed any editing was even lower for HT in DE--EN, significantly so according to a two-tailed Fisher's exact test ($p$$\leq$.05). The difference between the proportion of segments with an MED of more than five characters (>5), on the other hand, is not significant ($p$=0.255) in DE--EN. In DE--FR, both >0 and >5 segments are significantly more frequent in MT (both at $p$$\leq$.05). In DE--IT, where raters post-edited more HT than MT segments (see >0), the difference is not significant at $p$=0.110 and $p$=0.674, respectively.
\section{Discussion}
\label{sec:Discussion}
We discuss design decisions in our evaluation and alternative approaches to inference testing, and contextualise our results within the ongoing discussion on human--machine parity in language translation.
\subsection{Experimental Validity}
\label{sec:DiscussionExperimentalValidity}
Our evaluation is based on pre-translated documents in which target segments from HT and MT are interleaved (\Table{MaterialsExample}). In contrast to other MT quality evaluation experiments \citep[e.g.\xspace,][]{Green2013,Hassan2018}, this enables raters to consider document-level context, but the shuffling of MT and HT may introduce disfluencies that would not occur if all segments stemmed from either MT or -- particularly -- HT. In DE--FR, for example, the German term \textit{Einzelfirma} (sole proprietorship), which occurred in seven source segments, was translated as \textit{raison individuelle} and \textit{entreprise individuelle} by HT and MT, respectively. The first three instances were translated by MT, and noting the inconsistency with the fourth instance translated by HT, the rater in charge flagged the segment as erroneous and commented that \enquote{[the term translations] should be harmonised}. The MT system's translation was consistent with the company's terminology database (TB) in this case, and the flagging of HT as erroneous was correct. However, if MT and HT used different translations for a term not specified in the TB, the translation introduced second would likely be marked as wrong even if it was used consistently within HT and MT. This may increase the number of terminology errors overall, but since the order in which MT and HT appear in documents is randomised in our evaluation design, it would not disadvantage one over the other with sufficient sample size. We also note that combining segments from different sources is common in professional translation workflows: when translations for adjacent source segments are retrieved from a translation memory (TM), these translations may (and typically will) stem from different documents and translators. 
The documents we prepared for our experiment are what translators would normally see in their computer-aided translation (CAT) tool, with HT corresponding to exact matches, except that segment origin (HT or MT) is not shown in the experiment.
We did not use a CAT tool in our experiment, but presented the pre-translated documents as spreadsheets with dedicated columns for error annotations and comments. A downside of this design decision is that the company's TB was not directly integrated into the translation environment. In the CAT tool that the in-house translators (the raters in this experiment) use in their daily work, terms contained in the TB are highlighted in source segments, and term translations are shown in a dedicated window. While raters had access to the TB during the experiment, it is likely that they missed a few terminology errors because terms were not highlighted in the experiment. Conversely, we noticed that they marked a variety of other mistakes as terminology errors, such as wrong choice of pronoun (e.g.\xspace, \textit{que} instead of \textit{soi} in DE--FR) or wrong verb forms (e.g.\xspace, \textit{data already exists} instead of \textit{data already exist} in DE--EN). Since raters blindly evaluated HT and MT segments the same way, this may affect the true number of terminology errors in our analysis, but not the proportion between errors in HT and MT.
The blind evaluation of pre-translated segments -- the fact that we did not tell raters that half of the pre-translations were HT, and that we did not show that pre-translations originated from different sources (HT and MT) -- is another design decision that warrants discussion. Whether a pre-translated segment was retrieved from a TM (as an exact or fuzzy match) or an MT system is important information to professional translators and thus prominently shown in CAT tools. However, beliefs about (non-)presence of MT have been shown to impact how willing people are to tolerate translation mistakes \citep{Gao2014}, and surveys have shown that professional translators tend to have negative perceptions about MT \citep{LaeubliOrregoCarmona2017,Cadwell2018}. Our experimental manipulation was aimed at fostering equal rigour in evaluating HT and MT, and preventing raters from guessing if segments are HT or MT rather than focussing on actual evaluation.
\subsection{Statistical Analysis}
\label{sec:StatisticalAnalysis}
A limitation of using contingency tables (see \Table{ContingencyTableExample} for an example) is that we can only use categorical variables as dependent variables. To that end, we binarised MED with fixed and arguably arbitrary thresholds (>0 and >5; see \Section{MethodologyAnalysis}). Predicting MED in a regression model would seem more appropriate, and offers the advantage of accommodating further predictors such as segment length, but this violated the assumption of normally distributed residuals in our data even when extreme values were removed. Further analysis, including factors other than origin (HT/MT) that may explain the variance in presence of errors and post-editing distance, is left to future work.
We use Fisher's exact test to analyse contingency tables, the null hypothesis being that the likelihood of a segment showing a certain property -- such as containing wrong terminology or having been post-edited (MED >0) -- is not influenced by its origin (HT or MT). Fisher's exact test has been criticised as rather conservative \citep[see][]{AndresTejedor1995}, but is more appropriate than $\chi^2$ or $G$ tests of independence when sample sizes are small \citep{RuxtonNeuhaeuser2010}.\footnote{Using a $\chi^2$ or $G$ test of independence has no effect on any finding of (non-)significance reported in this paper. We observe the largest difference when testing for independence of origin and omission in DE--EN with a $G$ test ($p$=0.085) instead of a two-tailed Fisher's exact test ($p$=0.214, see \Figure{PlotsENOmission}).}
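The comparison in the footnote can be reproduced with SciPy rather than \textit{R}. The sketch below (illustrative, not the authors' analysis script) uses the DE--EN omission counts, 1/150 HT vs.\ 5/150 MT segments; \texttt{correction=False} disables the Yates continuity correction, which is not part of a plain $G$ test:

```python
# G test of independence vs. two-tailed Fisher's exact test on the
# DE-EN omission counts (1/150 HT vs. 5/150 MT segments).
from scipy.stats import chi2_contingency, fisher_exact

table = [[1, 149],   # HT: omission / no omission
         [5, 145]]   # MT: omission / no omission

# lambda_="log-likelihood" turns the chi-squared test into a G test;
# correction=False disables the Yates continuity correction.
g, p_g, dof, _ = chi2_contingency(table, correction=False,
                                  lambda_="log-likelihood")
_, p_fisher = fisher_exact(table)

print(round(p_g, 3), round(p_fisher, 3))  # 0.085 vs. 0.214, as in the footnote
```

Both tests agree that the difference is not significant at $p\leq0.05$, but the $G$ test yields a noticeably smaller $p$-value, illustrating why the more conservative Fisher's exact test is preferred for such small counts.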
It would also be desirable to include more raters in the experiment. The limited number of participants is often criticised in translation experiments, justifiably so because translation performance varies considerably between individuals \citep[e.g.\xspace,][]{KoehnGermann2014}. With sufficient participants, this variance can be accounted for by means of mixed-effects modelling \citep{Green2013}, but quite apart from budgetary constraints, there may just not be enough qualified raters in domain-specific settings. The in-house translation department we work with in this study, for example, employs 2--4 specialised translators per language pair. Non-experts who could be involved to increase the number of raters have been shown to evaluate MT less critically \citep{Toral2018}. In the present study, we prioritised rater qualification over quantity.
\subsection{Human--Machine Parity?}
Our results illustrate that the question whether MT quality reaches parity with HT is a matter of definition. \citet{Hassan2018}, who analysed quality judgements by crowd workers in Chinese to English news translation, concluded that parity was reached because the difference between judgements of HT and MT is not statistically significant. The same holds for our experiment: professional translators flagged errors in segments originating from HT and MT, and the proportion of erroneous HT and MT segments does not differ significantly for any error type and language pair (\Section{ResultsErrorAnalysis}). This is mainly because error rates are fairly low for both HT and MT, which indicates that both translation methods achieve high quality. However, MT produced more erroneous segments than professional translators (HT) overall, and the fact that statistical tests (\Section{StatisticalAnalysis}) find no significant difference between HT and MT either means that there really is none, which would imply parity, or that the number of analysed segments (the sample size) is too small to infer a significant difference. Consider the proportion of segments with omissions in DE--EN (\Table{Results}): 1/150 in HT vs.\ 5/150 in MT. Omissions are rare in both, and the difference is attributed to chance ($p$=0.214, see also \Figure{PlotsENOmission}), but in the very document we analysed, omissions were five times more common in MT segments nonetheless. If assessing human--machine parity was the aim of our study, a larger sample size would be imperative to come to understand if such effects are true or random. Nevertheless, the observation that MT produced fewer erroneous segments than HT in at least one language pair per error type in our experiment -- except for terminology, where MT only came close to HT in DE--IT with 19/248 vs.\ 18/244 erroneous segments, respectively -- is noteworthy.
While our error analysis was limited to three specific phenomena -- terminology, omission, and typography -- the comparison of pre-translated to post-edited segments yields insights about HT and MT quality overall. MT produced significantly more segments that needed post-editing at all (MED >0) in DE--EN and DE--FR. In DE--EN, however, the proportion of segments that needed substantial post-editing (more than five characters, i.e.\xspace, MED >5) was not significantly higher in MT, and in DE--IT, the number of segments that needed any (MED >0) and substantial (MED >5) post-editing was lower in MT than in HT. This is a remarkable finding, given that HT was produced by an expert translator with experience in the textual domain we investigate. The implication here is that domain-specific MT (\Section{ExperimentMTSystem}) achieves strong results, and it may be insightful to contrast it with generic MT. Moreover, feedback from raters, who had the option to leave a comment for each segment, does not suggest that the experimental manipulation -- the mixture of MT with HT -- was noticeable. In one particular instance, a rater commented \enquote{NMT hat überkorrigiert} (\enquote{NMT has overcorrected}), when in fact the segment in question originated from HT.
\section{Conclusion}
\label{sec:Conclusion}
In a blind evaluation, ten specialised translators post-edited and flagged errors in pre-translated documents in which domain-specific MT was interleaved with professional HT. The evaluation comprised three language pairs: DE--EN, DE--FR, and DE--IT. MT required more post-editing than HT on average, but surprisingly, the difference is not significant in DE--IT, where MT produced more segments that needed no post-editing at all, and slightly fewer segments that needed substantial post-editing. We also analysed if the proportion of segments that contain wrong terminology, omissions, or typographical errors differs between HT and MT, and found no significant dependency in any language pair. MT produced considerably more segments with wrong terminology in two out of three language pairs, but slightly fewer segments with omissions or typographical errors in at least one language pair each.
Apart from implying that MT can now reach remarkable quality in domain-specific settings, our results show that professional translators may post-edit professional HT almost as much as MT, and tend to rate the two similarly in terms of issues with terminology, omission, and typography. The caveat here and an aspect that warrants further investigation is that we made our participants believe that the HT they were evaluating was MT. From a methodological point of view, it would be interesting to test if this experimental manipulation would also work the other way around, and analyse if translators treat HT and MT differently depending on what they believe it is. From a more practical perspective, it might also be worth exploring whether the proposed evaluation design could help demonstrate the potential benefits of MT to people who are still sceptical about the technology.
\bibliographystyle{acl}
\section{Introduction}
\label{sec:Introduction}
Machine translation (MT) quality has improved substantially over the past years, allegedly to the degree that it is no longer distinguishable from professional human translation (HT). The first claims of human--machine parity were based on MT systems geared to news translation \citep{Hassan2018,Popel2018}, and soon refuted due to weaknesses in the evaluation methodology. Reproductions with professional translators rather than crowd workers and full documents rather than single sentences likewise concluded that HT was superior to MT in terms of both accuracy and fluency \citep{Toral2018,Laeubli2018}.
Human--machine parity claims may not hold with MT systems for broad domains such as news articles, but systems geared to narrower domains have been shown to achieve far better quality \citep[e.g.\xspace,][]{Levin2017}, and it is unclear how they compare to specialised human professionals. In this paper, we propose an evaluation design that avoids the weaknesses identified in previous human--machine comparisons (\Section{Background}), and relies on metrics that are arguably better quantifiable and interpretable than adequacy and fluency judgments: error counts and edit distance (\Section{BackgroundEvaluationMT}). Evaluators are asked to flag errors in and post-edit full documents, where half of the sentences are MT and the other half are HT (\Section{Methodology}). We analyse data collected in a study involving three language pairs and ten professional translators, and find that professional translators post-edit professional HT almost as much as MT, and rate the two similarly in terms of issues with terminology, omission, and typography (\Section{Experiment}). We also contextualise our results within the ongoing discussion on human--machine parity, suggesting that further assessments will need to focus specifically on what professional translators can do better than MT systems -- and vice versa -- rather than comparing their \enquote{overall quality} (\Section{Discussion}). Our method should provide a means to assess the viability of MT in specific professional translation contexts, and may possibly help decrease resistance against the technology among professional translators.
\section{Background}
\label{sec:Background}
How to tell whether a translation is good or bad is one of the most important and one of the most difficult questions asked in connection with translation. Best practices for evaluating HT and MT differ, and assessments of human--machine parity have largely ignored the former.
\subsection{Evaluation of HT}
\label{sec:BackgroundEvaluationHT}
Quality assurance in professional translation workflows typically means manual identification of errors in (a sample of) translations. The error types depend on the quality standard. LISA, the first quality standard that gained widespread adoption in the translation industry, defines 20–123 error types and three severity levels: minor, major, and critical. SAE~J2450, originating from the automotive industry, uses fewer error types and only two severity levels: minor and major. In contrast to LISA, SAE~J2450 focusses exclusively on linguistic quality (i.e., no style and formatting, etc.). More recently, a joint academia-industry initiative has proposed the Multidimensional Quality Metrics (MQM) framework, which allows the definition of custom quality standards by choosing a subset of (weighted) error types.
The quality score of a given translation is computed as a linear combination of error counts and severity levels (i.e., weights). The error categories are defined in the quality standard; the number of errors per category and the severity of each error are determined by a single qualified rater. A translation is considered fit for purpose if its quality score does not exceed a given threshold.
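A hypothetical sketch of such a score follows; the error categories, severity weights, and fit-for-purpose threshold are invented for illustration and not taken from LISA, SAE~J2450, or MQM:

```python
# Hypothetical quality score: a linear combination of error counts and
# severity weights, normalised per 1000 words. Weights and threshold
# are illustrative only.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, words):
    """Weighted error points per 1000 words (lower is better).
    `errors` maps category -> {severity: count}."""
    points = sum(
        count * SEVERITY_WEIGHTS[sev]
        for per_category in errors.values()
        for sev, count in per_category.items()
    )
    return 1000 * points / words

errors = {"terminology": {"minor": 2, "major": 1},
          "omission": {"critical": 1}}
score = quality_score(errors, words=1500)   # (2*1 + 1*5 + 1*10) = 17 points
fit_for_purpose = score <= 15.0             # illustrative threshold
print(round(score, 2), fit_for_purpose)
```

The translation passes if its score stays below the agreed threshold; everything else (categories, weights, threshold) is a matter of the chosen quality standard.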
\subsection{Evaluation of MT}
\label{sec:BackgroundEvaluationMT}
While there are various automatic metrics such as BLEU \citep{Papineni2002} or TER \citep{Snover2006}, human evaluation is considered the only reliable method in MT quality evaluation.\footnote{At WMT 2019, human quality judgements for the strongest MT systems were negatively correlated with BLEU, the most widely used automatic metric \citep[p.~79]{Ma2019}.} Rather than specific error categories, human evaluation of MT quality has been focussed on two rather abstract dimensions: adequacy and fluency. Human raters judge the degree to which a translation adequately expresses the meaning of its source text or constitutes a fluent sentence in the target language, respectively, on either an absolute or relative scale. 5-point adjectival scales were used at the first large-scale MT evaluation campaigns, but soon replaced by relative ranking because categories such as \enquote{[the translation preserves] most meaning} and \enquote{[the translation preserves] much meaning} proved hard to distinguish \citep{KoehnMonz2006}. Relative rankings show better inter- and intra-rater agreement \citep{CallisonBurch2007}, but since they only tell if but not by how much two or more translations differ -- raters chose between better, same (tie), or worse --, the research community has lately embraced continuous Likert-like scales \cite[referred to as direct assessment, see][]{Graham2013}.
The score of a given system output, typically a few hundred to a few thousand sentences, is computed by aggregating the adequacy and fluency judgements of multiple bi- and monolingual raters, respectively. Raters are typically MT researchers \citep[e.g.\xspace,][]{WMT2019} and/or crowd workers, but rarely qualified translators.
\subsection{Assessment of Human--Machine Parity}
\label{sec:BackgroundParity}
\begin{table}
\fontsize{10.1pt}{10.1pt}\selectfont
\renewcommand{\arraystretch}{1.5}
\input{tables/error.definitions}
\caption{Error types and definitions.}
\label{tab:ErrorTypes}
\end{table}
\begin{table*}
\renewcommand{\arraystretch}{1.3}
\fontsize{10.1pt}{10.1pt}\selectfont
\input{tables/materials.example}
\caption{Example of a pre-translated document in which HT and MT are interleaved, including a segment with wrong terminology (ID 1), an error in typography (3), and an omission (6). The errors in segments 3 and 6 have been fabricated for the purpose of illustration.}
\label{tab:MaterialsExample}
\end{table*}
In summary, the evaluation of HT focusses on quality: raters are qualified translators and give feedback on specific errors (such as the number of severe terminology problems). Because qualified feedback is expensive, few segments are evaluated by a single translator. The evaluation of MT, on the contrary, focusses on quantity: many segments are evaluated by multiple raters, but those raters are not qualified and give feedback on overall quality (such as how adequate a translation is on a 100-point scale).
Given the different evaluation traditions for HT and MT, it could be assumed that a comparison of HT and MT quality would aim at combining the two. However, the first evaluation that claimed MT had reached parity with HT -- in one language pair and domain, i.e.\xspace, Chinese to English news translation -- used an MT evaluation design: bilingual crowd workers rated a large number of translated sentences in terms of adequacy \citep{Hassan2018}. Two reproductions of \citeg{Hassan2018} evaluation showed that their evaluation design disadvantaged HT. Because the translated sentences were shown to raters in random order, they could not consider phenomena related to document-level cohesion, such as consistent translation of a product name throughout a news article. When raters compared full articles rather than single sentences, HT was rated significantly better than MT \citep{Laeubli2018}. Even with isolated sentences, HT was rated significantly better than MT when professional translators rather than crowd workers carried out the evaluation \citep{Toral2018}.
\section{Evaluation Design}
\label{sec:Methodology}
We propose an experimental design for combined evaluation of HT and MT that avoids the weaknesses of previous assessments on human--machine parity in translation (\Section{Background}).
\subsection{Materials}
\label{sec:MethodologyMaterials}
The evaluation is based on a source text (ST) that is segmented into either sentences or paragraphs. We obtain two translations of the entire source text: one created by a professional translator (HT), the other by the MT system (MT). The result is a segment-aligned text where each source segment (e.g.\xspace, ST-1) has two translations (HT-1 and MT-1). HT is translated from scratch, i.e.\xspace, without any MT system. The creator of HT has the same background as the raters (see below), but no further involvement in the experiment.
For each rater, we prepare a translation that combines ST with a mix of HT and MT. To this end, we split ST into sections of equal length. We then randomly pair each source segment with either its corresponding HT or MT, making sure to include an equal number of translations from both sources. An example is shown in \Table{MaterialsExample}. Note that the scrambling of HT and MT may introduce disfluencies, as further discussed in \Section{DiscussionExperimentalValidity}.
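A minimal sketch of this preparation step is given below. The segment lists and the random seed are illustrative, and the sketch balances HT and MT over the whole list rather than per section of equal length as in the actual design:

```python
# Sketch: pair each source segment with either its HT or its MT,
# with an equal number of translations from both sources, in random
# order. Lists are aligned by index (ST-1 <-> HT-1 <-> MT-1).
import random

def interleave(st, ht, mt, seed=42):
    assert len(st) == len(ht) == len(mt) and len(st) % 2 == 0
    origins = ["HT"] * (len(st) // 2) + ["MT"] * (len(st) // 2)
    random.Random(seed).shuffle(origins)  # random but balanced assignment
    return [
        (src, ht[i] if origin == "HT" else mt[i], origin)
        for i, (src, origin) in enumerate(zip(st, origins))
    ]

doc = interleave(
    ["ST-1", "ST-2", "ST-3", "ST-4"],
    ["HT-1", "HT-2", "HT-3", "HT-4"],
    ["MT-1", "MT-2", "MT-3", "MT-4"],
)
assert sum(o == "HT" for *_, o in doc) == 2  # balanced by construction
```

The `origin` column is kept for the later analysis only; raters never see it.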
\subsection{Raters}
Since our evaluation involves post-editing (see below), and because translation quality is judged differently by professional translators and laypeople \citep{Toral2018}, we engage professional translators as raters. Their area of expertise matches the source text.
\subsection{Procedure}
The evaluation is organised as a task in which raters are instructed to evaluate the segments in their prepared translation (see above). Raters are told that the entire translation is MT. The primary motivation for this experimental manipulation is that we want raters to focus on evaluating segments rather than guessing if they are MT or HT. The latter would likely occur if they knew that both are present, not least because many professional translators fear \enquote{being replaced by a machine} \citep{Cadwell2018}. Translators might also be inclined to evaluate (what they believe is) MT more critically than HT because they have more negative perceptions about the former \citep{LaeubliOrregoCarmona2017}.
The evaluation of each segment involves three subtasks.
First, raters are asked to post-edit the segment. They are instructed to correct spelling and grammatical errors, but not style.
Second, raters are asked to flag the presence (but not count the number) of errors in the original target segment. We use a subset of MQM error types that has been shown to be particularly relevant for post-editing of domain-specific MT \citep{Castilho2018}, as listed in \Table{ErrorTypes}, but note that other subsets or quality standards (\Section{BackgroundEvaluationHT}) could be used instead.
Third, raters have the option to leave a comment for the segment if they wish to give more specific feedback.
Raters complete the experiment within a fixed time frame. While the practical consideration here is limiting experimental cost, time pressure is common in professional translation \citep{EhrensbergerDow2016} and has been shown to increase cognitive function in controlled translation experiments \citep{Campbell1999}.
\subsection{Analysis}
\label{sec:MethodologyAnalysis}
\begin{table}[]
\centering
\renewcommand{\arraystretch}{1.3}
\fontsize{10.1pt}{10.1pt}\selectfont
\include{tables/fisher.example}
\caption{Contingency table for two binary variables. Raters flagged omissions in 14 segments originating from HT, and in 12 segments originating from MT. Omission does not depend on segment origin (HT vs.\ MT) according to a two-tailed Fisher's exact test ($p=0.693$). Data corresponds to \Figure{PlotsFROmission}.}
\label{tab:ContingencyTableExample}
\end{table}
We calculate the minimum edit distance (MED) between each original and post-edited segment, as well as corpus-level HTER \citep{Snover2006} for all HT and MT segments in each target language. While HTER correlates better with human judgements of MT quality, MED is easier to interpret, particularly for individuals outside the MT research community. In reference to industry-focussed studies on post-editing \citep[e.g.\xspace,][]{Volk2010}, we group post-edited segments into exact matches (MED = 0), non-exact matches (MED >0), and high effort (MED >5).
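For illustration, MED and the grouping into exact matches, non-exact matches, and high-effort segments can be computed as follows (a standard character-level Levenshtein distance; not the authors' code):

```python
# Character-level minimum edit distance (Levenshtein) between the
# pre-translated and the post-edited segment, grouped as in the paper:
# exact match (MED = 0), non-exact (MED > 0), high effort (MED > 5).

def med(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))          # distances for empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def group(distance: int) -> str:
    if distance == 0:
        return "exact"
    return "high effort" if distance > 5 else "non-exact"

print(med("kitten", "sitting"), group(med("kitten", "sitting")))
```

Unlike HTER, which operates on tokens and includes shift operations, this character-level distance directly reflects how many keystrokes-worth of material a rater changed.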
Besides descriptive statistics, we test if the presence of errors and post-editing effort depends on whether target segments originate from HT or MT. Target segment origin is our binary independent variable, and we test whether the proportions of a single binary dependent variable differ depending on it, using a two-tailed Fisher's exact test as implemented in \textit{R} \citep{Bailey1995}. An example is shown in \Table{ContingencyTableExample}.
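The test in \Table{ContingencyTableExample} can also be reproduced without \textit{R}. The sketch below implements a two-tailed Fisher's exact test from scratch and applies it to the DE--FR omission counts (14 of 237 HT vs.\ 12 of 238 MT segments; the per-origin segment totals are taken from \Table{Results}):

```python
# Two-tailed Fisher's exact test for a 2x2 table, implemented from
# scratch: sum the hypergeometric probabilities of all tables (with
# fixed margins) that are no more probable than the observed one.
from math import comb

def fisher_two_sided(a, b, c, d):
    row1, col1, n = a + b, a + c, a + b + c + d
    def prob(k):  # hypergeometric probability of k in the top-left cell
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = prob(a)
    total = 0.0
    for k in range(max(0, row1 + col1 - n), min(row1, col1) + 1):
        p_k = prob(k)
        if p_k <= p_obs * (1 + 1e-7):  # same tolerance as R's fisher.test
            total += p_k
    return total

# DE-FR omissions: HT (14 with, 223 without), MT (12 with, 226 without)
p = fisher_two_sided(14, 237 - 14, 12, 238 - 12)
print(round(p, 3))  # the paper reports p = 0.693 for this table
```

With $p$ far above $0.05$, the null hypothesis that omissions are independent of segment origin is retained.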
\section{Experimental Results}
\label{sec:Experiment}
We use the evaluation design described in the previous section to compare HT to MT in an experiment with three language pairs and ten professional translators. The study is conducted within the language services department of a multinational insurance company.
\subsection{MT System}
\label{sec:ExperimentMTSystem}
\begin{table}
\centering
\renewcommand{\arraystretch}{1.3}
\fontsize{10.1pt}{10.1pt}\selectfont
\include{tables/data}
\caption{Training Data}
\label{tab:TrainingData}
\end{table}
\begin{table*}
\renewcommand{\arraystretch}{1.4}
\fontsize{10.1pt}{10.1pt}\selectfont
\centering
\begin{adjustwidth}{-2mm}{}
\input{tables/results}
\end{adjustwidth}
\caption{Results. Counts denote the number of segments for which a given variable holds true for HT or MT, respectively; relative numbers are shown in brackets. Pairs of significantly different proportions according to a two-tailed Fisher's exact test (at $p\leq0.05$) are marked with *. Example: In DE--FR, 5/237 HT segments and 3/238 MT segments contain a typographical error. The difference is not statistically significant. Visualisations and $p$ values are shown in Figures~\ref{fig:PlotsEN}--\ref{fig:PlotsIT}.}
\label{tab:Results}
\end{table*}
We train a Transformer (big) model \citep{Vaswani2017} as implemented in Sockeye \citep{Sockeye} with FFN size 2048 for each language pair. The training data is listed in \Table{TrainingData}. We combine publicly available out-of-domain data (OOD) from OPUS \citep{OPUS}, from which we discard the lowest-scoring 75\%\, by means of dual conditional cross-entropy filtering \citep{JunczysDowmunt2018}, with in-domain data (ID). We oversample ID to match OOD where possible, with a maximum oversampling factor of 10.
We also integrate domain-specific terminology by means of data augmentation \citep{Dinu2019}. We use two different sets of terms for training and testing (i.e.\xspace, use in production). For training, we automatically filter the insurance company's full terminology, removing terms with low frequencies in the training data for reasons of time efficiency, and using a stop word list to remove terms that occur frequently in regular text (\enquote{normal words}). In addition, we discard terms in 30\%\, of the training segments to increase robustness in constraint-free scenarios. For testing, we use a smaller terminology that was narrowed down by the company's professional terminologists.
\subsection{Texts and Raters}
For each language pair, we select a document that contains terminology and language specific to the company's insurance sector: the description of business processes in a customer application (DE--EN) and a text on specialist training in sales (DE--FR, DE--IT).
We have all three documents translated by external translators who are regularly contracted by the company. We also translate the documents using the MT systems described above, and prepare a pre-translated version of each document in which half the target segments stem from the external translators and the other half from the MT system (\Section{MethodologyMaterials}).
The raters participating in the experiment are in-house translators at the company, and have not previously seen these documents. The number of raters differs between language pairs: four raters each for DE--FR and DE--IT, and two for DE--EN. Each rater is allocated 150 consecutive segments of the document, so the number of experimental items (segments) amounts to 600 for DE--FR and DE--IT, and to 300 for DE--EN.
The raters were given 90 minutes to complete the task. Two raters for DE--FR and one rater for DE--IT did not finish in time, reducing the number of items in our analysis to 475 and 492, respectively.
\subsection{Error Analysis}
\label{sec:ResultsErrorAnalysis}
Experimental results are listed in \Table{Results}. We first analyse the proportion of segments that contain at least one terminology, omission, or typography error originating from HT and MT.
The number of segments with terminology errors is higher for MT than HT. While almost twice as many segments are affected in DE--EN, the difference is less marked in DE--FR, and very small in DE--IT.
Omissions are found in more segments originating from MT in DE--EN, and in more segments originating from HT in DE--FR and DE--IT. The number of segments containing omissions is considerably lower in DE--EN and DE--IT than in DE--FR.
In terms of typography, the number of affected segments is low for both HT and MT. HT is slightly better than MT in DE--EN, and slightly worse in DE--FR and DE--IT.
The proportion of erroneous segments is similar for HT and MT overall. A two-tailed Fisher's exact test shows no significant difference between HT and MT in any error category and language pair. $p$-values are shown in Figures~\ref{fig:PlotsEN}--\ref{fig:PlotsIT}.
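The test can be reproduced with a short pure-Python implementation (a sketch: we sum the hypergeometric probabilities of all tables with the same margins that are at most as probable as the observed one). On the DE--EN omission counts discussed later (1/150 in HT vs.\ 5/150 in MT) it returns $p \approx 0.214$.

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum of the hypergeometric probabilities, over all tables with the same
    margins, that do not exceed the probability of the observed table."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)

    def prob(k):  # P(top-left cell = k) under fixed margins
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(col1, row1)
    eps = 1e-12  # tolerate floating-point ties between symmetric tables
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + eps)

# DE-EN omissions: 1 of 150 HT segments vs. 5 of 150 MT segments
p = fisher_exact_two_tailed(1, 149, 5, 145)
```

scipy.stats.fisher_exact gives the same p-value; the version above needs only the standard library.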
\subsection{Post-editing Effort}
\label{sec:ResultsEditDistance}
We compute corpus-level HTER for all HT and MT segments in each language pair (\Table{Results}, last row). We observe very low scores overall, and small differences between HT and MT in DE--FR and DE--IT.
We also compute MED between each pre-translated and post-edited target segment. Descriptive statistics are listed in \Table{Results}. In all language pairs, raters post-edited fewer characters in HT on average (avg), but again, the differences are small, particularly for DE--IT. The segment that required the most post-editing (max) stemmed from HT in DE--EN, and from MT in DE--FR and DE--IT.
We observe a low number of segments that required any post-editing at all. The proportion of these segments is referred to as >0 in \Table{Results}. For example, only 37 out of 150 MT segments in DE--EN were post-edited; raters decided that raw MT was good enough for the remaining segments. However, the proportion of segments that needed any editing was even lower for HT in DE--EN, significantly so according to a two-tailed Fisher's exact test ($p$$\leq$.05). The difference between the proportion of segments with an MED of more than five characters (>5), on the other hand, is not significant ($p$=0.255) in DE--EN. In DE--FR, both >0 and >5 segments are significantly more frequent in MT (both at $p$$\leq$.05). In DE--IT, where raters post-edited more HT than MT segments (see >0), the difference is not significant at $p$=0.110 and $p$=0.674, respectively.
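MED as used above is a character-level edit distance between the pre-translated and the post-edited segment. A straightforward sketch, assuming the standard dynamic-programming Levenshtein distance (the exact implementation used may differ):

```python
def minimum_edit_distance(pre: str, post: str) -> int:
    """Character-level Levenshtein distance between a pre-translated
    segment and its post-edited version (two-row dynamic programming)."""
    prev = list(range(len(post) + 1))
    for i, ca in enumerate(pre, 1):
        cur = [i]
        for j, cb in enumerate(post, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def needed_editing(pre: str, post: str, threshold: int = 0) -> bool:
    """Binarisation used in the analysis: MED > 0 (any editing)
    or MED > 5 (substantial editing)."""
    return minimum_edit_distance(pre, post) > threshold
```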
\section{Discussion}
\label{sec:Discussion}
We discuss design decisions in our evaluation and alternative approaches to inference testing, and contextualise our results within the ongoing discussion on human--machine parity in language translation.
\subsection{Experimental Validity}
\label{sec:DiscussionExperimentalValidity}
Our evaluation is based on pre-translated documents in which target segments from HT and MT are interleaved (\Table{MaterialsExample}). In contrast to other MT quality evaluation experiments \citep[e.g.\xspace,][]{Green2013,Hassan2018}, this enables raters to consider document-level context, but the shuffling of MT and HT may introduce disfluencies that would not occur if all segments stemmed from either MT or -- particularly -- HT. In DE--FR, for example, the German term \textit{Einzelfirma} (sole proprietorship), which occurred in seven source segments, was translated as \textit{raison individuelle} and \textit{entreprise individuelle} by HT and MT, respectively. The first three instances were translated by MT, and noting the inconsistency with the fourth instance translated by HT, the rater in charge flagged the segment as erroneous and commented that \enquote{[the term translations] should be harmonised}. The MT system's translation was consistent with the company's terminology database (TB) in this case, and the flagging of HT as erroneous was correct. However, if MT and HT used different translations for a term not specified in the TB, the translation introduced second would likely be marked as wrong even if it was used consistently within HT and MT. This may increase the number of terminology errors overall, but since the order in which MT and HT appear in documents is randomised in our evaluation design, it would not disadvantage one over the other with sufficient sample size. We also note that combining segments from different sources is common in professional translation workflows: when translations for adjacent source segments are retrieved from a translation memory (TM), these translations may (and typically will) stem from different documents and translators. 
The documents we prepared for our experiment are what translators would normally see in their computer-aided translation (CAT) tool, with HT corresponding to exact matches, except that segment origin (HT or MT) is not shown in the experiment.
We did not use a CAT tool in our experiment, but presented the pre-translated documents as spreadsheets with dedicated columns for error annotations and comments. A downside of this design decision is that the company's TB was not directly integrated into the translation environment. In the CAT tool that the in-house translators (the raters in this experiment) use in their daily work, terms contained in the TB are highlighted in source segments, and term translations are shown in a dedicated window. While raters had access to the TB during the experiment, it is likely that they missed a few terminology errors because terms were not highlighted in the experiment. On the contrary, we noticed that they marked a variety of other mistakes as terminology errors, such as wrong choice of pronoun (e.g.\xspace, \textit{que} instead of \textit{soi} in DE--FR) or wrong verb forms (e.g.\xspace, \textit{data already exists} instead of \textit{data already exist} in DE--EN). Since raters blindly evaluated HT and MT segments the same way, this may affect the true number of terminology errors in our analysis, but not the proportion between errors in HT and MT.
The blind evaluation of pre-translated segments -- the fact that we did not tell raters that half of the pre-translations were HT, and that we did not show that pre-translations originated from different sources (HT and MT) -- is another design decision that warrants discussion. Whether a pre-translated segment was retrieved from a TM (as an exact or fuzzy match) or an MT system is important information to professional translators and thus prominently shown in CAT tools. However, beliefs about (non-)presence of MT have been shown to impact how willing people are to tolerate translation mistakes \citep{Gao2014}, and surveys have shown that professional translators tend to have negative perceptions about MT \citep{LaeubliOrregoCarmona2017,Cadwell2018}. Our experimental manipulation was aimed at fostering equal rigour in evaluating HT and MT, and preventing raters from guessing if segments are HT or MT rather than focussing on actual evaluation.
\subsection{Statistical Analysis}
\label{sec:StatisticalAnalysis}
A limitation of using contingency tables (see \Table{ContingencyTableExample} for an example) is that we can only use categorical variables as dependent variables. To that end, we binarised MED with fixed and arguably arbitrary thresholds (>0 and >5; see \Section{MethodologyAnalysis}). Predicting MED in a regression model would seem more appropriate, and offers the advantage of accommodating further predictors such as segment length, but violated the assumption of normally distributed residuals in our data even when extreme values were removed. Further analysis, including factors other than origin (HT/MT) that may explain the variance in presence of errors and post-editing distance, is left to future work.
We use Fisher's exact test to analyse contingency tables, the null hypothesis being that the likelihood of a segment showing a certain property -- such as containing wrong terminology or having been post-edited (MED >0) -- is not influenced by its origin (HT or MT). Fisher's exact test has been criticised as rather conservative \citep[see][]{AndresTejedor1995}, but is more appropriate than $\chi^2$ or $G$ tests of independence when sample sizes are small \citep{RuxtonNeuhaeuser2010}.\footnote{Using a $\chi^2$ or $G$ test of independence has no effect on any finding of (non-)significance reported in this paper. We observe the largest difference when testing for independence of origin and omission in DE--EN with a $G$ test ($p$=0.085) instead of a two-tailed Fisher's exact test ($p$=0.214, see \Figure{PlotsENOmission}).}
It would also be desirable to include more raters in the experiment. The limited number of participants is often criticised in translation experiments, justifiably so because translation performance varies considerably between individuals \citep[e.g.\xspace,][]{KoehnGermann2014}. With sufficient participants, this variance can be accounted for by means of mixed-effects modelling \citep{Green2013}, but quite apart from budgetary constraints, there may just not be enough qualified raters in domain-specific settings. The in-house translation department we work with in this study, for example, employs 2--4 specialised translators per language pair. Non-experts who could be involved to increase the number of raters have been shown to evaluate MT less critically \citep{Toral2018}. In the present study, we prioritised rater qualification over quantity.
\subsection{Human--Machine Parity?}
Our results illustrate that the question whether MT quality reaches parity with HT is a matter of definition. \citet{Hassan2018}, who analysed quality judgements by crowd workers in Chinese to English news translation, concluded that parity was reached because the difference between judgements of HT and MT is not statistically significant. The same holds for our experiment: professional translators flagged errors in segments originating from HT and MT, and the proportion of erroneous HT and MT segments does not differ significantly for any error type and language pair (\Section{ResultsErrorAnalysis}). This is mainly because error rates are fairly low for both HT and MT, which indicates that both translation methods achieve high quality. However, MT produced more erroneous segments than professional translators (HT) overall, and the fact that statistical tests (\Section{StatisticalAnalysis}) find no significant difference between HT and MT either means that there really is none, which would imply parity, or that the number of analysed segments (the sample size) is too small to infer a significant difference. Consider the proportion of segments with omissions in DE--EN (\Table{Results}): 1/150 in HT vs.\ 5/150 in MT. Omissions are rare in both, and the difference is attributed to chance ($p$=0.214, see also \Figure{PlotsENOmission}), but in the very document we analysed, omissions were five times more common in MT segments nonetheless. If assessing human--machine parity was the aim of our study, a larger sample size would be imperative to determine whether such effects are real or due to chance. Nevertheless, the observation that MT produced fewer erroneous segments than HT in at least one language pair per error type in our experiment -- except for terminology, where MT only came close to HT in DE--IT with 19/248 vs.\ 18/244 erroneous segments, respectively -- is noteworthy.
While our error analysis was limited to three specific phenomena -- terminology, omission, and typography -- the comparison of pre-translated to post-edited segments yields insights about HT and MT quality overall. MT produced significantly more segments that needed any post-editing (MED >0) in DE--EN and DE--FR. In DE--EN, however, the proportion of segments that needed substantial post-editing (more than five characters, i.e.\xspace, MED >5) was not significantly higher in MT, and in DE--IT, the number of segments that needed any (MED >0) and substantial (MED >5) post-editing was lower in MT than in HT. This is a remarkable finding, given that HT was produced by an expert translator with experience in the textual domain we investigate. The implication here is that domain-specific MT (\Section{ExperimentMTSystem}) achieves strong results, and it may be insightful to contrast it with generic MT. Moreover, feedback from raters, who had the option to leave a comment for each segment, does not suggest that the experimental manipulation -- the mixture of MT with HT -- was noticeable. In one particular instance, a rater commented \enquote{NMT hat überkorrigiert} (\enquote{NMT has overcorrected}), when in fact the segment in question originated from HT.
\section{Conclusion}
\label{sec:Conclusion}
In a blind evaluation, ten specialised translators post-edited and flagged errors in pre-translated documents in which domain-specific MT was interleaved with professional HT. The evaluation comprised three language pairs: DE--EN, DE--FR, and DE--IT. MT required more post-editing than HT on average, but surprisingly, the difference is not significant in DE--IT, where MT produced more segments that needed no post-editing at all, and slightly fewer segments that needed substantial post-editing. We also analysed if the proportion of segments that contain wrong terminology, omissions, or typographical errors varies between HT and MT, and found no significant dependency in any language pair. MT produced considerably more segments with wrong terminology in two out of three language pairs, but slightly fewer segments with omissions or typographical errors in at least one language pair each.
Apart from implying that MT can now reach remarkable quality in domain-specific settings, our results show that professional translators may post-edit professional HT almost as much as MT, and tend to rate the two similarly in terms of issues with terminology, omission, and typography. The caveat here and an aspect that warrants further investigation is that we made our participants believe that the HT they were evaluating was MT. From a methodological point of view, it would be interesting to test if this experimental manipulation would also work the other way around, and analyse if translators treat HT and MT differently depending on what they believe it is. From a more practical perspective, it might also be worth exploring whether the proposed evaluation design could help demonstrate the potential benefits of MT to people who are still sceptical about the technology.
\bibliographystyle{acl}
\fontsize{10.1pt}{10.1pt}\selectfont
\section{Introduction}
\subsection{Motivation}
Community detection aims to identify underlying communities of similar characteristics in an overall population from the observation of pairwise interactions between individuals \cite{Fortunato10,Newman04,Newman06}. The stochastic block model, also known as {\it planted partition model}, is a popular random graph model for analyzing the community detection problem \cite{Holland83,Snijders97,Bicke09,Yu11,Decelle11}, in which pairwise interactions are binary: an edge is either present or absent between two individuals. In its simplest form, the stochastic block model consists of two communities of approximately equal size, where the within-community edge is present at random with probability $p$; while the across-community edge is present with probability $q$. If $p>q$, it corresponds to assortative communities where interactions are more likely within rather than across communities; while $p<q$ corresponds to disassortative communities.
In practice, interactions can be of various types and these types reveal more information on the underlying communities than the mere existence of the interaction itself. For example, in recommender systems, interactions between users and items come with user ratings. Such ratings contain far more information than the interaction itself to characterize the user and item types. Similarly, protein-protein chemical interactions in biological networks can be exothermic and endothermic; email exchanges in a club may be formal or informal; friendship in social networks may be strong or weak. The labeled stochastic block model was recently proposed in \cite{Heimlicher12} to capture rich interaction types. In this model interaction types are described by labels drawn from an arbitrary collection. In particular, for the simple two communities case, the within-community edge is labeled at random with distribution $\mu$; while the across-community edge is labeled with a different distribution $\nu$. In this context an important question is how to leverage the labeling information for detecting underlying communities.
\subsection{Information-Scarce Regime}
In this paper, we focus on the sparse labeled stochastic block model in which every vertex has a limited average degree, i.e., $p,q=O(1/n)$, where $n$ is the number of vertices. It corresponds to the information-scarce regime where only $O(n)$ edges and labels are observed in total\footnote{We also provide results for $p,q=O(\hbox{polylog}(n)/n)$ in Theorem \ref{ThmSpectralSBM-large}.}. This regime is of practical interest, arising in several contexts. For example, in recommender systems, users only give ratings to few items; in biological networks, only few protein-protein interactions are observed due to cost constraints; in social networks, a person only has a limited number of friends.
For the stochastic block model in this information-scarce regime, there are $\Theta(n)$ isolated vertices, as in Erd\H{o}s-R\'enyi random graphs with bounded average degree. For isolated vertices, it is impossible to determine their community membership and thus exact reconstruction of communities is impossible. Therefore, we resort to finding a partition into communities positively correlated to the true community partition (see Definition \ref{def:Q} below).
\subsection{Main Results}
Focusing on the two communities scenario, we show that a positively correlated reconstruction is fundamentally impossible when below a threshold. This establishes one half of the conjecture in \cite{Heimlicher12}. In the positive direction, we establish the following results. We introduce a graph weighted by a suitable function of observed labels, on which we show that:
(1) Minimum bisection gives a positively correlated partition when above the threshold by a factor of $64 \ln 2$.
(2) A semidefinite relaxation of minimum bisection gives a positively correlated partition when above the threshold by a factor of $2^{17} \ln 2$.
(3) A spectral method combined with removal of edges incident to vertices of high degree gives a positively correlated partition when above the threshold by a constant factor.
Furthermore, we show that the labeled stochastic block model is contiguous to a labeled Erd\H{o}s-R\'enyi random graph when below the reconstruction threshold and orthogonal to it when above the threshold. It implies that for the hypothesis testing problem between the labeled stochastic block model and the labeled Erd\H{o}s-R\'enyi random graph model, the correct identification of the underlying distribution is feasible if and only if above the reconstruction threshold. It also implies that there is no consistent estimator for model parameters when below the reconstruction threshold.
\subsection{Related Work}
For the stochastic block model, most previous work focuses on the ``dense'' regime with an average degree diverging as the size of the graph $n$ grows, (see, e.g., \cite{Chen12,ChenXu14} and the references therein).
For the ``sparse'' regime with bounded average degrees, a sharp phase transition threshold for reconstruction was conjectured in \cite{Decelle11} by analyzing the belief propagation algorithm. The converse part of the conjecture was rigorously proved in \cite{Mossel12}. The achievability part was proved independently in \cite{Mossel13,Massoulie13}. In addition, it is shown in \cite{Coja-oghlan10} that a variant of the spectral method gives a positively correlated partition when above the threshold by an unknown constant factor. More recently, it is shown in \cite{Vershynin14} that a semidefinite program finds a correlated partition when above the threshold by some large constant factor.
The labeled stochastic block model was first proposed and studied in \cite{Heimlicher12}, and a new reconstruction threshold that incorporates the extra labeling information was conjectured. Simulations further indicate that the belief propagation algorithm works when above the threshold, but reconstruction algorithms that provably work are still unknown.
Finally, we recently became aware of the work \cite{ABBS14} that studies the problem of decoding
binary node labels from noisy edge measurements. In the case where the background graph is Erd\H{o}s-R\'enyi\xspace random graph and each node label is independently and uniformly chosen from $\{\pm 1\}$,
the model in \cite{ABBS14} can be viewed as a special case
of the labeled stochastic block model with $p=q$, $\mu=(1-\epsilon) \delta_{+1} + \epsilon \delta_{-1}$ and $\nu=\epsilon \delta_{+1} + (1-\epsilon) \delta_{-1}$, where $\delta_{x}$
denotes the probability measure concentrated on point $x$ (See Section \ref{SectionModel} for the formal model description). When $p =q = a \log n /n$ for some constant $a$ and $\epsilon \to 1/2$, it is shown in \cite{ABBS14} that exact recovery of node labels is
possible if and only if $a (1-2\epsilon)^2 >2$. In contrast, our results show that when $p=q =a /n$ for some constant $a$, correlated recovery of node labels is impossible
if $a (1-2 \epsilon)^2 <1$ for any $0\le \epsilon \le 1$. Moreover, we show that distinguishing hypothesis $\epsilon =\epsilon_0$ and hypothesis $\epsilon = 1/2$ is possible if and only if $ a(1-2\epsilon_0)^2>1$.
\subsection{Outline}
Section \ref{SectionModel} introduces the precise definition of the labeled stochastic block model to be studied and the key notations. The main theorems are introduced and briefly discussed in Section \ref{SectionMainThm}. The detailed proofs are presented in Section \ref{SectionProof}. Section \ref{SectionConclusion} ends the paper with concluding remarks. Miscellaneous details and proofs are in the Appendix.
\section{Model and Notation} \label{SectionModel}
This section formally defines the labeled stochastic block model with two symmetric communities and introduces the key notations and definitions used in the paper. Let $\mathcal{L}$ denote a finite set. The labeled stochastic block model $\mathcal{G}(n,p,q,\mu,\nu)$ is a random graph on $n$ vertices indexed by $[n]$, each carrying a type in $\{\pm 1\}$, with edges labeled by elements of $\mathcal{L}$. To generate a particular realization $(G,L,\sigma)$, first assign type $\sigma_u \in \{\pm 1\}$ to each vertex $u$ uniformly and independently at random. Then, for every vertex pair $(u,v)$, independently of everything else, draw an edge between $u$ and $v$ with probability $p$ if $\sigma_u=\sigma_v$ and with probability $q$ otherwise. Finally, every edge $e=(u,v)$ is labeled with $\ell$ independently at random with probability $\mu(\ell)$ if $\sigma_u=\sigma_v$ and with probability $\nu(\ell)$ otherwise.
Equivalently, we can specify $\mathcal{G}(n,p,q,\mu,\nu)$ by its probability distribution. Let
\begin{align}
\phi_{uv}(G,L,\sigma)=\left \{
\begin{array}{rl}
p \mu(L_{uv}) & \text{if } \sigma_u=\sigma_v, (u,v) \in E(G), \\
q \nu(L_{uv}) & \text{if } \sigma_u \neq \sigma_v, (u,v) \in E(G), \\
1-p & \text{if } \sigma_u= \sigma_v, (u,v) \notin E(G), \\
1-q & \text{if } \sigma_u \neq \sigma_v, (u,v) \notin E(G), \nonumber
\end{array} \right.
\end{align}
where $E(G)$ is the set of edges of $G$ and $L_{uv}$ is the label on the edge $(u,v)$. Then,
\begin{align}
\mathbb{P}_n (G, L, \sigma) = 2^{-n} \prod_{(u,v): u<v} \phi_{uv}(G,L,\sigma). \label{eq:loglikelihood}
\end{align}
When $\mu=\nu$, it reduces to the classical stochastic block model without labels. This paper focuses on the sparse case where $p=a/n$ and $q=b/n$ for two fixed constants $a$ and $b$, and the goal is to reconstruct the true underlying types of vertices $\sigma$ by observing the graph structure $G$ and the labels on edges $L$.
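For concreteness, a generative sketch of $\mathcal{G}(n, a/n, b/n, \mu, \nu)$ (illustrative code; label distributions are passed as dictionaries mapping labels to probabilities):

```python
import random

def sample_labeled_sbm(n, a, b, mu, nu, seed=0):
    """Draw (G, L, sigma) from the labeled stochastic block model with
    p = a/n, q = b/n. mu and nu give the label distribution of within- and
    across-community edges, respectively. Returns the labeled edge set as
    a dict {(u, v): label} and the type assignment sigma."""
    rng = random.Random(seed)
    sigma = [rng.choice((-1, 1)) for _ in range(n)]
    labels = sorted(mu)
    edges = {}
    for u in range(n):
        for v in range(u + 1, n):
            same = sigma[u] == sigma[v]
            if rng.random() < (a if same else b) / n:
                dist = mu if same else nu
                edges[(u, v)] = rng.choices(labels, weights=[dist[l] for l in labels])[0]
    return edges, sigma
```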
It is known that in the sparse graph, there are $\Theta(n)$ isolated vertices whose types clearly cannot be recovered accurately. Therefore, our goal is to reconstruct a type assignment which is positively correlated to the true type assignment. More formally, we adopt the following definition.
\begin{definition}\label{def:Q}
A type assignment $\hat{\sigma}$ is said to be positively correlated with the true type assignment $\sigma$ if a.a.s.
\begin{align}
\ Q(\sigma,\hat{\sigma}) := \frac{1}{2} - \frac{1}{n} \min \{ d (\sigma, \hat{\sigma} ), d(\sigma, -\hat{\sigma}) \} >0,
\end{align}
where $d$ is the Hamming distance, and $Q$ is called the {\it Overlap}.
\end{definition}
The shorthand a.a.s. denotes {\it asymptotically almost surely}. A sequence of events $A_n$ holds a.a.s. if the probability of $A_n$ converges to $1$ as $n \to \infty$.
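Computing the overlap of Definition \ref{def:Q} is immediate; a sketch:

```python
def overlap(sigma, sigma_hat):
    """Overlap Q(sigma, sigma_hat): 1/2 minus the normalised Hamming
    distance, minimised over the global sign flip of sigma_hat."""
    n = len(sigma)
    d = sum(s != t for s, t in zip(sigma, sigma_hat))
    return 0.5 - min(d, n - d) / n
```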
Define $\tau$ as
\begin{align}
\tau= \frac{a+b}{2} \sum_{ \ell \in \mathcal{L} } \frac{a\mu(\ell) + b\nu(\ell) }{a+b} \left( \frac{a\mu(\ell)-b\nu(\ell)}{a\mu(\ell)+b \nu(\ell) } \right)^2. \label{DefReconstructionThreshold}
\end{align}
It was conjectured in \cite{Heimlicher12} that $\tau$ is the threshold for positively correlated reconstruction.
\begin{conjecture} \label{Conjecture}
\begin{itemize}
\item[(i)] If $\tau>1$, then it is possible to find a type assignment correlated
with the true assignment a.a.s.
\item[(ii)] If $\tau<1$, then it is impossible to find a type assignment correlated
with the true assignment a.a.s.
\end{itemize}
\end{conjecture}
In this paper, we prove (ii) and propose three different algorithms
able to find a type assignment correlated with the true assignment
for $\tau$ large enough.
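As a sanity check, $\tau$ in (\ref{DefReconstructionThreshold}) simplifies to $\sum_\ell (a\mu(\ell)-b\nu(\ell))^2 / (2(a\mu(\ell)+b\nu(\ell)))$, which is easy to evaluate numerically: for $\mu=\nu$ it recovers the unlabeled threshold $(a-b)^2/(2(a+b))$, and for $p=q=a/n$ with the binary noisy labels of \cite{ABBS14} it gives $a(1-2\epsilon)^2$.

```python
def tau(a, b, mu, nu):
    """Reconstruction threshold parameter tau:
    sum over labels of (a*mu(l) - b*nu(l))**2 / (2*(a*mu(l) + b*nu(l)))."""
    total = 0.0
    for l in set(mu) | set(nu):
        m = a * mu.get(l, 0.0)
        v = b * nu.get(l, 0.0)
        if m + v > 0:
            total += (m - v) ** 2 / (2 * (m + v))
    return total
```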
\paragraph{Notation}
Let $A$ denote the adjacency matrix of the graph $G$, $\mathbf I $ denote the identity matrix,
and $\mathbf J$ denote the all-one matrix.
We write $X \succeq 0$ if $X$ is positive semidefinite and $X \ge 0$ if all the entries of $X$ are non-negative.
For any matrix $Y$, let $\|Y\|$ denote its spectral norm.
For any positive integer $n$, let $[n]=\{1, \ldots, n\}$.
For any set $T \subset [n]$, let $|T|$ denote its cardinality and $T^c$ denote its complement.
We use standard big $O$ notations,
e.g., for any sequences $\{a_n\}$ and $\{b_n\}$, $a_n=\Theta(b_n)$ or $a_n \asymp b_n$
if there is an absolute constant $c>0$ such that $1/c\le a_n/ b_n \le c$.
Let ${\rm Bern}(p)$ denote the Bernoulli distribution with mean $p$ and
${\rm Binom}(N,p)$ denote the binomial distribution with $N$ trials and success probability $p$.
All logarithms are natural and we use the convention $0 \log 0=0$. For a vector $x \in {\mathbb{R}}^n$, $\mathsf{sign}(x)$ gives the sign of
$x$ componentwise, and $\|x\|$ denotes the $L_2$ norm. For a graph $G$, let $V(G)$ denote its vertex set and $E(G)$
denote its edge set.
\section{Main Theorems} \label{SectionMainThm}
\subsection{Minimum Bisection}
To recover the community partition, one approach is via maximum
likelihood estimation. In view of \prettyref{eq:loglikelihood}, the log-likelihood function can be written as:
\begin{eqnarray*}
\log \mathbb{P}( G,L | \sigma)&=& \frac{1}{2} \sum_{(u,v) \in E(G) } \left[ \log
\frac{a\mu(L_{uv})}{b\nu(L_{uv})} \sigma_u \sigma_v + \log \left( \frac{ab}{n^2} \mu (L_{uv})\nu(L_{uv}) \right) \right] \\
&+&\frac{1}{2} \sum_{(u,v) \notin E(G) } \left[ \log\left(\frac{1-a/n}{1-b/n}\right)\sigma_u \sigma_v + \log \left( (1-a/n)(1-b/n) \right) \right].
\end{eqnarray*}
Under the constraint $\sum_{u} \sigma_u=0$, the maximum likelihood estimation is equivalent to
\begin{align}
\max_{\sigma} \quad & \sum_{(u,v)\in E(G) } \log \left[
\frac{a (1-b/n)\mu(L_{uv}) }{b (1-a/n) \nu(L_{uv})} \right] A_{uv} \sigma_u \sigma_v \nonumber \\
\text{s.t. } \quad & \sum_u \sigma_u =0, \; \sigma \in \{ \pm 1 \}^n \nonumber .
\end{align}
This is equivalent to the minimum bisection on the weighted graph with a specific weight function $w(\ell) = \log \frac{a (1-b/n) \mu(\ell) }{b (1-a/n) \nu(\ell) }$. For a general weighting function $w:\mathcal{L} \to [-1,1]$, the minimum bisection finds a balanced bipartite subgraph in $G$ with the minimum weighted cut, i.e.,
\begin{align}
\min_{\sigma} & \sum_{(u,v):\sigma_u \neq \sigma_v} W_{uv} \nonumber \\
\text{s.t. } & \sum_{u} \sigma_u =0, \; \sigma_u \in \{ \pm 1\}, \label{eq:MinimumBisection}
\end{align}
where $W_{uv}=A_{uv} w(L_{uv})$ and $A$ is the adjacency matrix of $G$.
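For intuition, the weighted minimum bisection can be solved by brute force on toy instances (an exponential-time sketch, useful only as a sanity check; the weights $W_{uv}$ are passed as a dictionary over vertex pairs):

```python
from itertools import combinations

def min_bisection(n, weights):
    """Exhaustively solve the weighted minimum bisection: over all balanced
    assignments sigma in {+1, -1}^n, minimise the total weight cut by the
    partition. weights maps pairs (u, v) with u < v to W_uv."""
    best_sigma, best_cut = None, float("inf")
    for plus in map(set, combinations(range(n), n // 2)):
        cut = sum(w for (u, v), w in weights.items() if (u in plus) != (v in plus))
        if cut < best_cut:
            best_cut = cut
            best_sigma = [1 if u in plus else -1 for u in range(n)]
    return best_sigma, best_cut
```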
\begin{theorem} \label{ThmMinBisection}
Assume the technical condition: $\sum_\ell a \mu(\ell) w^2(\ell),
\sum_\ell b \nu(\ell) w^2(\ell) > 8 \ln 2$. Then if
\begin{align}
\frac{\sum_\ell (a\mu(\ell)-b\nu(\ell) )w (\ell)}{\sqrt{ \sum_\ell (a\mu(\ell)+b \nu(\ell)) w^2(\ell) }} > \sqrt{128 \ln 2 } \label{EqMinBisectionCondition},
\end{align}
a.a.s. solutions of the minimum bisection (\ref{eq:MinimumBisection}) are
positively correlated to the true type assignment $\sigma^\ast.$
Moreover, the left hand side of (\ref{EqMinBisectionCondition}) is
maximized when $w(\ell)=\frac{a\mu(\ell)-b\nu(\ell)}{a\mu(\ell)+
b\nu(\ell) }$, in which case (\ref{EqMinBisectionCondition}) reduces
to $\tau > 64\ln 2$.
\end{theorem}
\subsection{Semidefinite relaxation method}\label{SectionSDP}
The minimum bisection is known to be NP-hard in the worst case \cite[Theorem 1.3]{garey76}. In this section,
we present a semidefinite relaxation of the minimum bisection \prettyref{eq:MinimumBisection} which
is solvable in polynomial time, and show it finds an assignment correlated with the true assignment provided
$\tau$ is large enough.
Let $Y=\sigma \sigma^\top$. Then $\sigma_u = \pm 1$ is equivalent to $Y_{uu}=1$, and $\sum_{u} \sigma_u =0$
if and only if $\Iprod{Y}{\mathbf J}=0$. Therefore, \prettyref{eq:MinimumBisection} can be recast as
\begin{align}
\max_{Y,\sigma} & \; \Iprod{W}{Y} \nonumber \\
\text{s.t. } & \; Y=\sigma \sigma^\top \nonumber \\
& \; Y_{uu} =1, \quad u \in [n]\nonumber \\
& \; \Iprod{\mathbf J}{Y} =0 . \label{eq:SBMMB2}
\end{align}
Notice that the matrix $Y=\sigma \sigma^\top$ is a rank-one positive semidefinite matrix. If we relax this
condition by dropping the rank-one restriction, we obtain the following semidefinite relaxation of \prettyref{eq:SBMMB2}:
\begin{align}
\widehat{Y}_{{\rm SDP}\xspace} = \mathop{\arg\max}_{Y} & \; \langle W, Y \rangle \nonumber \\
\text{s.t. } & \; Y \succeq 0 \nonumber \\
& \; Y_{uu} =1, \quad u \in [n] \nonumber \\
& \; \Iprod{\mathbf J}{Y} =0. \label{eq:SBMSDP}
\end{align}
To get an estimator of the type assignment from $\widehat{Y}_{{\rm SDP}\xspace}$, let $y$ denote an
eigenvector of $\widehat{Y}_{{\rm SDP}\xspace}$ corresponding to the largest eigenvalue and $\|y\| =\sqrt{n}$.
The following result shows that $\widehat{\sigma}_{{\rm SDP}\xspace} \triangleq \mathsf{sign}(y)$ is positively correlated with the true type assignment.
\begin{theorem}\label{thm:SBMSDPCorrelated}
Assume the technical condition: $\sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) > 8 \ln 2$. If
\begin{align}
\frac{ \sum_\ell (a\mu(\ell)-b\nu(\ell) )w (\ell) } { \sqrt{\sum_\ell (a\mu(\ell)+b \nu(\ell)) w^2(\ell)} } > 512 \sqrt{ \ln 2} \label{eq:SBMSDPCondition},
\end{align}
then a.a.s.\ $\widehat{\sigma}_{{\rm SDP}\xspace}$ is positively correlated to the true type assignment $\sigma^\star$.
Moreover, the left hand side of \prettyref{eq:SBMSDPCondition} is
maximized when $w(\ell)=\frac{a\mu(\ell)-b\nu(\ell)}{a\mu(\ell)+
b\nu(\ell) }$, in which case (\ref{eq:SBMSDPCondition}) reduces
to $\tau > 2^{17} \ln 2$.
\end{theorem}
In the stochastic block model without labels, i.e.\xspace, $\mu=\nu$, condition (\ref{eq:SBMSDPCondition}) reduces to $(a-b)^2> 2^{18} \ln 2 (a+b)$;
similar conditions with a different constant have been proved in \cite[Theorem 1.1]{Vershynin14} using Grothendieck's inequality. Our proof builds upon the
analysis in \cite{Vershynin14}.
\subsection{Spectral Method}
In this section, we present a polynomial-time spectral algorithm based on the weighted adjacency matrix
$W$ and show that this algorithm allows us to
find an assignment correlated with the true assignment provided
$\tau$ is large enough.
Note that $\mathbb{E}[W | \sigma ] =\frac{\alpha}{n} \mathbf J+ \frac{\beta}{n} \sigma \sigma ^\top- \frac{\alpha+\beta}{n} \mathbf{I}$ with
\begin{align}
\alpha=\frac{1}{2} \sum_{\ell} w(\ell) (a \mu(\ell) + b \nu(\ell)), \nonumber \\
\beta=\frac{1}{2} \sum_{\ell} w(\ell) (a \mu(\ell) -b \nu(\ell)). \label{EqDefAlphaBeta}
\end{align}
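For concreteness, $\alpha$ and $\beta$ can be evaluated directly from these formulas; a small sketch with hypothetical two-label parameters (all numbers are ours, chosen only for illustration):

```python
# Two labels 'r' and 'b'; mu, nu, and the weight function w are
# hypothetical values used only to illustrate the formulas for alpha, beta.
a, b, eps = 5.0, 1.0, 0.2
mu = {'r': 0.5 + eps, 'b': 0.5 - eps}
nu = {'r': 0.5 - eps, 'b': 0.5 + eps}
w  = {'r': 1.0, 'b': -1.0}

alpha = 0.5 * sum(w[l] * (a * mu[l] + b * nu[l]) for l in w)
beta  = 0.5 * sum(w[l] * (a * mu[l] - b * nu[l]) for l in w)
# With these numbers, alpha = 0.8 and beta = 1.2.
```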
The term $\frac{\alpha+\beta}{n} \mathbf{I}$ is irrelevant to the main
results (thanks to Weyl's perturbation theorem) and neglected for simplicity.
Let $D=W-\frac{\alpha}{n} \mathbf J$ and then $\mathbb{E}[D | \sigma ]= \frac{\beta}{n} \sigma \sigma ^\top$ has rank one with singular value $\beta$. Hence, it makes sense to define $\hat{D}$ as the best rank-1 approximation of the
matrix $D$. In other words, if $D=\sum_{i}v_ix_ix_i^\top$ is the
eigenvalue decomposition of $D$ with eigenvalues $|v_1| \geq | v_2 | \geq \dots$, we
define $\hat{D} = v_1x_1x_1^\top$. Then if the matrix $D$ is close
to its mean $\mathbb{E}[D | \sigma ]$ in the spectral norm, we expect $v_1$ to be close to $\beta$, and $\mathsf{sign} (x_1)$ to
be correlated with $\sigma$.
Unfortunately, in the sparse regime there are
vertices of degree $\Omega(\frac{\log n} {\log \log n})$, and thus the
largest singular value of $W$ can reach $\Omega(\sqrt{\frac{\log n}
{ \log \log n}})$, which is much larger than $\beta$.
To address this issue, we begin with a preliminary step that cleans the spectrum of $W$:
we remove all edges incident to vertices in the graph with degree larger than
$\frac{3}{2}\frac{a+b}{2}$. To summarize, for a given weight function
$w(\ell)$, our algorithm $\rm{Spectral-Reconstruction}$ has the following structure:
\begin{enumerate}
\item Remove edges incident to vertices with degree larger than $\frac{3}{2}\frac{a+b}{2}$ and let $G'$ denote
the resulting graph. Define $W'$ to be the weighted adjacency matrix of $G'$.
\item Let $\hat{x}$ be the left-singular vector associated with
the largest singular value of $D^\prime=W^\prime-\frac{\alpha}{n} \mathbf J $, i.e.,
\begin{align}
\hat{x} = \arg \max\{ | x^\top D^\prime x | ,\: \|x\|=1 \}. \label{eq:spectral}
\end{align}
Output ${\rm sign} (\hat{x})$ for the types of the vertices.
\end{enumerate}
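The two steps above can be sketched in a few lines of numpy (a minimal sketch assuming $\alpha$ and $a+b$ are known exactly, as in the analysis; function and argument names are ours):

```python
import numpy as np

def spectral_reconstruction(W, A, alpha, a_plus_b):
    """Sketch of Spectral-Reconstruction.

    W: weighted adjacency matrix; A: unweighted adjacency matrix
    (used only to compute degrees); alpha and a+b assumed known.
    """
    n = W.shape[0]
    # Step 1: remove edges incident to vertices of degree > (3/2)(a+b)/2.
    deg = A.sum(axis=1)
    keep = deg <= 1.5 * a_plus_b / 2
    W1 = W.copy()
    W1[~keep, :] = 0.0
    W1[:, ~keep] = 0.0
    # Step 2: top eigenvector (largest |eigenvalue|) of D' = W' - (alpha/n) J.
    D = W1 - (alpha / n) * np.ones((n, n))
    eigvals, eigvecs = np.linalg.eigh(D)
    x = eigvecs[:, np.argmax(np.abs(eigvals))]
    sigma_hat = np.sign(x)
    sigma_hat[sigma_hat == 0] = 1
    return sigma_hat
```

On the idealized input $W=\mathbb{E}[W|\sigma]$ (ignoring the diagonal term), the sketch recovers $\sigma$ up to a global sign flip.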
Observe that \prettyref{eq:spectral} can be seen as a (non-convex) relaxation of the minimum bisection (\ref{eq:MinimumBisection}), obtained by replacing the integer constraint with the unit-norm constraint and relaxing the constraint $\sum_{u} \sigma_u=0$ to a regularization term $\frac{\alpha}{n} x^\top \mathbf J x $ in the objective function. $\rm{Spectral-Reconstruction}$ needs estimates of $\alpha$ and $a+b$, which can be well approximated by $ \frac{1}{n} \mathbf{1}^\top W \mathbf{1}$ and $ \frac{2}{n} \mathbf{1}^\top A \mathbf{1}$, respectively. To simplify the analysis, we assume that the exact values of $\alpha$ and $a+b$ are known.
\begin{theorem}\label{ThmSpectralSBM}
Assume $a>b>C_0$ for some sufficiently large constant $C_0$. There exists a universal constant $C$ (i.e.\ not depending on $a$, $b$,
$\mu$ or $\nu$) such that if $\beta^2 >C(a+b)$, where $\beta$
is defined in \eqref{EqDefAlphaBeta}, then a.a.s.\ $\rm{Spectral-Reconstruction}$
outputs a type assignment correlated with the true assignment.
In the particular case, where $w(\ell) = \frac{a\mu(\ell)-b\nu(\ell)}{a\mu(\ell)+
b\nu(\ell) }$, the condition $\beta^2>C(a+b)$ reduces to $\tau> \sqrt{C(a+b) }$.
\end{theorem}
In the stochastic block model without labels, letting $w(\ell)=1$, condition $\beta^2>C(a+b)$ reduces to $(a-b)^2> 4 C(a+b)$;
the sharp condition $(a-b)^2> 2(a+b)$ has been proved recently in \cite{Mossel13,Massoulie13}.
Compared to point (i) in the Conjecture \ref{Conjecture}, our result
does not give the right order of magnitude when $a$ and $b$ are large. Indeed, we are able to
improve it if we allow $a$ and $b$ to grow with $n$.
\begin{theorem}\label{ThmSpectralSBM-large}
Assume that $\min(a,b) =\Omega(\log^6n)$.
If
\begin{align}
\frac{ [\sum_\ell (a\mu(\ell)-b\nu(\ell) )w (\ell) ]^2 } { \sum_\ell
(a\mu(\ell)+b \nu(\ell)) w^2(\ell) } > 256 \label{EqSpecCondition},
\end{align}
then $\rm{Spectral-Reconstruction}$
outputs a type assignment correlated with the true assignment a.a.s.
Moreover, the left hand side of (\ref{EqSpecCondition}) is
maximized when $w(\ell)=\frac{a\mu(\ell)-b\nu(\ell)}{a\mu(\ell)+
b\nu(\ell) }$, in which case (\ref{EqSpecCondition}) reduces
to $\tau > 128$. With this choice of $w(\ell)$, as soon as $\tau\to
\infty$, $\rm{Spectral-Reconstruction}$
outputs the true assignment
for all vertices except $o(n)$ a.a.s.
\end{theorem}
Note that in the regime $\min(a,b) =\Omega(\log^6n)$, the degrees are very concentrated and step 1) of the algorithm can be removed without harm.
The simulation results, depicted in Fig.~\ref{FigSBMSpectralMethod}, further indicate that $\rm{Spectral-Reconstruction}$ without step 1) outputs a positively correlated assignment above the threshold. In the simulation, we assume for simplicity only two labels, $r$ and $b$, and define $\mu(r)=0.5+\epsilon$ and $\nu(r)=0.5-\epsilon$. We generate the graph from the labeled stochastic block model with $n=1000$ vertices for various $a,b,\epsilon$. Fixing $a$ and $b$, we plot the overlap $Q$ against $\epsilon$ and indicate the threshold $\tau=1$ by a vertical dashed line. All plotted values are averages over $100$ trials.
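For reference, the simulated instances can be generated as follows (an $O(n^2)$ sketch of the two-label model used in the simulation, with weights $w(r)=+1$, $w(b)=-1$; function names are ours):

```python
import numpy as np

def labeled_sbm(n, a, b, eps, rng):
    """Sample a two-label SBM instance as in the simulation:
    labels r/b with mu(r) = 0.5 + eps and nu(r) = 0.5 - eps."""
    sigma = rng.choice([-1.0, 1.0], size=n)
    W = np.zeros((n, n))
    for u in range(n):
        for v in range(u + 1, n):
            same = sigma[u] == sigma[v]
            p = (a if same else b) / n            # edge probability a/n or b/n
            if rng.random() < p:
                p_r = 0.5 + eps if same else 0.5 - eps
                label = 1.0 if rng.random() < p_r else -1.0
                W[u, v] = W[v, u] = label         # weight w(r)=+1, w(b)=-1
    return sigma, W

def overlap(sigma_hat, sigma):
    """Overlap Q between an estimate and the truth, up to a global flip."""
    return abs(float(np.dot(sigma_hat, sigma))) / len(sigma)
```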
\begin{figure}
\centering
\post{FigSBMSpectralMethod}{3.5in}
\centering
\caption{The overlap $Q$ against $\epsilon$ from $0.05$ to $0.5$.}
\label{FigSBMSpectralMethod}
\end{figure}
Note that our algorithm is most efficient when the parameters ($a$,
$b$, $\mu$ and $\nu$) of the model are known, as the optimal weight
function depends on these parameters. In the case where the labels are
uninformative, i.e.\ $\mu=\nu$, our algorithm is very simple, does not
require knowledge of $a$ and $b$, and, in the regime of Theorem
\ref{ThmSpectralSBM-large}, has the best known performance
guarantee (see \cite[Table I]{Chen12}).
\subsection{Converse Result}
This section proves part (ii) of Conjecture \ref{Conjecture}. In particular, we show that when $\tau<1$, asymptotically it is impossible to tell whether any two vertices are more likely to belong to the same community. It further implies that reconstructing a positively correlated type assignment is fundamentally impossible.
\begin{theorem} \label{ThmNonReconstruction}
If $\tau<1$, then for any fixed vertices $\rho$ and $v$,
\begin{align}
\mathbb{P}_n (\sigma_\rho=+1 | G, L, \sigma_v=+1) \to 1/2 \text{ a.a.s.}
\end{align}
\end{theorem}
\begin{remark}
Reconstructing a positively correlated type assignment is harder than telling whether any two vertices are more likely to belong to the same community. In particular, given a positively correlated type assignment $\hat{\sigma}$, for two vertices randomly chosen, they are more likely to belong to the same community if they have the same type in $\hat{\sigma}$.
\end{remark}
Theorem \ref{ThmNonReconstruction} is related to the Ising spin model in statistical physics \cite{Peres00,Mossel04}, and it essentially says that there is no long-range correlation in the type assignment when $\tau<1$. The main idea in the proof of Theorem \ref{ThmNonReconstruction} is borrowed from \cite{Mossel12} and works as follows: (1) pick any two fixed vertices $\rho,v$ and consider the local neighborhood of $\rho$ up to distance $O(\log (n))$; the vertex $v$ lies outside of the local neighborhood of $\rho$ a.a.s. (2) Conditional on the type assignment at the boundary of the local neighborhood, $\sigma_\rho$ is asymptotically independent of $\sigma_v$. (3) The local neighborhood of $\rho$ looks like a Markov process on a labeled Galton-Watson tree rooted at $\rho$. (4) For the Markov process on the labeled Galton-Watson tree, the types of the leaves provide no information about the type of the root $\rho$ when the depth of the tree goes to infinity.
\subsection{Hypothesis Testing}
Consider a labeled Erd\H{o}s-R\'enyi random graph $\mathcal{G}(n,\frac{a+b}{2n})$, where independently at random, each pair of vertices is connected with probability $\frac{a+b}{2n}$, and every edge is labeled with $\ell \in \mathcal{L}$ with probability $\frac{a\mu(\ell)+b\nu(\ell)}{a+b}.$ Let $\mathbb{P}^\prime_n$ denote the distribution of the labeled Erd\H{o}s-R\'enyi random graph.
Given a graph $(G,L)$ drawn from either $\mathbb{P}_n$ or $\mathbb{P}^\prime_n$, an interesting hypothesis testing problem is to decide which of the two is the underlying distribution of $(G,L)$. It turns out that when $\tau>1$, correct identification of the underlying distribution is feasible a.a.s.; however, when $\tau<1$, one is bound to make an error with non-vanishing probability.
\begin{theorem} \label{ThmACER}
If $\tau>1$, then $\mathbb{P}_n$ and $\mathbb{P}^\prime_n$ are asymptotically orthogonal, i.e., there exists a sequence of events $A_n$ such that $\mathbb{P}_n(A_n) \to 1 $ and $\mathbb{P}^\prime_n (A_n) \to 0$.
If $\tau<1$, then $\mathbb{P}_n$ and $\mathbb{P}^\prime_n$ are contiguous, i.e., for every sequence of events $A_n $,
\begin{align}
\lim_{n \to \infty} \mathbb{P}_n(A_n)=0 \Leftrightarrow \lim_{n \to \infty} \mathbb{P}_n^\prime(A_n)=0. \nonumber
\end{align}
\end{theorem}
Theorem \ref{ThmACER} further implies the following corollary regarding the model parameter estimation.
\begin{corollary}
If $\tau<1$, then there is no consistent estimator for parameters $a,b,\mu,\nu$.
\end{corollary}
\begin{proof}
The second part of Theorem \ref{ThmACER} implies that $\mathcal{G}(n,\frac{a_1}{n},\frac{b_1}{n},\mu_1,\nu_1)$ and $\mathcal{G}(n,\frac{a_2}{n},\frac{b_2}{n},\mu_2,\nu_2)$ are contiguous as long as $a_1 \mu_1(\ell)+b_1\nu_1(\ell)=a_2\mu_2(\ell)+b_2\nu_2(\ell)$ and
\begin{align}
\sum_\ell \frac{ (a_i \mu_i(\ell)-b_i\nu_i(\ell))^2} {2(a_i \mu_i(\ell) + b_i \nu_i(\ell) ) }<1, \nonumber
\end{align}
for $i=1,2$.
Therefore, one cannot distinguish between $\mathcal{G}(n,\frac{a_1}{n},\frac{b_1}{n},\mu_1,\nu_1)$ and $\mathcal{G}(n,\frac{a_2}{n},\frac{b_2}{n},\mu_2,\nu_2)$ with the success probability converging to $1$, and thus there is no consistent estimator for parameters $a,b,\mu,\nu$.
\end{proof}
In the special case where $\mu=\nu$, i.e., no labeling information is available, Theorem \ref{ThmACER} reduces to Theorem 2.4 in \cite{Mossel12}. The positive part of Theorem \ref{ThmACER} is proved by counting the number of labeled short cycles and the second moment method. The negative part of Theorem \ref{ThmACER} is proved using the small subgraph conditioning method as introduced in \cite{Mossel12}. The small subgraph conditioning method was originally developed to show that random $d$-regular graphs are Hamiltonian a.a.s.\ \cite{Wormald94, Janson11}.
\section{Proofs} \label{SectionProof}
\subsection{Proof of Theorem \ref{ThmMinBisection}} \label{SectionMinBisectionpf}
Recall that $\sigma^\ast$ denotes the true type assignment. Since $| \{ u: \sigma^\ast_u = 1\}| \sim {\rm Binom}(n,1/2)$, by the Chernoff bound, a.a.s.,
\begin{align}
|\{ u: \sigma^\ast_u = 1\}| \in \left[ n/2-\sqrt{n \log n}, n/2+ \sqrt{n \log n} \right]. \label{eq:clustersizebalance}
\end{align}
For ease of presentation, assume $|\{ u: \sigma^\ast_u = 1\}|=n/2$.
Let $m(\sigma) \triangleq |\{u: \sigma_u=+1, \sigma^\star_u=-1 \} |$ and $\epsilon>0$ be an arbitrarily small constant. To prove the theorem, by the definition of positively correlated reconstruction, it suffices to show that for all $\sigma$ with $\frac{n}{4} (1-\epsilon) \le m(\sigma) \le \frac{n}{4}$,
\begin{align}
\sum_{ \substack{ (u,v):\sigma_u \neq \sigma_v, \\ \sigma_u^\star=\sigma_v^\star }} W_{uv} - \sum_{\substack{ (u,v):\sigma_u=\sigma_v, \\ \sigma_u^\star \neq \sigma_v^\star} } W_{uv} :=Y_1(\sigma)-Y_2(\sigma) > 0. \nonumber
\end{align}
To ease the notation, we suppress the argument $\sigma$. Observe that $Y_1$ is a sum of $2m(n/2-m)$ i.i.d. random variables whose value is $w(\ell)$ with probability $\frac{a}{n} \mu(\ell)$; $Y_2$ is a sum of $2m(n/2-m)$ i.i.d. random variables whose value is $w(\ell )$ with probability $\frac{b}{n} \nu(\ell)$. Thus,
\begin{align}
y_1&:=\mathbb{E}[Y_1] = 2m(n/2-m)(a/n) \sum_{\ell} \mu(\ell) w(\ell), \nonumber
\\
y_2&:=\mathbb{E}[Y_2] = 2m(n/2-m)(b/n) \sum_{\ell} \nu(\ell) w(\ell). \nonumber
\end{align}
Define
\begin{align}
z_1 &:= 2m(n/2-m)(a/n) \sum_{\ell } \mu( \ell ) w^2(\ell) , \nonumber \\
z_2&:= 2m(n/2-m)(b/n) \sum_{\ell } \nu( \ell ) w^2(\ell). \nonumber
\end{align}
Then, for $0<\lambda \le \frac{1}{2} $,
\begin{align}
\mathbb{E} [\exp( -\lambda Y_1 )] &= \left[1 + \frac{a}{n} \sum_{\ell} ({\rm e}^{-\lambda w(\ell)} -1 ) \mu(\ell) \right]^{2m(n/2-m)} \nonumber \\
& \le \exp \left[ 2m(n/2-m) \frac{a}{n} \sum_{\ell} ({\rm e}^{-\lambda w(\ell)} -1 ) \mu(\ell) \right] \nonumber \\
& \le \exp \left[ 2m (n/2-m) \frac{a}{n} \sum_\ell \left(-\lambda w(\ell) + 2 \lambda^2 w^2(\ell) \right) \mu(\ell) \right] \nonumber \\
&= \exp( -\lambda y_1 + 2\lambda^2 z_1 ), \nonumber
\end{align}
where the first inequality follows from the fact that $1+x \le e^x$ and the second one follows from the fact that $e^x \le 1+x+ 2x^2$ for $|x|\le 1/2$.
The Chernoff bound gives that for $0< \lambda \le \frac{1}{2} $,
\begin{align}
\mathbb{P} ( Y_1 \le (1 -t)y_1 ) &\le \mathbb{E} [\exp( -\lambda Y_1 )] \exp ( (1-t) \lambda y_1) \nonumber \\
& \le \exp(-t \lambda y_1 + 2 \lambda^2 z_1 ). \label{eq:chernoff}
\end{align}
We define $\mathbb{E} [W_\mu] \triangleq \sum_\ell \mu(\ell )w(\ell )$ and $\mathbb{E}
[W_\mu^2] \triangleq \sum_\ell \mu(\ell )w^2(\ell )$.
Let $t_1^2 = (64\ln 2)\frac{1+\epsilon}{1-\epsilon} \frac{1}{a} \frac{\mathbb{E}[W_\mu^2]}{\left( \mathbb{E}
[W_\mu] \right)^2}$ and $\lambda = \frac{t_1y_1}{4z_1}$. We first check that with
these values, we have $\lambda\leq 1/2$:
\begin{eqnarray*}
\lambda \leq \frac{1}{2} &\Leftrightarrow&
t_1\leq\frac{2\mathbb{E}[W_\mu^2]}{\mathbb{E}[W_\mu]}\\
&\Leftrightarrow& \frac{1+\epsilon}{1-\epsilon}\frac{8\ln 2}{a}\leq
\mathbb{E}[W^2_\mu].
\end{eqnarray*}
Thanks to the assumption made in Theorem \ref{ThmMinBisection}, we can
find $\epsilon$ sufficiently small such that this last inequality
holds. Notice that $\frac{t_1^2y_1^2}{8z_1} \geq (1+\epsilon)^2 n \ln 2.$ It follows from \prettyref{eq:chernoff} that
\begin{align}
\mathbb{P} ( Y_1\le (1 -t_1)y_1 ) \le \exp \left( - \frac{t_1^2 y_1^2}{8z_1} \right) \le 2^{-n (1+\epsilon)}. \nonumber
\end{align}
Since there are $\binom{n/2}{m}\binom{n/2}{m} \le 2^{n}$ different $\sigma$ with $m(\sigma)=m$, a simple union bound yields that as $n \to \infty$,
\begin{align}
\mathbb{P} \left(\exists \sigma: (1-\epsilon)n/4 \le m(\sigma) \le n/4, Y_{1} \le (1 -t_1)y_1 \right) \to 0. \nonumber
\end{align}
Similarly, let $t_2^2= (64 \ln 2) \frac{1+\epsilon}{1-\epsilon} \frac{1}{b} \frac{\mathbb{E}[W_\nu^2]}{ \left( \mathbb{E}
[W_\nu]\right)^2}$ with $\mathbb{E} [W_\nu] \triangleq \sum_\ell \nu(\ell)w(\ell)$ and $\mathbb{E} [W_\nu^2] \triangleq \sum_\ell \nu(\ell)w^2(\ell)$. Then
\begin{align}
\mathbb{P} \left(\exists \sigma: (1-\epsilon)n/4 \le m(\sigma) \le n/4, Y_{2} \ge (1 +t_2)y_2 \right) \to 0. \nonumber
\end{align}
With $\epsilon$ sufficiently small, a.a.s.
\begin{align}
Y_1 -Y_2 & \ge (1-t_1)y_1 - (1+t_2)y_2 \nonumber \\
& = y_1-y_2 -\frac{2m}{n}(n/2-m)\sqrt{\frac{1+\epsilon}{1-\epsilon}(64\ln 2)} \left( \sqrt { a\mathbb{E}[W_\mu^2] } + \sqrt{ b\mathbb{E}[W_\nu^2] }\right) \nonumber \\
& \ge \frac{2m}{n}(n/2-m)\left( a\mathbb{E}[W_\mu]-b\mathbb{E}[W_\nu] -\sqrt{\frac{1+\epsilon}{1-\epsilon}(128\ln 2)} \sqrt { \left( a\mathbb{E}[W_\mu^2] + b\mathbb{E}[W_\nu^2] \right)}\right) \nonumber
\end{align}
which is larger than zero as soon as $\epsilon$ is sufficiently small
and (\ref{EqMinBisectionCondition}) is satisfied.
By the Cauchy--Schwarz inequality,
\begin{align}
\left(\sum_\ell (a\mu(\ell)-b\nu(\ell) )w (\ell) \right)^2 \le 2\tau \sum_\ell (a\mu(\ell)+b \nu(\ell)) w^2(\ell) \nonumber
\end{align}
with equality achieved when
$w(\ell)=\frac{a\mu(\ell)-b\nu(\ell)}{a\mu(\ell)+b\nu(\ell)}$. This completes the
proof.
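The final Cauchy--Schwarz step, and the equality case at the optimal weights, can be checked numerically (hypothetical two-label parameters, chosen only for illustration):

```python
import numpy as np

# Check that (sum_l (a*mu - b*nu) w)^2 <= 2*tau * sum_l (a*mu + b*nu) w^2,
# with equality at w(l) = (a*mu(l) - b*nu(l)) / (a*mu(l) + b*nu(l)).
a, b = 5.0, 1.0
mu = np.array([0.7, 0.3])   # hypothetical label distributions
nu = np.array([0.3, 0.7])
tau = np.sum((a * mu - b * nu) ** 2 / (2 * (a * mu + b * nu)))

def lhs_rhs(w):
    lhs = np.sum((a * mu - b * nu) * w) ** 2
    rhs = 2 * tau * np.sum((a * mu + b * nu) * w ** 2)
    return lhs, rhs

w_opt = (a * mu - b * nu) / (a * mu + b * nu)
lhs, rhs = lhs_rhs(w_opt)
assert abs(lhs - rhs) < 1e-9            # equality at the optimal weights
lhs2, rhs2 = lhs_rhs(np.array([1.0, -0.5]))
assert lhs2 <= rhs2 + 1e-9              # inequality for an arbitrary weight
```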
\subsection{Proof of \prettyref{thm:SBMSDPCorrelated}} \label{SectionSDPpf}
Without loss of generality, assume \prettyref{eq:clustersizebalance} holds for $\sigma^\ast$.
Let $Y^\ast=\sigma^\ast (\sigma^\ast)^\top$. By the optimality of $\widehat{Y}_{{\rm SDP}\xspace}$,
\begin{align}
0 \le \Iprod{W}{\widehat{Y}_{{\rm SDP}\xspace}} - \Iprod{W}{Y^\ast} = \Iprod{\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast} + \Iprod{W-\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast}. \label{eq:OptimalitySDP}
\end{align}
Since $\mathbb{E}[W]= \frac{\alpha}{n} \mathbf J + \frac{\beta}{n} Y^\ast - \frac{\alpha+\beta}{n} \mathbf I$ with $\alpha, \beta$ defined in~\eqref{EqDefAlphaBeta}, and $\widehat{Y}_{{\rm SDP}\xspace}$ is a feasible solution to \prettyref{eq:SBMMB2},
\begin{align*}
\Iprod{\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast} =\frac{\beta}{n} \Iprod{Y^\ast}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast}- \frac{\alpha}{n} \Iprod{\mathbf J}{Y^\ast} \le \frac{\beta}{n} \Iprod{Y^\ast}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast},
\end{align*}
where the last inequality holds because $\Iprod{\mathbf J}{Y^\ast} \ge 0$.
In view of \prettyref{eq:OptimalitySDP}, it follows that
\begin{align}
\frac{\beta}{n} \Iprod{Y^\ast}{ Y^\ast - \widehat{Y}_{{\rm SDP}\xspace}} \le \Iprod{W-\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast}. \label{eq:correlationbound}
\end{align}
Notice that
\begin{align*}
\fnorm{ Y^\ast - \widehat{Y}_{{\rm SDP}\xspace} }^2 =\fnorm{Y^\ast}^2 + \fnorm{ \widehat{Y}_{{\rm SDP}\xspace}}^2 - 2\Iprod{Y^\ast}{\widehat{Y}_{{\rm SDP}\xspace}} \le 2 \left(n^2 - \Iprod{Y^\ast}{\widehat{Y}_{{\rm SDP}\xspace}} \right) = 2 \Iprod{Y^\ast}{ Y^\ast - \widehat{Y}_{{\rm SDP}\xspace}}.
\end{align*}
It follows from \prettyref{eq:correlationbound} that
\begin{align}
\frac{\beta}{2n} \fnorm{ Y^\ast - \widehat{Y}_{{\rm SDP}\xspace}} ^2 \le \Iprod{W-\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast} \le | \Iprod{W-\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}}| + | \Iprod{W-\mathbb{E}[W]}{Y^\ast} | . \label{eq:fnormnbound}
\end{align}
To upper bound $| \Iprod{W-\mathbb{E}[W]}{Y^\ast} |$, notice that
\begin{align*}
\Iprod{W-\mathbb{E}[W]}{Y^\ast} =2 \sum_{i<j} Y^\ast_{ij} \left( W_{ij} - \mathbb{E}[W_{ij}] \right).
\end{align*}
Let $\sigma^2= \sum_{i<j} \mathsf{var} [ W_{ij} ] = (1+o(1)) \frac{n}{2} \sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) .$ By the Bernstein inequality given in \prettyref{thm:Bernstein}, for any $t>0$,
\begin{align*}
\mathbb{P} \left\{ \bigg| \sum_{i<j} Y^\ast_{ij} \left( W_{ij} - \mathbb{E}[W_{ij}] \right) \bigg| \ge \sqrt{ 2 \sigma^2 t} + \frac{2}{3} t \right\} \le 2 {\rm e}^{-t}.
\end{align*}
Letting $t=\log n = o (\sigma^2)$, it follows that with probability at least $1-2n^{-1}$,
\begin{align*}
\big| \sum_{i<j} Y^\ast_{ij} \left( W_{ij} - \mathbb{E}[W_{ij}] \right) \big| \le (1+o(1)) \sqrt{n \log n\sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) )},
\end{align*}
and thus $| \Iprod{W-\mathbb{E}[W]}{Y^\ast} | \le (2+o(1))\sqrt{n \log n\sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) )}$ with probability at least $1-2n^{-1}$.
We bound $| \Iprod{W-\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}} |$ next.
It follows from Grothendieck's inequality \cite[Theorem 3.4]{Vershynin14} that
\begin{align*}
| \Iprod{W-\mathbb{E}[W]}{\widehat{Y}_{{\rm SDP}\xspace}}| \le \sup_{Y \succeq 0, \diag{Y}=\mathbf I} | \Iprod{W-\mathbb{E}[W]}{Y}| \le K_{\rm G} \|W-\mathbb{E}[W]\|_{\infty \to 1},
\end{align*}
where $K_{\rm G}$ is an absolute constant known as \emph{Grothendieck constant} and it is known that $K_{\rm G} < \frac{\pi}{2 \ln (1+ \sqrt{2}) } \le 1.783$.
Moreover,
\begin{align*}
\|W-\mathbb{E}[W]\|_{\infty \to 1} \triangleq \sup_{ x: \|x \|_{\infty} \le 1} \| (W-\mathbb{E}[W]) x \|_1 &= \sup_{ x, y \in \{ \pm 1\}^n } x^\top (W-\mathbb{E}[W]) y \\
&= \sup_{x, y \in \{\pm 1\}^n} \sum_{i<j} \left( W_{ij} - \mathbb{E}[W_{ij}] \right) (x_iy_j + x_j y_i).
\end{align*}
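Since for a fixed $x\in\{\pm 1\}^n$ the maximizing $y$ is $\mathsf{sign}(M^\top x)$, the $\infty\to 1$ norm can be computed by brute force for small $n$; a sketch:

```python
import numpy as np
from itertools import product

def inf_to_one_norm(M):
    """Brute-force ||M||_{inf -> 1} = max over x, y in {+/-1}^n of x^T M y.

    For each fixed x, the optimal y picks the sign of (M^T x)_j, so the
    inner maximum equals ||M^T x||_1. Exponential in n; small n only.
    """
    n = M.shape[0]
    best = 0.0
    for x in product([-1.0, 1.0], repeat=n):
        best = max(best, float(np.sum(np.abs(M.T @ np.array(x)))))
    return best
```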
For any fixed $x, y \in \{\pm 1\}^n$, using the Bernstein inequality, we have for any $t>0$,
\begin{align*}
\mathbb{P} \left\{ \sum_{i<j} \left( W_{ij} - \mathbb{E}[W_{ij}] \right) (x_iy_j + x_j y_i) \ge \sqrt{ 8 \sigma^2 t} + \frac{4}{3} t \right\} \le {\rm e}^{-t}.
\end{align*}
Hence, for an arbitrarily small constant $\epsilon>0$, with probability at least $1-2^{-2(1+\epsilon)n}$,
\begin{align*}
\sum_{i<j} \left( W_{ij} - \mathbb{E}[W_{ij}] \right) (x_iy_j + x_j y_i) &\le n \left( \sqrt{ 8 \ln 2 (1+\epsilon) \sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) } + \frac{8\ln 2 (1+\epsilon) }{3} \right) \\
& \overset{(a)}{\le} \frac{4n}{3} \sqrt{ 8 \ln 2 (1+\epsilon) \sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) } ,
\end{align*}
where $(a)$ follows from the technical assumption $\sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) > 8 \ln 2$.
It follows from the union bound that with probability at least $1-4^{-\epsilon n}$,
\begin{align*}
\|W-\mathbb{E}[W]\|_{\infty \to 1} \le \frac{4n}{3} \sqrt{ 8 \ln 2 (1+\epsilon) \sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) }.
\end{align*}
In view of \prettyref{eq:fnormnbound}, with probability at least $1-4^{-\epsilon n}-2n^{-1}$,
\begin{align}
\frac{1}{n^2} \fnorm{ Y^\ast - \widehat{Y}_{{\rm SDP}\xspace}} ^2 &\le (1+o(1)) \frac{8K_{\rm G} }{3\beta} \sqrt{ 8 \ln 2 (1+\epsilon) \sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) } \nonumber \\
& \overset{(a)}{\le} (1+o(1)) 32 \sqrt{\ln 2 (1+ \epsilon) } \frac{ \sqrt{ \sum_{\ell} w^2(\ell) ( a \mu(\ell) +b \nu(\ell) ) } }{ \sum_{\ell} w(\ell) (a \mu (\ell) - b \nu( \ell) ) } \nonumber \\
& \overset{(b)}{\le} (1-\epsilon) \frac{1}{16}, \label{eq:fnormupperbound}
\end{align}
where $(a)$ follows by $ \sqrt{2} K_G \le 3 $ and the definition of $\beta$ given in~\eqref{EqDefAlphaBeta}; $(b)$ holds by invoking \prettyref{eq:SBMSDPCondition} and letting $\epsilon$ be sufficiently small.
Recall that $y$ is an eigenvector of $\widehat{Y}_{{\rm SDP}\xspace}$ corresponding to the largest eigenvalue and $\|y\| =\sqrt{n}$.
By Davis-Kahan sin$\theta$ theorem stated in \prettyref{lmm:daviskahan},
\begin{align*}
\frac{1}{\sqrt{n} }\min \{ \| \sigma^\ast - y \|, \| \sigma^\ast +y \| \} \le \frac{2 \sqrt{2} \|\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast \| }{n} \le \frac{2 \sqrt{2} \fnorm{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast}}{n } .
\end{align*}
Note that for any $x \in \mathbb{R}^n$, Hamming distance $d (\sigma^\ast, \mathsf{sign}( x ) ) \le \| \sigma^\ast -x \|^2.$
It follows that
\begin{align*}
\frac{1}{n} \min \{ d (\sigma^\ast, \mathsf{sign}( y ) ), d (\sigma^\ast, \mathsf{sign}(-y) ) \} \le \frac{8 \fnorm{\widehat{Y}_{{\rm SDP}\xspace}- Y^\ast}^2}{n^2},
\end{align*}
and the theorem holds in view of \prettyref{eq:fnormupperbound}.
\subsection{Proof of Theorem \ref{ThmSpectralSBM}} \label{SectionSpectralpf}
Recall that $W'$ is the weighted adjacency matrix after removal of edges incident to vertices with high degrees and $D'=W'-\frac{\alpha}{n} \mathbf J$.
Define $\hat{D^\prime}$ as the best rank-1 approximation of $D^\prime$ such that $\hat{D^\prime} = v_1 x x^\top$ with
$\|x\|=1$. Recall that $\mathbb{E}[D|\sigma] = \frac{\beta}{n} \sigma \sigma^\top$. Applying Davis-Kahan $\sin \theta$ theorem restated in \prettyref{lmm:daviskahan} with $D^\prime$ and $\mathbb{E}[D|\sigma]$ gives:
\begin{eqnarray*}
\min \{
\|\frac{\sigma}{\sqrt{n}}-x\|,\|\frac{\sigma}{\sqrt{n}}+x\|
\}\leq \frac{2\sqrt{2}}{|\beta| } \|D'-\mathbb{E}[D|\sigma]\| .
\end{eqnarray*}
Since the Hamming distance satisfies $d (\sigma, \mathsf{sign} (x )) \le \| \sigma - \sqrt{n} x \|^2$, it follows that
\begin{align}
\frac{1}{n} \min \{ d (\sigma, \mathsf{sign} (x )), d(\sigma, -\mathsf{sign}( x ) ) \} \le \frac{8}{\beta^2}\|D^\prime-\mathbb{E}[D|\sigma]\|^2 = \frac{8}{\beta^2}\|W'- \mathbb{E}[W | \sigma]\|^2. \label{eq:overlapbound}
\end{align}
\end{align}
\prettyref{lmm:spectrumsparse} implies that a.a.s.\ $\|W'- \mathbb{E}[W | \sigma]\| \le C\sqrt{a+b}$ for some universal positive constant $C$.
Hence, in view of \prettyref{eq:overlapbound}, we get
\begin{eqnarray*}
\frac{1}{n} \min \{ d (\sigma, \mathsf{sign} ( \hat{x} )), d(\sigma, - \mathsf{sign} (\hat{x}) ) \} \leq 8C^2 \frac{a+b}{\beta^2},
\end{eqnarray*}
and the theorem follows.
\subsection{Proof of Theorem \ref{ThmSpectralSBM-large}}
The proof follows the same steps as for Theorem \ref{ThmSpectralSBM},
except that we are able to strengthen Lemma \ref{lmm:spectrumsparse} thanks to a
result of Vu \cite{vu05}. Note that the variance of the elements of
$W$ is upper bounded by $\frac{1}{n} \sum_\ell w^2(\ell) \left(
a\mu(\ell)+b\nu(\ell)\right)$ so that by Theorem 1.4 in \cite{vu05},
we get
\begin{lemma}
Under the conditions of Theorem \ref{ThmSpectralSBM-large}, we have
\begin{eqnarray*}
\|W- \mathbb{E}[W | \sigma] \| \leq 2 \sqrt{\sum_\ell w^2(\ell)\left(
a\mu(\ell)+b\nu(\ell)\right)}\quad a.a.s.
\end{eqnarray*}
\end{lemma}
\subsection{Proof of Theorem \ref{ThmNonReconstruction}} \label{SectionNonReconstruction}
Consider a Galton-Watson tree $T$ with Poisson offspring distribution with mean $\frac{a+b}{2}$. The type of the root $\rho$ is chosen from $\{ \pm 1\}$ uniformly at random. Each child has the same type as its parent with probability $\frac{a}{a+b}$ and a different type with probability $\frac{b}{a+b}$. Every edge $(u,v)$ is labeled at random with distribution $\mu$ if $\sigma_u=\sigma_v$ and $\nu$ otherwise. Let $T_R$ denote the Galton-Watson tree $T$ up to depth $R$ and $\partial T_R$ denote the set of leaves of $T_R$. Let $G_R$ denote the subgraph of $G$ induced by vertices up to distance $R$ from $\rho$ and $\partial G_R$ be the set of vertices at distance $R$ from $\rho$.
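The labeled Galton-Watson tree just described is easy to simulate; a sketch (two labels $r/b$ for concreteness; the function name and return format are ours):

```python
import numpy as np

def labeled_gw_tree(a, b, mu_r, nu_r, depth, rng):
    """Sample the labeled Galton-Watson tree T_R described above.

    Two labels r/b; mu_r = mu(r), nu_r = nu(r). Returns the list of
    node types (root at index 0, types +/-1, root type uniform) and a
    list of (parent, child, child_type, edge_label) tuples, BFS order.
    """
    types = [rng.choice([-1, 1])]
    edges = []
    frontier = [0]
    for _ in range(depth):
        new_frontier = []
        for u in frontier:
            for _ in range(rng.poisson((a + b) / 2)):   # Poisson((a+b)/2) offspring
                v = len(types)
                same = rng.random() < a / (a + b)        # child copies parent's type w.p. a/(a+b)
                types.append(types[u] if same else -types[u])
                p_r = mu_r if same else nu_r             # edge label ~ mu if types agree, else nu
                edges.append((u, v, types[v], 'r' if rng.random() < p_r else 'b'))
                new_frontier.append(v)
        frontier = new_frontier
    return types, edges
```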
The following lemma similar to Proposition 4.2 in \cite{Mossel12} establishes a coupling between the local neighborhood of $\rho$ and the labeled Galton-Watson tree rooted at $\rho$.
\begin{lemma}\label{PropCouplingTree}
Let $R=R(n)=\lfloor \frac{\log n}{10 \log (2(a+b) )} \rfloor $, then there exists a coupling such that a.a.s.
\begin{align}
(G_R,L_{G_R},\sigma_{G_R})=(T_R, L_{T_R},\sigma_{T_R}), \nonumber
\end{align}
where $L_{G_R}$ and $\sigma_{G_R}$ denote the labels and types on the subgraph $G_R$, respectively.
\end{lemma}
\begin{proof}
See proof in Section \ref{PfPropCouplingTree}.
\end{proof}
To ease notation, we omit the qualifier a.a.s.\ in the sequel. To prove Theorem \ref{ThmNonReconstruction}, it suffices to show that $\text{Var}(\sigma_{\rho} |G,L, \sigma_v) \to 1$.
By the law of total variance,
\begin{align*}
\text{Var}(\sigma_{\rho} |G,L, \sigma_v) = \mathbb{E}_{\sigma_{\partial G_R }} \left[ \text{Var} ( \sigma_{\rho} | G, L ,\sigma_v, \sigma_{\partial G_R} ) \right] + \text{Var}_{\sigma_{\partial G_R }} \left [ \mathbb{E} \left [ \sigma_ \rho | G, L, \sigma_v, \sigma_{\partial G_R} \right] \right].
\end{align*}
Hence, it further reduces to show that $\text{Var} (\sigma_{\rho} |G, L, \sigma_v, \sigma_{\partial G_R} ) \to 1$.
Let $R$ be as in Lemma \ref{PropCouplingTree}; then $|V(G_R)|=o(\sqrt{n})$ and thus $v \notin G_R$. Lemma 4.7 in \cite{Mossel12} shows that $\sigma_{\rho}$ is asymptotically independent of $\sigma_v$ conditionally on $\sigma_{\partial G_R}$. Hence,
\begin{align}
\text{Var}(\sigma_{\rho}|G,L, \sigma_v, \sigma_{\partial G_R}) \to \text{Var} (\sigma_{\rho}|G,L, \sigma_{\partial G_R}). \nonumber
\end{align}
Let $G_R^c$ denote the subgraph of $G$ induced by edges not in $G_R$, and $L_{G_R^c}$ denote the set of labels on $G_R^c$.
Recall that $V(G_{R-1})$ and $V(G_R^c)$ denote the set of vertices in $G_{R-1}$ and $G_R^c$, respectively.
Let $S\triangleq V(G_{R-1}) \setminus \{ \rho \}$ and
$T \triangleq V(G_R^c) \setminus \partial G_R.$ Then $\{\rho\} \cup \partial G_R \cup S \cup T = V(G)$.
Notice that conditional on $( G_R, L_{G_R}, \sigma_{\partial G_R} )$, $\sigma_\rho$ is independent of $(G_R^c, L_{G_R^c} )$.
In particular,
\begin{align*}
& \prob{ \sigma_ \rho | G_R, L_{G_R}, \sigma_{\partial G_R} } \\
& = \frac{ \sum_{G_R^c, L_{G_R^c} } \prob{\sigma_\rho, G, L, \sigma_{\partial G_R} } } { \sum_{G_R^c, L_{G_R^c} } \prob{G, L, \sigma_{\partial G_R} } } \\
& =\frac{ \sum_{G_R^c, L_{G_R^c} } \left( \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} \right) \left( \sum_{\sigma_T }
\prod_{u,v \in T: u<v} \phi_{uv} \prod_{u \in \partial G_R, v \in T } \phi_{uv} \right) }
{ \sum_{G_R^c, L_{G_R^c} } \left( \sum_{\sigma_\rho} \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} \right) \left( \sum_{\sigma_T }
\prod_{u,v \in T: u<v} \phi_{uv} \prod_{u \in \partial G_R, v \in T } \phi_{uv} \right) } \\
& \overset{(a)} { =} \frac{ \left( \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} \right) \sum_{G_R^c, L_{G_R^c} } \left( \sum_{\sigma_T }
\prod_{u,v \in T: u<v} \phi_{uv} \prod_{u \in \partial G_R, v \in T } \phi_{uv} \right)
} { \left( \sum_{\sigma_\rho} \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} \right) \sum_{G_R^c, L_{G_R^c} } \left( \sum_{\sigma_T }
\prod_{u,v \in T: u<v} \phi_{uv} \prod_{u \in \partial G_R, v \in T } \phi_{uv} \right) } \\
& =\frac{ \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} } {
\sum_{\sigma_\rho} \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} } \\
& = \frac{ \left( \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} \right) \left( \sum_{\sigma_T }
\prod_{u,v \in T: u<v} \phi_{uv} \prod_{u \in \partial G_R, v \in T } \phi_{uv} \right) } { \left( \sum_{\sigma_\rho} \sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} \right)
\left( \sum_{\sigma_T }
\prod_{u,v \in T: u<v} \phi_{uv} \prod_{u \in \partial G_R, v \in T } \phi_{uv} \right) } \\
& = \frac{ \prob{\sigma_\rho, G, L, \sigma_{\partial G_R} } }{ \prob{G, L, \sigma_{\partial G_R} } } =\prob{ \sigma_ \rho | G, L, \sigma_{\partial G_R} } ,
\end{align*}
where $(a)$ holds because $\sum_{\sigma_S } \prod_{u,v \in V(G_R): u<v } \phi_{uv} $ does not depend on $G_R^c$ and $L_{G_R^c}$. It follows that
\begin{align}
\text{Var} (\sigma_{\rho}|G,L, \sigma_{\partial G_R})= \text{Var} (\sigma_{\rho}|G_R,L_{G_R}, \sigma_{\partial G_R}). \nonumber
\end{align}
Lemma \ref{PropCouplingTree} implies that
\begin{align}
\text{Var} (\sigma_{\rho}|G_R,L_{G_R}, \sigma_{\partial G_R}) \to \text{Var} (\sigma_{\rho}|T_R,L_{T_R}, \sigma_{\partial T_R}). \nonumber
\end{align}
For the labeled Galton-Watson tree, it was shown in \cite{Heimlicher12} that if $\tau<1$, the types of the leaves provide no information about the type of the root when the depth $R \to \infty$, i.e.,
\begin{align}
\mathbb{P} ( \sigma_{\rho} =+1 |T, L, \sigma_{\partial T_R} ) \to \frac{1}{2}. \nonumber
\end{align}
Hence, $\text{Var} (\sigma_{\rho}|T_R,L_{T_R}, \sigma_{\partial T_R}) \to 1$
and the theorem follows.
\subsection{Proof of Theorem \ref{ThmACER}} \label{SectionHypTesting}
We introduce some necessary notation. For a graph $G$ with $n$ vertices and labeled edges, denote a $k$-sequence of labels by $ [\ell]_k=(\ell_1,\ell_2,\ldots, \ell_k) \in \mathcal{L}^k$. A cycle in $G$ is called a $k$-cycle with labels $[\ell]_k$ if, starting from the vertex with the minimum index and ending at its neighbor with the smaller index among its two neighbors, the sequence of labels on its edges is given by $[\ell]_k$. Let $X_n([\ell]_k)$ denote the number of $k$-cycles with labels $[\ell]_k$ in $G$. Let $(X)_j=X(X-1)\cdots (X-j+1)$ for integers $X$ and $1 \le j \le X$. Then $(X_n([\ell]_k))_j$ is the number of ordered $j$-tuples of $k$-cycles with labels $[\ell]_k$ in $G$. A sum or product over $[\ell]_k$ is taken over all possible sequences of labels of length $k$. The following lemma gives the asymptotic distribution of the number of $k$-cycles with labels $[\ell]_k$.
\begin{lemma} \label{LemmaNumCycles}
For any fixed integer $m \ge 3$, $\{X_n([\ell]_k): [\ell]_k \in {\mathcal{L}}^k \}_{k=3}^{m}$ jointly converge to independent Poisson random variables with mean $\lambda([\ell]_k)$ under graph distribution $\mathbb{P}_n^\prime$, and $\xi([\ell]_k)$ under graph distribution $\mathbb{P}_n$, where
\begin{align}
&\lambda([\ell]_k)= \frac{1}{2^{k+1} k } \prod_{i=1}^k (a\mu(\ell_i)+b\nu(\ell_i)), \nonumber \\
&\xi([\ell]_k) = \frac{1}{2^{k+1} k } \left( \prod_{i=1}^k (a\mu(\ell_i)+ b\nu(\ell_i)) + \prod_{i=1}^k (a\mu(\ell_i)-b\nu(\ell_i)) \right). \nonumber
\end{align}
\end{lemma}
We are ready to prove Theorem \ref{ThmACER}. The first part of Theorem \ref{ThmACER} is proved using Lemma \ref{LemmaNumCycles} and Chebyshev inequality. Define $\eta([\ell]_k)=\xi([\ell]_k) / \lambda([\ell]_k) -1$ and $X_k= \sum_{[\ell]_k} X([\ell]_k) \eta([\ell]_k)$. Then, by Lemma \ref{LemmaNumCycles}, as $n \to \infty$,
\begin{align}
\mathbb{E}_{\mathbb{P}^\prime}[X_k] &=\sum_{[\ell]_k} \lambda([\ell]_k) \eta([\ell]_k), \nonumber\\
\mathbb{E}_{\mathbb{P}}[X_k] &= \sum_{[\ell]_k} \lambda([\ell]_k ) \eta([\ell]_k) (1+ \eta([\ell]_k)). \nonumber
\end{align}
Note that
\begin{align}
2k \sum_{[\ell]_k} \lambda([\ell]_k) \eta^2([\ell]_k) = \sum_{[\ell]_k} \prod_{s=1}^k \frac{(a \mu(\ell_s) - b \nu(\ell_s) )^2 }{ 2(a\mu(\ell_s) + b \nu(\ell_s)) } = \left( \sum_{\ell \in \mathcal{L}} \frac{(a \mu(\ell) - b \nu(\ell) )^2 }{ 2(a\mu(\ell) + b \nu(\ell)) } \right)^k = \tau^k. \label{EqTau}
\end{align}
Therefore,
\begin{align}
\mathbb{E}_{\mathbb{P}}[X_k]- \mathbb{E}_{\mathbb{P}^\prime}[X_k]= \sum_{[\ell]_k} \lambda([\ell]_k) \eta^2([\ell]_k) = \tau^k/ (2k) , \nonumber
\end{align}
and
\begin{align}
\text{Var}_{\mathbb{P}^\prime}[X_k] &= \sum_{[\ell]_k} \lambda([\ell]_k) \eta^2([\ell]_k) = \tau^k/ (2k), \nonumber\\
\text{Var}_{\mathbb{P}}[X_k] &= \sum_{[\ell]_k} \xi([\ell]_k) \eta^2([\ell]_k) \le \tau^k/ k. \nonumber
\end{align}
Choose $\rho= \tau^k/(6k)$. By Chebyshev's inequality,
\begin{align*}
\mathbb{P}' \{X_k > \mathbb{E}_{\mathbb{P}^\prime}[X_k] + \rho \} \le \frac{\text{Var}_{\mathbb{P}^\prime}[X_k] }{\rho^2} = \frac{18 k}{\tau^k}.
\end{align*}
Let $k$ increase with $n$ sufficiently slowly. Then, since $\tau>1$, $X_k \le \mathbb{E}_{\mathbb{P}^\prime}[X_k] + \rho$ $\mathbb{P}^\prime$-a.a.s.
Similarly, $X_k \ge \mathbb{E}_{\mathbb{P}}[X_k]-\rho$ $\mathbb{P}$-a.a.s.
By definition of $\rho$, $\mathbb{E}_{\mathbb{P}}[X_k]-\rho > \mathbb{E}_{\mathbb{P}^\prime}[X_k] + \rho$. Set $A_n= \{X_k \le \mathbb{E}_{\mathbb{P}^\prime}[X_k] + \rho \}$, then $\mathbb{P}^\prime(A_n) \to 1$ and $\mathbb{P}(A_n) \to 0$.
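The factorization step in (\ref{EqTau}), which collapses the sum over label sequences into $\tau^k$, can be checked numerically. The sketch below uses illustrative values for $a$, $b$ and the label distributions $\mu$, $\nu$ (they are not taken from the paper); it is a sanity check, not part of the proof.

```python
import itertools
import math

# Illustrative parameters (not from the paper): two labels, a > b.
a, b = 5.0, 1.0
mu = {"red": 0.7, "blue": 0.3}   # label distribution on in-community edges
nu = {"red": 0.2, "blue": 0.8}   # label distribution on cross-community edges

def term(l):
    return (a * mu[l] - b * nu[l]) ** 2 / (2 * (a * mu[l] + b * nu[l]))

tau = sum(term(l) for l in mu)

# Left-hand side of (EqTau): sum over all label sequences of length k
# of the product of per-position terms.
k = 4
lhs = sum(math.prod(term(l) for l in seq)
          for seq in itertools.product(mu, repeat=k))
print(abs(lhs - tau ** k) < 1e-9)  # True: the sum factorizes into tau^k
```

For this parameter choice $\tau \approx 1.58 > 1$, i.e., a point in the detectable regime.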
The second part of Theorem \ref{ThmACER} is proved using the following small subgraph conditioning theorem, which is adapted from \cite[Theorem 9.12]{Janson11}.
\begin{theorem} \label{ThmSubgraphCond}
Let $Y_n= \frac{\mathbb{P}_n}{\mathbb{P}_n^\prime}$. If $\mathbb{P}_n$ and $\mathbb{P}_n^\prime$ are mutually absolutely continuous for any fixed $n$, and
\begin{enumerate}
\item For each fixed $m \ge 3$, $\{X_n([\ell ]_k) \}_{k=3}^{m}$ converge jointly to independent Poisson variables with means $\lambda([\ell]_k)>0$ under distribution $\mathbb{P}_n^\prime$, and $\xi([\ell ]_k)$ under distribution $\mathbb{P}_n$;
\item $\sum_{k \ge 3} \sum_{[\ell ]_k} \lambda([\ell ]_k) \eta([\ell ]_k)^2 < \infty$;
\item $\mathbb{E}_{\mathbb{P}^\prime_n}[Y_n^2] \to \exp ( \sum_{k \ge 3} \sum_{[\ell ]_k} \lambda([\ell ]_k) \eta^2([\ell ]_k) )$ as $n \to \infty$,
\end{enumerate}
then $\mathbb{P}_n$ and $\mathbb{P}_n^\prime$ are contiguous.
\end{theorem}
In this paper, $\mathbb{P}_n$ and $\mathbb{P}_n^\prime$ are discrete distributions on the space of labeled graphs, and for any fixed $n$,
$\mathbb{P}_n$ and $\mathbb{P}_n^\prime$ are mutually absolutely continuous.
Condition 1) is verified by Lemma \ref{LemmaNumCycles}. Condition 2) holds because in view of (\ref{EqTau}),
\begin{align}
\sum_{k \ge 3} \sum_{[\ell] _k} \lambda([\ell ]_k) \eta^2([\ell ]_k) = \sum_{k \ge 3} \frac{\tau^k}{2k} =
-\frac{\log(1-\tau)+\tau+\tau^2/2 }{2}< \infty. \label{eq:expressiontau}
\end{align}
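The closed form in (\ref{eq:expressiontau}) follows from $\sum_{k\ge 1} \tau^k/k = -\log(1-\tau)$ after removing the $k=1,2$ terms; a quick numerical check (with an illustrative $\tau<1$, since that is where the series converges):

```python
import math

tau = 0.6  # illustrative value in (0, 1)
series = sum(tau ** k / (2 * k) for k in range(3, 2000))  # tail is negligible
closed = -(math.log(1 - tau) + tau + tau ** 2 / 2) / 2
print(abs(series - closed) < 1e-9)  # True
```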
We are left to verify condition 3). By definition,
\begin{align}
Y_n(G,L)=2^{-n} \sum_{\sigma \in \{\pm 1\}^n} \prod_{(u,v): u<v} W_{u,v}(G,L,\sigma), \nonumber
\end{align}
where
\begin{align}
W_{uv}(G,L,\sigma)=\left \{
\begin{array}{rl}
\frac{2 a \mu(\ell)}{a \mu(\ell) + b \nu(\ell) } & \text{if } \sigma_u=\sigma_v, (u,v) \in E(G), L_{uv}=\ell, \\
\frac{2 b \nu(\ell)}{a \mu(\ell) + b \nu(\ell ) } & \text{if } \sigma_u \neq \sigma_v, (u,v) \in E(G), L_{uv}=\ell, \\
\frac{1-a/n}{1-(a+b)/(2n) } & \text{if } \sigma_u= \sigma_v, (u,v) \notin E(G), \\
\frac{1-b/n}{1-(a+b)/(2n) } & \text{if } \sigma_u \neq \sigma_v, (u,v) \notin E(G),
\end{array} \right. \nonumber
\end{align}
Then,
\begin{align}
Y_n^2=2^{-2 n} \sum_{\sigma,\delta \in \{\pm 1 \}^n } \prod_{(u,v): u<v} W_{u,v}(G,L,\sigma) W_{u,v}(G,L,\delta). \label{eq:Ysecondmoment}
\end{align}
\begin{lemma} \label{lmm:WV}
For any fixed $\sigma, \delta \in \{\pm 1 \}^n $, if $\sigma_u \sigma_v =\delta_u \delta_v$, then
\begin{align}
\mathbb{E}_{\mathbb{P}^\prime_n}[W_{u,v}(G, L, \sigma) W_{u,v} (G, L, \delta) ] = 1+ \tau/n+ (a-b)^2/(4n^2)+ O(n^{-3}). \nonumber
\end{align}
Otherwise,
\begin{align}
\mathbb{E}_{\mathbb{P}^\prime_n}[W_{u,v}(G, L, \sigma) W_{u,v} (G, L, \delta)] = 1- \tau/n - (a-b)^2/(4n^2)+ O(n^{-3}). \nonumber
\end{align}
\end{lemma}
\begin{proof}
Suppose $\sigma_u \sigma_v =\delta_u \delta_v=1$. Then,
\begin{align}
& \mathbb{E}_{\mathbb{P}^\prime_n}[W_{u,v}(G, L, \sigma) W_{u,v} (G, L, \delta)] \nonumber \\
& = \sum_\ell \left( \frac{2 a \mu(\ell) }{a \mu(\ell) + b\nu(\ell ) } \right)^2 \frac{a\mu(\ell )+b\nu(\ell ) }{2n} + \left( \frac{1-a/n}{1-(a+b)/(2n) } \right)^2 \left( 1-\frac{a+b}{2n} \right) \nonumber \\
& = \frac{1}{n }\sum_\ell \frac{2a^2 \mu^2(\ell) }{a\mu(\ell)+b\nu(\ell)} +\left(1-\frac{a}{n} \right)^2 \left(1+ \frac{a+b}{2n} + \frac{(a+b)^2}{4n^2}+ O(n^{-3}) \right) \nonumber \\
& = 1+ \frac{1}{n} \sum_\ell \left( \frac{2a^2 \mu^2(\ell) }{a\mu(\ell)+b\nu(\ell)} + \frac{b \nu(\ell) - 3a \mu(\ell) }{2} \right) + \frac{(a-b)^2}{4n^2} + O(n^{-3}) \nonumber \\
&= 1+ \frac{1}{n} \sum_\ell \frac{(a\mu(\ell )-b\nu(\ell ))^2 }{2(a\mu(\ell)+b\nu(\ell))} + \frac{(a-b)^2}{4n^2} + O(n^{-3}) \nonumber \\
&= 1+ \tau/n + (a-b)^2/(4n^2)+ O(n^{-3}). \label{eq:secondmoment}
\end{align}
By symmetry, \prettyref{eq:secondmoment} holds for $\sigma_u \sigma_v =\delta_u \delta_v=-1$.
Suppose $\sigma_u =\sigma_v$ and $\delta_u \neq \delta_v$. Then,
\begin{align}
& \mathbb{E}_{\mathbb{P}^\prime_n}[W_{u,v}(G, L, \sigma) W_{u,v} (G, L, \delta)] \nonumber \\
& = \sum_\ell \frac{4 ab \mu(\ell) \nu(\ell) }{ (a \mu(\ell) + b\nu(\ell))^2 } \frac{a\mu(\ell)+b\nu(\ell) }{2n} + \frac{(1-a/n)(1-b/n) }{(1-(a+b)/(2n) )^2} \left( 1-\frac{a+b}{2n} \right) \nonumber \\
&= 1- \frac{1}{n} \sum_\ell \frac{(a\mu(\ell )-b\nu(\ell))^2 }{2(a\mu(\ell)+b\nu(\ell))} - \frac{(a-b)^2}{4n^2} + O(n^{-3}) \nonumber \\
&= 1- \tau/n - (a-b)^2/(4n^2)+ O(n^{-3}). \nonumber
\end{align}
\end{proof}
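The expansion in Lemma \ref{lmm:WV} can also be verified numerically by comparing the exact expectation against the claimed $1+\tau/n+(a-b)^2/(4n^2)$ approximation; the residual should shrink like $n^{-3}$. The parameters and label distributions below are illustrative choices, not from the paper.

```python
import math

a, b = 5.0, 1.0
mu = {"red": 0.7, "blue": 0.3}
nu = {"red": 0.2, "blue": 0.8}
tau = sum((a * mu[l] - b * nu[l]) ** 2 / (2 * (a * mu[l] + b * nu[l])) for l in mu)

def exact(n):
    """E[W W] for the case sigma_u sigma_v = delta_u delta_v = 1, exactly."""
    edge = sum((2 * a * mu[l] / (a * mu[l] + b * nu[l])) ** 2
               * (a * mu[l] + b * nu[l]) / (2 * n) for l in mu)
    no_edge = ((1 - a / n) / (1 - (a + b) / (2 * n))) ** 2 * (1 - (a + b) / (2 * n))
    return edge + no_edge

for n in (10 ** 3, 10 ** 4):
    approx = 1 + tau / n + (a - b) ** 2 / (4 * n ** 2)
    print(abs(exact(n) - approx) < 100 / n ** 3)  # True: error is O(n^-3)
```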
In view of \prettyref{lmm:WV}, letting $S(\sigma, \delta) \triangleq \{ (u,v): u<v, \sigma_u \sigma_v =\delta_u \delta_v \}$
and $T(\sigma, \delta) \triangleq \{ (u,v): u<v, \sigma_u \sigma_v \neq \delta_u \delta_v \}$, and $\gamma_n \triangleq \tau/n + (a-b)^2/(4n^2)+ O(n^{-3})$,
it follows from \prettyref{eq:Ysecondmoment} that
\begin{align}
\expects{Y_n^2}{\mathbb{P}^\prime_n} =2^{-2 n} \sum_{\sigma,\delta \in \{\pm 1 \}^n } \left( 1+ \gamma_n \right)^{|S(\sigma, \delta)| }
\left(1-\gamma_n \right)^{| T (\sigma, \delta) | }. \label{eq:secondmoment2}
\end{align}
Define $\rho(\sigma, \delta)= \Iprod{\sigma}{\delta}$ and then $|S(\sigma, \delta)| = (n^2+ \rho^2)/4 - n/2$ and $| T(\sigma, \delta) | = (n^2- \rho^2) /4$.
It follows from \prettyref{eq:secondmoment2} that
\begin{align}
\expects{Y_n^2}{\mathbb{P}^\prime_n}= \left( 1+ \gamma_n \right)^{n^2/4-n/2} \left( 1- \gamma_n \right)^{n^2/4} 2^{-2 n} \sum_{\sigma,\delta \in \{\pm 1 \}^n } \left( 1+ \gamma_n \right)^{\rho^2/4} \left( 1- \gamma_n \right)^{-\rho^2/4}. \label{eq:secondomement3}
\end{align}
Taylor expansion yields
\begin{align}
\left( 1+ \gamma_n \right)^{n^2/4-n/2} \left( 1- \gamma_n \right)^{n^2/4} &= \left(1 + O(n^{-1} ) \right)\exp \left[ -\tau^2/4 -\tau/2 \right], \nonumber \\
\left( 1+ \gamma_n \right)^{\rho^2/4} \left( 1- \gamma_n \right)^{-\rho^2/4} &=\exp \left[ \frac{ \rho^2}{n} ( \tau/2 + O(n^{-1} ) ) \right]. \label{eq:Taylor}
\end{align}
Combining \prettyref{eq:secondomement3} and \prettyref{eq:Taylor}, we get that
\begin{align}
\expects{Y_n^2}{\mathbb{P}^\prime_n} = \left(1 + O(n^{-1} ) \right) \exp \left[ -\tau^2/4 -\tau/2 \right] \expect{ {\rm e}^{ Z_n^2 ( \tau /2 + O(n^{-1} )) } }, \label{eq:secondmement4}
\end{align}
where $Z_n = \frac{1}{\sqrt{n}} \Iprod{\sigma}{\delta}$ and $\sigma, \delta$ are independently and uniformly distributed over $\{\pm 1\}^n$.
Let $Z$ denote a standard Gaussian random variable. Then the central limit theorem implies that $Z_n$ converges to $Z$ in distribution.
Since $ x \to \exp( x^2 \tau/2)$ is a continuous mapping, $ \exp ( Z_n^2 \tau /2 )$ converges to $\exp(Z^2 \tau/2)$ in distribution.
Moreover, $\{ \exp ( Z_n^2 \tau /2 ) \}$ are uniformly bounded in $L_{1+\epsilon} $ norm for some $\epsilon>0$ and thus uniformly integrable. In particular,
\begin{align*}
\expect{ \exp ( (1+\epsilon) Z_n^2 \tau /2 ) } = \int_{0}^\infty \prob{ \exp ( (1+\epsilon) Z_n^2 \tau/2 )> t } {\rm d} t
& \le 1 + \int_{1}^\infty \prob{ |Z_n| > \sqrt{\frac{2 \ln t }{ (1+\epsilon) \tau }} } {\rm d} t \\
& \overset{(a)}{\le} 1 + 2 \int_{1}^\infty t^{-\frac{1} {(1+\epsilon) \tau} } {\rm d} t \overset{(b)}{<} \infty,
\end{align*}
where the first inequality bounds the probability by $1$ for $t \le 1$; $(a)$ follows from Hoeffding's inequality $\prob{|Z_n| \ge t } \le 2 \exp (-t^2/2)$; and $(b)$ holds by choosing $\epsilon$ sufficiently
small such that $(1+\epsilon) \tau<1$. Hence, $ \mathbb{E} [ \exp ( Z_n^2 \tau /2 ) ] $ converges to $ \mathbb{E}[ \exp(Z^2 \tau/2) ]= \frac{1}{\sqrt{1-\tau} }.$
It follows from \prettyref{eq:secondmement4} that when $\tau<1$, as $n \to \infty$,
\begin{align}
\expects{Y_n^2}{\mathbb{P}^\prime_n} \to \frac{ e^{-\tau/2-\tau^2/4} }{ \sqrt{1-\tau}} . \nonumber
\end{align}
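The Gaussian moment identity $\mathbb{E}[\exp(Z^2 \tau/2)] = 1/\sqrt{1-\tau}$ used in this limit is a standard Gaussian integral; a brute-force Riemann-sum check (with an illustrative $\tau$):

```python
import math

tau = 0.8
dz = 1e-3
# E[exp(tau Z^2 / 2)] for standard Gaussian Z, integrated over [-12, 12];
# the integrand is proportional to exp(-(1 - tau) z^2 / 2), so tails vanish.
integral = sum(
    math.exp((tau - 1) * z * z / 2) / math.sqrt(2 * math.pi) * dz
    for z in (-12 + i * dz for i in range(int(24 / dz) + 1))
)
print(abs(integral - 1 / math.sqrt(1 - tau)) < 1e-3)  # True
```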
Hence, in view of \prettyref{eq:expressiontau}, condition 3) of Theorem \ref{ThmSubgraphCond} holds and the second part of Theorem \ref{ThmACER} follows from Theorem \ref{ThmSubgraphCond}.
\section{Conclusion} \label{SectionConclusion}
Our results show that when $\tau<1$ it is fundamentally impossible to give a positively correlated reconstruction;
when $\tau$ is large enough, the labeling information can be effectively exploited through the suitably weighted graph.
An interesting future work is to prove the positive part of Conjecture \ref{Conjecture}.
\section{Acknowledgement}
J. X.\ would like to thank Yudong Chen and Bruce Hajek for helpful conversations related to this project.
M. L.\ acknowledges the support of the French
Agence Nationale de la Recherche (ANR) under reference
ANR-11-JS02-005-01 (GAP project).
J. X.\ acknowledges the
support of the National Science Foundation under Grant ECCS
10-28464.
\bibliographystyle{abbrv}
\section{Introduction}
There is growing interest in mechanical metamaterials, man-made
structures with counter-intuitive mechanical
properties~\cite{Bertoldi2017}. Unlike in ordinary uniform
materials, deformations in such metamaterials derive from the
geometry of the assembly rather than the elastic properties of the
components. This behavior is scale independent, covering
structures from the macro- to the nanoscale. Most attention in
this respect seems to be drawn by the Poisson ratio
$\nu$~\cite{greaves2011}, the negative ratio of lateral to applied
strain. Ordinary materials with typical values $0<{\nu}<0.5$
contract laterally when stretched,
with unusually large values reported for cellular
materials~\cite{gibson1982mechanics}.
Auxetic metamaterials with ${\nu}<0$, on the other hand, expand in
both directions when
stretched~\cite{{gibson1982mechanics},{Lakes87},{Lakes93},{baughman1993crystalline},{alderson1999triumph}},
leading to advanced functionalities~\cite{{Mitschke2011},{gao2017novel}}.
Auxetic systems with macroscopic components have been utilized for
shock absorption in automobiles~\cite{RR-patent}, in
high-performance clothing~\cite{{papadopoulou2017auxetic},{Chen17},{Nike-patent}}, in bioprostheses~\cite{scarpa2008auxetic} and
stents~\cite{Liner-patent} in medicine, and for strain
amplification~\cite{baughman1998negative}. Auxetic 2D mechanical
metamaterials with nanostructured components,
some of which have been described previously~\cite{{Shan15},{Julian93},{Wojciechowski89},{grima2000self}},
may find their use when precise micromanipulation of 2D structures
including bilayer graphene is
required~\cite{cao2018unconventional}.
Here we report the design of 2D mechanical metamaterials that may
be deformed substantially at little or no energy cost. Unlike
origami- and kirigami-inspired metamaterials, which derive their
functionality from folding a 2D material into the third
dimension~\cite{{schenk2013},{yasuda15},{Rafsanjani17},{grima2015tailoring}}, the structures we describe are confined to
a plane during deformation. Such confinement may be achieved by a
strong attraction to a planar substrate or in a sandwich geometry.
Specifically, we consider
infinite
assemblies of rigid isosceles triangles hinged in their corners on
the macro-scale~\cite{GuestHutchinson03} and polymerized
phenanthrene molecules forming `porous graphene' on the
nano-scale. In these and in a large class of related structures,
consisting of connected and near-rigid isosceles triangles, the
Poisson ratio $\nu$ diverges at particular strain values. $\nu$
also changes its magnitude and sign, and displays a `shape memory'
effect in a specific range of deformations, meaning that this
quantity depends on previously applied strain. Our corresponding
results are scale invariant.
\begin{figure*}
\includegraphics[width=1.8\columnwidth]{fig1}
\caption{Deformations in a 2D assembly of rigid
isosceles triangles.
(a) Adjacent triangles with opening angle $\alpha$ and mutual
orientation defined by the closing angle $\beta$, hinged
tip-to-corner, forming the primitive unit cell. The triangle
height $x_0$ and the length $y_0$ of its base define
the horizontal and vertical length scales.
(b) Snapshots of the $\alpha=120^\circ$ triangle assembly for
different values of $\beta$. The conventional rectangular unit
cell is twice the size of the primitive unit cell.
(c) Contour plot of the Poisson ratio $\nu_{xy}=-(dy/y)/(dx/x)$ as
a function of $\alpha$ and $\beta$. The dotted red line
highlights behavior of the ${\alpha}=120^\circ$ triangle assembly.
(d) Poisson ratio $\nu_{xy}$ as a
function of $\beta$ in the $\alpha=120^\circ$ system.
(e) Changes in the scaled width $x/x_0$ and height $y/y_0$ of the
conventional unit cell for $\alpha=120^\circ$ caused by changing
the angle $\beta$.
\label{fig1}}
\end{figure*}
\begin{figure*}
\includegraphics[width=1.8\columnwidth]{fig2}
\caption{Deformations in porous graphene, a
phenanthrene-based 2D mechanical metamaterial.
(a) Structure of the C$_{14}$H$_{10}$ phenanthrene molecule and
its relation to an isosceles ${\alpha}=120^\circ$
triangle of Fig.~\protect\ref{fig1}.
(b) Equilibrium structure of 2D porous graphene consisting of
polymerized phenanthrene molecules with ${\beta}=70^\circ$.
Saturating hydrogen atoms are shown by the lighter and smaller
spheres.
Changes in the scaled width $x/x_0$ (c) and height $y/y_0$ (d) of
the conventional unit cell in the triangle assembly and porous
graphene as a function of the closing angle $\beta$.
(e) Poisson ratio $\nu_{xy}$ in porous graphene as a function
of $\beta$.
(f) Strain energy in the C$_{56}$H$_{28}$ conventional unit cell
as a function of $\beta$.
The dashed and dotted lines connecting data points for porous
graphene in (c)-(f) are guides to the eye.
\label{fig2}}
\end{figure*}
\section{Computational Approach}
We have studied the electronic and structural properties as well
as the deformation energy of polyphenanthrene dubbed `porous
graphene' using {\em ab initio} density functional theory (DFT) as
implemented in the \textsc{VASP}
code~\cite{{VASP1},{VASP2},{VASP3}}.
We represented this 2D structure
by imposing periodic boundary conditions in all directions and
separating individual
layers by a vacuum region of $20$~{\AA}. We used
projector-augmented-wave (PAW)
pseudopotentials~\cite{{PAW1},{PAW2}} and the
Perdew-Burke-Ernzerhof (PBE)~\cite{PBE} exchange-correlation
functional. The Brillouin zone of the conventional unit cell of
the 2D structure has been sampled by a $5{\times}3{\times}1$
$k$-point grid~\cite{Monkhorst-Pack76}. We used $500$~eV as the
electronic kinetic energy cutoff for the plane-wave basis and a
total energy difference between subsequent self-consistency
iterations below $10^{-4}$~eV as the criterion for reaching
self-consistency. All geometries have been optimized using the
conjugate-gradient method~\cite{CGmethod}, until none of the
residual Hellmann-Feynman forces exceeded $10^{-2}$~eV/{\AA}.
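For concreteness, the settings above map onto a VASP input along the following lines. This is an illustrative sketch assembled from the parameters quoted in the text (cutoff, convergence thresholds, conjugate-gradient relaxation, PBE functional), not the authors' actual input files.

```
# INCAR (sketch)
GGA    = PE      # Perdew-Burke-Ernzerhof exchange-correlation
ENCUT  = 500     # plane-wave kinetic-energy cutoff (eV)
EDIFF  = 1E-4    # electronic self-consistency threshold (eV)
EDIFFG = -1E-2   # relax until residual forces < 0.01 eV/Angstrom
IBRION = 2       # conjugate-gradient geometry optimization

# KPOINTS: 5 x 3 x 1 Monkhorst-Pack grid for the rectangular cell;
# POSCAR:  periodic slab with ~20 Angstrom of vacuum between layers.
```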
\section{Results}
\subsection{Constructing a 2D mechanical metamaterial}
Figure~\ref{fig1} depicts the macro-scale 2D mechanical
metamaterial we consider, namely an
infinite assembly of rigid isosceles triangles hinged in the
corners and described using periodic boundary conditions.
There are two identical triangles with different orientation in
the primitive unit cell of the lattice, as seen in
Fig.~\ref{fig1}(a). The conventional unit cell, shown in
Fig.~\ref{fig1}(b), is rectangular and twice the size of the
primitive unit cell. The deformation behavior of such constrained
lattices of polygons including rectangles~\cite{grima2005auxetic}
and connected bars,
some of which display a Poisson ratio that changes sign,
value, and even diverges,
has been described and classified
earlier~\cite{{GuestHutchinson03},{milton2013complete}}. In our
system, structural changes are regulated by the only independent
variable, the angle $\beta$. The full range of $\beta$ is
$0~{\leq}~{\beta}~{\leq}~{\alpha}+180^\circ$ for
${\alpha}{\le}60^\circ$ and
$0~{\leq}~{\beta}~{\leq}~270^\circ-{\alpha}/2$ for
${\alpha}{\ge}60^\circ$. Since there is no energy involved when
changing $\beta$, the structure maintains its geometry
after deformation. Snapshots of the triangle assembly and the
conventional unit cell at different values of $\beta$, shown in
Fig.~\ref{fig1}(b), illustrate the unusual flexibility of the
system. The movie of the continuous shape change is provided in
Video~\ref{Video1} in the Appendix.
For a system of triangles aligned with the Cartesian coordinate
system as shown in Fig.~\ref{fig1}(a), we can determine the strain
in the $y$-direction in response to strain applied along the
$x$-direction. The negative ratio of these strains is the Poisson
ratio $\nu_{xy}$, which is given by
\begin{eqnarray}
\nu_{xy}\!&=&\! -\frac{dy/y}{dx/x} \nonumber \\%
&=& \frac{
\cos(\frac{\alpha}{2})\sin(\frac{\beta}{2})
-3\sin(\frac{\alpha}{2})\cos(\frac{\beta}{2})
}{
\cos(\frac{\alpha}{2})\cos(\frac{\beta}{2}) +
3\sin(\frac{\alpha}{2})\sin(\frac{\beta}{2})
}
\!\tan\!\left(\!\frac{\alpha\!+\!\beta}{2}\right)\!.
\label{Eq1}
\end{eqnarray}
Dependence of $\nu_{xy}$ on $\alpha$ and $\beta$ is presented as a
contour plot in Fig.~\ref{fig1}(c). Several aspects of this result
are noteworthy when inspecting the behavior of $\nu_{xy}({\beta})$
for a constant value of the opening angle $\alpha$. With the
exception of ${\alpha}=60^\circ$ describing equilateral
triangles~\cite{{grima2000zeolites},{Sun12369}}, $\nu_{xy}$
changes magnitude and sign with changing $\beta$. Presence of the
tangent function in Eq.~(\ref{Eq1}) causes $\nu_{xy}$ to diverge
to $\pm\infty$ for ${\beta}_{crit}(\nu_{xy})=180^\circ-\alpha$,
with ${\beta}_{crit}(\nu_{xy})=60^\circ$ for ${\alpha}=120^\circ$.
For ${\alpha}>60^\circ$, $\nu_{xy}$ changes sign twice across the
full range of $\beta$ values, as shown in Fig.~\ref{fig1}(d) for
${\alpha}=120^\circ$. The condition for the divergence of
$\nu_{yx}=1/\nu_{xy}$, describing strain in the $x$-direction in
response to strain applied in the $y$-direction, is
$\tan({\beta}_{crit}(\nu_{yx})/2)=3\tan({\alpha}/2)$. For
${\alpha}=120^\circ$, $\nu_{yx}$ will diverge at
${\beta}_{crit}(\nu_{yx})=158.2^\circ$.
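These critical angles follow directly from Eq.~(\ref{Eq1}); the short script below, a numerical illustration rather than part of the derivation, reproduces them.

```python
import math

def nu_xy(alpha_deg, beta_deg):
    """Poisson ratio of the hinged isosceles-triangle lattice, Eq. (1)."""
    a, b = math.radians(alpha_deg) / 2, math.radians(beta_deg) / 2
    num = math.cos(a) * math.sin(b) - 3 * math.sin(a) * math.cos(b)
    den = math.cos(a) * math.cos(b) + 3 * math.sin(a) * math.sin(b)
    return num / den * math.tan(a + b)

# Equilateral triangles (alpha = 60 deg) are the exception: nu_xy = -1 always.
print(round(nu_xy(60, 90), 6))  # -1.0
# For alpha = 120 deg, nu_xy diverges at beta_crit = 180 - alpha = 60 deg,
# with opposite signs on the two sides of the divergence.
print(nu_xy(120, 59.9) < 0 < nu_xy(120, 60.1))  # True
# nu_yx = 1/nu_xy diverges where tan(beta/2) = 3 tan(alpha/2): 158.2 deg here.
print(round(2 * math.degrees(math.atan(3 * math.tan(math.radians(60)))), 1))  # 158.2
```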
Perhaps the most unexpected aspect of our result is the
`shape memory' effect displayed by both $\nu_{xy}$ and $\nu_{yx}$
if the angle $\beta$ becomes a hidden variable in the system. To
explain what we mean, we first
inspect the $(x(\beta)/x_0, y(\beta)/y_0)$ trajectory given by
\begin{eqnarray}
\frac{x}{x_0}&=&
2 \left[ \tan\left(\frac{\alpha}{2}\right)
\cos\left(\frac{\beta}{2}\right) +
\sin\left(\frac{\beta}{2}\right) \right]\,,
\label{Eq2} \\
\frac{y}{y_0}&=&
3 \sin\left(\frac{\beta}{2}\right) +
\cot\left(\frac{\alpha}{2}\right)
\cos\left(\frac{\beta}{2}\right)\,.
\label{Eq3}
\end{eqnarray}
The $(x(\beta)/x_0, y(\beta)/y_0)$ trajectory, describing the
changing shape of the unit cell, is shown for ${\alpha}=120^\circ$
in Fig.~\ref{fig1}(e), and for other values of $\alpha$ in
Fig.~\ref{fig4} in the Appendix section.
The sign of the
slope of the trajectory, opposite to the sign of $\nu_{xy}$ and
$\nu_{yx}$, changes twice as the structure unfolds with increasing
$\beta$. Regions of positive and negative $\nu_{xy}$ and
$\nu_{yx}$, delimited by the above-mentioned critical values
${\beta}_{crit}(\nu_{xy})$ for $\nu_{xy}$ and
${\beta}_{crit}(\nu_{yx})$ for $\nu_{yx}$, are distinguished
graphically in Fig.~\ref{fig1}(e). For any $x$ in the range
$3.46<x/x_0<4.00$, there are two different values of $y$
associated with different values of $\beta$ and different signs of
$\nu_{xy}$. Similarly, for any $y$ in the range $2.75<y/y_0<3.06$,
there are two different solutions for $x$ associated with
different values of $\beta$ and different signs of $\nu_{yx}$.
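A short numerical illustration of this double-valuedness for ${\alpha}=120^\circ$, using Eqs.~(\ref{Eq2}) and (\ref{Eq3}):

```python
import math

def cell(alpha_deg, beta_deg):
    """Scaled unit-cell width x/x0 and height y/y0, Eqs. (2)-(3)."""
    a, b = math.radians(alpha_deg) / 2, math.radians(beta_deg) / 2
    x = 2 * (math.tan(a) * math.cos(b) + math.sin(b))
    y = 3 * math.sin(b) + math.cos(b) / math.tan(a)
    return x, y

print(round(cell(120, 60)[0], 6))  # 4.0 : maximum width, at beta_crit = 60 deg
# beta = 40 deg and beta = 80 deg give the same width but different heights,
# so the height (and the sign of nu) depends on the hidden angle beta.
(x1, y1), (x2, y2) = cell(120, 40), cell(120, 80)
print(abs(x1 - x2) < 1e-9, round(y2 - y1, 2))  # True 0.8
```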
Let us now consider a macroscopic piece of `material' consisting
of hinged triangles, which are so small that their mutual
orientation cannot be made out.
With no information about the deformation history, the
material may exhibit either a positive or a negative Poisson
ratio. The {\em only} way to change the material so that it would
exhibit a definite positive or negative sign of the Poisson ratio
is to subject it to a sequence of deformations. Assume that this
material is first stretched to its maximum along a given direction
such as $x$. Subsequent stretching along a direction normal to the
first will result in a positive, subsequent compression in a
negative Poisson ratio. We may say that the system retains a
memory of previous deformations.
What happens microscopically can be clearly followed in
Fig.~\ref{fig1}(e). Even though the value of $\beta$ is hidden, we
know that it becomes $60^\circ$ for maximum stretch along $x$ and
$158.2^\circ$ for maximum stretch along $y$. Subsequent
deformation normal to the first direction then dictates the sign
of $\nu$.
This behavior derives from the nonlinearity in the system and, in
some aspect, parallels the behavior of shape memory alloys.
\subsection{Porous graphene as a 2D mechanical metamaterial}
Whereas macroscopic triangular assemblies with various values of
$\alpha$ will
find their use in particular applications, we turn our interest to
2D nanostructures that can be formed by coordination chemistry and
macromolecular assembly. Microstructures including colloidal
Kagom{\'e}
lattices~\cite{{chen2011directed},{Hiroshi16},{Hiroshi17}} and
graphitic nanostructures~\cite{{treier2011surface},{Moreno18}}
including polyphenylene~\cite{Porgra09}, sometimes dubbed
nanoporous graphene, have been synthesized, but do not display a
negative Poisson ratio. In the following, we focus on
polyphenanthrene, a 2D structure of phenanthrene molecules shown
in Fig.~\ref{fig2}(a). There is a strong similarity between this
molecule and ${\alpha}=120^\circ$ triangles depicted in
Fig.~\ref{fig1}. In particular, 2D assemblies of structures in
Figs.~\ref{fig1}(a) and \ref{fig2}(a) display strong similarities
in their Poisson ratio behavior discussed below.
The calculated equilibrium structure of 2D porous graphene formed
of polymerized phenanthrene molecules with the optimum angle
${\beta}=70^\circ$, shown in Fig.~\ref{fig2}(b), illustrates the
relationship between this structure and the ${\alpha}=120^\circ$
triangle assembly. The unusual flexibility of polyphenanthrene is
owed to the connection of phenanthrene molecules by strong C-C
$\sigma$ bonds, which are also responsible for the strength and
flexibility of polyethylene.
Our DFT calculations indicate only small structural distortions of
the phenanthrene molecules, which nevertheless break their initial
mirror symmetry.
In Fig.~\ref{fig2}(c) we compare changes in the scaled width
$x/x_0$ of the conventional unit cell as a function of the closing
angle $\beta$ for the assembly of triangles and for porous
graphene. The corresponding changes in the scaled height $y/y_0$
are shown in Fig.~\ref{fig2}(d) in the same range of $\beta$
values. Interestingly, $x({\beta})/x_0$ reaches its maximum at
${\beta}_{crit}(\nu_{xy})$ for both systems, whereas
$y({\beta})/y_0$ increases monotonically with increasing $\beta$.
According to the definition of the Poisson ratio
$\nu_{xy}=-(dy/y)/(dx/x)$, $\nu_{xy}$ diverges at
${\beta}_{crit}(\nu_{xy})=60^\circ$ in the triangular assembly, as
seen in Fig.~\ref{fig1}(d). Similarly, $\nu_{xy}$ diverges at
${\beta}_{crit}(\nu_{xy})=70^\circ$ in porous graphene, as shown
in Fig.~\ref{fig2}(e). The slope of $x({\beta})/x_0$ changes sign
at ${\beta}_{crit}$, resulting in $\nu_{xy}<0$ for
${\beta}<{\beta}_{crit}(\nu_{xy})$ and $\nu_{xy}>0$ for
${\beta}>{\beta}_{crit}(\nu_{xy})$ in both systems.
\begin{figure}[t]
\includegraphics[width=0.7\columnwidth]{fig3}
\caption{ Electronic structure of porous graphene, a
phenanthrene-based 2D mechanical metamaterial, based on
DFT-PBE calculations.
(a) Band structure of the equilibrium structure with
${\beta}=70^\circ$ obtained using the rectangular C$_{56}$H$_{28}$
unit cell. High-symmetry points in the rectangular Brillouin zone
are shown in the inset.
(b) Fundamental band gap $E_g$ as a function of the angle $\beta$.
\label{fig3}}
\end{figure}
The energy investment ${\Delta}E$ associated with deforming the
polyphenanthrene structure is shown in Fig.~\ref{fig2}(f). Our
results were obtained by optimizing the structure for selected
values of the angle $\beta$ that defines the relative orientation
of the two inequivalent phenanthrene molecules in the unit cell.
With ${\beta}{\approx}70^\circ$ representing the structural
optimum, we found that changing $\beta$ by ${\pm}10^\circ$
required ${\Delta}E<3$~eV per unit cell, corresponding to an
energy investment of only ${\approx}50$~meV per C atom, about 1\%
of the bond breaking energy. Thus, the polyphenanthrene structure
is rather soft
and represents a valid counterpart to the isoenergetic model
system of Fig.~\ref{fig1}.
Phenanthrene is a tricyclic organic molecule with a $3.36$~eV wide
DFT-PBE gap between the lowest unoccupied molecular orbital (LUMO)
and the highest occupied molecular orbital (HOMO). When
polymerized to the 2D polyphenanthrene structure depicted in
Fig.~\ref{fig2}(b), the HOMO broadens to the valence and the LUMO
to the conduction band. This is seen in Fig.~\ref{fig3}(a), which
depicts the band structure and the density of states of the
optimum geometry of polyphenanthrene with ${\beta}=70^\circ$, with
the Brillouin zone shown in the inset. Our DFT-PBE results
indicate that the fundamental band gap $E_g$ is reduced from the
molecular value to 1.75~eV in the equilibrium structure of the
layer, but still does not vanish for $55^\circ<{\beta}<80^\circ$.
The gap is near-direct due to the flatness of bands, and decreases
from $1.9$~eV at ${\beta}=55^\circ$ to $1.7$~eV at
${\beta}=80^\circ$. We should remember that Kohn-Sham eigenvalues
in all DFT calculations including ours do not correctly represent
the electronic structure and typically underestimate the band
gaps.
The decrease of $E_g$ upon polymerization and its dependence on
$\beta$ are caused by the presence of covalent C-C bonds
that connect individual phenanthrene molecules elastically and
electronically. Unfolding of the polyphenanthrene structure with
increasing angle $\beta$ rotates individual phenanthrene molecules
and modifies the bonding at the connection between adjacent
monomers, causing the electronic structure to depend on
$\beta$. The range of deformations in polyphenanthrene is smaller
than in triangular assemblies due to the steric hindrance caused
by hydrogen termination. In absence of planar confinement,
phenanthrene molecules rotate out-of-plane at large tensile strain
values not considered here.
\section{Discussion}
Elastic response of materials is commonly described by elastic
constants constituting the elastic matrix, which describe
stress-strain relationships and thus contain energy in their
dimension. The Poisson ratio is fundamentally different. It is a
dimensionless quantity that describes deformations induced by
strain, independent of the energy cost.
According to its definition in Eq.~(\ref{Eq1}), it depends on the
choice of the coordinate system. The trace of the strain matrix,
however, which describes the fractional change of the area induced
by the mechanism, is independent of the choice of coordinates and
could couple naturally to external fields such as pressure.
We believe that changes in pore size caused by the deformation of
the 2D unit cell may find their use in
tunable sieving in a
layered system~\cite{{Bernhard18},{filter2018}},
including application in desalination membranes.
2D mechanical metamaterials may also find unusual applications in
micro-manipulation. In particular, a 2D layer in partial contact
with an in-plane junction of 2D metamaterials with different
values of $\nu$, including ${\nu}>0$ and ${\nu}<0$, may experience
a torque normal to the plane when in-plane strain is applied at
the junction of the 2D systems. Also the observation of
strain-related electronic structure changes in polyphenanthrene
opens new possibilities. Since polyphenanthrene and a wide range
of porous graphene structures can be viewed as a system of
covalently connected quantum dots, in-layer strain may be used to
tune the coupling between such quantum dots and thus change the
electronic structure of the system.
\section{Summary and Conclusions}
In summary, we have designed 2D mechanical metamaterials that may
be deformed substantially at little or no energy cost. Unlike
origami- and kirigami-based mechanical metamaterials that derive
their functionality from folding a 2D material into the third
dimension, the structures we design are confined to a plane during
deformation. In reality, such confinement may be achieved by a
strong attraction to a planar substrate or in a sandwich geometry.
On the macro-scale, the structures we describe are assemblies of
rigid isosceles triangles hinged in their corners. Their nanoscale
counterpart are molecules such as phenanthrene that may be
polymerized using coordination chemistry or macromolecular
assembly to form specific geometries with a porous graphene
structure. In these and in a large class of related structures,
consisting of connected and near-rigid isosceles triangles
confined to a plane, the Poisson ratio $\nu$ diverges for
particular strain values. $\nu$ also changes its magnitude and
sign, depending on the applied uniaxial strain, and
displays a
shape
memory effect with respect to the deformation history.
\section{Appendix}
\setcounter{equation}{0}
\renewcommand{\theequation}{A\arabic{equation}}
\subsection{Deformation behavior in 2D isosceles triangle assemblies}
\begin{video}[h]
\includegraphics[width=0.4\columnwidth]{Video1-img}
\setfloatlink{Video1.mp4}
\caption{Unfolding of a 2D assembly of ${\alpha}=120^\circ$
isosceles triangles with changing angle $\beta$.
\label{Video1}}
\end{video}
As discussed earlier,
for a given value $y$ of
the unit cell height in a 2D assembly of isosceles triangles with
${\alpha}>60^\circ$, we can find two different values $x$ of the
unit cell width, with the two structures displaying opposite signs
of $\nu$. Similarly, we can find two different values $y$ for a
given value of $x$, with the two structures displaying opposite
signs of $\nu$. This unusual behavior results from the presence of
a hidden variable, the relative triangle orientation $\beta$, and
causes $\nu$ to depend not only on the overall sample shape, but
also the history of the system. The unfolding of an assembly of
triangles with ${\alpha}=120^\circ$ and its history dependence have
been characterized by the $x-y$ trajectory in Fig.~\ref{fig1}(e)
in the range of accessible $\beta$ angles. The unfolding process
of the ${\alpha}=120^\circ$ triangle assembly is depicted in
Video~\ref{Video1}.
\begin{figure}[b]
\includegraphics[width=0.6\columnwidth]{fig4}
\caption{ Changes in the scaled width $x/x_0$ and height $y/y_0$
of the conventional unit cell for different values of the opening
angle $\alpha$ as a function of the closing angle $\beta$. The
relevant quantities are defined in Fig.~1.
\label{fig4}}
\end{figure}
$x-y$ trajectories for several values of $\alpha$ are shown in
Fig.~\ref{fig4}. The particular shape of these $x-y$ trajectories
indicates that also for opening angles other than
${\alpha}=120^\circ$ discussed above, the value and sign of $\nu$
may depend on sample history. Only in the specific case of
equilateral triangles with ${\alpha}=60^\circ$, discussed in the
following, is the $x-y$ trajectory in Fig.~\ref{fig4} linear and
$\nu$ history independent.
\subsection{Deformations in a 2D assembly of rigid equilateral triangles}
We mentioned above that the behavior of ${\alpha}=60^\circ$
triangle systems, depicted in Fig.~\ref{fig5}, is unique among
the 2D assemblies of corner-sharing isosceles triangles. As
discussed in the main manuscript and above, the Poisson ratio
changes drastically for triangle systems with opening angle
$\alpha$ other than $60^\circ$. While hinged equilateral triangles
gradually unfold when $\beta$ increases, as seen in
Video~\ref{Video2}, the width $x$ of the unit cell remains
proportional to its height $y$, resulting in a constant,
$\beta$-independent Poisson ratio ${\nu}_{xy}=-1$, as noted
earlier~\cite{grima2000zeolites,Sun12369}. For the particular
angle ${\beta}=120^\circ$, the structure of the assembly resembles
the Kagom\'{e} lattice.
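The statement that proportional scaling of width and height implies ${\nu}_{xy}=-1$ can be checked numerically from any sampled $x$--$y$ trajectory. A minimal sketch; the unfolding law $f(\beta)$ below is an arbitrary placeholder, not the actual triangle geometry:

```python
import numpy as np

def poisson_ratio(x, y):
    """nu_xy = -(dy/y)/(dx/x) along a sampled x-y trajectory."""
    eps_x = np.diff(np.log(x))   # incremental strain along x
    eps_y = np.diff(np.log(y))   # incremental strain along y
    return -eps_y / eps_x

# Hypothetical unfolding law: for alpha = 60 deg the width x stays
# proportional to the height y, so any monotone f(beta) with x = c*y
# must give nu = -1 everywhere along the trajectory.
beta = np.linspace(0.2, 2.0, 50)
f = np.sin(beta / 2)             # placeholder trajectory shape
x, y = 1.7 * f, 1.0 * f          # proportional width and height

nu = poisson_ratio(x, y)
print(nu.round(6))               # all entries equal -1
```

Any strictly monotone $f(\beta)$ gives the same result, since only the proportionality of $x$ and $y$ enters.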
\begin{figure}[h]
\includegraphics[width=1.0\columnwidth]{fig5}
\caption{ Deformations in a 2D assembly of rigid equilateral
triangles.
(a) Adjacent triangles with mutual orientation defined by the
closing angle $\beta$, hinged at the corners, forming the
primitive unit cell. The triangle height $x_0$ and the length
$y_0$ of its base define the horizontal and vertical length
scales.
(b) Snapshots of the triangle assembly for different values of
$\beta$. The conventional unit cells of width $x$ and height $y$
are indicated.
\label{fig5} }
\end{figure}
\subsection{
Deformations of 2D polyphenanthrene}
Changes in the 2D polyphenanthrene structure as a function of
$\beta$ are shown in Video~\ref{Video3}. The structural changes
resemble those shown in Video~\ref{Video1} for the assembly of
${\alpha}=120^\circ$ rigid triangles.
\begin{video}[h]
\includegraphics[width=0.4\columnwidth]{Video2-img}
\setfloatlink{Video2.mp4}
\caption{Unfolding of a 2D assembly of ${\alpha}=60^\circ$
equilateral triangles with changing angle $\beta$.
\label{Video2}}
\end{video}
\begin{video}[h]
\includegraphics[width=0.7\columnwidth]{Video3-img}
\setfloatlink{Video3.mp4}
\caption{Unfolding of a 2D polyphenanthrene structure dubbed
`porous graphene' with changing angle $\beta$.
\label{Video3}}
\end{video}
\begin{acknowledgments}
We thank Jie Ren for useful discussions. D.L. and D.T. acknowledge
financial support by the NSF/AFOSR EFRI 2-DARE grant number
EFMA-1433459. Z.G. gratefully acknowledges the China Scholarship
Council (CSC) for financial support (China Scholarship number
201706260027). Computational resources have been provided by the
Michigan State University High Performance Computing Center.
\end{acknowledgments}
\section{Introduction and motivation}
Conventional iterative Krylov subspace solvers for the Dirac equation share a
common behavior when going to small quark masses: Their iteration number and
time to solution (wall-clock time) increase drastically, which basically
renders them unusable. Therefore a lot of effort has been put into developing
efficient preconditioning algorithms that aim at tackling this problem, such as
domain decomposition \cite{Luscher:2003qa}, inexact deflation
\cite{Luscher:2007se}, and multigrid approaches
\cite{Babich:2010qb,Frommer:2013fsa}. While these methods significantly reduce
the iteration number, the latter two introduce an additional overhead compared
to standard solvers since they require an initial setup phase before one can
start solving the Dirac equation. In HMC the setup cost can even be the dominant
contribution to the total wall-clock time spent in the solver since only a few
solves can be done before the setup has to be updated. Thus an optimization of
the setup code potentially has a large impact on the overall HMC performance.
Based on the attractive theoretical properties and the performance of the \WMG\
algorithm, the Regensburg group (RQCD) recently decided to port the
implementation of this algorithm by the Wuppertal group \cite{Frommer:2013fsa},
which is C-MPI code aimed at standard CPUs, to SIMD architectures, with a
special focus on the Intel Xeon Phi architecture (KNC) used in
\qp~\cite{Arts:2015jia}. This involved threading the code using OpenMP,
optimizing it for the wide SIMD registers of the KNC, and reducing
memory-bandwidth requirements by enabling the use of half precision on the
coarse grid. For a detailed description of this effort see
\cite{Heybrock:lat15}.
Even with the improvements achieved in \cite{Heybrock:lat15}, there is still
optimization potential in the setup of \WMG, as it remains expensive. In this
contribution we document our work on an improved implementation that modifies
the computation order in the setup phase to process multiple right-hand sides
simultaneously.
\section{Description of the algorithm}
The \WMG\ algorithm uses FGMRES as the outer Krylov subspace solver for the
Dirac equation, preconditioned by a multigrid method that consists of two parts:
a smoother working on the fine grid that reduces the error contribution of
eigenvectors with large eigenvalues (high modes), and a coarse-grid correction
(CGC) that reduces the error contribution of low modes.\footnote{As in
\cite{Heybrock:lat15} we restrict ourselves to two grid levels.} To this end
projection operators between the grids and the Dirac operator on the coarse grid
need to be defined in an initial setup phase.
The setup procedure of \WMG\ is based on a set of $\Ntv$ random test vectors
(each of dimension $12V$, where $V$ is the lattice volume) that are used to
construct restriction $R$, prolongation $P = R^{\dagger}$, and coarse-grid
operator $D_c$. The setup is split into two parts: an initial phase and an
iterative refinement phase. In the initial phase, a domain-decomposition (DD)
smoother based on the Schwarz alternating procedure is run on each of the test
vectors for a few iterations with starting guess $0$. Then the initial operators
are constructed from the updated test vectors. This completes the initial
phase. The operators are then updated in the iterative refinement phase
(Alg.~\ref{alg:mgsetup}) that makes use of the full V-cycle of the multigrid
algorithm. For a more detailed description see
\cite{Frommer:2013fsa,Heybrock:lat15}.
\begin{algorithm}[ht]
\caption{Iterative part of MG setup (standard implementation)}
\label{alg:mgsetup}
\SetAlgorithmStyle
\For {$i=1$ \KwTo $\Nsetup$}
{
// apply V-cycle to test vectors\;
\For {$j=1$ \KwTo $\Ntv$}
{
// \emph{coarse-grid correction}\;
restrict test vector $v_j$ to coarse grid: $v_{c,j} = R\,v_j$\;
solve coarse system to low accuracy: $u_{c,j} \approx D_c^{-1}\,v_{c,j}$\;
prolongate result of coarse-grid solve to fine grid: $u_j = P\,u_{c,j}$\;
// \emph{fine grid}\;
apply smoother to test vector $v_j$, with result from CGC as starting
guess\;
replace test vector $v_j$ by result of smoother\;
}
setup of restriction $R$ and coarse-grid operator $D_c$
}
\end{algorithm}
\section{Basic idea}
After the optimizations described in \cite{Heybrock:lat15}, we identified the
application of the V-cycle to the test vectors in the iterative refinement phase
to be the dominant contribution to the setup time. Therefore our work focuses on
this part of the code exclusively.
In the implementations of \cite{Frommer:2013fsa,Heybrock:lat15} the V-cycle is
applied to the test vectors in a loop sequentially, i.e., to a single right-hand
side (SRHS) at a time. The basic idea of our improvements is simple. We modify
the computation order of the code by blocking the loop over the test vectors
(Alg.~\ref{alg:mgsetup2}) with a block length of $\Nb$ and apply the V-cycle to
multiple right-hand sides (MRHS), i.e., all vectors inside such a block,
simultaneously. Choosing $\Nb = \Nsimd$ and moving the loop inside a block to
the lowest level functions of the code enables us to use this loop for SIMD
vectorization.
\begin{algorithm}[h]
\caption{Iterative part of MG setup (improved implementation)\protect\footnotemark}
\label{alg:mgsetup2}
\SetAlgorithmStyle
\For {$i=1$ \KwTo $\Nsetup$}
{
\For {$j=1$ \KwTo $\Ntv/\Nb$}
{
$k = 1 + (j-1)\cdot\Nb,\ \ell = j\cdot\Nb$\;
apply coarse-grid correction (CGC) to test vectors
$v_k$,\,\dots,\,$v_\ell$\;
apply smoother to test vectors $v_k$,\,\dots,\,$v_\ell$, with result from
CGC as starting guess\;
replace test vectors $v_k$,\,\dots,\,$v_\ell$ by result of smoother\;
}
setup of restriction $R$ and coarse-grid operator $D_c$\;
}
\end{algorithm}
\footnotetext{%
In the description of the algorithm we assume that $\Ntv$ is an integer
multiple of $\Nb$. If it is not, the algorithm is modified in a
straightforward way, but part of the SIMD unit is then wasted in the last
iteration, see also \cite{Heybrock:lat15}.
}
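The blocking in Alg.~\ref{alg:mgsetup2} is, at its core, a simple loop transformation. A schematic Python sketch; \texttt{cgc} and \texttt{smoother} are placeholders for the actual MRHS coarse-grid correction and DD smoother, not part of the production code:

```python
# Schematic of the blocked refinement loop of the MG setup.
# cgc() and smoother() stand in for the coarse-grid correction and
# the DD smoother; both act on a whole block of n_b vectors at once.
def refine_setup(test_vectors, cgc, smoother, n_setup, n_b):
    n_tv = len(test_vectors)
    assert n_tv % n_b == 0, "otherwise pad the last block"
    for _ in range(n_setup):
        for j in range(0, n_tv, n_b):
            block = test_vectors[j:j + n_b]
            guess = cgc(block)                    # one MRHS coarse solve
            test_vectors[j:j + n_b] = smoother(block, guess)
        # ... rebuild restriction R and coarse operator D_c here ...
    return test_vectors
```

With $\Nb = \Nsimd = 16$, the inner block maps directly onto the SIMD lanes.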
\section{Communication bandwidth}
The effective network bandwidth for off-chip communication via MPI depends on
the message size (cf.~Fig.~\ref{plot:comm-bw}). For small messages latency
effects are dominant, resulting in low bandwidth. For larger messages the
effective bandwidth increases since latency effects become negligible.
\begin{figure}[ht]
\centering
\input{gfx/comm_bw.pgf}
\caption{Network bandwidth vs. message size between two KNCs (bi-directional)
in \qp\ via FDR InfiniBand. Typical message sizes in the \WMG\ setup are shown
for SRHS (green) and MRHS (blue) setup.}
\label{plot:comm-bw}
\end{figure}
The message size ($S_\mu$ in direction $\mu$) on the coarse grid of the \WMG\
setup depends on the local volume of one MPI rank and the degrees of freedom per
site ($2\Ntv$):
\begin{align}
S_\mu &= \prod_{\nu = 0, \nu \neq \mu}^{3} \frac{\text{(local
lattice)}_\nu}{\text{(domain size)}_\nu} \cdot \frac{2 \Ntv }{2} \cdot
\unit[8]{Byte}\,,
\label{eq:sizes}
\end{align}
where the factor $2$ in the denominator accounts for even-odd preconditioning
and $\unit[8]{Byte}$ is the size of a complex number in single precision. From
Eq.~\eqref{eq:sizes} we obtain message sizes of order 1 KiB for the default
setup, which is just the point where the effective bandwidth starts to increase
significantly. By processing multiple right-hand sides simultaneously we are
able to perform all halo exchanges and global sums, respectively, in the same
call to MPI. Thus the message size increases by a factor of $\Nb = \Nsimd = 16 =
2^4$. Fig.~\ref{plot:comm-bw} suggests an estimated increase in effective
bandwidth by a factor of $3\sim4$, which should translate directly into a
reduction of the wall-clock time spent in communication.
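Eq.~\eqref{eq:sizes} and the factor-16 increase are easy to make concrete. In the sketch below the local lattice is a hypothetical choice that reproduces the quoted $\sim$1 KiB message size, not the actual run geometry:

```python
# Message size per direction mu on the coarse grid, following Eq. (1):
# surface domains times 2*N_tv/2 complex numbers of 8 B each (single
# precision), optionally bundled over n_rhs right-hand sides.
def message_size(local_lattice, domain, n_tv, mu, n_rhs=1):
    sites = 1
    for nu in range(4):
        if nu != mu:
            sites *= local_lattice[nu] // domain[nu]
    return sites * n_tv * 8 * n_rhs   # bytes

local, dom = (8, 8, 8, 8), (4, 4, 4, 4)   # hypothetical local volume
srhs = message_size(local, dom, n_tv=16, mu=0)
mrhs = message_size(local, dom, n_tv=16, mu=0, n_rhs=16)
print(srhs, mrhs)   # 1024 16384, i.e. 1 KiB vs 16 KiB
```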
\section{Mapping to SIMD registers}
\label{section:mapping}
The basic layout for data structures based on complex numbers in the original
Wuppertal implementation does not take vectorization into account and uses the
\emph{complex} data type in C. This way, a vector-like object $v$ of length
$\ell$ is stored such that the real and imaginary parts alternate in memory:
\begin{center}
\begin{tabularx}{.6\textwidth}{|c *{7}{|Y}|}
\hline &&&&&&\\[-4.5mm]
$\re v_1$ & $\im v_1$ &
$\re v_2$ & $\im v_2$ &
$\cdots$ & $\re v_\ell$ & $\im v_\ell$ \\[.7mm]
\hline
\end{tabularx}
\end{center}
This is known as Array-of-Structs (AoS) layout. In the code parts relevant for
us, the implementation in \cite{Heybrock:lat15} works with this layout by
de-interleaving two registers using swizzle intrinsics before and after doing a
SIMD computation. This introduces additional overhead that could be avoided with
a data layout more suitable to vectorization.
Our implementation uses another index for vectorization, i.e., the index of the
different right-hand sides inside a block. For each vector index $i$ we store
$\Nsimd$ $(=16)$ real parts of the right-hand sides followed by the
corresponding imaginary parts:
\begin{center}
\begin{tabularx}{.8\textwidth}{|c *{8}{|Y}|}
\hline &&&&&&&\\[-3.5mm]
$\re v^{(1)}_i$ & $\re v^{(2)}_i$ &
$\cdots$ & $\re v^{(16)}_i$ &
$\im v^{(1)}_i$ & $\im v^{(2)}_i$ &
$\cdots$ & $\im v^{(16)}_i$ \\[1.3mm]
\hline
\end{tabularx}
\end{center}
This is known as Array-of-Structs-of-Short-Vectors (AoSoSV) layout. While the
conversion required non-trivial programming effort, this layout yields a more
natural mapping to SIMD. The de-interleaving overhead is gone, and the
individual entries in the registers contain data independent of one another,
which eliminates the need for reduction operations over the elements in the
register.
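The difference between the two layouts can be illustrated with NumPy; this is only a memory-ordering sketch with toy dimensions, not the production data structures:

```python
import numpy as np

n_simd, length = 16, 4
rng = np.random.default_rng(0)
v = rng.standard_normal((n_simd, length)) \
    + 1j * rng.standard_normal((n_simd, length))   # 16 rhs of length 4

# AoS: within each right-hand side, Re and Im alternate in memory.
aos = np.empty((n_simd, 2 * length))
aos[:, 0::2], aos[:, 1::2] = v.real, v.imag

# AoSoSV: for each vector index i, the 16 real parts are followed by
# the 16 corresponding imaginary parts.
aososv = np.empty((length, 2, n_simd))
aososv[:, 0, :], aososv[:, 1, :] = v.real.T, v.imag.T
flat = aososv.ravel()

# One SIMD register worth of data now holds the same component of
# 16 independent right-hand sides -- no de-interleaving needed.
assert np.allclose(flat[:n_simd], v.real[:, 0])
```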
With our modifications to the data layout, matrix-vector multiplications become
matrix-matrix multiplications, which enables us to use a different vectorization
scheme. In contrast to \cite{Heybrock:lat15} we vectorize the restriction of a
vector from the fine to the coarse grid (as an example) by broadcasting the
elements of the projection operator $R$ as shown in
Alg.~\ref{alg:mrhs_restriction} (see \cite{Heybrock:lat15} for the definition of
$\Nblock$, $V_\text{block}$, and $y_c$). The same vectorization scheme is used
for the application of $D_c$ in the coarse-grid solve. BLAS-like linear
algebra (e.g., vector adds) is vectorized trivially with this data layout.
\medskip
\begin{algorithm}[H]
\caption{SIMD implementation of restriction $R y = y_c$ with $\Nsimd$
right-hand sides}
\label{alg:mrhs_restriction}
\SetAlgorithmStyle
\For {$i=1$ \KwTo $\Nblock$}
{
\ForEach {$h=\ell,r$}
{
\For {$n=1$ \KwTo $6V_{\text{block}}$}
{
load real and imag.\ parts of $\Nsimd$ rhs for entry $y_{i,n}^h$ into
SIMD vectors\;
\For {$j=1$ \KwTo $\Ntv$}
{
load real and imag.\ parts of \smash{$(y_c)^h_{i,j}$} for $\Nsimd$
rhs\;
\mbox{broadcast real and imag.\ part of entry $j$ in column $n$ of
$R_i^h$ into SIMD vectors\hspace*{-10mm}}\;
increase $(y_c)^h_{i,j}$ by complex fused multiply-add and
write to memory\;
}
}
}
}
\end{algorithm}
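In linear-algebra terms, Alg.~\ref{alg:mrhs_restriction} replaces $\Nsimd$ independent matrix-vector products by one matrix-matrix product per aggregate. A NumPy sketch with toy dimensions (the actual blocks carry the spin and aggregate structure of \WMG):

```python
import numpy as np

rng = np.random.default_rng(1)
fine_dim, coarse_dim, n_rhs = 96, 16, 16    # toy dimensions
R = rng.standard_normal((coarse_dim, fine_dim)) \
    + 1j * rng.standard_normal((coarse_dim, fine_dim))
Y = rng.standard_normal((fine_dim, n_rhs)) \
    + 1j * rng.standard_normal((fine_dim, n_rhs))

# SRHS: one matrix-vector product per right-hand side.
yc_srhs = np.stack([R @ Y[:, k] for k in range(n_rhs)], axis=1)

# MRHS: one matrix-matrix product; each element of R is reused for
# all n_rhs columns, which enables the cache reuse discussed in the
# following section.
yc_mrhs = R @ Y

assert np.allclose(yc_srhs, yc_mrhs)
```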
\section{Memory-bandwidth and cache-reuse considerations}
\label{section:cache}
A dense complex matrix-vector multiplication $c = A \cdot b$, where $A$, $b$,
and $c$ are of dimension $M \times K$, $K$, and $M$, respectively, requires
transferring $(2M + M \cdot K + K)\cdot\unit[8]{Byte}$ from and to memory in
single precision. The computation needs a minimum of $4\cdot M \cdot K/16$
cycles, where a complex \emph{fmadd} consists of 4 real \emph{fmadd} operations,
of which a KNC core can perform $16$ in one cycle. The ratio of these numbers
yields the memory bandwidth per core required to avoid stalls, i.e., $32 \cdot
(2/K + 1 + 1/M)$ Byte/cycle. For a typical working set of a core, $K$ and $M$
are large enough so that their contribution $2/K+1/M$ is negligible compared to
1.\footnote{$M$ needs to be multiplied by $2$ for the spin-splitting of \WMG.}
The resulting required memory bandwidth is then $32$~Byte/cycle per core, or
$2377$~GB/s on $60$ cores of a KNC with a clock speed of $\unit[1.238]{GHz}$,
which is well above the KNC's sustained memory bandwidth of $150-170$~GB/s,
measured with the STREAM benchmark.
Performing the analogous calculation for the matrix-matrix multiplication with
$\Nsimd$ right-hand sides ($A = M \times K$, $B = K \times \Nsimd$, and $C = M
\times \Nsimd$) yields $32 \cdot (2/K + 1/\Nsimd + 1/M)~\text{Byte/cycle}\sim
2~\text{Byte/cycle}=149~\text{GB/s}$. Here, an element of $A$ can stay in cache
for $\Nsimd$ right-hand sides, which results in the difference to the value
above. Thus our method is able to reduce the memory bandwidth requirements of
this code part significantly, and our estimate is now within reach of the KNC's
sustained memory bandwidth.
\section{Results}
At the time of this writing we have finished the implementation of the
coarse-grid solve and the projection operators, while the smoother
\cite{Heybrock:2014iga} still works with the default data layout. This
introduces some temporary copying overhead which will disappear as soon as we
have a MRHS implementation of the smoother.
The results below are from runs on the CLS lattice C101 ($ 48^3\times96$,
$\beta=3.4$, $m_{\pi}=\unit[220]{MeV}$, $a=\unit[0.086]{fm}$) described in
\cite{Bruno:2014jqa}. We use $\Nsimd = 16$ test vectors, a domain size of $4^4$,
and a relative coarse-grid tolerance of $0.05$. The remaining solver
parameters are tuned for minimal propagator wall-clock time with the default
(SRHS) setup. To exclude algorithmic effects and allow for a direct comparison
we use the same parameter combination also for the MRHS setup.
\nopagebreak
\begin{figure}
\centering
\input{gfx/results_setup_solve.pgf}
\caption{Summary of contributions to the wall-clock time in the \WMG\ setup on
$64$ KNCs in \qp\ (solve time for comparison). Improvements in parts
affected by our modifications are given in detail.}
\label{plot:results}
\end{figure}
With these parameters, a lattice vector on the coarse grid requires
$\unit[1]{\%}$ of the memory of a vector on the fine grid. With the MRHS setup,
we need $32$ lattice vectors on the fine grid and $16 \cdot 32 = 512$ on the
coarse grid. This is to be compared to $17$ and $32$ with the SRHS setup. Thus,
our method needs roughly a factor of $2.2$ more memory in total for the setup.
In a realistic measurement run we typically also keep around several propagators
on the fine grid (consisting of 12 vectors each), so the increase in total
memory consumption is actually considerably smaller.
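The memory overhead quoted above is a simple back-of-the-envelope estimate, with a coarse vector costing $1\%$ of a fine vector as stated:

```python
coarse_frac = 0.01                  # coarse vector ~1% of a fine vector
mrhs = 32 + 512 * coarse_frac       # fine + coarse vectors, MRHS setup
srhs = 17 + 32 * coarse_frac        # fine + coarse vectors, SRHS setup
print(mrhs / srhs)                  # ~2.1, i.e. roughly the factor 2.2
                                    # quoted above (details of the
                                    # counting shift the value slightly)
```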
In Fig.~\ref{plot:results} we show the improvements in wall-clock time we
achieve with our method. We gain a factor of $2.9$ in the projection operators
and a factor of $2.4$ in computation on the coarse grid. Our method needs fewer
calls to barriers between threads, which yields an improvement of $2.7$x in
on-chip synchronization. However, the largest gains are in halo exchanges
($4.7$x) and global sums ($10.3$x), which were the dominant contributions
previously. After our improvements, the wall-clock time is now dominated by
copying data from and to MPI buffers. This is currently done by a single thread
on a single core. In the future we will reduce the impact of these copy
operations by threading them over cores, which will allow us to exploit a larger
fraction of the KNC's sustained memory bandwidth.
In total, the time spent on the coarse grid is reduced by a factor of $2.9$,
which translates to a factor of $1.4$ for the total setup time of \WMG.
\section{Conclusions and outlook}
By combining multiple right-hand sides we were able to significantly reduce the
wall-clock time of the previously dominant contribution (i.e., coarse-grid
solve) to the setup of \WMG, see Fig.~\ref{plot:results} for details. Our
biggest improvements are in communication, where we can send fewer messages that
are larger and thus are able to reduce the impact of latency effects.
Additional improvements were made in computation and on-chip synchronization.
As mentioned above, the impact of the red block in Fig.~\ref{plot:results} (copy
from/to MPI buffers) will be reduced in the future by threading these copy
operations over cores. More importantly, Fig.~\ref{plot:results} shows that the
biggest optimization potential is now in the fine-grid part of the iterative
setup. Therefore we will complete the multiple right-hand-side V-cycle by
applying the techniques used in the present work also to the smoother
\cite{Heybrock:2014iga}, which should yield similar speedups.
\bibliographystyle{jbJHEP_notitle}
\section{Introduction}
A single optical mode confined inside an optical cavity behaves
like a simple harmonic oscillator, where all the energy levels are
equally spaced. When this cavity mode is strongly coupled to a
two-level quantum emitter such as a quantum dot (QD), the energy
structure of the coupled system becomes anharmonic. This
anharmonic (Jaynes-Cummings) ladder has been recently probed in
atomic \cite{2009.PRL.Rempe.TwoPhoton} and superconducting
\cite{Superconducting_dressed_states} cQED systems. In addition,
nonclassical correlations between photons transmitted through the
cavity result from such anharmonicity, which in turn leads to
fundamental phenomena of photon blockade and photon induced
tunneling. These effects have been recently demonstrated in atomic
systems \cite{birnbaum_nature}, as well as in the solid state
\cite{AF_natphys}. Moreover, photon blockade and photon-induced
tunneling can be used for applications beyond cQED, including
generation of single photons on demand \cite{AM_blockade_PRA} for
quantum information processing, high precision sensing and
metrology \cite{high-NOON}, as well as quantum simulation of
complex many-body systems \cite{ciuti_fermionized_photon}. In this
Letter, we explore the utility of photon-induced tunneling and
blockade for non-classical light generation and probing of higher
order dressed states in the solid state cQED system consisting of
a single quantum dot (QD) coupled to a photonic crystal cavity.
First, we provide numerical simulation data showing that photon
induced tunneling can be used to preferentially generate specific
multi-photon states. Following this, we present experimental data
demonstrating the transition from blockade to tunneling regime in
such a system and show the signature of higher order dressed
states observed in the measured photon statistics. The probing of
the ladder of dressed states by photon-correlation measurement has
previously been performed experimentally only in an atomic cavity
QED system \cite{2009.PRL.Rempe.TwoPhoton}, while in solid state
systems it has been studied theoretically
\cite{laussy_finley_dressed_state_2011} and signatures of higher
order dressed states were observed only using four wave mixing
\cite{second_order_langbein}.
\begin{figure}
\centering
\includegraphics[width=3.25in]{Figure1_setup.eps}
\caption{(color online) (a) Schematic of the coupled QD-cavity
system driven by a Gaussian pulse (coherent state
$|\alpha\rangle$). The transmitted light through the cavity is
nonclassical ($|\psi\rangle$) due to the nonlinearity provided by
the strongly coupled QD-cavity system. (b) The anharmonic
Jaynes-Cummings ladder structure.} \label{Figure1_setup}
\end{figure}
The dynamics of a coupled QD-cavity system, coherently driven by a
laser field (Fig. \ref{Figure1_setup} a), is well described by the
Jaynes-Cummings Hamiltonian of the form
\begin{equation}
\label{eqn:H} H=\Delta_a \sigma_+ \sigma_-+\Delta_ca^\dag
a+g(a^\dag\sigma_-+a\sigma_+)+\mathcal{E}(t)(a+a^\dag)
\end{equation}
\noindent which assumes the rotating wave approximation (RWA) and
a frame of reference rotating with the frequency of the laser
field $\omega_l$. Here, $\Delta_a=\omega_a-\omega_l$ and
$\Delta_c=\omega_c-\omega_l$ are respectively the detunings of the
laser from the QD resonant frequency $\omega_a$ and from the
cavity resonance frequency $\omega_c$, $g$ is the coherent
coupling strength between the QD and the cavity mode,
$\mathcal{E}(t)=\sqrt{\frac{\kappa P(t)}{2\hbar \omega_c}}$
\cite{supplementary} is the slowly varying envelope of the
coherent driving field with power $P(t)$ incident onto the cavity
with field decay rate $\kappa$, and $a$ is the annihilation
operator for the cavity mode. If the excited and ground states of
the QD are denoted by $|e\rangle$ and $|g\rangle$ then
$\sigma_-=|g\rangle\langle e|$ and $\sigma_+=|e\rangle\langle g|$.
Two main loss mechanisms in this system are the cavity field decay
rate $\kappa=\omega_{c}/2Q$ ($Q$ is the quality factor of the
resonator) and QD spontaneous emission rate $\gamma$. When the
coupling strength $g$ is greater than $\kappa\over2$ and $\gamma$,
the system is in the strong coupling regime
\cite{Yoshie04,Bloch_2005,Reithmaier_2004}. In this regime, energy
eigenstates are grouped in two-level manifolds with eigen-energies
given by $n\omega_c \pm g\sqrt{n}$ (for $\omega_a=\omega_c$),
where $n$ is the number of energy quanta in the coupled QD-cavity
system (Fig. \ref{Figure1_setup} b). The eigenstates can be
written as:
\begin{equation}
|n,\pm\rangle=\frac{|g,n\rangle \pm |e,n-1\rangle}{\sqrt{2}}
\end{equation}
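The ladder structure can be verified numerically by diagonalizing the undriven, lossless Hamiltonian in a truncated Fock space. A NumPy sketch with $\hbar=1$; the parameter values are arbitrary illustrations:

```python
import numpy as np

N, wc, g = 8, 100.0, 5.0        # Fock cutoff, cavity frequency, coupling
wa = wc                          # QD resonant with the cavity

a = np.diag(np.sqrt(np.arange(1, N)), 1)     # photon annihilation
sm = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma_- = |g><e|
I2, IN = np.eye(2), np.eye(N)

# H = wa*sigma_+ sigma_- + wc*a^dag a + g*(a^dag sigma_- + a sigma_+)
H = (wa * np.kron(IN, sm.T @ sm) + wc * np.kron(a.T @ a, I2)
     + g * (np.kron(a.T, sm) + np.kron(a, sm.T)))

E = np.sort(np.linalg.eigvalsh(H))
for n in (1, 2, 3):   # manifold n should sit at n*wc +/- g*sqrt(n)
    assert np.isclose(E, n * wc - g * np.sqrt(n)).any()
    assert np.isclose(E, n * wc + g * np.sqrt(n)).any()
```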
Signatures of the photon blockade and tunneling can be detected
through photon-statistics measurements, such as the second-order
coherence function at time delay zero $g^{(2)}(0)=\frac{\langle
a^\dag a^\dag a a\rangle}{\langle a^\dag a\rangle ^2}$.
$g^{(2)}(0)$ is less (greater) than $1$ in photon blockade
(tunneling) regime, signifying presence of single (multiple)
photons in the light coming out of the coupled QD-cavity system.
$g^{(2)}(0)$ can be experimentally measured with a Hanbury Brown and
Twiss (HBT) setup, where coincidences between the photons are
detected \cite{AF_natphys}. Another important statistical quantity
is the $n^{\mathrm{th}}$-order differential correlation function
$C^{(n)}(0)=\langle a^{\dag n}a^n\rangle-\langle a^\dag a\rangle
^n$, which provides a clearer measure of the probability to create
$n$ photons at once in the cavity \cite{2009.PRL.Rempe.TwoPhoton}.
Second order differential correlation function can also be
expressed as $C^{(2)}(0)=[g^{(2)}(0)-1]n_c^{2}$, where
$n_c=\langle a^\dag a\rangle$ is the average intra-cavity photon
number. Particularly for a weakly driven system ($n_c \ll 1$),
$C^{(2)}(0)$ becomes positive only when the probability of
two-photon state becomes significant compared to that of a
single-photon state, while a peak in $C^{(2)}(0)$ indicates
maximum probability of a two-photon state inside the cavity. As
the driving power increases, the peak in $C^{(2)}(0)$ shifts
towards detunings corresponding to maximum probability of exciting
higher manifolds, as described below and in the supplement.
Although the photon blockade and tunneling phenomena can be
observed under continuous wave (CW) excitation in a numerical
simulation \cite{supplementary}, for practical consideration it is
important to analyze the response of the cavity-QD system to a
pulsed driving field. In particular, the ability to measure the
photon statistics of the system's output during the actual
experiment is determined by the time resolution capabilities of
the single photon counters in the HBT setup, which in practice do
not allow for $g^{(2)}$ measurement of a CW-driven cavity-QD
system. A common way to overcome this limitation is to drive the
strongly coupled cavity-QD system with a train of weak, coherent
pulses of sufficiently narrow bandwidth \cite{AF_natphys}. We use
quantum trajectory method \cite{CarmichaelOpenSystems,
Quan_trajectory_dep} to analyze the pulsed driving of the coupled
QD-cavity system and find the resulting photon statistics
\cite{AM_blockade_PRA}. We also investigate the effect of pure QD
dephasing \cite{article:majumdar09} on the photon statistics and
observe that, even though the actual value of $g^{(2)}(0)$ is
affected due to dephasing, the qualitative nature of the
$g^{(2)}(0)$ spectrum remains the same \cite{supplementary}.
As the non-classical state is collected from the cavity, only the
collapse operator corresponding to the cavity decay ($a$) is
monitored. A histogram is calculated based on the photon counts in
the cavity decay channel, and probability $P(n)$ for having
exactly $n$ photons in the system is found.
\begin{figure}
\centering
\includegraphics[width=3.25in]{fig2-pulsed_ideal.eps}
\caption{(color online) Numerically calculated photon statistics
at the output of the QD-cavity system driven by Gaussian pulses
with duration $\tau_p \sim 24$ ps. The simulation parameters are
$g = 2\pi \times 40$ GHz, $\kappa =2\pi \times 4$ GHz, and
$\mathcal{E}_o=2\pi \times 9$GHz; pure QD dephasing is neglected.
(a) $P(n)$, probability of generating an $n$ photon state at the
cQED system output as a function of laser-cavity detuning
$\Delta_c$. (b) Second order auto-correlation $g^{(2)}(0)$ as a
function of $\Delta_c$. The red dashed line shows the expected
$g^{(2)}(0)$ for a coherent state. (c) Second order differential
correlation $C^{(2)}(0)$ as a function of $\Delta_c$. (d)
$C^{(2)}(0)$ as a function of the laser-cavity detuning $\Delta_c$
for different values of the peak laser field $\mathcal{E}_o/2\pi$
(in units of GHz). We observe that the peak in the $C^{(2)}(0)$
occurs at $\Delta_c=0.7g$ for weaker excitation (where the second
order manifold is excited resonantly via two photons). However,
with increasing excitation power, the peak positions shifts
towards $\Delta_c=0$, due to excitation of higher manifolds.}
\label{fig2-pulsed_ideal}
\end{figure}
The driving term $\mathcal{E}(t)$ in the Hamiltonian described in
Eqn. \ref{eqn:H} is assumed to be of the form
$\mathcal{E}(t)=\mathcal{E}_o \exp\!\left(-\frac{t^2}{2\tau_{p}^2}\right)$,
where $\mathcal{E}_o$ is the peak amplitude of the pulse. We set
$\tau_p=24.4$ ps (i.e., full width at half maximum - FWHM of $34$
ps), which satisfies the narrow-band condition and corresponds to
our experimental parameters.
Figure \ref{fig2-pulsed_ideal} shows the behavior of the system
with better than current state of the art \cite{Arakawa_3d} but
achievable experimental parameters (assuming QD dipole moment of
$30$ Debye embedded in a linear three holes defect cavity with
mode volume $\sim 0.7(\lambda/n)^3$) resulting in $g = 2\pi\times
40$ GHz and $\kappa = 2\pi\times 4$ GHz. These parameters can be
achieved by improving the alignment of the QD to the cavity field
and optimizing the photonic-crystal cavity fabrication process to
achieve a higher quality factor. The results in
Fig.\ref{fig2-pulsed_ideal}a show that such a cavity-QD system can
be employed to deterministically generate selected Fock states of
high purity at the cavity output, where the particular Fock state
can be selected by adjusting the detuning of the drive laser from
the bare cavity resonance (no pure QD dephasing is included in the
simulation). The detuning values ($1.1g$, $0.9g$ and $0.7g$) are
different from what one intuitively expects from a lossless
strongly coupled QD-cavity system under CW driving ($g$,
$g/\sqrt{2}$ and $g/\sqrt{3}$, corresponding to the excitation of
first, second and third order manifold, respectively) because of
both the losses and the pulsed driving of the system
\cite{AF_natphys}. We note that, in presence of pure QD dephasing,
$P(n)$ for $n$ photon states decreases \cite{supplementary}. From
the probability distribution of the different Fock states we can
find the wave-function of the overall photon state
$|\psi\rangle=\sum\limits_{n}c_n|n\rangle$ with $P(n)=|c_n|^2$,
the second order coherence function
$g^{(2)}(0)=\frac{\langle\psi|a^{\dag}a^{\dag}aa|\psi\rangle}{\langle\psi|a^{\dag}a|\psi\rangle^2}=\frac{\sum\limits_n
n(n-1)P(n)}{\left(\sum\limits_n nP(n)\right)^2}$ and second order
differential correlation function
$C^{(2)}(0)=\langle\psi|a^{\dag}a^{\dag}aa|\psi\rangle-\langle\psi|a^{\dag}a|\psi\rangle^2=\sum\limits_n
n(n-1)P(n)-\left(\sum\limits_n nP(n)\right)^2$, which we can
measure experimentally. Figure \ref{fig2-pulsed_ideal}b shows
$g^{(2)}(0)$ as a function of $\Delta_c$, the laser detuning from
the empty cavity. The dashed line indicates the expected
$g^{(2)}(0)$ for a coherent state. Figure \ref{fig2-pulsed_ideal}c
shows $C^{(2)}(0)$ as a function of $\Delta_c$. $C^{(2)}(0)$
transitions from negative to positive values with decreasing
detuning at $\Delta_c\sim0.9g$, owing to the excitation of the
second manifold in the ladder when two photons are simultaneously
coupled into the cavity-QD system. Fig. \ref{fig2-pulsed_ideal}d
shows $C^{(2)}(0)$ as a function of $\Delta_c$ for different laser
excitation powers. We note that the peak position depends on the
excitation laser power: at lower driving power we observe the peak
at $\Delta_c\sim0.7g$, where the second-order manifold is excited
via two photons. With increasing power, the higher (third and
above) manifolds start being populated, and the
peak in $C^{(2)}(0)$ subsequently shifts to smaller values of
detuning. In Fig.\ref{fig2-pulsed_ideal}c, the peak in
$C^{(2)}(0)$ is at a detuning of $\Delta_c\sim 0.5g$.
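The mapping from $P(n)$ to these statistics is direct to evaluate numerically; the short sketch below (the function name is ours) implements the two sums given above:

```python
import numpy as np

def photon_statistics(P):
    """Given Fock-state probabilities P(n) = |c_n|^2 of the output state,
    return (g2, C2):
      g2 = sum_n n(n-1)P(n) / (sum_n n P(n))^2   -- second order coherence
      C2 = sum_n n(n-1)P(n) - (sum_n n P(n))^2   -- differential correlation
    """
    P = np.asarray(P, dtype=float)
    n = np.arange(P.size)
    mean_n = np.sum(n * P)                # <a^+ a>
    two_photon = np.sum(n * (n - 1) * P)  # <a^+ a^+ a a>
    return two_photon / mean_n**2, two_photon - mean_n**2
```

For a coherent state this yields $g^{(2)}(0)=1$ and $C^{(2)}(0)=0$; for a single-photon state, $g^{(2)}(0)=0$ and $C^{(2)}(0)=-1$.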
\begin{figure}
\centering
\includegraphics[width=3.5in]{fig_exp_a.eps}
\caption{(color online)(a) The transmission spectrum of a strongly
coupled QD-cavity system showing two polaritons. (b) Second order
coherence function at $t=0$, $g^{(2)}(0)$ as a function of the
laser detuning from the empty cavity frequency. The system is
excited with $\tau_p=24$ ps Gaussian pulses, with $80$ MHz
repetition frequency. The dashed grey (solid black) line results
from a numerical simulation based on the system's experimental
parameters and no (with) QD blinking. The average laser power for
the measurement is $P_{avg}=0.2$nW. For the simulations we use a
QD dephasing rate $\gamma_d/2\pi=1$ GHz. } \label{fig_exp1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3in]{fig_exp_b.eps}
\caption{(color online)(a) Normalized differential correlation
function $C^{(2)}(0)$ as a function of the laser detuning. The
dashed red line shows the result of a numerical simulation based
on the system's experimental parameters. (b) $g^{(2)}(0)$ in the
tunneling regime ($\Delta_c=0$) as a function of laser power
$P_{avg}$ measured in front of the objective lens. The solid line
shows the result of numerical simulation including the effects of
QD blinking, while the inset plots the numerically simulated
$g^{(2)}(0)$ in the absence of blinking. For the simulations we
use a QD dephasing rate $\gamma_d/2\pi=1$ GHz. } \label{fig_exp2}
\end{figure}
We confirm our theoretical predictions by performing experiments
with InAs QDs coupled to a linear three-hole defect GaAs photonic
crystal cavity. Details of the fabrication and experimental setup
can be found in Ref. \cite{AF_natphys}. We measure laser
transmission through the system (using a cross-polarized
reflectivity setup \cite{AF_natphys}) and observe anti-crossing
between the QD and cavity (by changing temperature), signifying that the
system is in the strong coupling regime. At resonance, the QD and
cavity mix to generate two polaritons, seen as two Lorentzian
peaks in Figure \ref{fig_exp1}a. By fitting the spectrum at
resonance we estimate the system parameters as $\kappa/2\pi=27$
GHz (corresponding to $Q\approx6,000$) and $g/2\pi=21$ GHz. To
drive the cavity-QD system, we use a mode-locked Ti-Sapphire
laser that generates $3$ ps pulses at a repetition rate of
$f_{rep}=80$ MHz. These $3$ ps pulses are passed through a
monochromator to elongate the pulse in time domain, which results
in pulses with approximately Gaussian temporal profile of $34$ ps
FWHM, corresponding to $\tau_p=24.4$ ps (as in our theoretical
analysis). We determine the amplitude of the coherent driving
field using $ \mathcal{E}_o=\sqrt{{\eta
P_{avg}\over{4\pi^{1\over2}Q\tau_{p}f_{rep}\hbar}}}$
\cite{supplementary}, where $P_{avg}$ is the average optical power
of the pulse train measured before the objective lens and
$\eta\sim 0.03$ \cite{AF_natphys} is the coupling efficiency of
the incident light into the cavity including all the optics
losses. For our experimental parameters, $\mathcal{E}_o\approx
2\pi \sqrt{P_{avg}(nW)}\times9.3$GHz. The second order
auto-correlation $g^{(2)}(0)$ is measured as a function of
excitation laser frequency (Figure \ref{fig_exp1}b) to observe
transition from photon blockade to photon induced tunneling
regime. Typical histograms obtained for blockade and tunneling are
shown in the supplementary material. We estimate $g^{(2)}(0)$ as
the ratio of the coincidence counts at zero-time delay and
non-zero time delay.
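The driving amplitude can be evaluated directly from the expression for $\mathcal{E}_o$ above; a minimal sketch with our experimental parameters as defaults (the function name is ours):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J s

def drive_amplitude(P_avg, eta=0.03, Q=6000.0, tau_p=24.4e-12, f_rep=80e6):
    """E_o = sqrt(eta * P_avg / (4 * sqrt(pi) * Q * tau_p * f_rep * hbar)),
    returned in rad/s; P_avg is the average optical power (W) measured
    before the objective lens."""
    return np.sqrt(eta * P_avg /
                   (4.0 * np.sqrt(np.pi) * Q * tau_p * f_rep * HBAR))
```

At the lowest power used in the measurements, $P_{avg}=0.2$\,nW, this gives $\mathcal{E}_o\approx 2\pi\times4.2$\,GHz, consistent with $\mathcal{E}_o\approx 2\pi\sqrt{P_{avg}(\mathrm{nW})}\times9.3$\,GHz.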
Following this we calculate the second order differential
correlation function $C^{(2)}(0)$ for the coupled QD-cavity system
as a function of the laser-cavity detuning (Fig. \ref{fig_exp2}a).
We observe the transition of $C^{(2)}(0)$ from negative to
positive values and the onset of a peak at $\Delta_c\sim0.5g$
corresponding to the excitation of the higher order dressed
states. Simulations with our system parameters are shown by the
dashed line in Fig.~\ref{fig_exp2}a. As explained before, the
peak in $C^{(2)}(0)$ does not correspond exactly to the resonant
excitation of the second order manifold via two photon process,
because of the additional excitation of the higher order
manifolds. All the measurements are performed at $14$ K. We note
that, in the simulation, $g^{(2)}(0)$ in the tunneling regime is
much larger than the experimentally measured value as a result of
QD blinking, which causes the experimentally collected data to be
a weighted average of transmission through an empty cavity and a
cavity with strongly coupled QD; in other words, blinking
effectively squashes the $g^{(2)}(0)$ curve towards $g^{(2)}(0)=1$
\cite{AF_natphys}. We model the blinking behavior of the QD by
assuming that during a unit time interval the QD is active for a
fraction $r$ and inactive for $(1-r)$ of the time. Thus the
$g^{(2)}(0)$ measured in the experiment will be a statistical
mixture of the coherent photon state (when the QD is inactive,
i.e., the QD-cavity coupling $g=0$) and the correlated photons from the
coupled QD-cavity system \cite{supplementary}. We obtain a good fit
to our experimental data with $r=0.65$. The vertical error bars in
all the figures are computed from the uncertainties in the fit of
the histogram data sets. The horizontal error bars are given by
the uncertainty in the measurement of the laser wavelength or the
laser power.
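The blinking correction can be sketched as follows; a statistical mixture averages the unnormalized moments $\langle a^{\dag}a^{\dag}aa\rangle$ and $\langle a^{\dag}a\rangle$ rather than $g^{(2)}(0)$ itself. This is an illustration of the weighting (the mean photon numbers of the active and inactive periods are inputs), not our full master-equation simulation:

```python
def blinking_g2(g2_qd, n_qd, n_coh, r):
    """Measured g2(0) when the QD is active for a fraction r of the time
    (correlated transmission: g2_qd, mean photon number n_qd) and inactive
    for (1 - r) (coherent transmission: g2 = 1, mean photon number n_coh)."""
    G2 = r * g2_qd * n_qd**2 + (1.0 - r) * n_coh**2  # <a+ a+ a a> of mixture
    n = r * n_qd + (1.0 - r) * n_coh                 # <a+ a> of mixture
    return G2 / n**2
```

For equal mean photon numbers this reduces to $r\,g^{(2)}_{\mathrm{QD}}(0)+(1-r)$, i.e. the curve is squashed towards 1, as seen in the data with $r=0.65$.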
Fig.~\ref{fig_exp2}b shows $g^{(2)}(0)$ as a function of
excitation laser power in the tunneling regime ($\Delta_c=0$).
This data is taken with the same cQED system on a different day,
when the cavity is red-shifted compared to the previous
measurements. For this particular experiment, the QD and the
cavity are resonant at $26$ K. This slightly higher temperature
might cause more QD dephasing, leading to a worse value of
$g^{(2)}(0)$ ($1.12$ as opposed to $1.2$ from the previous
measurement). Overall, $g^{(2)}(0)$ decreases with increasing
laser power as expected from the intuitive picture of QD
saturation at high driving power and the numerical simulation.
This clearly shows that the bunching observed in the tunneling
regime comes solely from the quantum mechanical nature of the
QD-cavity system, and not from any classical effect. We also
observe interesting oscillatory behavior in $g^{(2)}(0)$ as a
function of power. An oscillatory behavior is also observed in the
simulation that includes the effects of QD blinking. Without any
QD blinking, the simulation results show a mostly monotonically
decreasing $g^{(2)}(0)$ with increasing laser power (inset of Fig.
\ref{fig_exp2}b).
Finally, we would like to point out that these measurements have
been performed at the lowest $P_{avg}=0.2$nW that we can reliably
use, corresponding to $\mathcal{E}_o\approx 2\pi\times 4.2$GHz.
This roughly corresponds to the red plot in the theoretical Fig.
\ref{fig2-pulsed_ideal}d, where the peak in $C^{(2)}(0)$ is near
$0.5g$. This lower power limit is caused by the limited
mechanical stability of the cryostat and the low overall
efficiency with which we can couple the cavity photons into the
single photon counters in our HBT setup. The time needed to
perform the second-order coherence measurement increases
quadratically with decreasing $P_{avg}$ and for low powers the
cavity drifts out of focus before we can collect sufficient number
of coincidence counts.
In summary, we analyzed the photon induced tunneling regime in a
coupled QD-cavity system and proposed a scheme to use this system
for multi-photon state generation. In addition, we experimentally
characterized the second order coherence function $g^{(2)}(0)$ for
a coupled QD-cavity system as a function of laser-cavity detuning
and laser power. Using the experimental results of the photon
statistics measurement, we find the signature of the higher order
manifolds of the Jaynes-Cummings anharmonic ladder in the second
order differential correlation function $C^{(2)}(0)$.
The authors acknowledge financial support provided by DARPA, ONR,
NSF and the ARO; and useful discussion with Dr. Andrei Faraon. The
authors also acknowledge Dr. Pierre Petroff and Dr. Hyochul Kim
for providing the QD sample.
\section{Introduction}
Due to its proximity and size, our nearest large neighbour galaxy M\,31 offers a unique and attractive opportunity to study stellar populations and galactic structure in detail. It has been a source for groundbreaking discoveries ever since the secure acknowledgement of this nebula as an extragalactic object by Ernst \"{O}pik \citeyearpar{opik:22}.
Self-consistent treatment of the available photometry and kinematical data of M\,31 enabled the construction of sophisticated multi-component galactic models already decades ago \citep[e.g.][]{tenjes:94}. Considering the huge volume and high detail of observational information available nowadays, complex mass models of M\,31 offer a promising opportunity for casting light on one of the most puzzling problems in astrophysics and cosmology: the nature and properties of dark matter (DM) haloes. By now, the increasing scope of observations has enabled stretching mass models much further than the extent of gas disc rotation curves, providing new clues about DM halo parameters \citep{geehan:06,seigar:08,chemin:09,corbelli:10}.
On the other hand, particle physics instrumentation has reached a level at which it can provide some hints about DM. Although the diffuse Galactic background likely exceeds the expected flux of high-energy particles resulting from decaying or annihilating DM in extragalactic sources \citep{bertone:07,Hutsi:10}, particles from more concentrated extragalactic objects might be detectable as an enhancement of the Galactic signal within certain apertures. By comparing the assumed DM distribution in M\,31 to the data collected with the diverse arsenal of ground-based and space-borne detectors of high-energy particles, some constraints on the energy spectrum of DM particles have already been laid \citep[e.g.][]{aharonian:03,lavalle:06,Boyarsky:08,dugger:10,Watson:12}.
For more stringent constraints, not only more capable detectors are needed but also a better understanding of the properties of the source DM haloes. As bizarre as it seems, the derivation of the detailed mass distribution of the \object{Andromeda} galaxy was limited by the lack of suitable optical imaging up to recent times.
Although visible even to the naked eye, its span over four degrees on the celestial sphere makes Andromeda a real challenge to observe with a usual scientific telescope. Thus it is only very recently that observations covering the galaxy with deep wide-field CCD imaging have started to appear: a dedicated scan within the Sloan Digital Sky Survey \citep{york:00}, the Canada-France-Hawaii telescope Megacam programme PAndAS \citep{mcconnachie:09}, the Pan-STARRS telescope project PAndromeda \citep[][]{lee:12}. Combined with the space-based ultraviolet \citep{thilker:05}, near- \citep{barmby:06} and far-infrared \citep{gordon:06,fritz:11} observations, these data now provide an unprecedented panchromatic view of a galaxy at a resolution of a few parsecs, allowing for the derivation of detailed properties of the stellar populations.
In this paper we estimate the stellar and DM distribution of M\,31 using the SDSS and Spitzer 3.6-micron imaging to constrain the properties of the stellar populations. We construct a mass distribution model of the galaxy in correspondence with the latest kinematical data from the literature, giving estimates for the DM halo properties and the related uncertainties.
We have taken the inclination angle of M\,31 to be 77.5\degr \citep{Walterbos:88,deVaucouleurs:91} and the distance to the galaxy \mbox{785\,kpc} \citep{McConnachie:05}, yielding the scale \mbox{228\,pc/arcmin}. Throughout the paper, luminosities are presented in AB-magnitudes and are corrected for the Galactic extinction according to \citet{tempel:11}, where extinction corresponding to the Sloan filters is derived from the \citet{Schlegel:98} estimates and the Galactic extinction law by linear interpolation. The absolute solar luminosity for each filter is taken from \citet{blanton:07}.
\section{Observational SEDs} \label{sect:obs}
In an ideal case, one would study stellar populations using all the available photometric data to sample the spectral energy distribution (SED) of a galaxy throughout the full electromagnetic spectrum. However, we have limited ourselves here to the optical and near-infrared section of the spectrum, since its interpretation with synthetic stellar population models is most straightforward. Also the stellar mass is best traced by this wavelength domain.
For deriving the observed SEDs of M\,31, we relied on the Sloan Digital Sky Survey (SDSS) observations through the $ugriz$ filters and the Spitzer Space Telescope IRAC camera imaging at 3.6 microns. For our purposes, these observations provide a sufficiently wide and deep coverage and the calibration of the data is relatively well-understood.
The basic steps of the SDSS image processing and mosaicing used here have been introduced in \citet{tempel:11}. The intrinsic absorption of the galaxy has been taken into account by applying the dust disc model developed in \citet{tempel:10,tempel:11} on the basis of the far-infrared flux distribution as measured by the Spitzer MIPS camera. We have used the resultant absorption-free SDSS images for recovering the total starlight along sight-lines affected by the dust disc. The final SDSS mosaic was resampled to 3.96\,arcsec\,px$^{-1}$.
To reconstruct the near-infrared view of M\,31 we retrieved the pipeline-calibrated and -processed (post-BCD) Spitzer images from the NASA/IPAC Infrared Science Archive. Exposures severely suffering from cosmic rays were omitted. A mosaic image was created using pointing information in the image headers. The final mosaic was resampled to the same pixel scale as the SDSS images.
The spatial resolution of the dust absorption model was limited by the point-spread-function (PSF) of the Spitzer MIPS camera at 160 microns, thus we convolved the SDSS and Spitzer images with the same PSF. This step improved also the signal-to-noise ratio in the outer regions of the galaxy and removed negative pixel values resulting from the noise after sky removal.
The images were matched with each other geometrically to sub-pixel accuracy. Foreground stars, satellite galaxies and background objects were masked with a common mask for each filter, constructed using SExtractor \citep{Bertin:96} and manual region placing. Additionally, the noisy edges of the Spitzer images were masked. Now the SED could be directly derived by calibrating the intensity within each pixel to standard flux units.
\section{Model SEDs} \label{sect:mod}
A wide variety of synthetic stellar population models is available for interpreting the observed spectral energy distributions. In order to address possible degeneracies and uncertainties of such models, we have used three models to reproduce the observational SEDs. The chosen models follow significantly different approaches for generating the properties of the synthetic stellar populations: (I) the composite model spectra by \citet[][hereafter B07]{blanton:07} are composed as a mixture of \citet{bruzual:03} instantaneous-burst stellar population models of different ages and metallicities, and models of gas emission from MAPPINGS-III \citep{kewley:01}; (II) \citet[][hereafter M05]{maraston:05} models lay a special emphasis on the thermally pulsing asymptotic giant branch stars; (III) the evolutionary synthesis model GALEV \citep{kotulla:09} is the sole model in which the chemical evolution of gas and stars is treated self-consistently. In B07, the \citet{chabrier:03} stellar initial mass function (IMF) was used; for M05 and GALEV, we have chosen the \citet{kroupa:01} IMF option.
B07 provides five composite spectra corresponding to an extremely young, an old, and three intermediate populations. It is shown in B07 that a linear combination of these spectra can adequately describe the spectral energy distribution of most of the galaxies. However, this aspect alone does not necessarily prove that the underlying models are meaningful; rather, it demonstrates that the spectra are sufficiently diverse. Thus for the M05 and GALEV models, we have followed the general observational knowledge about the star formation history and metallicity of M\,31 to tame the age-metallicity degeneracy, assuming that much of the galaxy is dominated by old stars of nearly solar metallicity, while the star-forming ring is composed of younger stars \citep{bellazzini:03,sarajedini:05,brown:06,olsen:06,saglia:10,zou:11}. From the available M05 models, we have selected single-, instantaneous-burst stellar populations. GALEV is used in the chemically consistent regime to generate old populations with different star formation histories, with and without an additional starburst having occurred 1--4 billion years ago to mimic the star-forming regions.
For each set of model spectra, we sought linear combinations of up to 5 spectra to represent the SED of M\,31 within each pixel according to the formula
\begin{equation}
f(\lambda)_{\mathrm{obs}} = \sum\limits_{i}{m_i f(\lambda)_i} ,
\label{eq:sed_fit}
\end{equation}
where $f(\lambda)_{\mathrm{obs}}$ is the observed SED, $f(\lambda)_i$ are the model SEDs per unit mass and $m_i$ are the corresponding weights of the model SEDs. In this formalism, $m_i$ effectively measure the mass of each model stellar population within a given pixel.
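Since the weights $m_i$ are masses and therefore non-negative, fitting Eq.~(\ref{eq:sed_fit}) within a pixel amounts to a non-negative least-squares problem. A minimal sketch (the specific solver is our choice for illustration):

```python
import numpy as np
from scipy.optimize import nnls

def fit_sed(f_obs, f_models):
    """Solve f_obs = sum_i m_i f_i in the least-squares sense with m_i >= 0.

    f_obs    : observed fluxes through the filters, shape (n_bands,)
    f_models : model SEDs per unit mass, shape (n_bands, n_models)
    Returns the best-fitting masses m_i of the model populations."""
    m, _residual = nnls(np.asarray(f_models, dtype=float),
                        np.asarray(f_obs, dtype=float))
    return m
```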
The spectra with a non-negligible contribution to the integral SED of the galaxy are listed in Table~\ref{tab:mod_spec} together with other relevant information. An illustration of the SED fitting within random pixels from the bulge and disc regions of the galaxy is presented in Fig.~\ref{fig:sed_fit}. Here, the observed flux through each of the six filters is overlaid with the best-fitting linear combination of the B07 composite spectra.
\begin{figure}
\includegraphics[width=88mm]{figure1.eps}
\caption{Examples of the observed (large circles) and modelled (lines) SED for a random pixel in the bulge region (\emph{upper panel}) and in the young disc region (\emph{lower panel}). The sizes of the datapoints indicate the photometric uncertainties of each measurement. The model values corresponding to each filter are also shown (small datapoints). In most pixels, the reddest model population (B07-4) alone provides a good representation of the observed SED. In the young disc regions, the stellar populations are more diverse: in the lower panel, the B07 model populations 1, 4 and 5 contribute 24.44\%, 75.52\%, and 0.04\% of the mass, respectively. The corresponding SEDs are weighted according to the mass fraction. In this plot, the observed SEDs and the sum of the weighted model spectra are normalised per 1\,$M_{\odot}$ at a distance of 10\,pc.}
\label{fig:sed_fit}
\end{figure}
\begin{table}
\caption{Synthetic stellar populations used for SED fitting.}
\begin{flushleft}
\begin{tabular}{lllllll}
\hline\hline
Name & Age & [Fe/H] & $\frac{M_\mathrm{tot}}{L_g}$ & $\frac{M_\mathrm{tot}}{L_r}$ & $\frac{M_\mathrm{tot}}{L_i}$ & Fract. \\
& [Gyr] & & $[\frac{M_\odot}{L_\odot}]$ & $[\frac{M_\odot}{L_\odot}]$ & $[\frac{M_\odot}{L_\odot}]$ & \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline
B07-1 & 0.7 & 0.40 & 0.76 & 0.78 & 0.72 & 0.014 \\
B07-3 & 0.4--1 & 0.05 & 0.47 & 0.50 & 0.56 & 0.003 \\
B07-4 & 7--12 & 0.03 & 5.05 & 3.87 & 3.12 & 0.983 \\
\hline
M05-1 & 1 & 0.00 & 1.11 & 1.00 & 0.85 & 0.008 \\
M05-2 & 2 & 0.00 & 2.18 & 1.70 & 1.43 & 0.002 \\
M05-3 & 4 & 0.00 & 3.99 & 3.03 & 2.56 & 0.214 \\
M05-4 & 12 & 0.00 & 11.6 & 8.08 & 6.47 & 0.767 \\
M05-5 & 12 & -0.33 & 9.00 & 6.60 & 5.37 & 0.009 \\
\hline
GALEV-1 & 1, 10 & 0.04 & 2.88 & 3.14 & 2.92 & 0.004 \\
GALEV-2 & 2, 11 & 0.07 & 4.35 & 4.13 & 3.65 & 0.011 \\
GALEV-3 & 4, 13 & 0.09 & 7.58 & 6.20 & 5.23 & 0.089 \\
GALEV-4 & 12 & 0.12 & 4.63 & 4.55 & 4.05 & 0.015 \\
GALEV-5 & 12 & 0.18 & 10.9 & 8.33 & 6.86 & 0.881 \\
\hline
\end{tabular}
\end{flushleft}
\tablefoot{
The columns contain the following: (1) stellar population model; for B07 models the number is as in the original paper; (2) approximate age of the dominant star-formation epoch(s); (3) average metallicity of the stars; (4)--(6) mass-to-light ratio in the $gri$ filters; (7) total stellar mass fraction in M\,31 of the corresponding stellar population.}
\label{tab:mod_spec}
\end{table}
Following Eq.~(\ref{eq:sed_fit}), the SED-fitting process yielded the stellar mass of each model population within each imaging pixel. For every population synthesis model, the contribution of different synthetic populations to the integral mass of the galaxy is presented in the last column of Table~\ref{tab:mod_spec}. It is seen that for each set of model spectra, the reddest spectrum (corresponding to an old population with near-solar metallicity) dominates all across the galaxy and dictates its population properties. It is only within the star-forming ring that other spectra provide a detectable contribution.
The mass-to-light ratios and thus the masses of the stellar components predicted by different population synthesis models are remarkably different, as indicated in Table~\ref{tab:mod_spec}. M05 and GALEV give much more massive stellar populations than B07. The scatter of the mass-to-light ratios of different models results from differences in the modelling approach and hence reflects the uncertainties of stellar mass estimations in general. Although the B07 model spectra provide the most precise description of the actual SEDs, we have no grounds to state that the B07 model represents the actual populations better than the M05 and GALEV models. Thus in the following, we have considered this mass scatter as an uncertainty of the final mass estimates. We used the B07 model for the lower limit and initially planned to use the other models for the upper limit of the stellar mass. As shown below, however, the masses suggested by the M05 and GALEV models contradict the rotation curve measurements, which set a more rigid upper limit for the stellar masses. Nevertheless, we discourage the reader from using this result alone to judge the M05 and GALEV models in general. We can only conclude that with our currently used configuration of the stellar population details (i.e. the IMF, metallicity, and star formation history), these models overestimate the realistic stellar mass of M\,31.
\section{Stellar mass distribution}\label{sect:stellar_m}
In the previous section, the stellar mass corresponding to each model spectrum was derived for each imaging pixel. This gives the 2-dimensional stellar mass distribution in M\,31, as presented in Fig.~\ref{fig:map} for the B07 model. The distribution appears featureless and regular, resulting from the intensive smoothing with the PSF of the Spitzer 160-micron imaging, but it also indicates that the galaxy is generally undisturbed and that the intrinsic dust-absorption effects have been removed in proper proportions along different lines of sight.
\begin{figure}
\includegraphics[width=88mm]{figure2.eps}
\caption{Stellar mass-density map of M\,31. The ellipses enclose 50\%, 75\%, 90\%, and 95\% of the total mass, respectively.}
\label{fig:map}
\end{figure}
The actual spatial distribution of stellar matter can be split into the contributions of different galactic components. We measured the elliptically averaged stellar mass distribution from Fig.~\ref{fig:map} and approximated it as a superposition of the stellar components: a nucleus, a bulge, a disc, and a halo. In addition, the ring-like star-forming region was taken as a component separate from the disc and is referred to here as the young disc. Assuming each component to be an ellipsoid of rotational symmetry and constant axial ratio $q$, we used the Einasto law
\begin{equation}
\rho(a)= \rho_c\exp\left\{-d_{N}
\left[\left(\frac{a}{a_\mathrm{c}}\right)^{1/N}-1\right]\right\}
\label{eq:einasto}
\end{equation}
to describe the density distribution of a component. Here the distance from the centre is $a=\sqrt{r^2+z^2/q^2}$, where $r$ and $z$ are the two cylindrical coordinates; $d_N$ is a function of $N$, such that $\rho_c$ becomes the density at distance $a_{\mathrm{c}}$, which defines the volume containing half of the total mass of the component. The derivation of $d_N$ is presented in Appendix \ref{app:2}. Being mathematically identical to S\'ersic's $R^{1/n}$ model but fitted to the space density, Eq.~(\ref{eq:einasto}) provides a sufficiently flexible distribution law for describing relaxed galactic components.
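With this normalisation, substituting $x=d_N(a/a_\mathrm{c})^{1/N}$ turns the enclosed-mass integral into a lower incomplete gamma function of index $3N$, so $d_N$ is the median of a $\Gamma(3N)$ distribution (cf. Appendix~\ref{app:2}); the sketch below reproduces the $d_N$ values of Table~\ref{table:stellar_comp}:

```python
import numpy as np
from scipy.special import gammaincinv

def einasto_dN(N):
    """d_N such that a_c encloses half of the total mass:
    gammainc(3N, d_N) = 1/2, i.e. the median of a Gamma(3N) distribution."""
    return gammaincinv(3.0 * N, 0.5)

def einasto_density(a, rho_c, a_c, N):
    """Einasto space density of the Einasto law above; rho(a_c) = rho_c."""
    return rho_c * np.exp(-einasto_dN(N) * ((a / a_c) ** (1.0 / N) - 1.0))
```

For example, einasto_dN(4.0) $\approx 11.668$ and einasto_dN(2.7) $\approx 7.769$, matching the nucleus and bulge entries of the table.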
For a more accurate description of the star-forming region, the spatial density distribution of the young disc is assumed to have a toroidal form, approximated as a superposition of a positive and a negative density component, both following Eq.~(\ref{eq:einasto}) (see the last paragraph of Appendix~\ref{app:2} for more details).
The structural parameters of all the stellar components have been adopted from a previous analysis \citep{tempel:11}, also based on the SDSS data. In the referred paper, the Einasto law was expressed with respect to the harmonic mean radius instead of $a_c$. Relations between different functional forms of the Einasto distribution are presented in Appendix~\ref{app:2}.
\begin{table*}
\caption{Parameters of stellar components.}
\centering
\begin{tabular}{lcccccccccc}
\hline\hline
Component & $a_0$\tablefootmark{a}
& $a_c$ & $q$ & $N$ & $d_N$ & $\rho_c$ & $M_\mathrm{comp}$\tablefootmark{b} & $M/L_g$\tablefootmark{b} & $M/L_r$\tablefootmark{b} & $M/L_i$\tablefootmark{b} \\
& [kpc] & [kpc] & & & & $[M_\odot\,\mathrm{pc}^{-3}]$ & $[10^{10}M_\odot]$ & $[M_\odot/L_\odot]$ & $[M_\odot/L_\odot]$ & $[M_\odot/L_\odot]$ \\
\hline
Nucleus & 0.01 & 0.0234 & 0.99 & 4.0 & 11.668 & $1.713\cdot 10^{0}$ & 0.008 & 4.44 & 3.20 & 2.35 \\
Bulge & 0.63 & 1.155 & 0.72 & 2.7 & 7.769 & $9.201\cdot 10^{-1}$ & 3.1 & 5.34 & 4.08 & 3.01 \\
Disc & 7.7 & 10.67 & 0.17 & 1.2 & 3.273 & $1.307\cdot 10^{-2}$ & 5.6 & 5.23 & 3.92 & 2.92 \\
Young disc\tablefootmark{c}
& 10.0 & 11.83 & 0.01 & 0.2 & 0.316 & $1.179\cdot 10^{-2}$ & 0.1 & 1.23 & 1.12 & 0.88 \\
Stellar halo & 6.3 & 12.22 & 0.50 & 3.0 & 8.669 & $4.459\cdot 10^{-4}$ & 1.3 & 6.19 & 4.48 & 3.25 \\
\hline
\end{tabular}
\tablefoot{
Structural parameters $a_0$, $q$, $N$, and galaxy luminosities in SDSS filters ($L_g$, $L_r$, $L_i$) are taken from \citet{tempel:11}. Component masses $M_\mathrm{comp}$ are derived in this paper.
\tablefoottext{a}{Harmonic mean radius (see Appendix~\ref{app:2}).}
\tablefoottext{b}{Masses and mass-light-ratios corresponding to the B07 model, i.e. the lower limits; the upper limits (from the maximum-stellar model) are 1.5 times higher in each case.}
\tablefoottext{c}{The structural parameters and $\rho_c$ are given for the positive component. In the dynamical models, the gas mass $6\cdot10^9M_{\odot}$ is added to the young disc.}
}
\label{table:stellar_comp}
\end{table*}
\begin{figure}
\includegraphics[width=88mm]{figure3.eps}
\caption{The mass-density distribution of the galaxy, averaged along elliptical iso-density contours, as inferred from the B07 model (thick grey line; its thickness indicates deviations along each ellipse), the model profile (solid line) and the contributions of the individual stellar components (dashed lines) to the model profile. The contribution of the model population B07-1 to the mass distribution is also shown; it is closely traced by the young disc component of the model.}
\label{fig:comps}
\end{figure}
Using the previously derived structural parameters and leaving the component masses as free variables, we fitted these components to the elliptically averaged stellar mass distribution. The lower limit estimates of the masses of the stellar components, derived from the B07 model, are presented in Table~\ref{table:stellar_comp} together with other related parameters. The upper mass limits are constrained by the rotation curve and are 1.5 times higher for each component (see below). The corresponding mass distribution of each stellar component as well as the total stellar mass distribution of the galaxy are shown in Fig.~\ref{fig:comps} for the B07 model. To illustrate the correspondence between the young disc model component and the first model spectrum (B07-1), the contribution of the latter to the overall mass distribution is also shown.
It is natural to suspect that a four-component fit to the galaxy stellar mass distribution has to be degenerate to some extent. We tested the uniqueness of the model by varying the masses of the components and calculating the deviation of each resultant model from the observations using the Bayesian inference tool MultiNest \citep{Feroz:08, Feroz:09}. The results of the degeneracy analysis are presented in Fig.~\ref{fig:mass_like}, indicating the likelihood of different combinations of component masses. Quite expectedly, the most securely determined component is the bulge, and the most unreliable mass estimates are for the young disc and halo, both being degenerate with the disc mass to some extent. The degeneracies would be higher if all the component parameters were set free. A more conservative two-component (bulge + disc) model is described in Appendix~\ref{app:1}.
\begin{figure}
\includegraphics[width=88mm]{figure4.eps}
\caption{Degeneracies between the masses of the different stellar components, shown as likelihood contours of various combinations of component masses. The final B07 model parameters are shown with dots.}
\label{fig:mass_like}
\end{figure}
\section{Dynamics and dark matter distribution}\label{sect:dark_m}
The structural model and masses of the stellar components derived in Sect.~\ref{sect:stellar_m} allowed us to calculate the gravitational potential of stellar matter in the galaxy. The gravitational potential of a galaxy is also traced by the rotation curve. To match the calculated rotation curve with the observed one, the contributions of gas and dark matter (DM) have to be added to the stellar mass model of the galaxy.
The contribution of gas to the potential of the galaxy is modest, thus a precise description of the gas distribution in the model is not essential. We have assumed that the distribution of the gas coincides with that of the young disc and have simply raised the mass of the young disc by $6\cdot10^9M_{\odot}$, which is the approximate sum of the molecular \citep{Dame:93,Nieten:06} and the neutral \citep{carignan:06,chemin:09,corbelli:10} gas mass estimates. Note that the molecular gas content of M\,31 is rather low, in fact even lower than the differences between the neutral gas mass estimates made by different authors.
We have considered various functional forms of DM density distribution while incorporating a DM halo component into the model galaxy. From observations of the dynamics of galaxies, distributions with a nearly constant inner density (and therefore, ``isothermal" or ``cored" haloes) have been derived, e.g. by \citet{burkert:95}:
\begin{equation}
\rho_\mathrm{Burkert}(r)=\frac{\rho_c}{\left(1+\frac{r}{r_\mathrm{c}}\right)
\left[1+(\frac{r}{r_\mathrm{c}})^2\right]} .
\end{equation}
On the other hand, N-body simulations suggest a steeply rising DM density towards the centre (``cuspy" haloes), e.g. by \citet{moore:99}:
\begin{equation}
\rho_\mathrm{Moore}(r)=\frac{\rho_\mathrm{c}}{(\frac{r}{r_\mathrm{c}})^{1.5}
\left[1+(\frac{r}{r_\mathrm{c}})^{1.5}\right]}
\end{equation}
and \citet{navarro:97} (hereafter NFW):
\begin{equation}
\rho_\mathrm{NFW}(r)=\frac{\rho_\mathrm{c}}{\left(\frac{r}{r_\mathrm{c}}\right)
\left[1+(\frac{r}{r_\mathrm{c}})\right]^{2}} .
\end{equation}
In these equations, $\rho_\mathrm{c}$ is a density scale parameter.
More recently, it has been found that the Einasto distribution Eq.~(\ref{eq:einasto}) matches the simulated DM haloes over a wider range of radii \citep{merritt:06,navarro:10,chemin:11}, and is gaining popularity for various applications.
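For reference, the four density laws above can be written out side by side. The following sketch is purely illustrative (radii in units of $r_\mathrm{c}$, densities in units of $\rho_\mathrm{c}$); for the Einasto profile we assume the spherical ($q=1$) form with $N=6$ and $d_N=17.668$ adopted later in this section.

```python
import numpy as np

def rho_burkert(r, rho_c=1.0, r_c=1.0):
    """Burkert (cored) profile: finite density at the centre."""
    x = r / r_c
    return rho_c / ((1.0 + x) * (1.0 + x**2))

def rho_moore(r, rho_c=1.0, r_c=1.0):
    """Moore (cuspy) profile: rho ~ r^-1.5 towards the centre."""
    x = r / r_c
    return rho_c / (x**1.5 * (1.0 + x**1.5))

def rho_nfw(r, rho_c=1.0, r_c=1.0):
    """NFW (cuspy) profile: rho ~ r^-1 towards the centre."""
    x = r / r_c
    return rho_c / (x * (1.0 + x)**2)

def rho_einasto(r, rho_c=1.0, a_c=1.0, N=6.0, d_N=17.668):
    """Spherical Einasto profile (q = 1, so r_c = a_c); rho(a_c) = rho_c."""
    return rho_c * np.exp(-d_N * ((r / a_c)**(1.0 / N) - 1.0))
```

The cored versus cuspy behaviour is immediate: towards the centre the Burkert density saturates at $\rho_\mathrm{c}$, while the NFW and Moore densities diverge.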
Each of the four DM distributions was used in combination with the stellar components as determined in Sect.~\ref{sect:stellar_m} to calculate the gravitational potential of the galactic model and the corresponding rotation curve.
\begin{figure}
\includegraphics[width=88mm]{figure5.eps}
\caption{\emph{Upper panel:} the observed rotation curve (data points with error bars) overplotted with the model (solid line). Contributions of each component are also shown (dashed lines). The model corresponds to the B07 stellar mass estimates and the Einasto distribution for the DM density. \emph{Lower panel:} the same stellar model with four different DM density distributions. For clarity, only the total rotation curves and the DM contributions are shown.}
\label{fig:rc}
\end{figure}
The model rotation curve was fitted to the observed rotation curve, composed of two \ion{H}{i} datasets from the literature: the Westerbork telescope observations \citep{corbelli:10} and data from the Effelsberg and Green Bank telescopes \citep{carignan:06}. We did not attempt to include observations from the inner parts of the galaxy, where the dynamics of gas clouds is too much affected by non-circular motions, leading to difficulties in interpreting the data.
In addition to the gas rotation curves, we used circular velocities calculated from the measurements of the motions of globular clusters, satellite galaxies, and stellar streams (Table~\ref{table:mass_estimates}), allowing us to trace the gravitational potential of M\,31 out to a projected distance of more than 500\,kpc from the centre.
For fitting the model rotation curve to the observed one, the DM halo parameters were left free, while the masses of the stellar components were kept fixed. During the first runs of the fitting, the Einasto shape parameter $N$ was allowed to vary freely, which led to a wide variation of its value. To reduce the degrees of freedom, we fixed it at $N=6.0$ according to \citet{merritt:06} and \citet{navarro:10}.
\begin{table}
\caption{Enclosed mass estimates and the corresponding circular velocities at large galactocentric radii of M\,31.}
\centering
\begin{tabular}{lllll}
\hline\hline
$R$ & Mass & $V_\mathrm{c}$ & Objects & Reference \\
$[\mathrm{kpc}]$ & $[$$10^{10}M_\odot]$ & $[$km\,s$^{-1}]$ & & \\
\hline
32 & 39$^{+2}_{-10}$ & 230$^{+5}_{-32}$ & 17 globular clusters & 1 \\
37 & 49$^{+12}_{-14}$ & 240$^{+38}_{-38}$ & 21-cm data & 2 \\
41 & 48$^{+34}_{-23}$ & 225$^{+69}_{-63}$ & 17 globular clusters & 1 \\
55 & 55$^{+4}_{-3}$ & 208$^{+7}_{-6}$ & 504 globular clusters & 3 \\
60 & 44$^{+26}_{-4}$ & 178$^{+48}_{-8}$ & 349 globular clusters & 4 \\
100 & 79$^{+5}_{-5}$ & 185$^{+6}_{-6}$ & 12 satellites & 5 \\
125 & 75$^{+25}_{-13}$ & 161$^{+25}_{-15}$ & stellar stream & 6 \\
125 & 74$^{+12}_{-12}$ & 160$^{+13}_{-14}$ & stellar stream & 7 \\
139 & 80$^{+41}_{-37}$ & 157$^{+36}_{-42}$ & 15 satellites & 8 \\
268 & 137$^{+18}_{-18}$ & 149$^{+9}_{-10}$ & 7 satellites & 9 \\
300 & 140$^{+40}_{-40}$ & 142$^{+19}_{-22}$ & satellites & 10 \\
560 & 125$^{+180}_{-60}$& 98$^{+55}_{-27}$ & 11 satellites & 1 \\
560 & 99$^{+146}_{-63}$& 87$^{+51}_{-34}$ & 16 satellites & 11 \\
\hline
\end{tabular}
\tablebib{
(1)~\citet{Evans:00}; (2)~\citet{corbelli:10}; (3)~\citet{Lee:08}; (4)~\citet{Galleti:06}; (5)~\citet{Cote:00}; (6)~\citet{Ibata:04}; (7)~\citet{Fardal:06}; (8)~\citet{Tollerud:12}; (9)~\citet{Courteau:99}; (10)~\citet{Watkins:10}; (11)~\citet{Evans:00a}.
}
\label{table:mass_estimates}
\end{table}
\begin{table*}
\caption{DM halo parameters for various distribution functions.}
\centering
\begin{tabular}{lllllll|ll}
\hline\hline
Profile & $\rho_c$ & $\rho_c$ & $r_c$ & $M_\mathrm{200}$ & $R_\mathrm{200}$ & $V_\mathrm{200}$ & $\rho_{-2}$ & $a_{-2}$ \\
& [$M_\odot$pc$^{-3}$] & [\mbox{GeV/$c^2$~cm$^{-3}$}] & [kpc] & $[10^{10}M_\odot]$ & [kpc] & [km\,s$^{-1}$] & [\mbox{GeV/$c^2$~cm$^{-3}$}] & [kpc] \\
\hline
Einasto\tablefootmark{a}
& $8.12\pm 0.16\cdot 10^{-6}$ & $3.08\cdot 10^{-4}$ & $178\pm 18$ & 113 & 213 & 151 & $8.92\cdot 10^{-2}$ & 17.44 \\
NFW & $1.10\pm 0.18\cdot 10^{-2}$ & $4.18\cdot 10^{-1}$ & $16.5\pm 1.5$ & 104 & 207 & 147 \\
Moore & $1.46\pm 0.26\cdot 10^{-3}$ & $5.54\cdot 10^{-2}$ & $31.0\pm 3.0$ & 106 & 209 & 148 \\
Burkert & $3.68\pm 0.40\cdot 10^{-2}$ & $1.40\cdot 10^{0}$ & $9.06\pm 0.53$ & 79 & 189 & 134 \\
\hline
Einasto\tablefootmark{a}\tablefootmark{b}
& $1.40\pm 0.27\cdot 10^{-6}$ & $5.32\cdot 10^{-5}$ & $387\pm 44$ & 127 & 221 & 157 & $1.54\cdot 10^{-2}$ & 37.95 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Parameter $N$ has been taken 6.0, yielding $d_N=17.668$. Spherical symmetry is assumed by taking $q=1$ in Eq.~(\ref{eq:einasto}), in which case $r_c = a_c$.}
\tablefoottext{b}{Dark matter parameters for the maximum-stellar model.}
}
\label{table:dark_comp}
\end{table*}
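As a rough cross-check of the tabulated Einasto parameters (first row: $\rho_c=8.12\cdot10^{-6}\,M_\odot\,\mathrm{pc}^{-3}$, $r_c=178$\,kpc, $N=6$, $d_N=17.668$), integrating the density over spherical shells out to $R_{200}=213$\,kpc should recover $M_{200}\approx113\cdot10^{10}M_\odot$. A minimal numerical sketch, assuming the Einasto form $\rho(r)=\rho_c\exp\{-d_N[(r/r_c)^{1/N}-1]\}$:

```python
import numpy as np

# Einasto parameters for the B07 model (spherical case q = 1)
rho_c = 8.12e-6 * 1e9      # M_sun pc^-3 converted to M_sun kpc^-3
r_c, N, d_N = 178.0, 6.0, 17.668

def einasto_density(r):
    """Einasto density [M_sun kpc^-3] at radius r [kpc]; rho(r_c) = rho_c."""
    return rho_c * np.exp(-d_N * ((r / r_c)**(1.0 / N) - 1.0))

def enclosed_mass(r_max, n_grid=200000):
    """M(<r_max) [M_sun] by trapezoidal integration over spherical shells."""
    r = np.linspace(1e-4, r_max, n_grid)
    integrand = 4.0 * np.pi * r**2 * einasto_density(r)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

M200 = enclosed_mass(213.0)    # R_200 = 213 kpc from the table
```

The integration recovers the tabulated virial mass to within a few per cent, confirming that the listed $\rho_c$, $r_c$, $M_{200}$, and $R_{200}$ are mutually consistent.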
As shown in Sect.~\ref{sect:mod}, the stellar masses yielded by different stellar population synthesis models vary by a factor of two. We first considered two cases: the lowest-mass model with the B07 mass estimates and the highest-mass model with the other mass estimates. In the latter case, however, the stellar mass becomes too high, raising the model rotation curve above the observed values at distances 10--20~kpc from the centre even without the inclusion of a dark matter component (a DM halo would still be needed though to gain a match with the outer rotation curve observations). Thus we had to abandon the idea of determining the upper limits of the masses of the stellar components by using stellar population synthesis models.
In cases when stellar masses cannot be estimated independently, the maximum-disc approach is often followed, i.e. first a pure disc is fitted to the observed rotation curve with as high mass as possible and the other components are added thereafter in the required proportions. In the case of M\,31, the disc mass-to-light ratio is very close to the bulge one because of their similar ages and metallicities. Therefore, instead of a maximum-disc approach, we applied a maximum-stellar model, conserving the relative values of the mass-to-light ratios of the stellar components as determined with stellar population synthesis models, but multiplying them by a common constant. Without the inclusion of a DM halo, the rotation curve allowed the multiplication of the B07 stellar masses by a factor of 1.5 at maximum. Henceforth, we are using the corresponding model (together with a minimally required DM halo) as an upper limit of the stellar masses and refer to it as the maximum-stellar model.
Parameters of the best-fitting DM models, corresponding to the different distribution functions and the B07 stellar mass estimates are presented in Table~\ref{table:dark_comp}. For the Einasto DM distribution, parameters corresponding to the maximum-stellar model are also given.
The upper panel of Fig.~\ref{fig:rc} presents the observed rotation curve, over-plotted with the curve derived from the B07 stellar masses and the Einasto DM halo. Contributions of each stellar component and the DM halo are also shown. In the lower panel, model rotation curves for all the four DM models are presented. It is seen that within the observed range of the rotation curve, differences between different DM profiles are negligible. From 7\,kpc inwards along the major axis, outside the range of observations, the model with the Burkert DM profile (and to a lesser extent, also the model with the NFW DM profile) starts to deviate from the other models. In Fig.~\ref{fig:tot_mass}, the outer rotation curve (upper panel) and the corresponding enclosed mass (lower panel) are shown together with the model curves. Again, all the DM distribution models match the observations within the errorbars, except for the Burkert distribution, which produces a slightly lighter DM halo, missing a few outer datapoints.
\begin{figure}
\includegraphics[width=88mm]{figure6.eps}
\caption{Outer rotation curve observations and models (\emph{upper panel}), calculated for the B07 stellar masses, and the corresponding cumulative mass (\emph{lower panel}). With the exception of the Burkert distribution, all DM models fit well to the observations.}
\label{fig:tot_mass}
\end{figure}
\begin{figure}
\includegraphics[width=88mm]{figure7.eps}
\caption{The observed rotation curve together with the maximum-stellar model, in which the stellar masses are 1.5 times higher than in the B07 model.}
\label{fig:rc_stellar}
\end{figure}
The model rotation curves for the maximum-stellar model are shown in Fig.~\ref{fig:rc_stellar}. Now the fit is somewhat worse than for the B07 model, especially at the innermost observational datapoints, confirming that the maximum-stellar model indeed provides firm upper limits for the stellar masses.
As shown above, it is not possible to prefer any of the given DM distribution models on the basis of the data on M\,31. Furthermore, in each case, the derived characteristic radii and densities of the DM haloes are very degenerate, as indicated in Fig.~\ref{fig:dm_params}: a significant increase of the characteristic radius can easily be compensated by lowering the characteristic density and vice versa. In this plot, the virial mass $M_\mathrm{200}$, defined as the mass within a sphere of mean density 200 times the cosmological critical density\footnote{Here the critical density is calculated for the Hubble constant value $H_0=71\,$km\,s$^{-1}$Mpc$^{-1}$.} \citep[e.g.][]{navarro:10}, is also shown in colour coding for each DM model. Interestingly, despite the uncertainty of the DM density distribution, the virial mass is well constrained, regardless of the chosen DM distribution model. The same statement holds in Fig.~\ref{fig:app_dm_params}, where the Einasto DM halo parameter likelihoods for the two stellar mass models are compared. The virial mass is actually quite firmly established by the outer ``test particles" of the dynamics and is almost independent of the stellar model of the galaxy. For the ``cuspy" DM profiles (Einasto, NFW, Moore), the derived virial mass is $(1.04$--$1.13)\cdot10^{12}M_{\odot}$ and the corresponding virial radius $207$--$213$\,kpc. For the ``cored" Burkert profile, the values are $0.8\cdot10^{12}M_{\odot}$ and $189$\,kpc, respectively. In the case of the Einasto DM distribution, the mean DM density within a sphere extending to 10 pc from the centre is 16--61\,$M_{\odot}\mathrm{\,pc^{-3}}$ (0.6--2.3\,$\mathrm{TeV}/c^2\,\mathrm{cm}^{-3}$).
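The virial quantities quoted here follow directly from the definition of $M_{200}$. As an illustration (with $H_0=71$\,km\,s$^{-1}$\,Mpc$^{-1}$, as in the footnote), one can verify the $M_{200}$--$R_{200}$--$V_{200}$ combinations of Table~\ref{table:dark_comp}:

```python
import numpy as np

G = 4.30091e-6        # gravitational constant [kpc (km/s)^2 / M_sun]
H0 = 71.0 / 1000.0    # Hubble constant [km/s/kpc]
rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G)   # ~1.4e2 M_sun kpc^-3

def virial_radius(M200):
    """R_200 [kpc]: radius of a sphere with mean density 200 * rho_crit."""
    return (3.0 * M200 / (4.0 * np.pi * 200.0 * rho_crit))**(1.0 / 3.0)

def circular_velocity(M, R):
    """Circular velocity [km/s] at radius R [kpc] for enclosed mass M [M_sun]."""
    return np.sqrt(G * M / R)
```

For the Einasto model ($M_{200}=113\cdot10^{10}M_\odot$) this gives $R_{200}\approx213$\,kpc and $V_{200}\approx151$\,km\,s$^{-1}$, and for the Burkert model ($79\cdot10^{10}M_\odot$) $R_{200}\approx189$\,kpc, matching the table.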
\begin{figure}
\includegraphics[width=88mm]{figure8.eps}
\caption{Parameter likelihoods for different DM density distributions in the case of B07 stellar masses. The virial mass corresponding to each parameter combination is shown in colour coding; 90$\cdot10^{10}M_{\odot}$, 110$\cdot10^{10}M_{\odot}$, and 130 $\cdot10^{10}M_{\odot}$ levels are indicated with solid contours. For the Einasto, NFW, and Moore DM profile, the virial mass is almost the same.}
\label{fig:dm_params}
\end{figure}
\begin{figure}
\includegraphics[width=88mm]{figure9.eps}
\caption{Comparison of DM halo parameters for B07 and maximum stellar mass models. In the upper panel, the virial mass corresponding to each parameter combination is shown in colour coding; 90$\cdot10^{10}M_{\odot}$, 110$\cdot10^{10}M_{\odot}$, and 130 $\cdot10^{10}M_{\odot}$ levels are indicated with solid contours. In the lower panel, virial radii are shown in colour coding, with solid contours tracing 200, 210, and 220~kpc.}
\label{fig:app_dm_params}
\end{figure}
\section{Discussion and comparison to previous models}
\begin{table*}
\caption{Comparison of bulge, disc, and DM halo mass estimates. All masses are in $10^{10}M_\odot$.}
\label{table:mass_comp}
\centering
\begin{tabular}{lccc}
\hline\hline
Model & $M_{\mathrm{bulge}}$ & $M_{\mathrm{disc}}$ & $M_{200}$ \\
\hline
\citet{geehan:06}, best-fit (maximum-disc) model & 3.3 & 8.4 (13.7) & 68 (94) \\
\citet{seigar:08}, model without (with) adiabatic contraction & 3.5 (3.5) & 5.8 (7.3) & 73 (89) \\
\citet{chemin:09}, ``hybrid" model & 2.32 & 7.1 & 100 \\
\citet{corbelli:10}, NFW model with $(M/L)_\mathrm{bulge} = (M/L)_\mathrm{disc}$ & 3.8 & 8.8 & 85\tablefootmark{c} \\
This work, B07 model & 4.4\tablefootmark{a} & 5.7\tablefootmark{b} & 113 \\
This work, maximum-stellar model & 6.6\tablefootmark{a} & 8.6\tablefootmark{b} & 127\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{Sum of the bulge and stellar halo masses.}
\tablefoottext{b}{Sum of the disc and young disc masses.}
\tablefoottext{c}{Recalculated from $M_{98}$ in the original paper.}
}
\end{table*}
\begin{figure}
\includegraphics[width=88mm]{figur10.eps}
\caption{Average DM density inside a given radius, corresponding to different DM distributions in the case of the B07 stellar masses. For the Einasto DM distribution, also the maximum-stellar-mass case is plotted. For comparison, central densities of some nearby dwarf galaxies and low-surface-brightness galaxies are shown. The triangular/quadrangular datapoints are calculated assuming the Burkert DM, the circular datapoints correspond to the NFW DM.}
\label{fig:dm_cumul}
\end{figure}
Let us compare our mass models of M\,31 to some other recently constructed models. In Table~\ref{table:mass_comp}, mass estimates suggested by our models for the bulge, disc, and DM halo are compared to the estimates by \citet{geehan:06}, \citet{seigar:08}, \citet{chemin:09}, and \citet{corbelli:10}. For a better understanding of the compatibility of these models, we will briefly summarise the principal properties and differences of these models below.
The model derived by \citet{geehan:06} consists of a central supermassive black hole, a bulge, a disc, and a DM halo. Its stellar components are determined using various luminosity measurements in $V$, $R$, and $r$ filters out to the (projected) distance of 25\,kpc along the major axis, and the kinematics is based on a composite rotation curve. Mass-to-light ratios of the stellar components are treated as free parameters. The total mass is constrained by data on the motions of outer planetary nebulae, globular clusters and satellite galaxies. In Table~\ref{table:mass_comp}, \citet{geehan:06} values for the best-fit model and the maximum-disc model (in brackets) are presented.
\citet{seigar:08} constructed a similar black hole\,+\,bulge\,+\,disc\,+\,DM halo model. They used the Spitzer 3.6-micron imaging data and \mbox{$B\!-\!R$} colour profile to determine the mass-to-light ratios; dynamical mass estimators were the same or similar as in \citet{geehan:06}. In addition to the usual DM profiles, \citet{seigar:08} considered the case of a dark halo that has undergone an adiabatic contraction due to the gravitational attraction of the baryonic material. In Table~\ref{table:mass_comp}, the model with adiabatic contraction is presented in brackets; it should not be compared directly to the other models.
\citet{chemin:09} constructed a black hole\,+\,bulge\,+\,disc\,+\,gas model, using their newer \ion{H}{i} data for constraining the kinematics. For a more accurate description of the disc potential, the disc density distribution was taken as the residual of the surface brightness distribution after the subtraction of the bulge contribution. The small contribution of gas mass was considered on the basis of $\mathrm{H}_2$ and \ion{H}{i} surveys. In Table~\ref{table:mass_comp}, the ``hybrid" model (with the bulge mass determined from stellar velocity dispersions and the disc mass from simple stellar population models) values of \citet{chemin:09} are presented.
\citet{corbelli:10} used a bulge\,+\,disc\,+\,gas model together with an optional Burkert/NFW DM halo to fit their \ion{H}{i} kinematics data and outer dynamics estimators. The best-fit model with equal bulge and disc mass-to-light ratios and the NFW dark halo is shown in Table~\ref{table:mass_comp}. We have rescaled the virial mass $M_{98}$ given by \citet{corbelli:10} to $M_{200}$.
In contrast to these four works, our stellar model is based on fully 2-dimensional dust-corrected imaging through 6 filters and some more recent dynamical mass estimators. Instead of the central black hole, our model includes the nucleus of the galaxy, which contributes significantly more to the total mass and the model rotation curve of the galaxy than the black hole would. Nevertheless, this contribution is tiny and has a negligible effect on the other model parameters, as does our usage of an oblate bulge \citep[with an axial ratio 0.8;][]{tempel:11} instead of a spherical one. In Table~\ref{table:mass_comp}, our B07 model and the maximum-stellar model results are given. The actual values probably lie between the estimates of these two models.
Table~\ref{table:mass_comp} shows that the bulge mass suggested by our models is somewhat higher than in previous models, probably resulting from different bulge parameters but also because we have used a larger set of observational data to constrain the stellar populations. Nevertheless, the bulge dominates the total gravitational potential only up to the radius 6--8\,kpc and bulge properties have little effect on the DM halo parameters.
As can be seen from Table~\ref{table:mass_comp}, the DM halo virial mass estimates have previously remained mostly below 1$\cdot10^{12}M_\odot$, whereas our models suggest slightly higher values, (1.0--1.3)\,$\cdot10^{12}M_\odot$. Once again, the most likely source of differences is our usage of a larger collection of observational data on the outer dynamics. As shown in Figs.~\ref{fig:dm_params} and~\ref{fig:app_dm_params}, the virial mass is almost independent of the DM density profile and the stellar mass model, being uniquely determined by the outer dynamics of the galaxy.
In Fig.~\ref{fig:dm_cumul}, the distribution of the average DM halo density within a given radius is shown for each DM density distribution model. The added datapoints provide an illustrative comparison with DM haloes of other galaxies, for which the average central density has been recently measured more or less reliably: nearby dwarf galaxies \citep{Gilmore:07, Gentile:07a, Gentile:07b, Oh:11, Adams:12, Breddels:12} and low-surface-brightness galaxies \citep{Coccato:08, deBlok:08, KuziodeNaray:08}. Some of these values are calculated for the modified-isothermal DM distribution, others for the NFW distribution. In several cases, both versions are presented. These datapoints should be compared to the Burkert and NFW profiles of M\,31, respectively. It is seen that the central density of DM haloes varies by a couple of orders of magnitude and despite its higher total mass, the DM halo of M\,31 cannot be distinguished from an average dwarf or low-surface-brightness galaxy in this aspect. Interestingly, the estimate of the central density (0.012--0.028)$\cdot M_{\odot}\mathrm{pc^{-3}}$ of DM haloes of massive disc galaxies near redshift $z\simeq0.9$ \citep{tamm:05} also falls within this range, hinting that the DM halo concentration process seems to be restricted rather uniformly over a very wide variety of halo masses, environments, and cosmological epochs.
To conclude our work, it is interesting and also disappointing to note that the usage of additional observational data does not significantly reduce the uncertainties and scatter of the parameters of M\,31 mass distribution models. Our vague understanding of the evolution of the physical properties of stellar populations restrains the gain from all the gathered observational information on the chemical content and formation history of a galaxy. Despite the improved imaging and kinematics data, we are still unable to confirm or rule out the maximum-disc or maximum-baryonic approach in splitting the contributions of luminous and dark matter to the overall mass distribution. It can only be concluded that the bulge mass of M\,31 probably lies within the range (4.4--6.6)\,$\cdot10^{10}M_\odot$ and the disc mass within the range (5.7--8.6)\,$\cdot10^{10}M_\odot$. Nevertheless, M\,31 provides an exceptional opportunity to estimate the virial mass and the outer distribution of a DM halo thanks to the possibility of tracing the gravitational potential with various test bodies at large distances from the galactic centre.
\section*{Acknowledgments}
We are grateful to the anonymous referee for carefully reading the manuscript and for making useful suggestions for improving it. We acknowledge the financial support from the Estonian Science Foundation (incl. the grant MJD272) and the Ministry of Education and Research.
This research has made use of the NASA/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. All the figures were made with the gnuplot plotting utility.
\section{Introduction}
Although inflation seems to be inevitable in cosmology to resolve
the homogeneity and flatness problems, it is highly nontrivial to
realize the idea in the scalar field theory framework
\cite{review}. This is because a small scalar mass is perturbatively
unstable, while a heavy inflaton mass term destroys the slow-roll
conditions needed for sufficient e-folds ($\gtrsim 50$--60).
Supersymmetry (SUSY) is helpful for keeping the inflaton mass small
against quantum corrections, but only up to the
Hubble scale. The supergravity (SUGRA) correction in de Sitter space
usually induces a Hubble scale mass term of the inflaton at tree
level even with the minimal K${\rm \ddot{a}}$hler potential,
unless the model is carefully constructed. It is called the
``$\eta$ problem.''
One of the promising models, which potentially avoids the $\eta$
problem, is the SUSY hybrid inflation model
\cite{FtermInf2,FtermInf}. In that model, the superpotential is
dominated by $W=\kappa SM^2$ during the inflation era, where $S$
denotes the inflaton superfield, and $\kappa$ and $M$ are
dimensionless and dimensionful parameters, respectively. $M$ turns
out to be associated with a symmetry breaking scale. With this
superpotential, the SUGRA correction does not induce the dangerous
Hubble scale inflaton mass term ($3H^2|S|^2$) in the scalar
potential, if the K${\rm \ddot{a}}$hler potential is given by the
minimal form ($K=|S|^2$) \cite{FtermInf2,SUGRAcorr}: Such a mass
term is accidentally cancelled out at tree level in this model.
However, a quartic term with the dimensionless coefficient of
order unity in the K${\rm \ddot{a}}$hler potential, $K\supset
c_1|S|^4/4M_P^2$, where $M_P$ is the reduced Planck mass ($\approx
2.4\times 10^{18}$ GeV), generates the unwanted inflaton mass term
in the scalar potential. Actually, only the quartic term in the
K${\rm \ddot{a}}$hler potential is dangerous, while higher order
terms with coefficients of order unity are all harmless, because
$|S|\lesssim M_P$. Therefore, only the coefficient $c_1$ should be
assumed to be adequately suppressed ($\lesssim 10^{-2}$). This
naive assumption needs to be justified by a UV theory or a quantum
gravity theory in the future.
A remarkable feature in the SUSY hybrid inflation model is that
the CMB anisotropy $\delta T/T$ is proportional to $(M/M_P)^2$
\cite{FtermInf}. Thus, the observational data of $\delta T/T\sim
10^{-5}$ \cite{WMAP5} determine the spontaneous symmetry breaking
scale: $M\approx 10^{15-16}$ GeV, which is tantalizingly close to
the scale of SUSY grand unified theory (GUT) \cite{FtermInf}. As
a result, the SUSY hybrid inflation model can be embedded in the
models of SUSY GUT. Indeed, this idea has been combined with the
particle physics models of
SU(3)$_c\times$SU(2)$_L\times$SU(2)$_R\times$U(1)$_{B-L}$
\cite{3221}, SU(4)$_c\times$SU(2)$_L\times$SU(2)$_R$ \cite{422},
SU(5)$\times$U(1)$_X$ \cite{FlippedSU(5)}, and SO(10)
\cite{SO(10)}. In those models, $M$ is interpreted as the
U(1)$_{B-L}$ breaking scale.
In the SUSY hybrid inflation model, the scalar spectral index is
predicted:
\begin{eqnarray} \label{spectral}
n_s\approx 1+2\eta\approx 1-\frac{1}{N_l}\approx 0.98 ,
\end{eqnarray}
where $\eta$ ($\equiv M_P^2V''/V$) is the slow-roll parameter, and
$N_l$ denotes the e-folding number ($=50$--$60$). On the other
hand, the recent WMAP 5-year (WMAP5) observation result on the
scalar spectral index is $n_s=0.963^{+0.014}_{-0.015}$
\cite{WMAP5}. Thus, the prediction of the SUSY hybrid inflation
model, $n_s\approx 0.98$, deviates considerably from the central value
of the WMAP5 result. Indeed, unless relatively large SUGRA
corrections are included or the model is substantially modified, the
deviation cannot be easily overcome.
Previous studies attempted to explain the deviation by considering
a small (but relatively larger) quartic term
\cite{hilltop,nonminimal-K} and/or a higher order term
\cite{quartic-term} in the K${\rm \ddot{a}}$hler potential, or a
very small soft SUSY breaking ``A-term'' \cite{a-term} in the
scalar potential. In this letter, we propose another way, in
which the superpotential plays an essential role in explaining
$n_s\approx 0.963$.
\section{Modification of Hybrid Inflation}
The SUSY hybrid inflation model is defined by the following
superpotential \cite{FtermInf2,FtermInf};
\begin{eqnarray} \label{original}
W=\kappa S(M^2-\phi\overline{\phi}) ,
\end{eqnarray}
where $\phi$ and $\overline{\phi}$ are a conjugate pair of
superfields carrying gauge and/or global charges. At the SUSY
minimum, $S=0$ and $|\phi|=|\overline{\phi}|=M$ by including the
D-term potential, breaking a symmetry by the non-zero vacuum
expectation values (VEVs) of $\phi$ and $\overline{\phi}$.
Inflation starts when the inflaton $S$ is far away from the
minimum, $S\gtrsim M$. Then, the complex scalars $\phi$ and
$\overline{\phi}$ achieve heavy masses, by which
$\phi=\overline{\phi}=0$ during inflation. It is the quasi-stable
point. Thus, the superpotential is dominated by $W=\kappa SM^2$
during inflation. It provides the positive constant vacuum energy
density $\kappa^2 M^4$, which gives rise to inflation. As
mentioned above, the superpotential $W=\kappa SM^2$ and the
minimal K${\rm \ddot{a}}$hler potential do not raise the ``$\eta$
problem.'' Since the higher order terms of the singlet $S$ in the
superpotential destroy the slow-roll conditions, they should be
forbidden by introducing the U(1)$_R$ symmetry. The triggering
condition for inflation, $S\gtrsim M$, would be possible if the
universe was hot enough before inflation was initiated.
Because of the positive vacuum energy, SUSY is broken and so the
constant scalar potential is quantum mechanically corrected.
Neglecting the SUGRA corrections, the scalar potential is thus
given by \cite{FtermInf}
\begin{eqnarray} \label{CWpot}
V\approx \kappa^2M^4\left(1+\frac{\kappa^2}{16\pi^2}~{\rm
log}\frac{\kappa^2|S|^2}{\Lambda^2}\right) ,
\end{eqnarray}
where the logarithmic term denotes the quantum correction when
$S\gtrsim M$, and $\Lambda$ is the renormalization scale. It
makes a small slope in the potential, leading the inflaton to the
SUSY minimum. As shown in Eq.~(\ref{spectral}), however, the
scalar potential Eq.~(\ref{CWpot}) yields $n_s\approx 0.98$,
unless it is somehow modified.
Let us consider the following form of the modified inflaton
potential;
\begin{eqnarray} \label{toypot}
V=\mu^4\left(1+\alpha ~{\rm
log}~\varphi+\frac{\delta}{2}\varphi^2\right) ,
\end{eqnarray}
where $\mu^4$ is the positive vacuum energy density leading to
inflation. The dimensionless field $\varphi$ denotes an inflaton
scalar defined as $S/M_P$. The logarithmic term arises from the
quantum correction caused by SUSY breaking \cite{FtermInf}.
Comparison with Eq.~(\ref{CWpot}) yields the relations,
$\mu^4=\kappa^2M^4$ and $\alpha=\kappa^2/8\pi^2$. In
Eq.~(\ref{toypot}), the inflaton's mass term is introduced:
$V\supset (\delta/2)\mu^4\varphi^2=(3\delta/2)H^2S^2$, where $H$
($=\sqrt{\mu^4/3M_P^2}$) is the Hubble constant during inflation.
For successful inflation, thus, the dimensionless coupling
$\delta$ should be small enough, $|\delta|\ll 1$.
The slow-roll parameter $\epsilon$ is still much smaller than
$|\eta|$. It is basically because $\varphi$ is assumed to be
smaller than 1. In the presence of the mass term in
Eq.~(\ref{toypot}), the expressions of the e-folding number and
``$\eta$'' are given by \cite{hilltop}
\begin{eqnarray} \label{N,eta}
N_l=\frac{1}{2\delta}~{\rm
log}\left(1+\frac{\delta}{\alpha}\varphi^2\right) , \quad {\rm
and} \quad\quad \eta=\delta \times\frac{e^{2\delta
N_l}-2}{e^{2\delta N_l}-1} .
\end{eqnarray}
We note that in the limit $\delta\rightarrow 0$, the expressions
for $N_l$ and $\eta$ become $\varphi^2/2\alpha$ and $-1/2N_l$,
respectively, which are the expressions given in the original form
of the SUSY hybrid inflation model. In Eq.~(\ref{N,eta}), the
limit $\alpha\rightarrow 0$ does not make sense. It means that
the logarithmic quantum correction makes an important contribution
to $N_l$.
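These limits are easy to verify numerically from Eq.~(\ref{N,eta}); a minimal sketch (the values of $\kappa$ and $\varphi$ below are chosen for illustration only):

```python
import numpy as np

def efolds(phi, alpha, delta):
    """E-folding number N_l from Eq. (N,eta)."""
    return np.log(1.0 + delta / alpha * phi**2) / (2.0 * delta)

def eta(delta, N_l):
    """Slow-roll parameter eta from Eq. (N,eta)."""
    x = np.exp(2.0 * delta * N_l)
    return delta * (x - 2.0) / (x - 1.0)
```

For $\delta\to 0$ these reproduce $N_l\to\varphi^2/2\alpha$ and $\eta\to-1/2N_l$ of the original hybrid inflation model.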
With the help of the inflaton mass term, the scalar spectral index
can be compatible with the central value of WMAP5:
\begin{eqnarray} \label{ns}
n_s ~ \approx ~ 1+2\eta ~ \approx ~ 0.963 \quad\quad{\rm for}\quad
\frac{\delta}{2} = -3.0\times 10^{-3} .
\end{eqnarray}
Here we set $N_l=55$, but $n_s$ is quite insensitive to large
$N_l$s. Since the sign of the quadratic mass term in
Eq.~(\ref{toypot}) is negative, the potential is convex-upward. If
the inflaton starts at a point with $V'>0$, i.e.
$\alpha+\delta\cdot\varphi^2>0$, it can eventually roll down
to the origin. This is fulfilled for $\kappa\gtrsim
5\times 10^{-2}$ ($5\times 10^{-3}$) and $\varphi\sim 0.1$
($0.01$). Actually, inflation would take place near the local
maximum, ``hilltop'' \cite{hilltop}, unless $\varphi \ll 1$.
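The numbers quoted above can be checked directly from Eq.~(\ref{N,eta}); a minimal sketch with $N_l=55$:

```python
import numpy as np

def spectral_index(delta, N_l=55.0):
    """n_s ~ 1 + 2*eta, with eta from Eq. (N,eta)."""
    x = np.exp(2.0 * delta * N_l)
    return 1.0 + 2.0 * delta * (x - 2.0) / (x - 1.0)

def kappa_min(phi, delta):
    """Smallest kappa keeping V' > 0 at phi, i.e. alpha > |delta| phi^2."""
    return phi * np.sqrt(8.0 * np.pi**2 * abs(delta))

n_s = spectral_index(-6.0e-3)   # delta/2 = -3.0e-3 gives n_s ~ 0.963
```

The rolling condition $\alpha+\delta\varphi^2>0$ translates into $\kappa>\varphi\sqrt{8\pi^2|\delta|}$, reproducing the order-of-magnitude thresholds $\kappa\gtrsim5\times10^{-2}$ ($5\times10^{-3}$) for $\varphi\sim0.1$ ($0.01$).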
The curvature perturbation is estimated as
\begin{eqnarray}
{\cal P}^{1/2}_{\cal R}= \frac{1}{\sqrt{12}\pi M_{P}^3}~
\frac{V^{3/2}}{V'} \approx \left(\frac{M}{M_{P}}\right)^2
\sqrt{\frac{2|1-e^{2\delta N_l}|}{3 |\delta|~e^{4\delta N_l}}} ,
\end{eqnarray}
where we set $\mu^2=\kappa M^2$.
For $N_l=55$, $\delta=-6.0\times 10^{-3}$, and ${\cal
P}^{1/2}_{\cal R}\approx 4.91\times 10^{-5}$ \cite{WMAP5}, $M$ is
approximately $4.5\times 10^{15}$ GeV, which is slightly lower
than that in the case of $\delta=0$ ($5.7\times 10^{15}$ GeV).
If $10^{12}$ GeV $\lesssim M\lesssim 10^{15}$ GeV, the curvature
perturbation should be supplemented by another scalar field,
``curvaton'' \cite{curvaton}. Then, inflation does not have to occur
near the local maximum, since the room between $M\lesssim S$ and
$S\ll M_P$ (or $\varphi \ll 1$) can be much larger. If $M\ll
10^{15}$ GeV, however, the inflationary scenario cannot be
embedded in a SUSY GUT model any more.
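The values $M\approx4.5\times10^{15}$ GeV (and $5.7\times10^{15}$ GeV for $\delta\to0$) quoted above follow from inverting the curvature-perturbation formula; a numerical check with the reduced Planck mass $M_P\approx2.4\times10^{18}$ GeV:

```python
import numpy as np

M_P = 2.4e18            # reduced Planck mass [GeV]
P_R_sqrt = 4.91e-5      # curvature perturbation amplitude P^(1/2)_R (WMAP5)

def breaking_scale(delta, N_l=55.0):
    """Scale M [GeV] from P^(1/2)_R = (M/M_P)^2 * f(delta, N_l)."""
    x = np.exp(2.0 * delta * N_l)
    f = np.sqrt(2.0 * abs(1.0 - x) / (3.0 * abs(delta) * x**2))
    return M_P * np.sqrt(P_R_sqrt / f)

M_infl = breaking_scale(-6.0e-3)   # ~4.5e15 GeV for delta = -6.0e-3
```

A very small $|\delta|$ recovers the original hybrid-inflation value of about $5.7\times10^{15}$ GeV.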
With the scalar potential Eq.~(\ref{toypot}), the fraction of the
tensor perturbation is unlikely to be detectable in the near
future.
\section{Twinflation}
To explain the small negative inflaton mass squared, let us
introduce one more inflaton, $S'$. It carries the same quantum
number as $S$, but has a mass different from that of $S$. In the
presence of the twin inflaton fields $\{S,S'\}$ and two pairs of
the waterfall fields $\{\phi_1,~\overline{\phi}_1\}$,
$\{\phi_2,~\overline{\phi}_2\}$, the general superpotential takes
the following form:
\begin{eqnarray} \label{superPot}
W= S\left(\kappa_1M^2-\kappa_1\phi_1\overline{\phi}_1
-\kappa_2\phi_2\overline{\phi}_2\right)
+S^\prime\left(\kappa_2^\prime M^{\prime
2}-\kappa_1^\prime\phi_1\overline{\phi}_1
-\kappa_2^\prime\phi_2\overline{\phi}_2 \right) ,
\end{eqnarray}
where we assign the U(1)$_R$ charges of 2 (0) to $S$ and
$S^\prime$ ($\phi_{1,2}$ and $\overline{\phi}_{1,2}$) such that
the higher power terms of $S$ and $S^\prime$ are forbidden
\cite{Kim}. The different coupling constants $\kappa_{1,2}$ and
$\kappa_{1,2}'$ distinguish $S$ and $S'$. We assume that
$\kappa_{1,2}^{(\prime)}$ and $M^{(\prime) 2}$ are real quantities
for simplicity. At the SUSY minimum, $\phi_{1,2}$ and
$\overline{\phi}_{1,2}$ achieve heavy masses as well as the VEVs,
satisfying $\phi_1^*=\overline{\phi}_1$ ($\neq 0$),
$\phi_2^*=\overline{\phi}_2$ ($\neq 0$), and
$\kappa_1M^2-\kappa_1\phi_1\overline{\phi}_1
-\kappa_2\phi_2\overline{\phi}_2=\kappa_2^\prime M^{\prime
2}-\kappa_1^\prime\phi_1\overline{\phi}_1
-\kappa_2^\prime\phi_2\overline{\phi}_2=0$. Hence, both $S$ and
$S^\prime$ also get the heavy masses, and so $S=S^\prime =0$.
Similar to the original SUSY hybrid inflationary scenario,
inflation in this model can be initiated at a quasi-stable point
of
\begin{eqnarray} \label{initialCondi}
\Bigg \{
\begin{array}{l}
~(\kappa_1S+\kappa_1^\prime S^\prime )^2 ~\gtrsim ~
|\kappa_1^2M^2+\kappa_1^\prime\kappa_2^\prime M^{\prime 2}| ~,
\quad {\rm and}
\\
~(\kappa_2S+\kappa_2^\prime S^\prime )^2~\gtrsim ~
|\kappa_2^{\prime 2}M^{\prime 2}+\kappa_1\kappa_2 M^{2}| ~,
\end{array}
\end{eqnarray}
for which the tree level scalar potential is minimized at $\phi
_1=\overline{\phi}_1=\phi _2=\overline{\phi}_2=0$.
The left (right) hand sides of Eq.~(\ref{initialCondi}) come from
the (off-) diagonal components of the mass matrices for
$(\phi_1,\overline{\phi}_1^*)$ and $(\phi_2,\overline{\phi}_2^*)$.
Thus, inflation is described by the following effective
superpotential:
\begin{eqnarray} \label{effsuperpot}
W=(\kappa S+\kappa'S')M^2 ,
\end{eqnarray}
where we redefined $\kappa$ and $\kappa^\prime$ as
\begin{eqnarray}
\kappa\equiv\kappa_1 \quad {\rm and}\quad \kappa^\prime\equiv
\kappa_2^\prime\left(\frac{M^{\prime 2}}{M^2}\right) .
\end{eqnarray}
We will assume a mild hierarchy between $\kappa$ and $\kappa'$,
i.e. $\kappa'/\kappa =(\kappa_2'M^{\prime 2}/\kappa_1M^2)\lesssim
{\cal O}(1)$, and $M\approx 4.5\times 10^{15}$ GeV. With
Eq.~(\ref{effsuperpot}), we obtain again the constant vacuum
energy at tree level, breaking SUSY. So the logarithmic quantum
corrections will be generated in the scalar potential as in the
single inflaton case.
The K${\rm \ddot{a}}$hler potential is expanded in powers of
$S^{(\prime)}/M_P$ ($\lesssim 1$) up to the quartic terms as
\begin{eqnarray} \label{Kahlerpot}
K=|S|^2+|S^\prime|^2+c_1\frac{|S|^4}{4M_P^2}+c_1^\prime\frac{
|S^\prime|^4}{4M_P^2} +c_2\frac{|S|^2|S'|^2}{M_P^2}
+\frac{c_3|S|^2+c_3^\prime |S^\prime|^2}{2M_P^2}(SS^{\prime
*}+S^*S^\prime) ,
\end{eqnarray}
where $c_i^{(\prime)}$ ($i=1,2,3$) are dimensionless coefficients.
Quartic-term coefficients of order unity in the K${\rm
\ddot{a}}$hler potential would spoil the slow-roll conditions for
inflation. In the original version of the SUSY hybrid
inflation model, as mentioned in Introduction, the quartic term
coefficient $c_1$ in the K${\rm \ddot{a}}$hler potential,
$K=|S|^2+c_1|S|^4/4M_P^2+\cdots$, is assumed to be suppressed
($\lesssim 10^{-3} $) in order to satisfy Eq.~(\ref{ns}) as well
as the slow roll condition. Along the line of it, we also assume
one fine-tuned relation among the parameters of the K${\rm
\ddot{a}}$hler potential, $c_1$, $c_2$, and $c_3$:
\begin{eqnarray} \label{tuning}
c_1+\frac{c_3^2}{1-c_2} ~\lesssim ~ {\cal O}(10^{-3}) .
\end{eqnarray}
It can be satisfied, e.g. if they all are of order unity or
smaller, but related to each other by $c_1\approx
{-c_3^2}/{(1-c_2)}$ [Case ${\rm{\bf (A)}}$], or if they
(particularly $c_1$ and $c_3$) are sufficiently suppressed, $c_1$,
${c_3^2}/{(1-c_2)}\lesssim {\cal O}(10^{-3})$ [Case ${\rm{\bf
(B)}}$].
With Eqs.~(\ref{effsuperpot}) and (\ref{Kahlerpot}), the
corrections coming from the scalar potential in SUGRA,
\begin{eqnarray} \label{sugrapot}
V_F=e^{K/M_P^2}\left[K_{ij^*}^{-1}D_iW(D_jW)^*-\frac{3}{M_P^2}|W|^2\right]
\end{eqnarray}
can be estimated. In our case, $i,j=\{S,S'\}$. $K_{ij^*}^{-1}$ and
$D_iW$ stand for the inverse K${\rm \ddot{a}}$hler metric and the
covariant derivative of the superpotential, respectively. Up to
quadratic order, their components are approximately given by
\begin{eqnarray}
&&\quad\quad\quad\quad~~ K_{SS^*}^{-1}\approx
1+\frac{c_3^2|S|^2}{(1-c_2)M_P^2}-c_2\frac{|S^\prime|^2}{M_P^2}
-\frac{c_3}{M_P^2}(SS^{\prime *}+S^*S^\prime) ~, \label{Kss}
\\
&&\quad\quad\quad\quad\quad~ K_{S^\prime S^{\prime *}}^{-1}\approx
1-c_1^\prime\frac{|S^\prime|^2}{M_P^2}
-c_2\frac{|S|^2}{M_P^2}-\frac{c_3^\prime}{M_P^2}(SS^{\prime
*}+S^*S^\prime) ~,
\\
&&K_{SS^{\prime *}}^{-1}\approx -c_2\frac{S^\prime S^*}{M_P^2}
-c_3 \frac{|S|^2}{M_P^2} -c_3^\prime \frac{|S^\prime|^2}{M_P^2}
~,\quad
K_{S^{\prime}S^*}^{-1}\approx -c_2\frac{S S^{\prime *}}{M_P^2}
-c_3 \frac{|S|^2}{M_P^2}-c_3^\prime \frac{|S^\prime|^2}{M_P^2} ~,
\\
&&D_SW\approx M^2\left(\kappa+\kappa\frac{|S|^2}{M_P^2}+
{\kappa'}\frac{S^\prime S^*}{M_P^2}\right) ~, \quad
D_{S^\prime}W\approx M^2\left(\kappa^\prime
+\kappa^\prime\frac{|S^\prime|^2}{M_P^2}+ {\kappa}\frac{S
S^{\prime *}}{M_P^2}\right) ~.
\end{eqnarray}
In Eq.~(\ref{Kss}), we inserted Eq.~(\ref{tuning}). The scalar
potential Eq.~(\ref{sugrapot}) is, thus, estimated as
\begin{eqnarray}
V_F&\approx&
\kappa^2M^4\bigg\{1+\frac{c_3^2}{1-c_2}|x|^2+(1-c_2)|y|^2-c_3(x^*y+xy^*)
\bigg\}\nonumber \\
&&+{\kappa^{\prime 2}}M^4\bigg\{1+(1-c_2)|x|^2-c_1^\prime
|y|^2-c_3^\prime (x^*y+xy^*)\bigg\}
\nonumber \\
&&-{\kappa^\prime}{\kappa}M^4 \bigg\{(1+c_2)(x^*y+xy^*)+2c_3
|x|^2+2c_3^\prime |y|^2\bigg\}
\nonumber\\
&=& (\kappa^2+\kappa^{'2})M^4+\kappa^2M^4~(x^*~y^*){\cal M} (x ~
y)^T ,
\end{eqnarray}
where $x\equiv S/M_P$, $y\equiv S'/M_P$, and the mass matrix
${\cal M}$ is given by
\begin{eqnarray}
{\cal M}=\left[
\begin{array}{cc}
\frac{c_3^2}{1-c_2}-\frac{\kappa'}{\kappa}\{2c_3
-\frac{\kappa'}{\kappa}(1-c_2)\} ~~&~~
-c_3-\frac{\kappa'}{\kappa}\left\{(1+c_2)+\frac{\kappa'}{\kappa}c_3^\prime\right\}
\\
-c_3-\frac{\kappa'}{\kappa}\left\{(1+c_2)+\frac{\kappa'}{\kappa}c_3^\prime\right\}
~~&~~ (1-c_2)-\frac{\kappa^{'}}{\kappa}\left\{
2c_3^\prime+\frac{\kappa'}{\kappa}~c_1^\prime\right\}
\end{array} \right] .
\end{eqnarray}
Note that in the absence of the second inflaton's contribution to
the superpotential Eq.~(\ref{effsuperpot}), namely,
$\kappa'/\kappa\rightarrow 0$, one of the mass eigenvalues of
${\cal M}$ is zero. Of course, if the relation
$c_1={-c_3^2}/{(1-c_2)}$ is just slightly relaxed, then the small
negative mass term of Eq.~(\ref{toypot}) can be supported purely
by the K${\rm \ddot{a}}$hler potential. In this letter, however,
we intend to acquire the effect with the help of the
superpotential.
Case ${\rm{\bf (A)}}$: If $c_1={-c_3^2}/{(1-c_2)}$ and
$\kappa'/\kappa \lesssim {\cal O}(1)$, the mass eigenstates and
eigenvalues during inflation are
\begin{eqnarray} \label{Evalues}
\left(
\begin{array}{c}\phi_L
\\
\phi_H
\end{array}\right)
\approx \frac{1}{D^{1/2}}\left[
\begin{array}{cc}
1-c_2 & c_3 \\
-c_3 & 1-c_2
\end{array} \right]
\left(
\begin{array}{c}
x\\
y~\end{array}\right)
\quad {\rm for}\quad
\Bigg\{
\begin{array}{l}
m_L^2 \approx
-\frac{\kappa'}{\kappa}\frac{2c_3(2-2c_2+c_3c_3')}{D} ~,
\\
m_H^2 \approx 1-c_2+\frac{c_3^2}{1-c_2} ~
\end{array}
\end{eqnarray}
where $D\equiv (1-c_2)^2+c_3^2$.
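As a sanity check, the Case ${\rm{\bf (A)}}$ eigenvalues can be compared against a direct diagonalization of the mass matrix ${\cal M}$. The sketch below uses purely illustrative coefficient values ($c_2$, $c_3$, $c_3'$, $c_1'$, and $\kappa'/\kappa$ are our hypothetical choices, not values fixed by the text) and confirms the leading-order expressions in $\kappa'/\kappa$:

```python
import math

# Hypothetical Kahler coefficients and coupling ratio (illustrative values only)
c2, c3, c3p, c1p = 0.3, 0.5, 0.4, 0.2
r = 0.01  # kappa'/kappa
D = (1 - c2) ** 2 + c3 ** 2

# Entries of the mass matrix with c1 = -c3^2/(1 - c2) imposed [Case (A)]
M11 = c3 ** 2 / (1 - c2) - r * (2 * c3 - r * (1 - c2))
M12 = -c3 - r * ((1 + c2) + r * c3p)
M22 = (1 - c2) - r * (2 * c3p + r * c1p)

# Exact eigenvalues of the symmetric 2x2 matrix
tr, det = M11 + M22, M11 * M22 - M12 ** 2
s = math.sqrt(tr ** 2 - 4 * det)
lam_min, lam_max = (tr - s) / 2, (tr + s) / 2

# Leading-order expressions quoted in the text for Case (A)
mL2 = -r * 2 * c3 * (2 - 2 * c2 + c3 * c3p) / D
mH2 = (1 - c2) + c3 ** 2 / (1 - c2)
```

For these values the exact lighter eigenvalue agrees with $m_L^2$ up to ${\cal O}(\kappa'^2/\kappa^2)$ corrections, and it is negative, as claimed.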
Case ${\rm{\bf (B)}}$: If $c_1$, ${c_3^2}/{(1-c_2)} \ll {\cal
O}(1)$, fulfilling Eq.~(\ref{tuning}), and $\kappa'/\kappa
\lesssim {\cal O}(1)$, the mass eigenstates and eigenvalues are
given by
\begin{eqnarray} \label{EvaluesB}
\left(
\begin{array}{c}\phi_L
\\
\phi_H
\end{array}\right)
\approx \left[
\begin{array}{cc}
1 ~&~ \frac{\kappa'}{\kappa}\frac{1+c_2}{1-c_2} \\
-\frac{\kappa'}{\kappa}\frac{1+c_2}{1-c_2} ~&~ 1
\end{array} \right]
\left(
\begin{array}{c}
x\\
y~\end{array}\right)
\quad\quad {\rm for}\quad
\Bigg\{
\begin{array}{l}
m_L^2 ~\approx ~
-\left(\frac{\kappa'}{\kappa}\right)^2\frac{4c_2}{1-c_2} ~,
\\
m_H^2 ~\approx ~ 1-c_2 ~.
\end{array}
\end{eqnarray}
In both cases, the mass squared of the heavier component,
$\phi_H\times M_P$, is of the Hubble scale [$\sim {\cal
O}(\kappa^2M^4/M_P^2)$]. Consequently, it is expected to be stuck
at the origin during inflation, $\phi_H=0$. On the other hand, the
mass squared of the lighter component, $\phi_L\times M_P$, can be
much smaller than the Hubble scale, if $({\kappa'}/{\kappa})c_3$
for Case ${\rm{\bf (A)}}$, or $({\kappa'}/{\kappa})^2c_2$ for Case
${\rm{\bf (B)}}$, is small enough. Therefore, inflation can be
driven only by the lighter mass eigenstate. Moreover, the sign of
$m_L^2$ can be negative, if $c_3$ for Case ${\rm{\bf (A)}}$ [or
$c_2$ for Case ${\rm{\bf (B)}}$] is positive. $\phi_L$ can be
identified with the $\varphi$ of Eq.~(\ref{toypot}), and
$\mu^4=(\kappa^2+\kappa^{'2})M^4$. Quantum correction by the
coupling between $\phi_L$ and $\{\phi_{1,2},
\overline{\phi}_{1,2}\}$ would induce the logarithmic term in
Eq.~(\ref{toypot}), which eventually drives $\phi_L$ to the
origin. $\phi_L=\phi_H=0$ implies $S=S^\prime=0$. As $S$ and
$S^\prime$ approach the origin, Eq.~(\ref{initialCondi}) becomes
violated, and then $\phi_{1,2}$ and $\overline{\phi}_{1,2}$ also
roll down to the absolute minima, developing VEVs. Hence, SUSY is
recovered after inflation terminates.
Identification of $m_L^2$ with $\delta/2$ of Eq.~(\ref{ns}) yields
\begin{eqnarray}
\frac{\delta}{2}\approx -3.0\times 10^{-3} \approx \Bigg\{
\begin{array}{l}
-\frac{\kappa'}{\kappa}~\frac{2c_3(2-2c_2+c_3c_3')}{(1-c_2)^2+c_3^2}
\quad~ {\rm for}\quad {\rm Case}\quad {\rm{\bf (A)}} ,
\\
-\left(\frac{\kappa'}{\kappa}\right)^2\frac{4c_2}{1-c_2}
\quad\quad\quad~~~ {\rm for}\quad {\rm Case}\quad {\rm{\bf (B)}} .
\end{array}
\end{eqnarray}
In Case ${\rm{\bf (A)}}$, hence, ${\kappa'}/{\kappa}=
(\kappa_2^\prime M^{\prime 2})/(\kappa_1M^2)\sim {\cal
O}(10^{-3}-10^{-1})$ fulfills the constraint for $c_3\sim {\cal
O}(1-10^{-2})$. Particularly, if all the quartic terms in the
K${\rm \ddot{a}}$hler potential are suppressed, i.e.
$c_i^{(\prime)}$ ($i=1,2,3$) including $c_3$ are of order
$10^{-2}$ or smaller, $\kappa'/\kappa$ of order $10^{-1}$ is
necessary. In Case ${\rm{\bf (B)}}$, ${\kappa'}/{\kappa}\sim {\cal
O}(10^{-2}-10^{-1})$ satisfies the constraint for $c_2\sim {\cal
O}(1-10^{-1})$. Thus, the mildly hierarchical $\kappa'$ and
$\kappa$ couplings (or $\kappa_2^\prime M^{\prime 2}$ and
$\kappa_1M^2$) can generate the small negative inflaton's mass
squared, explaining $n_s\approx 0.963$.
\section{Conclusion}
In this letter, we proposed a SUSY hybrid inflation model, in
which one more singlet field carrying the same quantum number as
the inflaton is introduced. Inflation is dominated by the
superpotential, $W=(\kappa S+\kappa' S')M^2$, but only one linear
combination of $S$ and $S'$ drives inflation. The smallness of
$\kappa'/\kappa$ [$\sim {\cal O}(10^{-1}$--$10^{-3})$] is
responsible for the small negative mass squared of the inflaton
needed to explain $n_s\approx 0.963$.
\acknowledgments{ \noindent The author is supported by the FPRD of
the BK21 program, and in part by the Korea Research Foundation,
Grant No. KRF-2005-084-C00001. }
\section{Introduction}
The increase in synchrony of global crop production and frequency of climate change-driven abnormal weather events is leading to higher variance in crop yields \cite{ray_climate_2015, iizumi_changes_2016, mehrabi_synchronized_2019}. Most staple food crops are more vulnerable to yield loss in specific stages of growth and as such, accurate crop growth stage estimation (CGSE) is vital to track crop growth at different spatial scales - local, regional, and national - and anticipate and mitigate the effects of variable harvest. High-resolution Remote Sensing (RS) data have been successfully employed to track crop growth at regional scales, however current methods for CGSE utilize curve-fitting and simplistic Machine Learning (ML) models that cannot describe the more complex relationships between crop growth drivers and crop growth stage progress \cite{shen_hidden_2013, zeng_hybrid_2016, seo_improving_2019, diao_remote_2020, ghamghami_parametric_2020}. Many of these methods require full-season data and do not provide in-season CGSE information.
Advanced ML models have found success in applications such as crop-cover mapping/classification and yield estimation \cite{orynbaikyzy_crop_2019, jia_bringing_2019, kerner_resilient_2020, teimouri_novel_2019, Weiss2020}, but these models have yet to be applied to in-season CGSE. Whereas methods such as Neural Networks (NNs) have been used in crop mapping, for which researchers can utilize crop cover maps \cite{service_cropscape_2015} to retrieve millions of crop cover examples per year, field-level crop growth stage (CGS) data is not publicly available, and producing field-scale ground truth data via field studies is prohibitively expensive. As such, large-scale CGSE research relies on local and regional level crop progress data for ground truth. Even for the longest continually running sub-weekly temporal resolution remote sensing (RS) sensors (e.g. MODIS), there are only 21 full growing seasons of crop growth data. Constructing accurate, in-season ML approaches from such limited data is difficult, particularly with few example seasons of abnormal weather. In addition, many crop growth studies estimate events such as `start of season', `peak of season', `end of season', etc. \cite{seo_improving_2019}\cite{diao_remote_2020}, even though these events do not really describe phenological progress and knowledge of their timing may not be actionable.
Recently, domain knowledge has been used to improve the performance of ML techniques in applied research through approaches collectively known as Theory-guided Machine Learning (TgML) \cite{karpatne_theory-guided_2017}\cite{khandelwal_physics_2020}. TgML techniques include the use of physical model outputs \cite{willard_integrating_2020}, the integration of known domain limits into ML loss functions \cite{karpatne_physics-guided_2018}, and the design of NN structures that reflect how variables interact within a real physical system \cite{hu_physics-guided_2020}. These techniques have been shown to reduce the amount of data required to reach a given level of performance \cite{karpatne_theory-guided_2017}. Theory-guided Neural Networks have begun to significantly improve upon current state-of-the-art methods in several applications \cite{rong_lagrangian_2020}\cite{wang_deep_2020}. Whereas NNs have shown great promise in agricultural RS studies (e.g. \cite{kerner_resilient_2020}\cite{teimouri_novel_2019}), TgML methods have yet to utilize the significant disciplinary advances made in agriculture over the last two to three decades. The goal of this study is to understand the impact of incorporating domain knowledge into NN design for in-season CGSE at regional scales. Specifically, the objective of the study is to develop a Domain-guided NN (DgNN) that separates independent growth drivers and to compare its performance to sequential NN structures of equivalent complexity. The TgML approach in this study is demonstrated for regional CGSE in field corn, one of the most cultivated crops in the world \cite{usda_foreign_agricultural_service_world_2021}. The methodology here, when paired with adequate crop mapping techniques, can be extended to track in-season growth of other crops.
\section{Materials and Methods}
\subsection{Study Area and Data}
\label{sec:study_area}
This study was conducted in the state of Iowa, US, from 2003 to 2019. The state consists of nine separate Agricultural Statistical Districts (ASDs) (see Figure~\ref{fig:ASDs}) and had an average of 13.4 million acres of corn under cultivation across the study period \cite{usda_national_agricultural_statistics_service_corn_nodate}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{figs/ASDs.png}
\caption{Agricultural Statistical Districts in Iowa with average corn acreage planted over the study period in million acres (MA) \cite{usda_national_agricultural_statistics_service_corn_nodate}}
\label{fig:ASDs}
\end{figure}
Locations of corn fields within the study region were obtained from the Corn-Soy Data Layer (CSDL) \cite{wang_mapping_2020} for 2003-2007 and from the USDA Crop Data Layer (CDL) for 2008-2019 \cite{usda_ag_data_commons_cropscape_nodate}. In Iowa, corn is typically planted in mid-April / early May (week of year (WOY) 15-24), reaches its reproductive stages around late June (WOY 27 onward), and is harvested from early September through late November (WOY 36-48). Weekly USDA-NASS Crop Progress Reports (CPRs), generated from grower and crop assessor surveys, were used as ground truth. CPR progress stages include Planted, Emerged, Silking, Dough, Dent, Mature, and Harvested. In this study, the Planted stage was replaced with Pre-Emergence, a placeholder progress stage that represents all crop/field states prior to emergence, and the Dough and Dent stages were combined as Grainfill.
CGSE requires both canopy growth information and meteorological data. This study used ASD-wide means and standard deviations of field-level RS and other data shown in Table~\ref{tab:inputs}. Fields in each ASD were selected from the CSDL and CDL based on size and boundary criteria (see Figure~\ref{fig:fpar_proc}).
\begin{figure}[H]
\centering
\includegraphics[width=0.55\linewidth]{figs/fpar_processing.pdf}
\caption{Field selection and processing of MODIS FPAR images.}
\label{fig:fpar_proc}
\end{figure}
Micro-meteorological observations obtained from DayMet were used to compute field-level accumulated growing degree days (AGDD), which is a measure of accumulated temperature required for crop growth. AGDD is used to model progression through different corn growth stages, both in remote sensing studies and in mechanistic models \cite{shen_hidden_2013}\cite{ghamghami_parametric_2020}\cite{lizaso_csm-ixim_2011}. The total number of growing degree days (GDD) for a single 24 hour period is calculated using the function:
\[
GDD = \left\{
\begin{array}{ll}
\cfrac{T_{max} + T_{min}}{2} - T_{base} & T_{min} \geq T_{base} \\
0 & T_{max} < T_{base}\\
\end{array}
\right.
\]
where $T_{max}$ is the lower of the daily maximum temperature and $34^\circ C$, $T_{min}$ is the minimum recorded daily temperature, and $T_{base}$ is the minimum temperature above which GDD is accumulated, set to $8^\circ C$ \cite{hanks_maize_2015}. AGDD is a running total of daily GDD values; in this study it is calculated from April 8th of a given year, the date prior to first planting during the study period. Solar radiation inputs were converted from W/m$^2$ to MJ/m$^2$/week using day length taken from DayMet to incorporate photoperiod information. Saturated hydraulic conductivities and bulk densities at the centroids of fields identified from the CSDL and CDL were combined to obtain ASD-wide means and standard deviations.
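The GDD accumulation described above can be sketched as follows. Note that the piecewise definition leaves the case $T_{min} < T_{base} \leq T_{max}$ unspecified; clamping $T_{min}$ to $T_{base}$ in that case is our assumption, following common agronomic convention:

```python
def daily_gdd(t_max, t_min, t_base=8.0, t_cap=34.0):
    """Growing degree days for one 24-hour period, in degree-C days."""
    t_max = min(t_max, t_cap)      # cap the daily maximum at 34 C
    if t_max < t_base:
        return 0.0                 # no accumulation on cold days
    t_min = max(t_min, t_base)     # assumed convention when t_min < t_base <= t_max
    return (t_max + t_min) / 2.0 - t_base

def agdd(daily_temps):
    """Running total of daily GDD values from the first day (April 8 in the text)."""
    total, series = 0.0, []
    for t_max, t_min in daily_temps:
        total += daily_gdd(t_max, t_min)
        series.append(total)
    return series
```

For example, a day with $T_{max}=30$, $T_{min}=20$ contributes $17.0$ degree-days, and a day with $T_{max}=38$ contributes as if $T_{max}=34$.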
\end{paracol}
\newpage
\begin{specialtable}[H]
\widefigure
\caption{Remote sensing, meteorological, and soil inputs used in this study.}
\vspace{6pt}
\label{tab:inputs}
\scriptsize
\begin{tabular}{@{}llllcllll@{}}
\toprule
Parameter & Source & Wavelength & Spat. Res. & Temp. Res. & Orig. Units & Input Units & Reference \\ \midrule
Solar Radiation & ORNL DayMet & - & 1km & Daily & W/m$^2$ & MJ/m$^2$/week & \cite{thornton_daymet_2020} \\
Temperature & ORNL DayMet & - & 1km & Daily & degrees C & AGDD & \cite{thornton_daymet_2020} \\
Rainfall & ORNL DayMet & - & 1km & Daily & mm / day & mm / week & \cite{thornton_daymet_2020} \\
FPAR & NASA MODIS Aqua / Terra & VIS / NIR & 250m+ & 4-day & - & - & \cite{r_myneni_y_knyazikhin_t_park_mcd15a3h_2015} \\
Soil Hydrologic Conductivity & USDA-NRSC gSSURGO & - & 30m & Constant & $\mu$m/s & $\mu$m/s & \cite{soil_survey_staff_gridded_2020} \\
Soil Bulk Density & USDA-NRSC gSSURGO & - & 30m & Constant & g/cm$^3$ & g/cm$^3$ & \cite{soil_survey_staff_gridded_2020} \\ \bottomrule
\end{tabular}
\end{specialtable}
\begin{paracol}{2}
\linenumbers
\switchcolumn
To measure canopy growth, 4-day MODIS Fraction of absorbed Photosynthetically Active Radiation (FPAR) values for each field within an ASD were filtered to produce daily time series data following the Savitzky-Golay (SG) filter method for NDVI used in \cite{chen_simple_2004}. In this study, the SG filter parameters were \textit{m} = 40, \textit{d} = 1 for long-term change trend fitting and \textit{m} = 4, \textit{d} = 1 for the main FPAR time series, where 2\textit{m}+1 is the moving filter size and \textit{d} is the degree of the smoothing polynomial. To simulate in-season availability of FPAR data, the values were filtered independently up to each in-season cut-off week. FPAR values for a given week vary slightly as the season progresses and more data is included within the long-term filtering window. Noise filter adaptations to the existing SG method included rejecting points with an absolute gradient of $>$ 0.3 from the previous value, prior to September, the earliest harvest over the study period. These adaptations prevented noisy, phenologically unrealistic data from being included in the moving filter window. It should be noted that while this filtering system is effective for a uni-modal crop such as corn, it may not be effective for crops with more complex seasonal FPAR patterns such as winter wheat, where higher polynomial filter parameters may be required.
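The jump rejection and degree-1 SG smoothing can be sketched in a simplified, single-stage form (for a symmetric window, the degree-1 SG estimate at the centre point reduces to the window mean; the two-stage long-term/main filtering of \cite{chen_simple_2004} and its edge handling are omitted here, and the truncated windows at the series ends are our simplification):

```python
def reject_jumps(series, max_jump=0.3):
    """Replace points whose jump from the previous value exceeds max_jump."""
    clean = list(series)
    for i in range(1, len(clean)):
        if abs(clean[i] - clean[i - 1]) > max_jump:
            clean[i] = clean[i - 1]
    return clean

def savgol_linear(series, m=4):
    """Degree-1 Savitzky-Golay smoothing with window 2m+1 (= centred window mean)."""
    n, out = len(series), []
    for i in range(n):
        lo, hi = max(0, i - m), min(n, i + m + 1)  # truncate the window at edges
        window = series[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A +0.5 spike on a slow ramp is rejected, then smoothed away
ramp = [i / 39 for i in range(40)]
noisy = ramp.copy()
noisy[20] += 0.5
smoothed = savgol_linear(reject_jumps(noisy))
```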
\subsection{Data Standardization}
Since CPRs are released every Monday from data collected during the prior week, weekly meteorological data were obtained by aggregating field-scale daily data from Monday-Sunday. The dataset spanned 38 weeks, WOY 13-51, encompassing the earliest planting and latest harvest reported during the study period. To simulate in-season monitoring, one time series was produced from pre-emergence to the `current' week per field, totaling 39 time series. Field-level time series for each input were then aggregated to ASD-level by calculating the mean and standard deviation of the values across each district (median was used for rainfall), with 12 total inputs (Table~\ref{tab:inputs}). These ASD-wide means and standard deviations formed the unscaled data for the study. Solar radiation and rainfall were standardized using Z-score scaling, and AGDD and FPAR were standardized using MinMax scaling. To standardize the length of each time series to 39 weekly values, all Z-scored in-season time series were zero-padded, while MinMax-scaled inputs were padded with 0.5. ASD location within Iowa was represented using a one-hot location vector of length 9, with each bit representing an ASD. The complete 17-year dataset consisted of 5967 time series, each of dimension ($39 \times 12$) with an accompanying location vector.
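The scaling and padding scheme can be sketched as follows (the scaling statistics are computed per series here for brevity; in practice they would presumably be derived from the training data):

```python
def scale_and_pad(series, kind, full_len=39):
    """Z-score or MinMax scale an in-season series, then pad to full_len."""
    if kind == "zscore":                       # solar radiation, rainfall
        mu = sum(series) / len(series)
        sd = (sum((x - mu) ** 2 for x in series) / len(series)) ** 0.5
        scaled, pad = [(x - mu) / sd for x in series], 0.0
    elif kind == "minmax":                     # AGDD, FPAR
        lo, hi = min(series), max(series)
        scaled, pad = [(x - lo) / (hi - lo) for x in series], 0.5
    else:
        raise ValueError(kind)
    return scaled + [pad] * (full_len - len(scaled))
```

Z-scored series are zero-padded, while MinMax-scaled series are padded with 0.5, as described above.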
\subsection{Neural Network Design}
The NNs in this study were based upon Long Short-Term Memory (LSTM) layers \cite{hochreiter_long_1997}, that are widely used in sequence identification / classification problems, such as speech recognition, translation, and time series prediction \cite{yu_review_2019}. LSTM has also found success in RS studies, including crop classification \cite{kerner_resilient_2020} and yield prediction \cite{khaki_cnn-rnn_2020}. In this study, LSTM was used because of its ability to handle time series data with variable length gaps between key events \cite{graves_long_2012}, such as variable in-season crop growth data. Three NN implementations were investigated, two reference structures and a third NN that incorporated domain knowledge. The two reference structures included a dense NN, with traditional dense layers of decreasing size (Figure~\ref{fig:dense_struct}) and a sequential NN in which LSTM layers were linearly chained (Figure~\ref{fig:sequential_struct}).
In designing the third NN, interactions between the 12 different inputs (see Table~\ref{tab:inputs}) and their effect on crop growth were considered. Domain knowledge was incorporated into the NN by separating inputs into a branched structure based on their relationship to crop growth. TgML studies suggest that organizing NN inputs to reflect their real world interactions may improve performance \cite{karpatne_theory-guided_2017}. For example, Khandelwal et al. \cite{khandelwal_physics_2020} were able to improve overall streamflow prediction by 17\% versus traditional LSTM architecture by training dedicated LSTM layers to predict intermediate variables, such as snow pack and soil water content. Although the dataset used in this study does not include target intermediate variables, it is possible to construct a NN structure that encourages LSTM hidden states to learn and track intermediate variables by separating the relevant inputs through a branched structure. For example, excess photoperiod during vegetative stages has been shown to delay crop progress but increase leaf initiation rate \cite{warrington_corn_1983}\cite{warrington_corn_1983-1}, and excess solar radiation or photoperiod is used in mechanistic crop models to determine growth stage timing (e.g. \cite{lizaso_csm-ixim_2011}). In addition, soil moisture stress, due to low rainfall, in juvenile corn has been found to delay growth progression and reduce final plant size \cite{nesmith_short_1992}\cite{cakir_effect_2004}. Typically, the effects of these two drivers on canopy growth and crop progress are modeled separately \cite{lizaso_csm-ixim_2011, jones1983ceres, holzworth_apsim_2014}. In this study, solar radiation and soil moisture-related inputs were separated from FPAR and AGDD using an LSTM-based branched structure, similar to their treatment within agronomic models, with the aim of encouraging the LSTM branches to learn and track these intermediate variables (Figure~\ref{fig:attention_struct}).
Domain knowledge regarding timing and impact of different crop growth drivers during the growing season was incorporated using an attention mechanism. Attention in NNs allows a network to learn the importance of different inputs. Many natural language processing tasks involving LSTM utilize attention mechanisms to calculate the importance of different words in a sentence, e.g. \cite{young_recent_2018}\cite{liu_learning_2016}. In this study, self-attention based on Multi-Head Attention \cite{vaswani_attention_2017} is employed to allow the NN to learn importance weightings for different meteorological inputs. Agronomic literature suggests that surplus or deficit of solar radiation and rainfall during particular weeks in the growth cycle are what impact crop growth \cite{warrington_corn_1983}\cite{cakir_effect_2004}. Attention mechanisms were added to both the solar radiation and soil moisture branches to exploit the time-dependent effects of solar radiation and rainfall as growth drivers in corn. The final branched structure with attention mechanisms is shown in Figure~\ref{fig:attention_struct}. This NN is hereafter referred to as the Domain-guided NN (DgNN).
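The importance weighting performed by self-attention can be illustrated with a minimal single-head, scaled dot-product sketch (a pure-Python toy; the actual layers use Multi-Head Attention \cite{vaswani_attention_2017} with learned projection matrices, which are omitted here):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(q, k, v):
    """Each output week is a weighted sum of all weeks' values,
    with weights softmax_s(q_t . k_s / sqrt(d_k))."""
    d_k = len(k[0])
    weights = [softmax([sum(a * b for a, b in zip(qt, ks)) / math.sqrt(d_k)
                        for ks in k]) for qt in q]
    out = [[sum(w_s * v_s[j] for w_s, v_s in zip(w, v))
            for j in range(len(v[0]))] for w in weights]
    return out, weights

# Three "weeks" of two-dimensional features (illustrative values only)
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out, weights = self_attention(seq, seq, seq)
```

Each row of `weights` sums to one and concentrates on the weeks whose keys align with the query, which is the mechanism used here to learn which weeks of solar radiation or rainfall matter most.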
\end{paracol}
\begin{figure}[]
\widefigure
\centering
\begin{subfigure}[t]{0.65\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/Dense.png}
\caption{}
\vspace{0.5cm}
\label{fig:dense_struct}
\end{subfigure}
\begin{subfigure}[t]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/sequential.png}
\caption{}
\vspace{0.5cm}
\label{fig:sequential_struct}
\end{subfigure}
\begin{subfigure}[t]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/branched_w_attention.png}
\caption{}
\vspace{0.5cm}
\label{fig:attention_struct}
\end{subfigure}
\caption{Structures of the three NNs implemented in this study. (a) Dense NN, with 1,170,054 trainable parameters. Numbers in parenthesis represent layer node count. (b) Sequential NN, with 1,046,278 trainable parameters. (c) DgNN, with 1,018,094 trainable parameters.}
\label{fig:structs}
\end{figure}
\begin{paracol}{2}
\linenumbers
\switchcolumn
\subsection{NN Loss, Validation and Evaluation}
\label{sec:NN_eval}
In this study, Kullback-Leibler Divergence ($D_{KL}$) was used as the loss function for all NNs. $D_{KL}$ is a measure of the difference between two probability distributions and is often used in NN regression problems whose targets are distributions. Given two distributions $P(x)$ and $Q(x)$, $D_{KL}$ is calculated as:
\begin{equation}
D_{KL}(P||Q) = \sum_{x \in X} P(x) \log \left( \frac{P(x)}{Q(x)}\right)
\end{equation}
Here $D_{KL}$ is used as it provides a measure of the difference between the predicted and actual distributions of crop progress for a given week.
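A minimal discrete implementation over the six growth-stage proportions can be sketched as follows (the small $\epsilon$ guarding against zero entries is our addition, and the stage proportions shown are illustrative):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) between two discrete stage distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Reported vs. predicted proportions across six growth stages (illustrative)
reported  = [0.0, 0.1, 0.6, 0.2, 0.1, 0.0]
predicted = [0.0, 0.2, 0.5, 0.2, 0.1, 0.0]
loss = kl_divergence(reported, predicted)
```

Note that $D_{KL}$ vanishes for identical distributions and is asymmetric, i.e. $D_{KL}(P||Q) \neq D_{KL}(Q||P)$ in general.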
The 17-year CGSE dataset was split into 13 training years and 4 test years, and NN hyperparameters were selected using five-fold cross-validation on the training data. An initial 300 epochs were used in conjunction with early stopping, with a patience of 30 (i.e. 1/10th of the total) epochs and best-weights restoration. Early stopping was determined based on NN loss on a single randomly selected year from the training data of each fold, kept separate and assessed at the end of each epoch. NNs were trained using the Adam optimizer with default parameters and a learning rate of $10^{-5}$. All NNs stopped early during training. In this study, the Dense NN followed a traditional funnel structure with layer widths ranging from 1024 to 128 nodes. Both the Sequential NN and DgNN used LSTM units with 64 hidden nodes and a 128-node dense layer. Dropout was used between hidden layers with a rate of 0.2. Each NN used a six-head softmax output layer, with each head corresponding to one of the six growth stages defined in Section~\ref{sec:study_area}. For self-attention, a two-headed attention layer with a key dimension of 40 was used.
As shown in Tables~\ref{tab:test_years_variables} and~\ref{tab:test_years_stages}, the four test years (2009, 2012, 2014, and 2019) were chosen based upon deviation from the dataset mean in terms of rainfall, planting, and harvest dates. All test years remained unseen by NNs during the design phase, and were only used for NN evaluation after selection of the final NN parameters. Initially, during evaluation, we trained the NNs using the average number of epochs required by early stopping during model validation, but reverted to the same early stopping technique used during NN validation, with the loss on a single randomly selected year (2004) monitored for all NNs.
\begin{specialtable}[H]
\centering
\caption{State-wide deviations of precipitation and solar radiation during test years from 2003-2019 study period mean. Standard deviations are given in parenthesis.}
\label{tab:test_years_variables}
\scriptsize
\begin{tabular}{@{}llllll@{}}
\toprule
Year & \begin{tabular}[c]{@{}l@{}}AGDD \\ (full season) \\ ($^\circ$C)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Precipitation\\ (April - June) \\ (mm)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Precipitation \\ (full season) \\ (mm)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Solar Radiation \\ (April - June) \\ (W/m$^2$/day)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Solar Radiation \\ (full season) \\ (W/m$^2$/day)\end{tabular} \\ \midrule
2009 & -277.1 (-2.285) & -52.7 (-0.825) & +43.7 (+0.388) & +19.81 (+0.476) & +5.94 (+0.115) \\
2012 & +135.4 (+1.116) & -65.8 (-1.029) & -193.4 (-1.718) & +24.00 (+0.576) & +84.54 (+1.635) \\
2014 & -127.0 (-1.047) & +101.2 (+1.584) & +104.5 (+0.928) & -20.81 (-0.500) & -11.94 (-0.231) \\
2019 & -80.6 (-0.664) & +15.9 (+0.249) & +99.9 (+0.887) & -23.50 (-0.565) & -87.77 (-1.697) \\ \bottomrule
\end{tabular}
\end{specialtable}
\begin{specialtable}[H]
\centering
\caption{State-wide deviations of crop progress during test years from the 2003-2019 study period mean. Values indicate deviation in the time for cumulative progress into each growth stage to reach 50\%. Standard deviations are given in parentheses.}
\label{tab:test_years_stages}
\scriptsize
\begin{tabular}{@{}llllll@{}}
\toprule
Year & Emerged (days) & Silking (days) & Grainfill (days) & Mature (days) & Harvested (days) \\ \midrule
2009 & -2 (-0.397) & +4 (+0.774) & +11 (+1.58) & +9 (+1.00) & +20 (+1.98) \\
2012 & -6 (-1.19) & -10 (-1.93) & -10 (-1.42) & -16 (-2.01) & -25 (-2.48) \\
2014 & +3 (+0.60) & -1 (-0.19) & -4 (-0.57) & +6 (+0.75) & +6 (+0.60) \\
2019 & +11 (+2.18) & +2 (+0.39) & +2 (+0.29) & +16 (+2.01) & +13 (+1.29) \\ \bottomrule
\end{tabular}
\end{specialtable}
For comparison to existing CGSE approaches, a Hidden Markov Model-based (HMM) CGSE method presented by Shen et al. was implemented \cite{shen_hidden_2013}. This method uses a standard Expectation Maximization (EM) algorithm along with USDA CPRs to supply priors and transition matrices to the model. Following the method from \cite{shen_hidden_2013}, the HMM was run 100 times on the 13 training years, with a 4-year random subset from within the training years selected each run to act as validation data. Average performance over the 100 runs was used to reduce EM sensitivity to initialization and local minima. During testing, the HMM was run 10 times on each of the four test years, and the mean of these runs is reported as the final performance.
ASD-level CGS estimates for the NNs and HMM were aggregated to state-level estimates for comparison via weighted sum, with ASD weights calculated based on the number of corn fields in each ASD that passed the processing criteria, explained in Section~\ref{sec:study_area} and shown in Figure~\ref{fig:fpar_proc}. Performance of the three NN structures and the HMM were evaluated against state-wide USDA CPRs using two metrics. The first, Nash-Sutcliffe efficiency (NSE), is a measure commonly used in hydrology and crop modeling to measure how well a model describes an observed time series versus the mean value of that time series, and is defined as:
\begin{equation}
NSE = 1 - \frac{\sum_{t=1}^{T}(Q_m^t - Q_o^t)^2}{\sum_{t=1}^{T}(Q_o^t - \bar{Q}_o)^2}
\end{equation}
where $Q_m^t$ is the model estimate at time $t$, $Q_o^t$ is the observed value at time $t$, and $\bar{Q}_o$ is the mean value of the observed time series. NSE ranges from $-\infty$ to a maximum of 1, where 1 means the model perfectly describes the observed time series. An NSE below 0 means the model is worse at describing the observed time series than the observed time series mean. In this study, NSE is used as a metric for how well each NN estimates the percentage of corn in a given growth stage over time.
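As a concrete illustration, NSE can be computed directly from a pair of series. This is a plain-Python sketch with hypothetical values, not code or data from this study:

```python
def nse(modeled, observed):
    """Nash-Sutcliffe efficiency of a modeled series against observations."""
    obs_mean = sum(observed) / len(observed)
    sq_err = sum((m - o) ** 2 for m, o in zip(modeled, observed))
    sq_dev = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sq_err / sq_dev

# A perfect model scores 1; a model that only predicts the observed
# mean scores 0; worse-than-mean models score below 0.
print(nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
```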
The second metric, cosine similarity (CS), is a measure of the angle between two vectors in a multi-dimensional space. CS between two vectors $\mathbf{A}$ and $\mathbf{B}$ is calculated as:
\begin{equation}
\textnormal{CS} = \cos(\theta) = \frac{\mathbf{A}\cdot\mathbf{B}}{||\mathbf{A}||\ ||\mathbf{B}||}
\end{equation}
In this study, CS is used as a metric for the accuracy of each NN in describing crop progress across all stages for a single week. CS ranges from -1, for vectors pointing in exactly opposite directions, to +1, for vectors pointing in exactly the same direction; orthogonal vectors have a CS of 0. A CS value of 1 means that the CGSE method produces perfect estimates of the amount of corn in each growth stage for that week. Lower CS values indicate higher discrepancies between estimated and actual crop progress across all stages for that week.
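The same CS computation, as a plain-Python sketch; the six-element vectors in the example are hypothetical stage distributions, not data from this study:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical stage distributions give CS near 1; orthogonal ones give 0.
print(cosine_similarity([0.7, 0.3, 0.0, 0.0, 0.0, 0.0],
                        [0.6, 0.4, 0.0, 0.0, 0.0, 0.0]))
```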
\subsection{Visualization of NN Operation}
Uniform Manifold Approximation and Projection (UMAP) \cite{mcinnes_umap_2020} was used to visualize layer activations and gain insight into the differences in behavior among NN structures. UMAP is a dimension reduction technique that is often used for visualizing high-dimensional data, e.g. \cite{becht_dimensionality_2019}. For UMAP embedding, local neighborhoods of size 15 were used and hidden layer activations were embedded to 2 dimensions. Layer activations for training and test data were visualized for the layers in each NN feeding into the 128-node dense and softmax layers, these being common to all three NNs (see Figure~\ref{fig:structs}). Color representations of crop progress were formed by reducing the six crop stages to three RGB channels using a UMAP reduction to 3 dimensions, with 15 neighbors.
\section{Results}
\subsection{Model Validation}
Table~\ref{tab:cross_NSE} shows the state-wide means and standard deviations of NSE for each of the five CGS for five-fold cross-validation on the training data. Overall, the three NN structures performed better than the HMM. The HMM had particularly low NSE during the Silking and Mature stages, meaning those stages were difficult to estimate. Figure~\ref{fig:CS_CV_NN} shows five-fold cross-validated CS performance on the training data, with box plots representing the inter-quartile range (IQR) and whiskers extending to 1.5 $\times$ IQR. Week-to-week CS for the HMM was also significantly worse than for the NNs. It should be noted that the CS scale for the HMM plot is larger (0.2 to 1) than that used for the NNs in Figure~\ref{fig:CS_CV_NN}.
Among the three NN structures, the Dense NN performed the worst, with the lowest NSE for all stages except Grainfill. In addition, the range of CS values is larger for the Dense NN during transition between Silking and Grainfill (WOY 32-33) and Mature to Harvest (WOY 42-43).
\end{paracol}
\begin{figure}[]
\centering
\begin{subfigure}[t]{0.9\textwidth}
\centering
\caption{}
\vspace{0.11cm}
\includegraphics[width=0.9\textwidth]{figs/HMM_state_CV_base2_CS.png}
\end{subfigure}
\begin{subfigure}[t]{0.9\textwidth}
\centering
\caption{}
\vspace{0.11cm}
\includegraphics[width=0.9\textwidth]{figs/dense_cv_base8_CS.png}
\end{subfigure}
\begin{subfigure}[t]{0.9\textwidth}
\centering
\caption{}
\vspace{0.11cm}
\includegraphics[width=0.9\textwidth]{figs/sequential_cv_base8_CS.png}
\end{subfigure}
\begin{subfigure}[t]{0.9\textwidth}
\centering
\caption{}
\vspace{0.11cm}
\includegraphics[width=0.9\textwidth]{figs/branched_w_attention_cv_base8_CS.png}
\end{subfigure}
\begin{subfigure}[t]{0.91\textwidth}
\centering
\caption{}
\vspace{0.11cm}
\includegraphics[width=0.91\textwidth]{figs/CPR_plot_CV.png}
\end{subfigure}
\caption{Week-to-week state-wide CS between real and estimated crop progress of (a) HMM (b) Dense NN (c) Sequential NN (d) DgNN for five-fold cross validation on the 13-year training data. Box plots are mean (red) and inter-quartile range (IQR) and whiskers are 1.5 $\times$ IQR. Outliers are plotted as single points. (e) Average crop progress from USDA crop progress reports during the training years. Scales for HMM (1 to 0.2) and the NNs (1 to 0.6) are different to allow inclusion of all HMM points.}
\label{fig:CS_CV_NN}
\end{figure}
\begin{paracol}{2}
\linenumbers
\switchcolumn
The Sequential NN improved upon the Dense NN cross-validated NSEs, particularly for the Mature stage. NSEs for both the Emerged and Harvested stages for the Sequential NN were also very similar to those of the DgNN during validation. Mean CS was only slightly lower for the Sequential NN than for the DgNN, and the range of CS values over the training data is actually wider during the start of Grainfill (WOY 31). During cross-validation, CGSE proved more challenging for the Silking, Grainfill, and Mature stages. Cross-validated CS performance was high for all methods for weeks 22-27, during the height of the Emerged stage. This is because the Emerged stage is the longest in-season stage and, as such, at certain weeks, such as WOY 25, the crop is 100\% emerged for all years during the study period. Similarly, CS performance was high for all methods after WOY 45, after which nearly 100\% of the crop had reached the Harvested stage in all training years.
\begin{specialtable}[H]
\centering
\caption{Mean state-wide NSEs and their standard deviations for five-fold cross validation on the 13-year training data. Standard deviations are given in parentheses. Bold numbers represent the highest NSE and lowest standard deviation.}
\label{tab:cross_NSE}
\scriptsize
\begin{tabular}{@{}llllll@{}}
\toprule
Model & NSE - Emerged & NSE - Silking & NSE - Grainfill & NSE - Mature & NSE - Harvested \\ \midrule
HMM & 0.874 (0.041) & 0.624 (0.166) & 0.785 (0.060) & 0.688 (0.160) & 0.931 (0.062) \\
Dense NN & 0.949 (0.041) & 0.863 (0.123) & 0.890 (0.062) & 0.869 (0.087) & 0.966 (0.034) \\
Sequential NN & 0.979 (0.024) & 0.890 (0.108) & 0.884 (0.132) & 0.906 (0.075) & 0.982 (0.028) \\
DgNN & \textbf{0.984 (0.013)} & \textbf{0.914 (0.077)}& \textbf{0.923 (0.063)} & \textbf{0.935 (0.043)} & \textbf{0.988 (0.014)} \\
\bottomrule
\end{tabular}
\end{specialtable}
\subsection{Model Evaluation}
The four test years used for evaluation were 2009, 2012, 2014, and 2019. In 2009, planting was delayed by heavy rains in May, and a cool July (6~$^\circ$C below normal) delayed crop progress. Rain at the end of September and through much of October delayed harvest significantly \cite{usda-national_agricultural_statistics_service_upper_midwest_region_iowa_field_office_2010_2010}. For 2012, corn planting began quickly but was slowed by rain in May. Low rainfall and hot temperatures in June and July caused both soil moisture deficit and fast crop progress. A dry and warm late August and early September brought about early crop maturity, and harvesting began early \cite{usda-national_agricultural_statistics_service_upper_midwest_region_iowa_field_office_2013_2013}. Crop progression in 2014 was similar to the 17-year study period average. Planting was slightly behind average during April, but by mid-July corn progression had surpassed the study period average. Wet fields and high grain moisture slowed harvest progress in October \cite{usda-national_agricultural_statistics_service_upper_midwest_region_iowa_field_office_2015_2015}. In 2019, rain in April and May delayed corn planting progress, and by the end of a drier June corn emergence was over a week behind average. Cooler temperatures in September also slowed crop progress, leading to delayed crop maturity and harvesting \cite{usda-national_agricultural_statistics_service_upper_midwest_region_iowa_field_office_2020_2020}.
Table~\ref{tab:test_avg} shows state-wide means and standard deviations of NSE performance for the four test years. The HMM showed lower NSEs during the test years for all stages except the Emerged stage. The HMM Mature NSE was significantly lower than in cross-validation, with an average of less than 0.3 and a standard deviation greater than 0.4. The Dense NN had higher NSEs than the HMM for the four test years, but also a lower mean and higher standard deviation than its cross-validated performance. The Sequential NN produced higher mean and lower standard deviation NSEs than the Dense NN in each stage, and also produced the best NSE for the Silking stage of any method. The DgNN produced the best mean NSE results on the test data for all stages except Silking, with significantly higher performance for the Mature stage. Like all methods, it produced lower mean NSEs than its cross-validated performance on the training data; however, it also produced a lower standard deviation for Grainfill performance on the test years.
\begin{specialtable}[H]
\centering
\caption{Mean state-wide NSEs and their standard deviations for the four test years. Standard deviations are given in parentheses. Bold numbers represent the highest NSE and lowest standard deviation.}
\label{tab:test_avg}
\scriptsize
\begin{tabular}{@{}llllll@{}}
\toprule
Model & NSE - Emerged & NSE - Silking & NSE - Grainfill & NSE - Mature & NSE - Harvested \\ \midrule
HMM & 0.891 (0.049) & 0.475 (0.247) & 0.732 (0.106) & 0.245 (0.468) & 0.786 (0.117) \\
Dense NN & 0.932 (0.016) & 0.761 (0.094) & 0.745 (0.185) & 0.635 (0.375) & 0.917 (0.062) \\
Sequential NN & 0.947 (0.048) & \textbf{0.823 (0.048)} & 0.841 (0.097) & 0.758 (0.186) & 0.936 (0.031) \\
DgNN & \textbf{0.964 (0.015)} & 0.790 (0.127) & \textbf{0.877 (0.026)} & \textbf{0.870 (0.126)} & \textbf{0.976 (0.021)} \\ \bottomrule
\end{tabular}
\end{specialtable}
Figures~\ref{fig:2009} through~\ref{fig:2019} show the CS for the four test years. Dense NN CS values for the test years, particularly 2009 (Figure~\ref{fig:2009}) and 2012 (Figure~\ref{fig:2012}), were lower than the range produced during cross-validation. While the Sequential NN was able to produce more accurate overall estimates of all-stage crop progress (as measured by CS) during some of the most difficult periods for all models, it also suffered greater CS performance degradation than the other NNs during some CGS transition periods. The DgNN maintained the highest CS of all NNs during some of the most difficult periods of the test years, such as the Grainfill-Mature-Harvested transitions. The DgNN was also less prone to CS performance drops during the start and end of the Emergence stage, which were particularly pronounced for the other models in 2009 (WOY 29-30) and 2014 (WOY 18-19). In addition, as seen in Table~\ref{tab:CS_high}, the DgNN produced the best estimates across all growth stages for the greatest number of weeks in each of the test years, producing the highest CS value for 39\% more weeks than the next best NN.
\end{paracol}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{figs/final_TEST_2009_CS.png}
\caption{(a) Week-to-week CS between actual and estimated crop progress for the NNs in 2009; (b) actual crop progress for 2009.}
\label{fig:2009}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{figs/final_TEST_2012_CS.png}
\caption{(a) Week-to-week CS between actual and estimated crop progress for the NNs in 2012; (b) actual crop progress for 2012.}
\label{fig:2012}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{figs/final_TEST_2014_CS.png}
\caption{(a) Week-to-week CS between actual and estimated crop progress for the NNs in 2014; (b) actual crop progress for 2014.}
\label{fig:2014}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{figs/final_TEST_2019_CS.png}
\caption{(a) Week-to-week CS between actual and estimated crop progress for the NNs in 2019; (b) actual crop progress for 2019.}
\label{fig:2019}
\end{figure}
\begin{paracol}{2}
\linenumbers
\switchcolumn
\begin{specialtable}[H]
\centering
\caption{Total number of weeks during the test years in which each NN produced the highest CS value for that week; ties credit every NN that produced the highest value. Bold numbers represent the highest number of weeks.}
\label{tab:CS_high}
\scriptsize
\begin{tabular}{@{}llllll@{}}
\toprule
Test Year & 2009 & 2012 & 2014 & 2019 & Total \\ \midrule
Dense NN & 14 & 9 & 14 & 15 & 44 \\
Sequential NN & 18 & 19 & 18 & 15 & 70 \\
DgNN & \textbf{19} & \textbf{32} & \textbf{24} & \textbf{22} & \textbf{97} \\ \bottomrule
\end{tabular}
\end{specialtable}
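The tally in the table above follows a simple tie-inclusive per-week maximum. An illustrative plain-Python sketch (the model names and CS values below are hypothetical, not this study's results):

```python
def count_highest_cs(cs_by_model):
    """Per-model count of weeks in which that model ties for the
    highest CS; ties credit every model at the weekly maximum."""
    counts = {m: 0 for m in cs_by_model}
    n_weeks = len(next(iter(cs_by_model.values())))
    for week in range(n_weeks):
        best = max(cs[week] for cs in cs_by_model.values())
        for m, cs in cs_by_model.items():
            if cs[week] == best:
                counts[m] += 1
    return counts

# Three hypothetical weeks with a tie in the first week:
# both models are credited for week 1.
print(count_highest_cs({"Dense": [0.9, 0.8, 0.7],
                        "DgNN":  [0.9, 0.9, 0.8]}))
```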
Table~\ref{tab:testNSE_years} shows NSE performance for each of the test years. For the 2009 test year, the DgNN outperformed all other models, with the highest NSE for all stages except Emerged. Similar to what was seen during model validation, all NNs had reduced performance during the weeks when corn transitioned into and out of Silking (see Figure~\ref{fig:2009}). The Dense NN showed greater performance degradation than the LSTM-based NNs during this period, with significantly lower CS from WOY 29-34. There was a decrease in CS due to the delayed harvest around WOY 45, a time when close to 100\% of the crop had been harvested in every year represented in the training data. The Sequential NN was the worst affected, while the DgNN suffered very little decrease in CS during that week. Figure~\ref{fig:cum_test} shows cumulative crop progress for each stage for each of the test years. During 2009, each of the three NNs was early in estimating progress for every stage except Emerged.
\begin{specialtable}[H]
\caption{State-wide NSEs for each of the four test years. Bold numbers represent highest NSE.}
\label{tab:testNSE_years}
\centering
\scriptsize
\begin{tabular}{@{}llllll@{}}
\toprule
Model & NSE - Emerged & NSE - Silking & NSE - Grainfill & NSE - Mature & NSE - Harvested \\ \midrule
\multicolumn{6}{c}{2009} \\ \midrule
HMM & 0.855 & 0.767 & 0.865 & 0.454 & 0.609 \\
Dense NN & 0.915 & 0.610 & 0.781 & 0.904 & 0.933 \\
Sequential NN & \textbf{0.993} & 0.896 & 0.880 & 0.856 & 0.898 \\
DgNN & 0.987 & \textbf{0.917} & \textbf{0.907} & \textbf{0.952} & \textbf{0.991} \\ \midrule
\multicolumn{6}{c}{2012} \\ \midrule
HMM & 0.868 & 0.671 & 0.571 & -0.537 & 0.763 \\
Dense NN & 0.930 & 0.802 & 0.436 & -0.010 & 0.814 \\
Sequential NN & \textbf{0.956} & 0.836 & 0.677 & 0.448 & 0.914 \\
DgNN & 0.954 & \textbf{0.908} & \textbf{0.837} & \textbf{0.659} & \textbf{0.942} \\ \midrule
\multicolumn{6}{c}{2014} \\ \midrule
HMM & \textbf{0.975} & 0.258 & 0.721 & 0.703 & 0.921 \\
Dense NN & 0.958 & 0.767 & 0.848 & 0.869 & 0.975 \\
Sequential NN & 0.867 & \textbf{0.789} & \textbf{0.933} & 0.933 & 0.978 \\
DgNN & 0.967 & 0.706 & 0.891 & \textbf{0.980} & \textbf{0.994} \\ \midrule
\multicolumn{6}{c}{2019} \\ \midrule
HMM & 0.864 & 0.203 & 0.770 & 0.359 & 0.851 \\
Dense NN & 0.924 & \textbf{0.865} & \textbf{0.915} & 0.778 & 0.947 \\
Sequential NN & \textbf{0.973} & 0.771 & 0.873 & 0.797 & 0.954 \\
DgNN & 0.948 & 0.627 & 0.875 & \textbf{0.891} & \textbf{0.978} \\ \bottomrule
\end{tabular}
\end{specialtable}
\end{paracol}
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{figs/RNN_test_stages.png}
\caption{State-wide estimated cumulative crop progress of each NN (dashed) for the four test years compared to actual cumulative crop progress (solid).}
\label{fig:cum_test}
\end{figure}
\begin{paracol}{2}
\linenumbers
\switchcolumn
The year 2012 produced the worst NSEs across all models for the Grainfill and Mature stages. This is expected given the large deviation in crop progress timing from the study mean during that year (see Table~\ref{tab:test_years_stages}). With the exception of the Emerged stage, the DgNN best described each of the growth stages, as measured by NSE. Week-to-week CS performance degraded less for the DgNN than for the other NNs during the fast-moving WOY 33-43 Grainfill-Mature progression, as seen in Figure~\ref{fig:2012}. The pronounced problems of all NNs during this period may have been caused by the weather-induced fast crop progression and sustained drought during the 2012 growing season. Significant soil moisture stress contributed to degradation in corn crop condition \cite{usda_national_agricultural_statistics_service_corn_nodate-1}, which may have affected canopy appearance and the FPAR signal. In addition, warm temperatures during the growing season sped up crop progress to rates not seen since 1987 \cite{usda-national_agricultural_statistics_service_upper_midwest_region_iowa_field_office_2013_2013}. As shown in Table~\ref{tab:test_years_stages}, growth stage time to 50\% was over two standard deviations below the mean for both the Mature and Harvested stages. This is in contrast with 2009, where a delay of comparable magnitude (+20 days) did not have as significant an effect on later season week-to-week performance. There were significant delays in NN progress estimation for Mature and Harvested in 2012, as shown in Figure~\ref{fig:cum_test}.
In 2014, Silking NSE for the NNs was low, even though crop progress that year was relatively typical of the study period. All three NN structures were late in estimating the onset of the Grainfill stage, as shown in Figure~\ref{fig:cum_test} and also reflected in the decline in CS between WOY 30 and 34 (see Figure~\ref{fig:2014}). Cumulative progress estimates, as seen in Figure~\ref{fig:cum_test}, exhibit the closest cumulative estimation curves for Silking of any of the test years, particularly for the DgNN. However, because the NNs were late in predicting the onset of Grainfill, NSE performance for Silking degraded. While the NNs performed well at estimating crop progress into Silking, they could not accurately describe the rate at which the crop progressed out of that stage. The NNs were also early in estimating emergence, which degraded CS during WOY 18-21.
NN performance for Silking also suffered in 2019, with low Silking NSEs caused by late DgNN and Sequential NN estimates for the start of the Grainfill stage, a problem not experienced by the Dense NN (see Figure~\ref{fig:cum_test}). The opposite is true for the Mature stage, where NSE is higher for the DgNN than for the other two NNs. This is reflected in the week-to-week CS values for WOY 37-46, which were higher for the DgNN (see Figure~\ref{fig:2019}), during which time the crop was transitioning from Grainfill to Mature to Harvested.
Common to all CGSE methods across all training and test years are three periods of easy estimation that occur at different times during the growing season: WOY 13-17, when crops have yet to emerge; the period around WOY 25 (100\% Emerged for all study years); and WOY 48-51, when the vast majority of the crop has been harvested. Predictably, the most difficult periods for estimation as measured by CS are those with the crop in two or more stages, and progress of mid-season stages proved more difficult to estimate than start- and end-of-season stages. Silking is the shortest yet most critical growth stage in terms of potential yield loss, but was the most difficult to estimate for all NNs. Compared to the Emerged and Harvested stages, where significant FPAR gradients help to highlight timing, canopy appearance and therefore FPAR remain relatively unchanged during the Silking-Grainfill transition. Estimation of this transition was a consistent problem. With the exception of 2009, when the Grainfill stage was delayed by 11 days compared to the study period average, all NNs were late in their estimations of the cumulative progress into the Grainfill stage (see Figure~\ref{fig:cum_test}). In the absence of measurable canopy change, AGDD is a useful proxy for estimating progress from Silking to Grainfill. AGDD, however, is cultivar specific, and cultivars are selected based on different factors, such as planting timing and drought risk. Cultivar-specific variation in the AGDD required for progress through Silking and the short duration of that stage may reduce the effectiveness of AGDD as a proxy. In addition, AGDD for this study is accumulated from April 8th of each year and so, due to variable inter-year planting dates, is not an exact measure of how much AGDD that year's crop has accumulated.
As the NNs presented here have provided accurate estimates of the timing of progression into Silking, one possible improvement for future methods could be to use weekly GDD as an input, so that the NN could learn the accumulation of GDD required relative to the start of Silking. On the other hand, AGDD provides a measure of accumulated temperature over the growing season that serves as an anchor for plausible crop progression. There is no guarantee that NNs presented with GDD only would be able to effectively learn variable accumulation functions that, un-guided, lead to better estimates.
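For reference, the GDD/AGDD bookkeeping discussed above can be sketched in the simple averaging form. The 10 $^\circ$C base used below is a common assumption for corn, and the temperature capping used in some corn formulations is omitted; this is an illustrative sketch, not this study's preprocessing code.

```python
def gdd(t_max, t_min, t_base=10.0):
    """Daily growing degree days (simple averaging form);
    10 degrees C is an assumed base temperature for corn."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

def agdd(daily_temps):
    """Accumulated GDD over a season of (t_max, t_min) pairs."""
    return sum(gdd(hi, lo) for hi, lo in daily_temps)

# A cool day contributes nothing; a warm day contributes its mean
# temperature excess over the base.
print(agdd([(8.0, 2.0), (28.0, 16.0)]))  # 12.0
```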
All NN structures produced NSE and CS results that were worse during model evaluation than during validation. This suggests that each of the methods experienced overfitting. In any NN implementation for RS, some overfitting is expected because of the limited number of years of data available for RS-based methods. This leaves CGSE approaches vulnerable to test years with progress timing that is under-represented in the training data. This problem is particularly pronounced in 2012, where fast crop progression saw corn begin to mature in WOY 34, two weeks before any year in the training set. This fast crop progress, however, degraded the performance of the LSTM-based methods less than that of the HMM and the Dense NN, suggesting that the ability of LSTMs to handle variable-length gaps between key events makes these methods more robust to outlier years such as 2012. Further studies could investigate the relationship between CGSE NN complexity and overfitting through an \textit{a posteriori} ablation study, e.g. \cite{meyes_ablation_2019}.
One notable area of performance degradation was the DgNN's low Silking NSEs during model evaluation. In the DgNN structure, self-attention is used to take advantage of field studies in agronomy. In this context, there is one caveat to using attention that may cause overfitting. Since zero-padded, variable-length time series were used, the majority of time series inputs in the training set have later season values set to zero. Therefore, there are fewer examples of non-zero values during later weeks, and the attention layers may tend to reduce the assigned importance of later inputs, introducing a bias. For example, two thirds of all time series in this study have the last one third of their input tensor zero-padded. This could affect the ability of the DgNN to identify important differences later in the season.
Overall, the DgNN showed significant improvement in estimating CGS over the HMM and the other two NNs, particularly in Mature stage NSE and in CS for weeks with multiple overlapping stages. However, the DgNN had reduced NSE for Silking during evaluation due to difficulty in estimating the timing of the Silking-Grainfill transition. Given the higher Silking NSE of the Sequential NN during evaluation, future studies may be able to combine the strengths of both NNs through some form of NN boosting, e.g. \cite{peerlinck_adaboost_2019}.
\subsection{UMAP Visualization of Layer Activations}
Figures~\ref{fig:UMAP_128} and \ref{fig:UMAP_softmax} show layer activations for the final layers of each NN preceding their respective 128-node and softmax layers. Layer activations have been reduced using UMAP, and the labeled stages in the colorbar represent the height of each new stage. Figure~\ref{fig:UMAP_128} illustrates how the LSTM-based NNs treat the time series differently from the Dense NN. Each of the separate branches visible in the activation space for both the DgNN and Sequential NN contains time series from a specific ASD. The Dense NN, however, does not keep time series from different ASDs separate, even though it is given the same location information. This may be one source of the performance advantage of the LSTM-based models, as test year data is projected into the activation space closer to other data from the same ASD. The regression performed by the final layers to estimate crop progress is then more strongly influenced by training time series from the same region. This is beneficial because farmers in different ASDs elect to plant cultivars with traits more suited to the local climate. As such, regression performed in an activation space that keeps different ASDs more separate may be more robust to inter-season variation.
The UMAP plots also show evidence of NN overfitting, manifest as noisier delineation among clusters in the test data. For the Dense NN, this is most observable in the Pre-Emergence to Emerged (green to purple) transition (Figure~\ref{fig:UMAP_softmax}a,b) and the Mature to Harvested (blue to maroon) transition (Figure~\ref{fig:UMAP_128}a,b).
While the above is a simple analysis of layer activations using UMAP for model evaluation purposes, UMAP visualizations may be used as a diagnostic tool in future work to assess the impact of including different structures and mechanisms during CGSE NN design.
\end{paracol}
\begin{figure}[]
\centering
\begin{subfigure}{0.45\linewidth}
\vspace{0.11cm}
\includegraphics[width=\linewidth]{figs/UMAP_Dense_densein_train_128.png}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\linewidth}
\vspace{0.11cm}
\includegraphics[width=\linewidth]{figs/UMAP_Dense_densein_test_128.png}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\vspace{0.11cm}
\includegraphics[width=\linewidth]{figs/UMAP_Seq_densein_train_128.png}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\linewidth}
\vspace{0.11cm}
\includegraphics[width=\linewidth]{figs/UMAP_Seq_densein_test_128.png}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\vspace{0.11cm}
\includegraphics[width=\linewidth]{figs/UMAP_DgNN_densein_train_128.png}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\linewidth}
\vspace{0.11cm}
\includegraphics[width=\linewidth]{figs/UMAP_DgNN_densein_test_128.png}
\end{subfigure}
\begin{subfigure}{0.9\linewidth}
\includegraphics[width=\linewidth]{figs/colorbar.png}
\end{subfigure}
\caption{UMAP visualizations of combined activations from layers feeding into the 128 node layer for training and test years for (a) and (b) Dense NN; (c) and (d) Sequential NN; (e) and (f) DgNN.}
\label{fig:UMAP_128}
\end{figure}
\begin{figure}[]
\centering
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/UMAP_Dense_softin_train_128.png}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/UMAP_Dense_softin_test_128.png}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/UMAP_Seq_softin_train_128.png}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/UMAP_Seq_softin_test_128.png}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/UMAP_DgNN_softin_train_128.png}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{figs/UMAP_DgNN_softin_test_128.png}
\end{subfigure}
\begin{subfigure}{0.9\linewidth}
\includegraphics[width=\linewidth]{figs/colorbar.png}
\end{subfigure}
\caption{UMAP visualizations of 128 node layer activations feeding into the softmax layer for training and test years for (a) and (b) Dense NN; (c) and (d) Sequential NN; (e) and (f) DgNN.}
\label{fig:UMAP_softmax}
\end{figure}
\begin{paracol}{2}
\linenumbers
\switchcolumn
\section{Conclusions}
In this study, an agronomy-informed neural network, the DgNN, was developed to provide in-season CGSE. The DgNN separates inputs that can be treated as independent crop growth drivers using a branched structure, and uses attention mechanisms to account for the varying importance of inputs during the growing season. The DgNN was trained and evaluated on RS and USDA CPR data for Iowa from 2003 to 2019, using NSE and CS as metrics. The DgNN structure was compared to an HMM and two NN structures of similar complexity. The DgNN outperformed each of the other methods on all growth stages for five-fold cross-validation on the training data, with an average improvement in NSE across all stages of 22\% versus the HMM and 2.2\% versus the next best NN. The four models were evaluated on four test years that remained unseen during validation. Mean performance during model evaluation was also higher for the DgNN than for the other NNs and the HMM. Mean evaluation NSE for the DgNN across all stages was 43\% higher than that of the HMM and 4.0\% higher than that of the next best NN (5.9\% when excluding Silking). The DgNN also had 39\% more weeks with the highest CS across all test years than the next best NN. CS metrics showed that weeks when a region's crop is in multiple stages concurrently are more difficult to estimate. Estimating the timing of the short yet critical Silking growth stage was the most difficult for all methods, particularly the Silking-to-Grainfill stage transition. During evaluation, Silking NSE for the DgNN was reduced primarily due to the DgNN's inability to correctly estimate the timing of this transition. While the performance of all methods was lower on the test data, the LSTM-based methods, the DgNN and Sequential NN, were found to be more robust when presented with abnormal crop progress during model evaluation. UMAP analysis of hidden layers indicated that LSTM-based NNs hold time series from different locations more separate in the activation space.
This study demonstrated that a domain-guided design, such as the DgNN, can improve in-season CGSE compared to NN structures of equivalent complexity. However, UMAP-based NN structure diagnostics and ablation studies to investigate optimum NN complexity may further improve upon these results. In addition, NN boosting methods may also address stage-specific shortcomings, such as the Silking-to-Grainfill transition.
\vspace{6pt}
\authorcontributions{Conceptualization, all authors; methodology, all authors; validation, G.W.; investigation, all authors; writing---original draft preparation, G.W.; writing---review and editing, all authors; visualization, G.W. All authors have read and agreed to the published version of the manuscript.}
\funding{This research was supported by funding from the NASA Terrestrial Hydrology Program (Grant No. NNX16AQ24G). }
\conflictsofinterest{The authors declare no conflict of interest.}
\end{paracol}
\reftitle{References}
\externalbibliography{yes}
\newcommand{\unskip\enskip\hbox{}\nobreak\hfill$\lozenge$}{\unskip\enskip\hbox{}\nobreak\hfill$\lozenge$}
\newenvironment{example}[1][]{\begin{Example}[#1]}{\end{Example}}
\newcommand{\lemref}[1]{Lemma~\textup{\ref{lem:#1}}}
\newcommand{\propref}[1]{Proposition~\textup{\ref{p:#1}}}
\newcommand{\corref}[1]{Corollary~\textup{\ref{c:#1}}}
\newcommand{\thmref}[1]{Theorem~\textup{\ref{t:#1}}}
\newcommand{\remref}[1]{Remark~\textup{\ref{r:#1}}}
\newcommand{\secref}[1]{Section~\ref{sec:#1}}
\newcommand{\subref}[1]{Subsection~\ref{sub:#1}}
\newcommand{\exref}[1]{Example~\ref{ex:#1}}
\newcommand{\itref}[1]{\ref{it:#1}}
\newcommand{\defref}[1]{Definition~\ref{d:#1}}
\newcommand{\mathbb{D}}{\mathbb{D}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\DE}[1][J]{\mathcal{D}_E(#1)}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{M}}{\mathbb{M}}
\newcommand{\MMI}{\mathbb{M}_I}
\newcommand{\FMI}{\MMI^\mathrm{fct}}
\newcommand{\Mh}[1][h]{\MMI^{#1}}
\newcommand{\Mh[h_\varepsilon]}{\Mh[h_\varepsilon]}
\newcommand{\Mde}[1][\delta]{\Mh[#1,\varepsilon]}
\newcommand{\Mdmh}[1][h_\varepsilon]{\Mh[\delta_m,#1(\delta_m)]}
\newcommand{\Mh[\delta_m,\varepsilon_m]}{\Mh[\delta_m,\varepsilon_m]}
\newcommand{\Mdefull}[2][\delta]{\Mh[#1,#2]}
\newcommand{\Mdh}[1][\delta]{\Mh[#1,h(#1)]}
\newcommand{\Mdheps}[1][h_\varepsilon]{\Mh[\delta,#1(\delta)]}
\newcommand{\HMI}[1][h]{\mathfrak{M}_I^{#1}}
\newcommand{\DEMI}[1][\delta]{\HMI[#1,\varepsilon]}
\newcommand{\DEMIfull}[2]{\HMI[#1,#2]}
\newcommand{\Ade}[1][X]{A_{\delta,\varepsilon}^{#1}}
\newcommand{\Adtwoe}[1][X]{A_{2\delta,\varepsilon}^{#1}}
\newcommand{\Adh}[1][X]{A_{\delta,h(\delta)}^{#1}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\renewcommand{\P}{\mathbb{P}}
\newcommand{\Ps}[1]{\P(\{#1\})}
\newcommand{\bPs}[1]{\P\(\bigl\{\,#1\,\bigr\}\)}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{U}}{\mathbb{U}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathfrak{B}}{\mathfrak{B}}
\newcommand{\BMMI}[1][\delta]{B_{#1}^{\MMI}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{Q}}{\mathcal{Q}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathds{1}}{\mathds{1}}
\DeclareMathOperator{\supp}{supp}
\DeclareMathOperator{\diam}{diam}
\DeclareMathOperator{\cont}{cont}
\renewcommand{\epsilon}{\varepsilon}\newcommand{\varepsilon}{\varepsilon}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\smallcal}[1]{
{\mathchoice{\scriptstyle}{\scriptstyle}{\scriptscriptstyle}{\scriptscriptstyle}\mathcal{#1}}}
\newcommand{\smallcal{X}}{\smallcal{X}}
\newcommand{\smallcal{Y}}{\smallcal{Y}}
\newcommand{\boldsymbol{\nu}}{\boldsymbol{\nu}}
\newcommand{\musq}[1][\mu]{{#1}^{\otimes 2}}
\newcommand{\mupsq}[1][\mu]{(#1')^{\otimes 2}}
\newcommand{\munpsq}[1][n]{\mupsq[\mu_{#1}]}
\newcommand{\munsq}[1][n]{\mu_{#1}^{\otimes 2}}
\newcommand{\hat{\nu}}{\hat{\nu}}
\newcommand{\hat{\mu}}{\hat{\mu}}
\newcommand{\hat{h}}{\hat{h}}
\newcommand{\hat{X}}{\hat{X}}
\newcommand{\hat{r}}{\hat{r}}
\newcommand{\hat{\eps}}{\hat{\varepsilon}}
\newcommand{\hat{x}}{\hat{x}}
\newcommand{\hat{u}}{\hat{u}}
\newcommand{\hat{y}}{\hat{y}}
\newcommand{\hat{v}}{\hat{v}}
\newcommand{\mu_\eps}{\mu_\varepsilon}
\newcommand{\mu_\eps^{\otimes 2}}{\mu_\varepsilon^{\otimes 2}}
\newcommand{\,\hat{\!\smallx}}{\,\hat{\!\smallcal{X}}}
\newcommand{\betrag}[1]{\bigl| #1 \bigr|}
\newcommand{\Betrag}[1]{\Bigl| #1 \Bigr|}
\newcommand{\wtspace}{\mathchoice{\,}{\,}{}{}}
\newcommand{\wmspace}{\mathchoice{\;}{\,}{}{}}
\newcommand{\leftmidright}[5]{\left#1\, #2\vphantom{#4} \>\right#3 \left. \vphantom{#2} #4 \,\right#5}
\newcommand{\bigm|}{\bigm|}
\newcommand{\setbar}[2]{\{\wtspace #1 \mid #2 \wtspace\}}
\newcommand{\bsetbar}[2]{\bigl\{\, #1 \bigm| #2 \,\bigr\}}
\newcommand{\set}[2]{\{\wtspace #1 : #2 \wtspace\}}
\newcommand{\bset}[2]{\bigl\{#1 : #2\bigr\}}
\newcommand{\aset}[2]{\leftmidright{\{}{#1}{:}{#2}{\}}}
\newcommand{\varaset}[2]{\left\{\, #1 \bigm| #2 \,\right\}}
\newcommand{\Bset}[2]{\Bigl\{\, #1 \,:\, #2 \,\Bigr\}}
\newcommand{\xrightarrow{\mathrm{mGw}}}{\xrightarrow{\mathrm{mGw}}}
\newcommand{\xrightarrow[\scriptscriptstyle n\to\infty]{\mathrm{mGw}}}{\xrightarrow[\scriptscriptstyle n\to\infty]{\mathrm{mGw}}}
\newcommand{\tow}[1][]{\xrightarrow[\scriptscriptstyle #1]{w}}
\newcommand{\xrightarrow{\mathcal{L}}}{\xrightarrow{\mathcal{L}}}
\newcommand{\lim_{n\to\infty}}{\lim_{n\to\infty}}
\newcommand{\liminf_{n\to\infty}}{\liminf_{n\to\infty}}
\newcommand{\liminf_{\delta\downarrow0}}{\liminf_{\delta\downarrow0}}
\newcommand{\limsup_{\delta\downarrow0}}{\limsup_{\delta\downarrow0}}
\newcommand{\limsup_{n\to\infty}}{\limsup_{n\to\infty}}
\newcommand{\restricted}[1]{{\mathclose|}_{#1}}
\newcommand{\bigldelimiter}[1]{\mathchoice{\bigl#1}{\bigl#1}{{\textstyle#1}}{{\scriptstyle#1}}}
\newcommand{\bigrdelimiter}[1]{\mathchoice{\bigr#1}{\bigr#1}{{\textstyle#1}}{{\scriptstyle#1}}}
\renewcommand{\(}{\bigldelimiter(}
\renewcommand{\)}{\bigrdelimiter)}
\newcommand{\doublesup}[2]{\mathop{\sup_{#1}}_{#2}}
\newcommand{\folge}[2][n]{(#2_{#1})_{#1\in\NN}}
\newcommand{\bfolge}[2][n]{\(#2\)_{#1\in\NN}}
\newcommand{\ton}[1][n]{\wmspace\displaystyle\mathop{\longrightarrow}_{\scriptscriptstyle #1\to\infty}\wmspace}
\newcommand{\displaystyle}{\displaystyle}
\newcommand{d_\mathrm{mGTV}}{d_\mathrm{mGTV}}
\newcommand{d_\mathrm{mGP}}{d_\mathrm{mGP}}
\newcommand{d_\mathrm{fGP}}{d_\mathrm{fGP}}
\newcommand{\dPr}[1][]{d_\mathrm{Pr}^{#1}}
\newcommand{\Mf}{\mathcal{M}_\mathrm{f}}
\newcommand{\floor}[1]{\lfloor#1\rfloor}
\renewcommand{\d}{\mathrm{d}}
\newcommand{\d x}{\d x}
\newcommand{\d y}{\d y}
\newcommand{\d t}{\d t}
\newcommand{\d u}{\d u}
\newcommand{\integralspace}{\/\mathchoice{\;}{\,}{\,}{}}
\newcommand{\integral}[4]{\int_{#1}^{#2} #3 \integralspace\d#4}
\newcommand{\plainint}[2]{\int #1 \integralspace\d#2}
\newcommand{\pintegral}[4]{\integral{#1}{#2}{\left(#3\right)}{#4}}
\newcommand{\inta}[3]{\integral{#1}{}{#2}{#3}}
\newcommand{\intp}[1]{\plainint{#1}{\P}}
\newcommand{\intpw}[1]{\plainint{#1}{\P(\omega)}}
\newcommand{\intap}[2]{\integral{#1}{}{#2}{\P}}
\newcommand{\intapw}[2]{\integral{#1}{}{#2}{\P(\omega)}}
\newcommand{\intmu}[1]{\plainint{#1}{\mu}}
\newcommand{\intmuw}[1]{\plainint{#1}{\mu(\omega)}}
\newcommand{\intamu}[2]{\integral{#1}{}{#2}{\mu}}
\newcommand{\intnu}[1]{\plainint{#1}{\nu}}
\newcommand{\intanu}[2]{\integral{#1}{}{#2}{\nu}}
\newcommand{\kernel}[3]{#1(#2,\, #3)}
\newcommand{\bkernel}[3]{#1\(#2,\, #3\)}
\newcommand{\kernint}[4]{\int #1 \integralspace \kernel{#2}{#3}{\d#4}}
\newcommand{\kerninta}[5]{\int_{#1} #2 \integralspace \kernel{#3}{#4}{\d#5}}
\newcommand{\bkerninta}[5]{\int_{#1} #2 \integralspace \bkernel{#3}{#4}{\d#5}}
\newcommand{\bkernint}[4]{\int #1 \integralspace \bkernel{#2}{#3}{\@#4}}
\newcommand{\probint}[3]{\int #1 \integralspace #2(\d#3)}
\newcommand{\bprobint}[3]{\int #1 \integralspace #2\(\d#3\)}
\newcommand{\probinta}[4]{\int_{#1} #2 \integralspace #3(\d#4)}
\newcommand{\bprobinta}[4]{\int_{#1} #2 \integralspace #3\(\d#4\)}
\newcommand{\probintamu}[3]{\int_{#1} #2 \integralspace \mu\(\d#3\)}
\newcommand{\probintanu}[3]{\int_{#1} #2 \integralspace \nu(\d#3)}
\newcommand{\define}[1]{\emph{#1}}
\newcommand{\comment}[1]{}
\newenvironment{proofsteps}{\setcounter{enumi}{0}}{}
\newcommand{\refstepcounter{enumi}\removelastskip\smallskip\par\noindent\emph{Step \arabic{enumi}.} \hspace{0.5ex}}{\refstepcounter{enumi}\removelastskip\smallskip\par\noindent\emph{Step \arabic{enumi}.} \hspace{0.5ex}}
\newcommand{\proofcase}[3][\Rightarrow]{\removelastskip\smallskip\par\noindent\emph{``#2$\,#1\,$#3'':}\hspace{0.5ex}}
\newcommand{\eproofcase}[2]{\proofcase[\Leftrightarrow]{#1}{#2}}
\newcommand{c\`adl\`ag}{c\`adl\`ag}
\newcommand{modulus of \cadlag ness}{modulus of c\`adl\`ag ness}
\newcommand{moduli of \cadlag ness}{moduli of c\`adl\`ag ness}
\newcommand{\eqn}[1]{\begin{equation} #1 \end{equation}}
\newcommand{\eqan}[1]{\begin{align} #1 \end{align}}
\newcommand{\lbeq}[1]{\label{#1}}
\newcommand{\refeq}[1]{(\ref{#1})}
\newcommand{\nonumber}{\nonumber}
\newcommand\red[1]{{\color{red}#1}}
\newcommand{\chs}[1]{{\small\color{red}(S: #1)}}
\newcommand{\chw}[1]{{\small\color{blue}(W: #1)}}
\newcommand{\cgray}[1]{{\color{usablecyan} #1}}
\newcommand{\cgreen}[1]{{\color{usablegreen} #1}}
\newcommand{\todo}[1]{\par {\tt\scriptsize todo: #1} \par}
\renewcommand{\labelenumi}{\theenumi}
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}%
\newcommand{\picturefig}[4]{
\begin{figure}[t]
\begin{center}
\includegraphics[scale=#1]{#2}
\end{center}
\caption{#3}
\label{pic:#4}
\end{figure}}
\setcounter{secnumdepth}{3}
\setcounter{tocdepth}{2}
\numberwithin{equation}{section}
\begin{document}
\author{
Sandra Kliem%
\thanks{
Fakult\"at f\"ur Mathematik, Universit\"at Duisburg-Essen, Thea-Leymann-Str.\ 9, D-45127 Essen,
Germany.\newline
\hspace*{1.8em}E-mail: {\tt [email protected]}, {\tt [email protected]}}
\and
Wolfgang L\"ohr$^{*}$
}
\title{Existence of mark functions in marked metric measure spaces\footnotetext[0]{Preprint of \emph{Electronic
Journal of Probability}, \textbf{20}, no.\ 73, pp.\ 1--24\\[-2.1ex]}}
\date{\today}
\maketitle
\begin{abstract}
We give criteria on the existence of a so-called mark function in the context of marked metric measure spaces
(mmm-spaces). If an mmm-space admits a mark function, we call it a functionally-marked metric measure space
(fmm-space). This is not a closed property in the usual marked Gromov-weak topology, and thus we put particular
emphasis on the question under which conditions it carries over to a limit.
We obtain criteria for deterministic mmm-spaces as well as random mmm-spaces and mmm-space-valued processes.
As an example, our criteria are applied to prove that the tree-valued Fleming-Viot dynamics with mutation and selection from
\cite{DGP12} admits a mark function at all times, almost surely. Thereby, we fill a gap in a former proof of this
fact, which used a wrong criterion.
Furthermore, the subspace of fmm-spaces, which is dense and not closed, is investigated in detail. We show that
there exists a metric that induces the marked Gromov-weak topology on this subspace and is complete. Therefore,
the space of fmm-spaces is a Polish space. We also construct a decomposition into closed sets which are related
to the case of uniformly equicontinuous mark functions.
\smallskip
\noindent
{\bf Key words:} mark function; tree-valued Fleming-Viot process; mutation; marked metric measure space; Gromov-weak topology; Prohorov metric; Lusin's theorem.
\smallskip
\noindent
{\bf MSC2000 subject classification.} {Primary
60K35,
Secondary
60J25,
60G17,
60G57.
}
\end{abstract}
\smallskip
\begin{quote}\begin{quote}\begin{quote}
\def\subsection{\subsection}
\footnotesize
\tableofcontents
\end{quote}\end{quote}\end{quote}
\smallskip
\subsection{Introduction}
A metric (finite) measure space (\emph{mm-space}) is a complete, separable metric space $(X,r)$ together with a finite
measure $\nu$ on it. Considering the space of (equivalence classes of) mm-spaces itself as a metric space dates
back to Gromov's invention of the $\underline\Box_\lambda$-metric in \cite[Chapter~3$\frac12$]{Gromov}.
Motivated by Aldous' work on the Brownian continuum random tree (\cite{Aldous:CRT3}), it was realised in
\cite{GPW09} that the space of mm-spaces is a useful state space for tree-valued stochastic processes, and
Polish when equipped with the Gromov-weak topology. That the Gromov-weak topology actually coincides with the
one induced by the $\underline\Box_\lambda$-metric was shown in \cite{Loehr13}.
Important examples for the use of mm-spaces within probability theory are individual-based populations $X$ with
given mutual genealogical distances $r$ between individuals. Here, $r$ can for instance measure the time to the most recent
common ancestor (MRCA) (cf.\ \cite[(2.7), Remark~3.3]{DGP12}), where the resulting metric space is ultrametric.
Another possibility is the number of mutations back to the MRCA (cf.\ \cite{KW2014}), where the resulting space is not
ultrametric. Finally, there is a sampling measure $\nu$ on the space $X$ which models population density.
This means that the state of the process is an mm-space $(X,r,\nu)$.
Such individual-based models are often formulated for infinite population size (with diffuse measures $\nu$) but
obtained as the high-density limit of approximating models with finite populations (where $\nu$ is typically the
uniform distribution on all individuals).
For encoding more information about the individuals, such as an (allelic) type or location (which may change over time),
marked metric measure spaces (mmm-spaces) and the corresponding marked Gromov-weak topology (mGw-topology) have
been introduced in \cite{DGP11}.
For a fixed complete, separable metric space $(I,d)$ of marks, the sampling measure $\nu$ is replaced by a measure
$\mu$ on $X\times I$, which models population density in combination with mark distribution.
A natural question in this context is whether or not every point of the limiting population $X$ has a single
mark almost surely, that is, does genetic distance zero imply the same type/location?
Put differently, we ask ourselves if $\mu$ factorizes into a ``population density'' measure $\nu$ on $X$
and a mark function $\kappa\colon X\to I$ assigning each individual its mark. If this is the case, we call the
mmm-space functionally-marked (fmm-space). This property is often desirable, and one might want to consider the
space of fmm-spaces, rather than mmm-spaces, as the state space. Unfortunately, the subspace of fmm-spaces is not
closed in the mGw-topology, which means that limits of finite-population models that are constructed as fmm-spaces might
not admit mark functions themselves. It is therefore of interest whether the space of fmm-spaces with the marked Gromov-weak topology
is a Polish space (that is, a ``good'' state space). Here, we show in \thmref{Polish} that this is indeed the
case. We also provide criteria for checking whether an mmm-space admits a mark function.
For limiting populations, they are given in terms of the approximating mmm-spaces. We derive such criteria
for deterministic spaces (\thmref{modulus}), random spaces (\thmref{rnd-modulus}) and mmm-space-valued
processes (\thmref{pr-modulus} and \thmref{modcadlag}).
An important example of such a high-density limit of approximating models with finite populations is the
tree-valued Fleming-Viot dynamics. In the neutral case, it is constructed in \cite{GPW13} using the formalism of
mm-spaces. In \cite{DGP12}, (allelic) types -- encoded as marks of mmm-spaces -- are included, in order
to model mutation and selection.
For this process, the question of existence of a mark function has already been posed.
In \cite[Remark~3.11]{DGP12} and \cite[Theorem~6]{DGP13} it is stated that the tree-valued Fleming-Viot process
admits a mark function at all times, almost surely.
The given proof, however, contains a gap, because it relies on the criterion claimed in \cite[Lemma~7.1]{DGP13}, which is
wrong in general, as we show in \exref{counter}.
We fill this gap by applying our criteria and showing in \thmref{mark-FV} that the claim is indeed true and the
tree-valued Fleming-Viot process with mutation and selection (TFVMS) admits a mark function at all times, almost
surely. We also show in \thmref{mark-Lambda} that the same arguments apply to the $\Lambda$-version of the TFVMS
in the neutral case, that is where selection is not present.
Intuitively, the existence of a mark function in the case of the TFVMS holds because mutations are large but rare in the approximating sequence of tree-valued Moran models.
Hence, as genealogical distance becomes small, the probability that any mutation happened at all in the
recent past becomes small as well (recall that distance equals time to the MRCA). In contrast, in \cite{KW2014}, where evolving phylogenies of trait-dependent branching with mutation and competition are under investigation, mutations happen at a high rate but are
small which justifies the hope for the existence of a mark function also for the limiting model. Our criteria
are also suited for this kind of situation.
\medskip\noindent\textbf{Outline. }
The paper is organized as follows. In the subsections of the introduction we first introduce notations and basic
results for the Prohorov metric for finite measures. Then, we give a short introduction to the space $\MMI$ of
marked metric measure spaces (mmm-spaces) with the marked Gromov-weak topology, as well as the marked
Gromov-Prohorov metric $d_\mathrm{mGP}$ on it. We continue with defining the so-called functionally-marked metric measure
spaces (fmm-spaces) $\FMI\subseteq \MMI$, and finally investigate the case of equicontinuous mark functions as an
illustrative example.
We emphasize that the restriction of the marked Gromov-Prohorov metric $d_\mathrm{mGP}$ to $\FMI$ is not complete.
In \secref{Polish}, we therefore show that there exists another metric on $\FMI$ that induces the marked
Gromov-weak topology and is complete. As one sees in \subref{equicont}, the situation becomes easy if we
restrict to a subspace of $\MMI$ containing spaces with uniformly equicontinuous mark functions. We introduce in
\subref{betaestim} several related subspaces capturing some aspect of equicontinuity, and obtain a decomposition
of $\FMI$ into closed sets. This decomposition is used to prove Polishness of $\FMI$, and in \secref{criteria} to
formulate criteria for the existence of mark functions.
\secref{criteria} gives criteria for the existence of mark functions. Based on the construction of the
complete metric and the decomposition of $\FMI$, we derive in \subref{criteria-1} criteria to
check if an mmm-space admits a mark function, especially in the case where it is given as a
marked Gromov-weak limit.
We then transfer the results in \subref{criteria-2} to random mmm-spaces and in \subref{criteria-3} to
$\MMI$-valued stochastic processes.
To conclude, \secref{examples} gives examples. We first show that the criterion in \cite{DGP13} is wrong in
general by means of counterexamples. Our criteria are then applied in \subref{FV} to prove the existence of a
mark function for the tree-valued Fleming-Viot dynamics with mutation and selection. To this goal, we verify the
necessary assumptions for a sequence of approximating tree-valued Moran models. In \subref{TFV} we show that a
similar strategy applies if we replace the tree-valued Moran models by so-called tree-valued $\Lambda$-Cannings
models. Finally, in \subref{FApp}, a future application to evolving phylogenies of trait-dependent branching with
mutation and competition is indicated.
\subsection{Notations and prerequisites}
In this paper, let all topological spaces be equipped with their Borel $\sigma$-algebras.
We use the following notation throughout the article.
\begin{Notation}
For a Polish space $E$, let $\mathcal{M}_1(E)$
respectively $\Mf(E)$ denote the space of probability respectively finite measures on the Borel $\sigma$-algebra
$\mathfrak{B}(E)$ on $E$. The space $\Mf(E)$ is always equipped with the topology of weak convergence, which is denoted by $\tow$.
We also use the distance in variational norm of\/ $\mu,\nu \in \Mf(E)$, which is
\begin{equation}
\| \mu-\nu \| := \sup_{B\in\mathfrak{B}(E)} \bigl|\mu(B)-\nu(B)\bigr|.
\end{equation}
In particular, $\| \mu \| = \mu(E)$, and\/ $\|\mu-\nu\| = \nu(E)-\mu(E) $ if\/ $\mu \leq \nu$, that is $\mu(A)
\leq \nu(A)$ for all\/ $A \in \mathfrak{B}(E)$.
For\/ $Y \in \mathfrak{B}(E)$ and\/ $\mu \in \Mf(E)$, denote by\/ $\mu \restricted Y\in \Mf(E)$ the restriction
of\/ $\mu$ to\/ $Y$, that is\/ $\mu\restricted Y(B):=\mu(B \cap Y)$ for all\/ $B \in \mathfrak{B}(E)$. Because\/
$\mu \restricted Y \leq \mu$, we have\/ $\| \mu \restricted Y - \mu \| = \mu(E\setminus Y)$.
For $\varphi\colon E \rightarrow F$ measurable, with $F$ some other Polish space, denote the image measure of $\mu$
under $\varphi$ by $\varphi_* \mu:=\mu\circ \varphi^{-1}$.
Finally, for the product space $X:= E \times F$, the canonical projection operators from $X$ onto $E$ and $F$
are denoted by $\pi_E$ and $\pi_F$, respectively.
\end{Notation}
\begin{definition}[Prohorov metric]
For finite measures\/ $\mu_0, \mu_1$ on a metric space\/ $(E, r)$, the \define{Prohorov metric} is
defined as
\begin{equation}
\dPr(\mu_0, \mu_1) := \inf\bset{\varepsilon>0}{\mu_i(A) \le \mu_{1-i}(A^\varepsilon) + \varepsilon \;\;\forall A\in
\mathfrak{B}(E),\, i\in\{0,1\}},
\end{equation}
where\/ $A^\varepsilon := \set{x\in E}{r(A, x) < \varepsilon}$ is the $\varepsilon$-neighbourhood of\/ $A$.
\end{definition}
It is well-known that the Prohorov metric metrizes the weak convergence of measures if and only if the
underlying metric space is separable. The following equivalent expression for the Prohorov metric turns out to be useful.
\begin{remark}[coupling representation of the Prohorov metric]
Let $(E,r)$ be a separable metric space and $\mu_1,\mu_2\in \mathcal{M}_1(E)$.
For a finite measure $\xi$ on $E^2$, we denote the marginals as $\xi_1 := \xi(\cdot \times E)$ and
$\xi_2:=\xi(E\times \cdot)$. It is well-known (see, e.g., \cite[Theorem~III.1.2]{EK}) that
\begin{equation}\label{eq:Prc}
\dPr(\mu_1, \mu_2) = \inf\bset{\varepsilon>0}{\exists \xi\in\mathcal{M}_1(E^2) \text{ with }
\xi(N_\varepsilon) \le \varepsilon,\;\xi_i=\mu_i,\,i=1,2},
\end{equation}
where $N_\varepsilon := \set{(x,y)\in E^2}{r(x,y) \ge \varepsilon}$. We obtain from this equation
\begin{equation}\label{eq:Prcouple}
\dPr(\mu_1,\mu_2) = \inf\bset{\varepsilon>0}{\exists \xi'\in\Mf(E^2) \text{ with } \xi'(N_\varepsilon)=0,\;
\xi_i' \le \mu_i,\, \|\mu_i - \xi_i'\| \le \varepsilon, \,i=1,2}.
\end{equation}
Indeed, consider $\xi':=\xi\restricted{E^2\setminus
N_\varepsilon}$ respectively $\xi:=\xi'+(1-\|\xi'\|)^{-1} \big( (\mu_1-\xi'_1) \otimes (\mu_2-\xi'_2) \big)$ to obtain equality in the above.
Following the ideas of the proof of the representation \eqref{eq:Prc} in \cite{EK}, the representation \eqref{eq:Prcouple} for the Prohorov metric $\dPr(\mu_1, \mu_2)$ is easily seen to hold true for
measures $\mu_1,\mu_2\in\Mf(E)$ as well, which are not necessarily probability measures.
\end{remark}
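For orientation, we record a simple example (it is not needed in the sequel): for two Dirac measures, the Prohorov distance is the truncated distance between the atoms. For $x,y\in E$,
\begin{equation}
\dPr(\delta_x,\delta_y) = r(x,y)\land 1.
\end{equation}
Indeed, if $\varepsilon>r(x,y)$, then $x\in A$ implies $y\in A^\varepsilon$ (and vice versa), so every such $\varepsilon$ is admissible in the definition, as is every $\varepsilon\ge 1$ due to the additive term $\varepsilon$; conversely, for $\varepsilon<r(x,y)\land 1$, the set $A=\{x\}$ violates the defining inequality. The coupling representation \eqref{eq:Prc} yields the same value: the only coupling of\/ $\delta_x$ and\/ $\delta_y$ is $\xi=\delta_{(x,y)}$, and\/ $\xi(N_\varepsilon)\le\varepsilon$ holds precisely if\/ $\varepsilon>r(x,y)$ or\/ $\varepsilon\ge 1$.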
From \eqref{eq:Prcouple}, we can easily deduce the following lemma, which we use below.
\begin{lemma}[rectangular lemma]\label{lem:rectangle}
Let\/ $(E,r)$ be a separable metric space, $\varepsilon,\delta>0$, and\/ $\mu_1,\mu_2\in\Mf(E)$. Assume that\/
$\dPr(\mu_1,\mu_2) < \delta$ and there is\/ $\mu_1'\le \mu_1$ with\/ $\|\mu_1-\mu_1'\| \le \varepsilon$. Then
\begin{equation}\label{eq:rectangle}
\exists \mu_2' \le \mu_2: \dPr(\mu_1', \mu_2') < \delta,\; \|\mu_2-\mu_2'\| \le \varepsilon.
\end{equation}
\end{lemma}
\begin{proof}
According to \eqref{eq:Prcouple}, we find $\xi\in\Mf(E^2)$ with marginals $\xi_i \le \mu_i$, $i=1,2$,
$\|\mu_i-\xi_i\| <\delta$, and $\xi(\{r\ge\delta\})=0$.
Let $L$ be a probability kernel from $E$ to $E$ (for existence see \cite[Theorems~8.36--8.38]{Kle14}) with $\xi=\xi_1\otimes L$ and define
$\xi':=(\mu_1' \land \xi_1)\otimes L$.
Obviously, $\xi'_1\le \mu_1'$ and $\|\mu_1'-\xi_1'\| \le \|\mu_1-\xi_1\| < \delta$. Now set
\begin{equation}
\mu_2' := \xi'_2 + \mu_2 - \xi_2.
\end{equation}
Then $\xi_2'\le\mu_2'$, $\|\mu_2' - \xi'_2\| = \|\mu_2-\xi_2\| < \delta$ and thus $\dPr(\mu_1',\mu_2') <
\delta$ by \eqref{eq:Prcouple}.
Furthermore, $\mu_2'\le \mu_2$ and $\|\mu_2-\mu_2'\| = \|\xi_2 - \xi_2'\| \le \|\mu_1 - \mu_1'\| \le
\varepsilon$.
\end{proof}
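A concrete toy instance of the lemma (added for illustration): let $E=\RR$ with the Euclidean metric, $\mu_1=\delta_0+\delta_{10}$ and $\mu_2=\delta_{0.1}+\delta_{10.1}$, so that
\begin{equation}
\dPr(\mu_1,\mu_2)=0.1<\delta:=0.2.
\end{equation}
For the restriction $\mu_1'=\delta_0$, that is $\varepsilon=\|\mu_1-\mu_1'\|=1$, the lemma guarantees some $\mu_2'\le\mu_2$ with $\dPr(\mu_1',\mu_2')<0.2$ and $\|\mu_2-\mu_2'\|\le 1$; here one may take $\mu_2'=\delta_{0.1}$, the part of\/ $\mu_2$ that the coupling pairs with the retained atom of\/ $\mu_1$.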
\subsection{The space of marked metric measure spaces (mmm-spaces)}
In this subsection, we recall the space $\MMI$ of marked metric measure spaces, and the marked Gromov-Prohorov
metric $d_\mathrm{mGP}$, which induces the marked Gromov-weak topology on it. This space, $(\MMI, d_\mathrm{mGP})$, will be the
basic space used in the rest of the paper. These concepts have been introduced in \cite{DGP11}, and are based on
the corresponding non-marked versions introduced in \cite{GPW09}. In contrast to \cite{DGP11}, we allow the
measures of the marked metric measure spaces to be finite, that is, we do not restrict ourselves to probability measures. Because a
sequence of finite measures converges weakly if and only if their total masses and the normalized measures converge, or
the masses converge to zero, this straightforward generalization requires only minor modifications (compare
\cite[Section~2.1]{LoehrVoisinWinter14}, where this generalization is done for metric measure spaces without
marks).
In what follows, fix a complete, separable metric space $(I,d)$, called the \emph{mark space}. It is the same for all marked metric
measure spaces in $\MMI$.
\begin{definition}[mmm-spaces, $\MMI$]
\begin{enumerate}
\item
An \emph{($I$-)marked metric measure space (mmm-space)} is a triple $(X,r,\mu)$ such that\/ $(X,r)$ is a complete,
separable metric space, and\/ $\mu \in \Mf(X \times I)$, where $X \times I$ is equipped with the product topology.
\item
Let\/ $\smallcal{X}_i = (X_i,r_i,\mu_i)$, $i=1,2$, be two mmm-spaces, and\/ $\nu_i:=\mu_i(\cdot \times I)$ the
marginal of\/ $\mu_i$ on $X_i$. For a map $\varphi\colon X_1 \to X_2$ we use the notation
\begin{equation}\label{eq:tildephi}
\tilde\varphi \colon X_1 \times I \to X_2 \times I, \quad (x,u) \mapsto \tilde{\varphi}(x,u) := (\varphi(x),u).
\end{equation}
We call\/ $\smallcal{X}_1$ and\/ $\smallcal{X}_2$ \emph{equivalent} if they are measure- and mark-preserving
isometric, that is there is an isometry $\varphi\colon \supp(\nu_1) \rightarrow \supp(\nu_2)$, such that
\begin{equation}
\tilde{\varphi}_* \mu_1 = \mu_2.
\end{equation}
\item Finally, define
\begin{equation}
\MMI := \bigl\{ \text{equivalence classes of mmm-spaces\/} \bigr\}.
\end{equation}
With a slight abuse of notation, we identify an mmm-space with its equivalence class and write
$\smallcal{X} = (X,r,\mu) \in \MMI$ for both mmm-spaces and equivalence classes thereof.
\end{enumerate}
\end{definition}
Next, we recall the marked Gromov-weak topology from \cite[Section~2.2]{DGP11} that turns $\MMI$ into a Polish
space (cf.\ \cite[Theorem~2]{DGP11}). To this goal, we first recall
\begin{definition}[marked distance matrix distribution]
Let\/ $\smallcal{X} := (X,r,\mu) \in \MMI$ and
\begin{equation}
R^{(X,r)} := \left\{\begin{matrix}
(X \times I)^{\mathbb{N}} & \rightarrow & \RR_+^{\binom{\NN}{2}} \times I^{\mathbb{N}}, \cr
\big( (x_k,u_k)_{k \geq 1} \big) & \mapsto &\big( \big(r(x_k,x_l) \big)_{1 \leq k < l}, (u_k)_{k \geq 1} \big). \cr
\end{matrix} \right.
\end{equation}
The \emph{marked distance matrix distribution} of\/ $\smallcal{X}$ is defined as
\begin{equation}
\boldsymbol{\nu}^\smallcal{X} := \|\mu\| \cdot \big( R^{(X,r)} \big)_* (\tfrac\mu{\|\mu\|})^{\mathbb{N}}
\in \Mf\big(\RR_+^{\binom{\NN}{2}} \times I^{\mathbb{N}} \big).
\end{equation}
\end{definition}
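As a minimal illustration (added for orientation), consider a one-point space $\smallcal{X}=(\{x\},r,c\,\delta_{(x,u)})$ with $c>0$ and $u\in I$. Every sequence is mapped by $R^{(X,r)}$ to the zero distance matrix $\underline{0}$ together with the constant mark sequence $(u,u,\ldots)$, and hence
\begin{equation}
\boldsymbol{\nu}^\smallcal{X} = c\,\delta_{(\underline{0},(u,u,\ldots))} \in \Mf\big(\RR_+^{\binom{\NN}{2}}\times I^{\NN}\big).
\end{equation}
In particular, the total mass\/ $\|\mu\|=c$ can be read off from\/ $\boldsymbol{\nu}^\smallcal{X}$.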
The marked Gromov-weak topology is the one induced by the map $\smallcal{X}\mapsto \boldsymbol{\nu}^\smallcal{X}$.
\begin{definition}[marked Gromov-weak topology]\label{def:gw-top}
Let\/ $\smallcal{X}, \smallcal{X}_1,\smallcal{X}_2,\ldots \in \MMI$. We say that\/ $(\smallcal{X}_n)_{n\in\NN}$ converges to $\smallcal{X}$ in
the \emph{marked Gromov-weak topology}, $\smallcal{X}_n \xrightarrow[\scriptscriptstyle n\to\infty]{\mathrm{mGw}} \smallcal{X}$, if and only if
\begin{equation}
\boldsymbol{\nu}^{\smallcal{X}_n} \tow[n\to\infty] \boldsymbol{\nu}^\smallcal{X}
\end{equation}
in the weak topology on $\Mf\big( \mathbb{R}_+^{\binom{\NN}{2}} \times I^{\mathbb{N}} \big)$.
\end{definition}
Finally, let us recall the Gromov-Prohorov metric from \cite[Section~3.2]{DGP11}. It is complete and metrizes
the marked Gromov-weak topology, as shown in \cite[Proposition~3.7]{DGP11}.
\begin{definition}[marked Gromov-Prohorov metric, $d_\mathrm{mGP}$]
For $\smallcal{X}_i=(X_i,r_i,\mu_i) \in \MMI, i=1,2$, set
\begin{equation}
d_\mathrm{mGP}(\smallcal{X}_1,\smallcal{X}_2) :=
\inf_{(E,\varphi_1,\varphi_2)} \dPr\big( (\tilde{\varphi}_1)_* \mu_1, (\tilde{\varphi}_2)_* \mu_2 \big),
\end{equation}
where the infimum is taken over all complete, separable metric spaces $(E,r)$ and isometric embeddings
$\varphi_i\colon X_i \rightarrow E$, and\/ $\tilde\varphi_i$ is as in \eqref{eq:tildephi}, $i=1,2$. The
Prohorov metric $\dPr$ is the one on $\Mf(E \times I)$, based on the metric $\tilde{r} = r + d$ on $E \times I$,
metrizing the product topology. The metric $d_\mathrm{mGP}$ is called the \emph{marked Gromov-Prohorov metric}.
\end{definition}
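For illustration (a small computation added for orientation), consider two one-point spaces with the same total mass $c>0$ but different marks, $\smallcal{X}_i=(\{x_i\},r_i,c\,\delta_{(x_i,u_i)})$ with $u_1,u_2\in I$. Embedding both points at the same location of $E$ is optimal, and the Prohorov distance between two Dirac masses of size $c$ at $\tilde r$-distance $\rho$ equals $\rho\land c$, so
\begin{equation}
d_\mathrm{mGP}(\smallcal{X}_1,\smallcal{X}_2) = d(u_1,u_2)\land c.
\end{equation}
In particular, one-point spaces with close marks are close in $d_\mathrm{mGP}$, and the contribution of the mark distance is capped by the total mass.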
A direct consequence of the fact that $d_\mathrm{mGP}$ induces the marked Gromov-weak topology is the following
characterization of marked Gromov-weak convergence obtained in \cite[Lemma~3.4]{DGP11}.
\begin{lemma}[embedding of marked Gromov-weakly converging sequences]\label{lem:seqembed}
Let\/ $\smallcal{X}_n=(X_n,r_n,\mu_n) \in \MMI$ for $n\in\NN\cup\{\infty\}$. Then\/ $(\smallcal{X}_n)_{n\in\NN}$
converges to $\smallcal{X}_\infty$ marked Gromov-weakly if and only if there is a complete, separable metric space\/
$(E,r)$, and isometric embeddings\/ $\varphi_n \colon X_n \to E$, $n\in\NN\cup\{\infty\}$, such that for $\tilde\varphi_n$ as in
\eqref{eq:tildephi},
\begin{equation}
(\tilde\varphi_n)_*\mu_n \tow (\tilde\varphi_\infty)_*\mu_\infty.
\end{equation}
\end{lemma}
\subsection{Functionally-marked metric measure spaces (fmm-spaces)}
Consider an $I$-marked metric measure space $\smallcal{X}=(X,r,\mu) \in \MMI$. Since $\mu$ is a finite measure on the
Polish space $X \times I$, regular conditional measures exist (cf.\ \cite[Theorems~8.36--8.38]{Kle14}), and we write
\begin{equation}\label{nuK}
\mu(\d x,\d u)
= \nu(\d x) \cdot K_x(\d u),
\end{equation}
in short $\mu=\nu\otimes K$, for the marginal $\nu := \mu(\cdot \times I) \in \Mf(X)$, and a ($\nu$-a.s.\
unique) probability kernel $K$ from $X$ to $I$.
In the present article we investigate criteria for the \emph{existence of a mark function} for $\smallcal{X}$, that
is (cf.\ \cite[Section 3.3]{DGP13}) a measurable function $\kappa\colon X \to I$ such that
\begin{equation}
\lbeq{mark-fcn}
\mu(\d x,\d u) = \nu(\d x) \cdot \delta_{\kappa(x)}(\d u),
\end{equation}
or equivalently, $K_x = \delta_{\kappa(x)}$ for $\nu$-almost every $x$.
Obviously, $\smallcal{X}$ admits a mark function if and only if $K_x$ is a Dirac measure for $\nu$-almost every $x$.
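The simplest example of an mmm-space without a mark function (recorded here only for illustration) is a single individual carrying two distinct marks: for $u_1,u_2\in I$ with $u_1\ne u_2$, let
\begin{equation}
\smallcal{X} = \big(\{x\},r,\tfrac12\delta_{(x,u_1)}+\tfrac12\delta_{(x,u_2)}\big) \in \MMI.
\end{equation}
Then $\nu=\delta_x$ and $K_x=\tfrac12(\delta_{u_1}+\delta_{u_2})$ is not a Dirac measure, so no mark function exists. By \lemref{dense} below, $\smallcal{X}$ is nevertheless a marked Gromov-weak limit of fmm-spaces.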
Recall that the complete, separable mark space $(I,d)$ is fixed once and for all.
\begin{Definition}[fmm-spaces, $\FMI$]
We call\/ $\smallcal{X}=(X,r,\nu,\kappa)$ an \define{($I$-)functionally-marked metric measure space}
(\define{fmm-space}) if $(X,r)$ is a complete, separable metric space, $\nu \in \Mf(X)$,
and $\kappa \colon X \to I$ is measurable.
We identify $\smallcal{X}$ with the marked metric measure space $(X, r, \mu)\in \MMI$, where $\mu$ satisfies
\eqref{mark-fcn}. With a slight abuse of notation, we write $(X,r,\nu,\kappa)=(X,r,\mu)$ if
\eqref{mark-fcn} is satisfied.
Denote by $\FMI\subseteq \MMI$ the space of (equivalence classes of) fmm-spaces.
\end{Definition}
A first, simple observation is that $\FMI$ is a dense subspace of $\MMI$.
\begin{lemma}\label{lem:dense}
The subspace\/ $\FMI$ is dense in\/ $\MMI$ with marked Gromov-weak topology.
\end{lemma}
\begin{proof}
For $\smallcal{X}=(X,r,\mu) \in \MMI$, define $\smallcal{X}_n=(X\times I, r_n, \nu_n, \kappa_n)\in\FMI$ with $\nu_n=\mu$,
$\kappa_n(x,u) = u$, and $r_n\((x,u), (y,v)\) := r(x,y) + e^{-n}\land d(u,v)$, for $x,y\in X,\;u,v\in I$.
It is easy to see that $\smallcal{X}_n \rightarrow \smallcal{X}$ in the marked Gromov-weak topology, because the mark coordinate contributes at most $e^{-n}$ to any distance in $(X\times I, r_n)$.
\end{proof}
\subsection{The equicontinuous case}\label{sub:equicont}
It directly follows from \lemref{dense} that the subspace $\FMI$ is not closed in $\MMI$, meaning that if
$\smallcal{X}_n \xrightarrow{\mathrm{mGw}} \smallcal{X}$ is a marked Gromov-weakly converging sequence in $\MMI$, and all $\smallcal{X}_n$ admit a
mark function, this need not be the case for $\smallcal{X}$.
In applications, however, the limit $\smallcal{X}$ is often not known explicitly, and it would be important to have
(sufficient) criteria for the existence of a mark function in terms of the $\smallcal{X}_n$ alone.
An easy possibility is Lipschitz equicontinuity: if all $\smallcal{X}_n$ admit a mark function that is Lipschitz
continuous with a common Lipschitz constant $L>0$, the same is true for $\smallcal{X}$ (see \cite{Piotrowiak:phd}).
More generally, this holds for uniformly equicontinuous mark functions as introduced below. We briefly discuss the equicontinuous
case in this subsection, because it is straightforward and illustrates the main ideas.
Recall that a \emph{modulus of continuity} is a function $h\colon \RR_+ \to \RR_+ \cup\{\infty\}$ that is
continuous in $0$ and satisfies $h(0)=0$. A function $f\colon X \to I$, where $(X,r)$ is a metric space,
is \emph{$h$-uniformly continuous} if $d\(f(x),f(y)\) \le h\(r(x,y)\)$ for all $x,y \in X$.
Note that for every modulus of continuity $h$, there exists another modulus of continuity $h'\ge h$ which is
increasing and continuous with respect to the topology of the one-point compactification of
$\RR_+$. Therefore, we can restrict ourselves without loss of generality to moduli of continuity from
\begin{equation}
\lbeq{eq:def-H}
\mathcal{H}:=\bsetbar{h\colon \RR_+ \to \RR_+\cup\{\infty\}}{h(0)=0,\; h\text{ is continuous and increasing}}.
\end{equation}
For $h\in\mathcal{H}$ and a metric space $(X,r)$, we define
\begin{equation}\label{eq:goodset}
A_h^X := A_h^{(X,r)} := \bset{(x_i,u_i)_{i=1,2}\in (X\times I)^2}{d(u_1,u_2) \le h(r(x_1,x_2))}
\subseteq (X\times I)^2.
\end{equation}
Note that $f\colon X \to I$ is $h$-uniformly continuous if and only if $\((x, f(x)), (y, f(y))\) \in A_h^X$
for all $x, y \in X$, and that $A_h^X$ is a closed set in $(X\times I)^2$ with product topology.
\begin{definition}[$\HMI$]\label{d:HMI}
For\/ $h\in\mathcal{H}$, let\/ $\HMI\subseteq \FMI$ be the space of marked metric measure spaces admitting an\/
$h$-uniformly continuous mark function.
\end{definition}
The next lemma states that a marked metric measure space $(X,r,\mu)$ admits an $h$-uniformly continuous mark
function if and only if a pair of independent samples from $\mu$ is almost surely in $A_h^X$. Furthermore, if a
sequence with $h$-uniformly continuous mark functions converges marked Gromov-weakly, the limit space also
admits an $h$-uniformly continuous mark function.
\begin{lemma}[uniform equicontinuity]\label{lem:equicont}
Fix a modulus of continuity\/ $h\in\mathcal{H}$.
\begin{enumerate}
\item\label{it:hmichar} $\displaystyle \HMI = \bset{(X,r,\mu)\in\MMI}{\musq(A_h^X) = \|\musq\|}.$
\item\label{it:equicontcl} $\HMI$ is closed in the marked Gromov-weak topology.
\end{enumerate}
\end{lemma}
\begin{proof}
The mmm-space $\smallcal{X}=(X, r, \mu)$ is in $\HMI$ if and only if $\supp(\mu)$ is the graph of an $h$-uniformly
continuous function. This is clearly equivalent to $\musq\((X\times I)^2 \setminus A_h^X\) = 0$.
Item \itref{equicontcl} is obvious from \itref{hmichar}, because $A_h^X$ is a closed set.
\end{proof}
This preliminary result is quite restrictive because it requires the same modulus of continuity
for all spaces involved. In fact, the mark function of the tree-valued Fleming--Viot
dynamics considered in \subref{FV} is not even continuous.
At the heart of the following generalisation to measurable mark functions lies the fact that measurable functions
are ``almost continuous'' by Lusin's celebrated theorem (see for instance \cite[Theorem~7.1.13]{BogachevII}).
Here, we give a version tailored to our setup:
\begin{LusinsThm}
Let\/ $X,Y$ be Polish spaces, $\mu$ a finite measure on\/ $X$, and\/ $f\colon X \rightarrow Y$ a measurable
function. Then, for every $\epsilon > 0$, there exists a compact set\/ $K_\epsilon \subseteq X$ such that\/
$\mu(X \setminus K_\epsilon) < \epsilon$ and\/ $f\restricted{K_\epsilon}$ is continuous.
\end{LusinsThm}
\section{The space of fmm-spaces is Polish}\label{sec:Polish}
The subspace $\FMI$ is not closed in $\MMI$ in the marked Gromov-weak topology, and hence the restriction of the
marked Gromov-Prohorov metric $d_\mathrm{mGP}$ to $\FMI$ is not complete. In this section, we show that there exists
another metric on $\FMI$ that induces the marked Gromov-weak topology and is complete. This shows that $\FMI$
is a Polish space in its own right.
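The failure of completeness can be seen from a concrete sequence, combining \lemref{dense} with any space outside $\FMI$:

```latex
\begin{example}
Let $\smallcal{X}=(X,r,\mu) \in \MMI \setminus \FMI$, for instance a one-point
space whose mark kernel is not a Dirac measure, and let
$\smallcal{X}_n \in \FMI$ be the approximating spaces constructed in the proof
of \lemref{dense}. Then $(\smallcal{X}_n)_{n\in\NN}$ converges marked
Gromov-weakly, hence is $d_\mathrm{mGP}$-Cauchy, but its limit $\smallcal{X}$
lies outside $\FMI$.
\end{example}
```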
\subsection{A complete metric on the space of fmm-spaces}
For a measure $\xi$ on $I$, we define
\begin{equation}
\beta_\xi := \probinta{I}{\probinta{I}{\left( 1 \land d(u,v) \right)}{\xi}{u}}{\xi}{v}.
\end{equation}
Note that $\beta_\xi=0$ if and only if $\xi$ is a Dirac measure.
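As a quick sanity check, consider a hypothetical two-atom mark distribution:

```latex
\begin{equation*}
  \xi = p\,\delta_u + (1-p)\,\delta_v \quad (p\in[0,1],\; u,v\in I)
  \qquad\implies\qquad
  \beta_\xi = 2p(1-p)\bigl(1 \land d(u,v)\bigr),
\end{equation*}
```

which vanishes precisely when $p\in\{0,1\}$ or $u=v$, that is, exactly when $\xi$ is a Dirac measure.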
For $\smallcal{X}=(X,r,\mu) \in \MMI$, with $\mu=\nu\otimes K$ as in \eqref{nuK}, we define
\begin{equation}\label{eq:beta}
\beta(\smallcal{X}) := \probintanu{X}{\beta_{K_x}}{x}
= \probintamu{X\times I}{\probinta{I}{\left( 1\land d(u,v) \right)}{K_x}{v}}{(x,u)}.
\end{equation}
\begin{prop}[characterization of $\FMI$ as continuity points]\label{p:usc}
Let\/ $\cont(\beta)\subseteq \MMI$ be the set of continuity points of\/ $\beta\colon \MMI \to \RR_+$,
where\/ $\MMI$ carries the marked Gromov-weak topology. Then
\begin{equation}\label{eq:cont}
\cont(\beta) = \beta^{-1}(0) = \FMI.
\end{equation}
\end{prop}
\begin{proof}[Proof (first part).]
As seen before, $\smallcal{X}=(X,r,\nu\otimes K)\in\MMI$ admits a mark function if and only if $K_x$ is a
Dirac measure for $\nu$-almost every $x\in X$, which is the case if and only if $\beta(\smallcal{X})=0$.
Hence $\beta^{-1}(0) = \FMI$.
Because $\FMI$ is dense in $\MMI$ by \lemref{dense}, no $\smallcal{X}\in \MMI\setminus \beta^{-1}(0)$ can be
a continuity point of $\beta$. Thus $\cont(\beta) \subseteq \beta^{-1}(0)$.
We defer the proof of the inclusion $\beta^{-1}(0) \subseteq \cont(\beta)$ to \subref{betaestim},
because it requires a technical estimate on $\beta$ derived in \propref{betaestim}.
\end{proof}
In view of \eqref{eq:cont}, we can use standard arguments to construct a complete metric on $\FMI$ that metrizes
the marked Gromov-weak topology. Namely, consider the sets
\begin{equation}\label{Bdelta}
F_m := \overline{\beta^{-1}\([\tfrac1m, \infty)\)} \subseteq \MMI, \quad m\in\NN,
\end{equation}
where the closure is in the marked Gromov-weak topology.
Then, due to \propref{usc}, $F_m$ is disjoint from $\FMI$, and $\FMI = \MMI \setminus
\bigcup_{m\in\NN} F_m$. Because $F_m$ is also closed by definition, we obtain
\begin{equation}\label{eq:rhogt0}
\FMI = \bigcap_{m\in\NN}\bset{\smallcal{X} \in \MMI}{d_\mathrm{mGP}(\smallcal{X}, F_m) > 0}.
\end{equation}
We consider the metric $d_\mathrm{fGP}$ on $\FMI$ defined for $\smallcal{X}, \smallcal{Y} \in \FMI$ by
\begin{equation}\label{dfmi}
d_\mathrm{fGP}(\smallcal{X}, \smallcal{Y}) := d_\mathrm{mGP}(\smallcal{X}, \smallcal{Y}) + \sup_{m\in\NN} 2^{-m} \land
\Betrag{\frac1{d_\mathrm{mGP}(\smallcal{X}, F_m)} - \frac1{d_\mathrm{mGP}(\smallcal{Y}, F_m)}}.
\end{equation}
\begin{theorem}[$\FMI$ is Polish]\label{t:Polish}
The space\/ $\FMI$ of\/ $I$-functionally-marked metric measure spaces with marked Gromov-weak topology is
a Polish space. Namely, $d_\mathrm{fGP}$ is a complete metric on\/ $\FMI$ inducing the marked
Gromov-weak topology.
\end{theorem}
\begin{proof}
First, we show that $d_\mathrm{fGP}$ induces the marked Gromov-weak topology on $\FMI$. For $m\in\NN$, $\smallcal{X}\in
\MMI$, define
\begin{equation}\label{eq:rhodef}
\rho_m(\smallcal{X}) :=d_\mathrm{mGP}(\smallcal{X}, F_m),
\end{equation}
with $F_m$ defined in \eqref{Bdelta}.
Note that $\rho_m$, being the distance to the fixed set $F_m$, is a (Lipschitz) continuous function on $\MMI$.
Let $\smallcal{X}_n, \smallcal{X} \in \FMI$. Then $\rho_m(\smallcal{X}) >0$ for all $m\in\NN$ because of \eqref{eq:rhogt0}.
Therefore, by definition, $d_\mathrm{fGP}(\smallcal{X}_n, \smallcal{X}) \ton 0$ if and only if the two conditions
$d_\mathrm{mGP}(\smallcal{X}_n, \smallcal{X}) \ton 0$ and
\begin{equation}\label{rhoconv}
\rho_m(\smallcal{X}_n) \ton \rho_m(\smallcal{X}) \qquad \forall m\in\NN
\end{equation}
hold. We have to show that the marked Gromov-weak convergence already implies \eqref{rhoconv}. This,
however, follows from the continuity of the $\rho_m$.
It remains to show that $d_\mathrm{fGP}$ is a complete metric on $\FMI$.
Consider a $d_\mathrm{fGP}$-Cauchy sequence $(\smallcal{X}_n)_{n\in\NN}$ in $\FMI$. By completeness of $d_\mathrm{mGP}$ on
$\MMI$, it converges marked Gromov-weakly to some $\smallcal{X}=(X, r, \mu)\in \MMI$.
Furthermore, for every fixed $m\in\NN$, \eqref{dfmi} implies that $1/\rho_m(\smallcal{X}_n)$ converges as
$n\to \infty$, and hence $d_\mathrm{mGP}(\smallcal{X}_n, F_m)$ is bounded away from zero. Thus $\smallcal{X} \not\in F_m$.
Because $\FMI= \MMI\setminus \bigcup_{m\in\NN} F_m$, this means that $\smallcal{X}\in\FMI$, and by the first
part of the proof $d_\mathrm{fGP}(\smallcal{X}_n, \smallcal{X}) \ton 0$.
\end{proof}
We denote by $\BMMI(\smallcal{X}) := \bset{\smallcal{Y}\in \MMI}{d_\mathrm{mGP}(\smallcal{X}, \smallcal{Y}) < \delta}$ the open
$\delta$\nobreakdash-ball in\/
$\MMI$ with respect to $d_\mathrm{mGP}$. The following corollary gives formal criteria for a limiting space to admit a
mark function; they are useful only in combination with estimates on $\beta$.
\begin{corollary}\label{c:complete}
Let\/ $\folge\smallcal{X}$ be a sequence in\/ $\MMI$ which converges marked Gromov-weakly to\/ $\smallcal{X}$.
Then the following four conditions are equivalent:
\begin{enumerate}
\item\label{it:fct} $\smallcal{X}\in \FMI$.
\item\label{it:rho} $\limsup_{n\to\infty}\rho_m(\smallcal{X}_n) > 0$ for all\/ $m\in\NN$, with\/ $\rho_m$ defined
in \eqref{eq:rhodef}.
\item\label{it:betainv} For every $\delta > 0$,
\begin{equation}
\limsup_{n\to\infty} \inf_{\smallcal{Y} \in \beta^{-1}([\delta, \infty))}d_\mathrm{mGP}(\smallcal{X}_n, \smallcal{Y}) > 0.
\end{equation}
\item\label{it:betareg}
\begin{equation}\label{betaneighbour}
\lim_{\delta\downarrow0}\, \liminf_{n\to\infty} \sup_{\smallcal{Y} \in \BMMI(\smallcal{X}_n)} \beta(\smallcal{Y}) = 0.
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
\eproofcase{\ref{it:fct}}{\ref{it:rho}}
We have $\rho_m(\smallcal{X}) = \lim_{n\to\infty} \rho_m(\smallcal{X}_n)$, and
$\rho_m(\smallcal{X}) > 0$ for all $m\in\NN$ if and only if $\smallcal{X} \in \FMI$.
\eproofcase{\ref{it:rho}}{\ref{it:betainv}} follows directly from the definition of $\rho_m$.
\eproofcase{\ref{it:betainv}}{\ref{it:betareg}} Using monotonicity in $\delta$ we obtain
\begin{align}
\text{\ref{it:betainv}} &\iff \forall \delta>0\,\exists \varepsilon>0\,\forall \folge\smallcal{Y} \subseteq \MMI
\text{ with\/ } \beta(\smallcal{Y}_n) \ge \delta : \limsup_{n\to\infty} d_\mathrm{mGP}(\smallcal{X}_n, \smallcal{Y}_n) \ge \varepsilon \\
& \iff \forall \delta>0\, \exists\varepsilon>0\,\forall \folge\smallcal{Y}\subseteq \MMI:
\liminf_{n\to\infty} \beta(\smallcal{Y}_n) < \delta \text{ or\/ } \limsup_{n\to\infty} d_\mathrm{mGP}(\smallcal{X}_n, \smallcal{Y}_n)\ge\varepsilon
\nonumber\\
& \iff \forall \varepsilon>0\,\exists\delta>0\, \forall \folge\smallcal{Y} \subseteq \MMI \text{ with\/ }
\smallcal{Y}_n\in\BMMI(\smallcal{X}_n) :
\liminf_{n\to\infty} \beta(\smallcal{Y}_n) < \varepsilon \;\iff\; \text{\ref{it:betareg}}, \nonumber
\end{align}
where, in the third equivalence, we renamed $\delta$ to $\varepsilon$ and $\varepsilon$ to $\delta$.
\end{proof}
\subsection{A decomposition of $\FMI$ into closed sets and estimates on $\beta$}\label{sub:betaestim}
In this subsection, we derive some estimates on $\beta$ and use them to complete the proof of \propref{usc}.
Furthermore, we construct a decomposition of $\FMI$ into closed sets which are related to the sets $\HMI$.
As we have seen in \subref{equicont}, the situation becomes easy if we restrict to the uniformly equicontinuous
case, that is, to the subspace $\HMI$ for some $h\in\mathcal{H}$ as in \defref{HMI}. In what follows, we introduce several related
subspaces, each capturing some aspect of equicontinuity. In analogy to the definition of $A_h^X$ in
\eqref{eq:goodset}, we use for a metric space $(X,r)$, and $\delta,\varepsilon>0$, the notation
\begin{equation}\label{eq:epsgoodset}
\Ade := \Ade[(X,r)] := \bset{(x_i,u_i)_{i=1,2}\in (X\times I)^2}{r(x_1,x_2)\ge
\delta \text{ or } d(u_1,u_2) \le \varepsilon} \subseteq (X\times I)^2.
\end{equation}
Note that $\Ade$ is a closed set. For every $h\in\mathcal{H}$, using monotonicity and continuity of $h$, we observe that
\begin{equation}
A_h^X = \bigcap_{\delta>0} \Adh.
\end{equation}
\begin{definition}[$\DEMI$, $\Mde$, $\Mh$]\label{d:EpsMI}
Let\/ $\delta, \varepsilon > 0$ and $h\in\mathcal{H}$. We define
\begin{align}
\DEMI &:= \bset{(X,r,\mu)\in\MMI}{\musq(\Ade) = \|\musq\|}, \\
\Mde &:= \bset{(X,r,\mu)\in\MMI}
{\exists \mu' \in \Mf(X \times I):\mu'\le \mu,\, \|\mu-\mu'\|\le\varepsilon,\, (X,r,\mu') \in \DEMI},
\end{align}
and\/ $\Mh := \bigcap_{\delta>0} \Mdh$.
\end{definition}
The intuition is that for spaces in $\DEMIfull{\delta}{h(\delta)}$, the measure behaves as if it admitted
an $h$-uniformly continuous mark function when distances of order $\delta$ are observed. The same holds for the
spaces in $\Mdh$ if we are additionally allowed to neglect a portion $h(\delta)$ of mass.
\begin{enremark}
\item Clearly $\HMI \subseteq \Mh$. We will see in \lemref{cupMh} that $\Mh \subseteq \FMI$.
\item The space $\Mh$ is \emph{much} larger than $\HMI$: while $\bigcup_{h\in\mathcal{H}} \HMI$ contains only mmm-spaces
admitting a uniformly \emph{continuous} mark function, we will see in \lemref{cupMh} that every element
of $\FMI$ is in some $\Mh$.
\item The spaces $\DEMI$ and $\Mde$ are not contained in $\FMI$. For instance, consider $I=\RR$ and
$\smallcal{X}=\(\{0\}, 0, \delta_{(0,0)} + \delta_{(0,\varepsilon)}\) \in \DEMI\subseteq \Mde$.
\end{enremark}
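For the record, a short verification of the claim in (iii):

```latex
For the space $\smallcal{X}$ from (iii), every pair sampled from
$\mu^{\otimes 2}$ satisfies $r(x_1,x_2)=0$ and
$d(u_1,u_2)\in\{0,\varepsilon\}\le\varepsilon$, hence
$\musq(\Ade)=\|\musq\|$ and $\smallcal{X}\in\DEMI$. On the other hand,
$\nu = 2\delta_0$, and $K_0 = \tfrac12(\delta_0+\delta_\varepsilon)$ is not a
Dirac measure, so $\smallcal{X}\notin\FMI$.
```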
We have the following stability of $\Mde$ with respect to small perturbations in the marked Gromov-Prohorov
metric.
\begin{lemma}[perturbation of $\Mde$]\label{lem:perturb}
Let\/ $\delta,\varepsilon>0$, $\smallcal{X}\in \Mde$ and\/ $\,\hat{\!\smallx}\in \MMI$. Then
\begin{equation}\label{eq:perturb}
\delta' := d_\mathrm{mGP}(\smallcal{X}, \,\hat{\!\smallx}) < \tfrac12\delta
\;\implies\; \,\hat{\!\smallx}\in \Mdefull[\delta-2\delta'\!]{\,\varepsilon+2\delta'}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\smallcal{X}=(X,r,\mu)$, $\,\hat{\!\smallx}=(\hat{X},\hat{r},\hat{\mu})$. We may assume that $X, \hat{X}$ are subspaces of some
separable metric space $(E,r_E)$ such that $\dPr(\mu,\hat{\mu}) < \delta'$.
By definition of $\Mde$, there is $\mu'\le \mu$ with
$\|\mu-\mu'\|\le \varepsilon$ and $\smallcal{X}':=(X,r,\mu') \in \DEMI$. Due to \lemref{rectangle}, we find $\hat{\mu}'
\le \hat{\mu}$ with $\|\hat{\mu}-\hat{\mu}'\|\le \varepsilon$ and $\dPr(\mu',\hat{\mu}') < \delta'$, where $\,\hat{\!\smallx}{}'=(\hat{X},\hat{r},\hat{\mu}')$.
By the coupling representation of the Prohorov metric, \eqref{eq:Prcouple},
we obtain a measure $\xi$ on $(E\times I)^2$ with marginals $\xi_1\le \mu'$ and $\xi_2\le \hat{\mu}'$ such that
$\|\hat{\mu}' - \xi_2\|\le \delta'$ and
\begin{equation}
\lbeq{xi-zero}
\xi\(\bset{\((x,u), (\hat{x}, \hat{u})\) \in (X \times I) \times (\hat{X} \times I)}{ r_E(x,\hat{x})+d(u,\hat{u}) \ge \delta'}\) = 0.
\end{equation}
By definition, $\mupsq$ is supported by $A_{\delta,\varepsilon}^X$. Therefore, the same is true for
$\xi_1^{\otimes 2}$ and
we obtain
\begin{align}
\|\xi_2^{\otimes 2} \| &= \|\xi^{\otimes 2} \|
= \xi^{\otimes 2}\(\bset{(x_i, u_i,\hat{x}_i,\hat{u}_i)_{i=1,2} \in ((X \times I) \times (\hat{X} \times I))^2}{(x_i,u_i)_{i=1,2} \in \Ade}\) \\
&\le \xi^{\otimes 2}_2\(\bset{(\hat{x}_i,\hat{u}_i)_{i=1,2} \in (\hat{X}\times I)^2}{ r_E(\hat{x}_1,\hat{x}_2) \geq \delta-2\delta'
\text{ or } d(\hat{u}_1,\hat{u}_2) \leq \varepsilon+2\delta'}\) \nonumber\\
&= \xi_2^{\otimes 2}(A_{\delta-2\delta',\varepsilon+2\delta'}^{\hat{X}}), \nonumber
\end{align}
where the inequality follows from \eqref{xi-zero} together with the triangle inequality.
Therefore,
$(\hat{X},\hat{r},\xi_2) \in \DEMIfull{\delta-2\delta'\!}{\,\varepsilon+2\delta'}$.
Now the claim follows from $\|\hat{\mu} - \xi_2\| \le \|\hat{\mu}-\hat{\mu}'\| + \|\hat{\mu}'-\xi_2\| \le \varepsilon + \delta'$.
\end{proof}
\begin{prop}[estimates on $\beta$]\label{p:betaestim}
Let\/ $\delta, \varepsilon >0$ and consider $\smallcal{X}=(X,r,\mu)\in \MMI$.
Then the following hold:
\begin{enumerate}
\item\label{it:varLip} If\/ $\mu'\in\Mf(X \times I)$, then\/ $\beta(\smallcal{X}) \le \beta\((X,r,\mu')\) + 2\|\mu-\mu'\|$.
\item\label{it:DEMIest} If\/ $\smallcal{X}\in \DEMI[2\delta]$, then\/ $\beta(\smallcal{X}) \le \varepsilon\|\mu\|$.
\item\label{it:estimate-beta-3} If\/ $\smallcal{X}\in \Mde[2\delta]$ and\/ $\,\hat{\!\smallx}\in\MMI$ with\/
$d_\mathrm{mGP}(\smallcal{X}, \,\hat{\!\smallx})<\delta$, then\/ $\beta(\smallcal{X}) \le \varepsilon\(\|\mu\|+2\)$ and
\begin{equation}\label{eq:ballest}
\beta(\,\hat{\!\smallx}) \le (\varepsilon+2\delta)(2+\|\mu\| + \delta).
\end{equation}
\end{enumerate}
\end{prop}
\begin{enproof}
\item follows directly from the definition.
\item If $x \in X$ and $u,v\in I$ satisfy $\((x,u),\,(x,v)\) \in \Adtwoe$, then $d(u,v) \le \varepsilon$ by definition of $\Adtwoe$.
Thus $\beta(\smallcal{X}) = \probintamu{X\times I}{\probinta{I}{(1\land d(u,v))}{K_x}{v}}{(x,u)} \le \varepsilon\|\mu\|$.
\item Combining \itref{varLip} and \itref{DEMIest} yields $\beta(\smallcal{X}) \le 2\varepsilon + \varepsilon\|\mu\|$.
Let $\delta'=d_\mathrm{mGP}(\smallcal{X},\,\hat{\!\smallx})$. By \lemref{perturb}, we have
$\,\hat{\!\smallx}\in \Mdefull[2\delta-2\delta'\!]{\,\varepsilon+2\delta'}$ and thus $\beta(\,\hat{\!\smallx}) \le
(2+\|\hat{\mu}\|)(\varepsilon+2\delta')\le(2+\|\mu\|+\delta)(\varepsilon+2\delta)$.
\end{enproof}
In order to complete the proof of \propref{usc} with the help of \propref{betaestim}, we first observe that, as
a consequence of Lusin's theorem, every functionally marked metric measure space is an element of $\Mh$ for some
$h\in\mathcal{H}$. Together with \lemref{Mhclosed} below, this means that we have a nice (though uncountable)
decomposition of $\FMI$ into closed sets.
\begin{lemma}[decomposition of $\FMI$]\label{lem:cupMh}
The following equality holds: $\FMI=\bigcup_{h\in\mathcal{H}} \Mh$.
\end{lemma}
\begin{proof}
We have $\Mh\subseteq\beta^{-1}(0)=\FMI$ for every $h\in\mathcal{H}$. Indeed, the equality was
shown in the first part of the proof of \propref{usc}. To obtain the inclusion, that is
$\beta(\smallcal{X})=0$ for all $\smallcal{X} \in \Mh$, recall $\Mh$ from \defref{EpsMI} and choose
$\varepsilon=h(2\delta)$ in \propref{betaestim}\itref{estimate-beta-3}.
Conversely, let $\smallcal{X}=(X,r,\nu,\kappa)\in \FMI$. According to Lusin's theorem, we find for every
$\varepsilon>0$ a compact set $K_\varepsilon\subseteq X$, and a modulus of continuity $h_\varepsilon\in\mathcal{H}$, such that
$\nu(X\setminus K_\varepsilon) \le \varepsilon$ and $\kappa\restricted{K_\varepsilon}$ is $h_{\varepsilon}$-uniformly continuous.
In particular,
\begin{equation}\label{eq:xinm}
\smallcal{X}\in \Mdefull{h_\varepsilon(\delta)\lor\varepsilon} \quad\forall \varepsilon,\delta>0.
\end{equation}
We may assume without loss of generality that $\varepsilon\mapsto h_\varepsilon(\delta)$ is decreasing and
right-continuous for every $\delta>0$. We define
\begin{equation}
h(\delta) := \inf\bset{\varepsilon>0}{h_\varepsilon(\delta) < \varepsilon} \in \RR_+\cup\{\infty\}.
\end{equation}
Clearly, $h(\delta)$ converges to $0$ as $\delta\downarrow0$ because $h_\varepsilon\in\mathcal{H}$.
Furthermore, $h_{h(\delta)}(\delta) \le h(\delta)$, and hence \eqref{eq:xinm} with $\varepsilon=h(\delta)$
implies $\smallcal{X} \in \Mh$.
\end{proof}
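To illustrate the diagonalisation at the end of the proof, take the hypothetical family $h_\varepsilon(\delta) = \delta/\varepsilon$. Then

```latex
\begin{equation*}
  h(\delta) = \inf\bset{\varepsilon>0}{\delta/\varepsilon < \varepsilon}
            = \sqrt{\delta},
  \qquad
  h_{h(\delta)}(\delta) = \frac{\delta}{\sqrt{\delta}} = \sqrt{\delta} = h(\delta),
\end{equation*}
```

so the neglected mass $\varepsilon$ and the modulus $h_\varepsilon(\delta)$ are balanced along the diagonal $\varepsilon = h(\delta)$.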
\begin{proof}[Proof of \propref{usc} (completion).]
We still have to show continuity of $\beta$ in $\smallcal{X} \in \beta^{-1}(0)$.
Due to \lemref{cupMh}, there is $h\in\mathcal{H}$ with $\smallcal{X}\in\Mh$. Now \propref{betaestim} yields for
$\delta>0$ the estimate
$\sup_{\,\hat{\!\smallx}\in\BMMI(\smallcal{X})}\beta(\,\hat{\!\smallx}) \le (h(2\delta) + 2\delta)(2+\|\mu\| + \delta)$,
which converges to $0$ as $\delta \downarrow 0$.
\end{proof}
It directly follows from \propref{betaestim}\itref{estimate-beta-3} that the marked Gromov-weak closure of $\Mh$ is
contained in $\FMI$. In fact, $\Mh$ is even marked Gromov-weakly closed, which will be used in the
proof of \thmref{modcadlag} below.
\begin{lemma}[closedness of $\Mh$]\label{lem:Mhclosed}
For every\/ $\delta,\varepsilon>0$, $\Mde$ is marked Gromov-weakly closed in\/ $\MMI$. In particular, $\Mh$ is closed
for every\/ $h\in\mathcal{H}$.
\end{lemma}
\begin{proof}
Fix $\varepsilon,\delta>0$ and let $\folge\smallcal{X}$ be a sequence in $\Mde$ converging marked Gromov-weakly to
some $\smallcal{X}=(X,r,\mu)\in\MMI$. Using \lemref{seqembed}, we may assume that $X_n$, $n\in\NN$, and $X$ are
subspaces of a common separable metric space $(E,r_E)$, such that $\mu_n \tow \mu$ on $E\times I$. By
definition of $\Mde$, we find $\mu_n' \le \mu_n$, $\|\mu_n' - \mu_n\| \le \varepsilon$, such that $\munpsq$ is
supported by $\Ade[E]$ for all $n\in\NN$. Since $(\mu_n')_{n\in\NN}$ is tight, we may assume, by passing
to a subsequence, that $\mu_n' \tow \mu'$ for some $\mu'\in \Mf(E\times I)$. Obviously, $\mu'\le \mu$ and
$\|\mu-\mu'\| = \lim_{n\to\infty} \(\|\mu_n\| - \|\mu'_n\|\) \le \varepsilon$. Because $\Ade[E]$ is closed, $\mupsq$ is
supported by $\Ade[E]$ and hence $\smallcal{X} \in \Mde$.
\end{proof}
\section{Criteria for the existence of mark functions}\label{sec:criteria}
Based on the construction of the complete metric and the decomposition $\FMI=\bigcup_{h\in\mathcal{H}}\Mh$ into closed
sets obtained in \secref{Polish}, we now derive criteria to check if a marked metric measure space admits a mark
function, especially in the case where it is given as a marked Gromov-weak limit.
We then transfer the results to random mmm-spaces and $\MMI$-valued stochastic processes.
\subsection{Deterministic criteria} \label{sub:criteria-1}
Our main criterion for deterministic spaces is a direct consequence of the results in \secref{Polish}.
Recall that $\mathcal{H}$ is the set of moduli of continuity defined in \eqref{eq:def-H}.
\begin{theorem}[characterization of existence of a mark function in the limit]\label{t:modulus}
Let\/ $\folge\smallcal{X}$ be a sequence in\/ $\MMI$ with\/ $\smallcal{X}_n \xrightarrow{\mathrm{mGw}} \smallcal{X}\in\MMI$.
Then $\smallcal{X} \in \FMI$ if and only if there exists\/ $h\in\mathcal{H}$ such that for every\/ $\delta>0$
\begin{equation}\label{eq:mod}
\smallcal{X}_n \in \Mdh \quad\text{ for infinitely many\/ $n\in\NN$.}
\end{equation}
In this case, $\smallcal{X}\in \Mh$.
\end{theorem}
\begin{proof}
First assume there is $h\in\mathcal{H}$ such that \eqref{eq:mod} is satisfied.
Since $\Mdh$ is closed by \lemref{Mhclosed}, \eqref{eq:mod} implies that $\smallcal{X} \in \Mdh$ for every
$\delta$, that is $\smallcal{X} \in \Mh$. By \lemref{cupMh}, $\Mh\subseteq \FMI$.
Conversely, assume $\smallcal{X}\in\FMI$. Then, by \lemref{cupMh}, we find $h\in \mathcal{H}$ with $\smallcal{X} \in \Mh$.
We claim that \eqref{eq:mod} holds with $h$ replaced by $\hat{h}(\delta) := h(3\delta) + 2\delta$.
Indeed, fix $\delta>0$ and observe that $\smallcal{X}\in \Mh\subseteq\Mdh[3\delta]$.
\lemref{perturb} yields $\smallcal{X}_n \in\Mdheps[\hat{h}]$ for all $n$ with $d_\mathrm{mGP}(\smallcal{X}, \smallcal{X}_n) < \delta$.
\end{proof}
We will use \thmref{modulus} in the following form.
\begin{corollary}\label{c:modulus}
Let\/ $\smallcal{X}_n=(X_n, r_n, \nu_n, \kappa_n)\in\FMI$, $\smallcal{X}_n \xrightarrow{\mathrm{mGw}} \smallcal{X}\in \MMI$.
Let\/ $Y_{n,\delta} \subseteq X_n$ measurable for\/ $n\in\NN,\, \delta>0$, and\/ $h\in\mathcal{H}$.
Then $\smallcal{X} \in \FMI$ if the following two conditions hold for every\/ $\delta>0$:
\begin{gather}
\liminf_{n\to\infty} \nu_n(X_n\setminus Y_{n,\delta}) \le h(\delta), \label{eq:modulus1}\\
\forall n\in\NN,\, x,y\in Y_{n,\delta}: r_n(x,y) < \delta \implies
d\(\kappa_n(x),\kappa_n(y)\) \le h(\delta). \label{eq:modulus2}
\end{gather}
\end{corollary}
\begin{proof}
Let $\mu_n' := \mu_n\restricted{Y_{n,\delta}\times I}$, where $\mu_n=\nu_n\otimes \delta_{\kappa_n}$.
Then \eqref{eq:modulus2} implies $(X_n,r_n,\mu_n') \in\DEMIfull{\delta}{h(\delta)}$ and
\eqref{eq:modulus1} yields $\|\mu_n' - \mu_n\| \le h(\delta)$ for infinitely many $n$.
Hence we can apply \thmref{modulus}.
\end{proof}
\begin{remark}\label{r:seq}
To obtain $\smallcal{X} \in\FMI$, it is clearly enough to show in \thmref{modulus} and \corref{modulus},
\eqref{eq:mod} respectively \eqref{eq:modulus1} and \eqref{eq:modulus2} only for $\delta=\delta_m$ for
a sequence $\folge[m]\delta$ with $\delta_m \downarrow 0$ as $m\to \infty$.
\end{remark}
We illustrate the r\^ole of the exceptional set $X_n \setminus Y_{n,\delta}$, and the importance of its dependence on $\delta$,
with a simple example.
\begin{example}
Consider\/ $X=[0,1]$ with Euclidean metric $r$, $\nu=\lambda + \delta_0$, where $\lambda$ is
Lebesgue measure, and $\kappa_n(x) = (nx) \land 1$. Obviously, $\smallcal{X}_n=(X,r,\nu,\kappa_n)$
converges marked Gromov-weakly and the limit admits the mark function $\mathds{1}_{(0,1]}$. To see this from
\corref{modulus}, we choose $h(\delta)=\delta$ and $Y_{n,\delta}=\{0\} \cup [\delta \vee \tfrac{1}{n} , 1]$. Note that we cannot choose
$Y_{n,\delta}$ independent of $\delta$.
\end{example}
\begin{remark}[equicontinuous case]
If, in \corref{modulus}, $Y_{n,\delta}=Y_n$ does not depend on $\delta$, then \eqref{eq:modulus2}
means that $\kappa_n$ is $h$-uniformly continuous on $Y_n$. Consequently, the mark function of
$\smallcal{X}$ is in this case $h$-uniformly continuous. If we restrict to $Y_n = X_n$ for all $n$, we
recover part \itref{equicontcl} of \lemref{equicont}.
\end{remark}
\begin{corollary}\label{c:diamcrit}
Let\/ $\smallcal{X}_n=(X_n, r_n, \nu_n, \kappa_n)\in \FMI$ and assume that\/ $\smallcal{X}_n$ converges to\/
$\smallcal{X}=(X,r,\mu)\in \MMI$ marked Gromov-weakly. Further assume that for\/ $n\in\NN,\, \delta>0$, there
are measurable sets\/ $Z_{n,\delta}\subseteq X_n$, such that
\begin{equation}\label{eq:diamcrit}
\lim_{\delta\downarrow0}\, \liminf_{n\to\infty}\, \biggl(
\nu_n(X_n\setminus Z_{n,\delta}) +
\probinta{Z_{n,\delta}}{\left( 1\land\diam\(\kappa_n\(B_{\delta}^{X_n}(x)\cap Z_{n,\delta}\)\)\right) }{\nu_n}{x}
\biggr) \,=\, 0,
\end{equation}
where $\diam$ is the diameter of a set.
Then $\smallcal{X}$ admits a mark function, that is $\smallcal{X} \in \FMI$.
\end{corollary}
\begin{proof}
For $\delta>0$ let
\begin{equation}
\label{eq:diamcrit1}
g(\delta) := \sup_{0<\delta'\leq\delta} \liminf_{n\to\infty}\, \biggl(
\nu_n(X_n\setminus Z_{n,\delta'}) +
\probinta{Z_{n,\delta'}}{\left( 1\land\diam\(\kappa_n\(B_{\delta'}^{X_n}(x)\cap Z_{n,\delta'}\)\) \right)}{\nu_n}{x}
\biggr).
\end{equation}
By \eqref{eq:diamcrit}, $\lim_{\delta\downarrow0} g(\delta)=0$ and $g$ is increasing with
$\|g\|_\infty \leq \|\mu\|$. Let $h\in\mathcal{H}$ be such that $g(\delta) \leq \frac{h(\delta)}{2} \bigl( 1\land
h(\delta))$ for all $\delta>0$. Then
\begin{equation}\label{eq:diamcrit2}
\nu_n\(\bset{x\in Z_{n,\delta}}{\diam\(\kappa_n(B_{\delta}^{X_n}(x)\cap Z_{n,\delta})\) > h(\delta)}\)
\leq \frac{g(\delta)}{1\land h(\delta)} \leq h(\delta)/2.
\end{equation}
Now apply \corref{modulus} with
\begin{equation} \label{eq:diamcrit3}
Y_{n,\delta} := \bset{x \in Z_{n,\delta}}{\diam\(\kappa_n\(B_{\delta}^{X_n}(x)\cap
Z_{n,\delta}\)\) \leq h(\delta)}.
\end{equation}
Then \eqref{eq:modulus2} follows from the definition of $Y_{n,\delta}$ in \eqref{eq:diamcrit3},
and $\nu_n(X_n\setminus Y_{n,\delta}) \le \nu_n(X_n\setminus Z_{n,\delta}) + h(\delta)/2 \le g(\delta) +
h(\delta)/2 \le h(\delta)$ holds by \eqref{eq:diamcrit2} and \eqref{eq:diamcrit1}.
\end{proof}
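In the proof above, with $g$ as in \eqref{eq:diamcrit1}, one admissible choice is any $h \in \mathcal{H}$ dominating $\sqrt{2g} \lor 2g$ (such an $h$ exists because $g$ is increasing with $g(0{+})=0$):

```latex
If $h(\delta) \ge \sqrt{2g(\delta)} \lor 2g(\delta)$, then
$\tfrac{h(\delta)}{2}\bigl(1 \land h(\delta)\bigr) \ge g(\delta)$:
for $h(\delta) \le 1$ the left-hand side is $h(\delta)^2/2 \ge g(\delta)$,
and for $h(\delta) > 1$ it is $h(\delta)/2 \ge g(\delta)$.
```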
\subsection{Random fmm-spaces} \label{sub:criteria-2}
The following theorem is a randomized version of \thmref{modulus}. It is our main criterion for $\MMI$-valued
random variables.
\begin{theorem}[random fmm-spaces as limits in distribution]\label{t:rnd-modulus}
Let\/ $\folge\mathcal{X}$ be a sequence of\/ $\MMI$-valued random variables which converges in distribution (w.r.t.\
marked Gromov-weak topology) to an\/ $\MMI$-valued random variable\/ $\mathcal{X}$.
Further assume that for every\/ $\varepsilon>0$, there exists a modulus of continuity\/ $h_\varepsilon\in \mathcal{H}$ such
that
\begin{equation}\label{eq:pasgen}
\limsup_{\delta\downarrow0}\, \limsup_{n\to\infty}\, \bPs{\mathcal{X}_n \in \Mdheps} \ge 1-\varepsilon.
\end{equation}
Then\/ $\mathcal{X}$ admits almost surely a mark function, that is\/ $\mathcal{X}\in\FMI$ almost surely.
If additionally\/ $\mathcal{X}_n=(X_n,r_n,\nu_n,\kappa_n)\in\FMI$ almost surely for all\/ $n\in\NN$, we can replace
\eqref{eq:pasgen} by existence of random measurable sets\/
$Y_{n,\delta}^\varepsilon \subseteq X_n$, $n\in\NN,\, \delta>0$, in addition to the $h_\varepsilon\in\mathcal{H}$, such that
the following two conditions hold for every\/ $\varepsilon>0$:
\begin{gather}
\limsup_{\delta\downarrow0}\, \limsup_{n\to\infty}\, \bPs{\nu_n(X_n\setminus Y_{n,\delta}^\varepsilon) \le h_\varepsilon(\delta)}
\ge 1-\varepsilon. \label{eq:pas}\\
\forall n\in\NN,\, \delta>0,\, x,y\in Y_{n,\delta}^\varepsilon: r_n(x,y) < \delta \implies
d\(\kappa_n(x),\kappa_n(y)\) \le h_\varepsilon(\delta). \label{eq:moduluseps}
\end{gather}
\end{theorem}
\begin{remark}
In \eqref{eq:pas}, we need not worry about measurability of the ``event''
$B_{n,\delta} := \bigl\{\nu_n(X_n\setminus Y_{n,\delta}^\varepsilon) \le h_\varepsilon(\delta)\bigr\}$ due to the choice of $Y_{n,\delta}^\varepsilon$.
The inequality \eqref{eq:pas} is to be understood in the sense of inner measure, that is we require that there are
measurable sets $C_{n,\delta}\subseteq B_{n,\delta}$ with
$\limsup_{\delta\downarrow0}\limsup_{n\to\infty}\P(C_{n,\delta}) \ge 1-\varepsilon$.
\end{remark}
\begin{proof}
The second statement follows in the same way as \corref{modulus}.
We divide the proof of the main part into two steps. First, we show $\mathcal{X}\in\FMI$ if, instead of
\eqref{eq:pasgen}, even
\begin{equation}\label{eq:deltainside}
\P\Bigl(\bigcap_{m\in\NN} \bigl\{ \mathcal{X}_n \in \Mdmh \text{ for infinitely many $n$}\bigr\} \Bigr)
\ge 1-\varepsilon
\end{equation}
holds for a sequence $\delta_m=\delta_m(\varepsilon)\downarrow 0$ as $m\to\infty$. In the second step, we show
that, given \eqref{eq:pasgen}, we can modify $h_\varepsilon$ to $\hat{h}_\varepsilon \in \mathcal{H}$ such that
\eqref{eq:deltainside} holds with $h_\varepsilon$ replaced by $\hat{h}_\varepsilon$.
\begin{proofsteps}
\refstepcounter{enumi}\removelastskip\smallskip\par\noindent\emph{Step \arabic{enumi}.} \hspace{0.5ex} By Skorohod's representation theorem, we may assume that the $\mathcal{X}_n$ are coupled such that
they converge almost surely to $\mathcal{X}$ in the marked Gromov-weak topology.
The inequality \eqref{eq:deltainside} implies that with probability at least $1-\varepsilon$, for all
$m \in \mathbb{N}$, $\mathcal{X}_n \in \Mdmh$ for infinitely many $n$.
By \thmref{modulus} and \remref{seq}, this means that the probability that $\mathcal{X}$ admits a mark
function is at least $1-\varepsilon$. Because $\varepsilon$ is arbitrary, this implies $\mathcal{X}\in\FMI$ almost surely.
\refstepcounter{enumi}\removelastskip\smallskip\par\noindent\emph{Step \arabic{enumi}.} \hspace{0.5ex} Let $T(\varepsilon,\delta):= \limsup_{n\to\infty}\, \bPs{\mathcal{X}_n \in \Mdheps}$ in \eqref{eq:pasgen}. Set
\begin{equation}
\delta_1 := \sup\bigl\{ \delta \in [0,1] : T(\varepsilon/4,\delta) \ge 1-\varepsilon/2 \mbox{ and } h_{\varepsilon/4}(\delta)<1 \bigr\}.
\end{equation}
By \eqref{eq:pasgen} and as $h_{\varepsilon/4} \in \mathcal{H}$, the set inside the supremum is non-empty. Next define recursively
\begin{equation}
\delta_m := \sup\bigl\{ \delta \in [0,\delta_{m-1}/2] :
T(\varepsilon 2^{-(m+1)},\delta) \ge 1-\varepsilon 2^{-m} \mbox{ and } h_{\varepsilon 2^{-(m+1)}}(\delta)<1/m \bigr\}
\end{equation}
for $m \in \mathbb{N}$, $m \geq 2$. Again, the set inside the supremum is non-empty by \eqref{eq:pasgen} and because $h_{\varepsilon 2^{-(m+1)}} \in \mathcal{H}$. Moreover, $\delta_m=\delta_m(\varepsilon)>0$, $\delta_m \downarrow 0$ as $m \to \infty$, and $h_{\varepsilon 2^{-(m+1)}}(\delta_m) \leq 1/m$. We can therefore set
\begin{equation}
\hat{h}_\varepsilon(\delta_m) := h_{\eps2^{-(m+1)}}(\delta_m)
\end{equation}
and extend this to $\hat{h}_\varepsilon\in\mathcal{H}$.
Using Fatou's lemma, we obtain
\begin{align}
\P\Bigl(\bigcup_{m\in\NN} \bigl\{\mathcal{X}_n \not\in \Mdmh[\hat{h}_\varepsilon] \text{ eventually}\bigr\} \Bigr)
& \le \sum_{m\in\NN} \mathbb{E}\(\liminf_{n\to\infty} \mathds{1}_{\MMI\setminus\Mdmh[\hat{h}_\varepsilon]}(\mathcal{X}_n)\) \\
& \le \sum_{m\in\NN} \liminf_{n\to\infty} \bPs{\mathcal{X}_n \not\in \Mdmh[\hat{h}_\varepsilon]} \nonumber\\
& = \sum_{m\in\NN} \big( 1-T(\varepsilon 2^{-(m+1)},\delta_m) \big) \nonumber\\
& \le \sum_{m\in\NN} \eps2^{-m} = \varepsilon. \nonumber
\end{align}
Thus \eqref{eq:deltainside} holds with $h_\varepsilon$ replaced by $\hat{h}_\varepsilon$.
\end{proofsteps}
\end{proof}
\subsection{Fmm-space-valued processes} \label{sub:criteria-3}
Let $J\subseteq\RR_+$ be a (closed, open or half-open) interval and consider a stochastic process
$\mathcal{X}=(\mathcal{X}_t)_{t\in J}$ with values in $\MMI$ and c\`adl\`ag paths, where $\MMI$ is equipped with the marked
Gromov-weak topology.
We say that $\mathcal{X}$ is an \emph{$\FMI$-valued c\`adl\`ag process} if
\begin{equation}\label{eq:fmi-val}
\bPs{\mathcal{X}_t,\mathcal{X}_{t-} \in \FMI \text{ for all\/ } t\in J} = 1,
\end{equation}
where\/ $\mathcal{X}_{t-}$ is the left limit of\/ $\mathcal{X}$ at\/ $t$ ($\mathcal{X}_{\ell-} := \mathcal{X}_\ell$ if $\ell$ is the left endpoint of $J$).
In the following, we give sufficient criteria for $\mathcal{X}$ to be an $\FMI$-valued c\`adl\`ag process. We are
particularly interested in the situation where $\mathcal{X}$ is the limit of $\FMI$-valued processes $\mathcal{X}^n$.
Unsurprisingly, if the exceptional set of $\mathbb{P}$-measure at most $\varepsilon$ in \thmref{rnd-modulus}
is independent of $t$, the result holds for all $t$ simultaneously, almost surely. The modulus of
continuity may also depend on $t$ in a continuous way, or be arbitrary if the limiting process has continuous
paths:
\begin{theorem}\label{t:pr-modulus}
Let\/ $J\subseteq\RR_+$ be an interval, and\/ $\mathcal{X}^n=(\mathcal{X}^n_t)_{t\in J}$, $n\in\NN$, a sequence of\/
$\MMI$-valued c\`adl\`ag\ processes converging in distribution to an\/ $\MMI$-valued c\`adl\`ag\ process\/
$\mathcal{X}=(\mathcal{X}_t)_{t\in J}$. Assume that for every\/ $t\in J$, $\varepsilon>0$, there exists\/ $h_{t,\varepsilon}\in \mathcal{H}$ such that
\begin{equation}\label{eq:pasgenforall}
\limsup_{\delta\downarrow0}\, \limsup_{n\to\infty}\, \bPs{\mathcal{X}^n_t \in \Mdheps[h_{t,\varepsilon}]\;\, \forall t\in J} \ge 1-\varepsilon.
\end{equation}
Then\/ $\mathcal{X}$ is an\/ $\FMI$-valued c\`adl\`ag\ process, that is \eqref{eq:fmi-val} is satisfied,
if at least one of the following two conditions holds:
\begin{enumerate}
\item\label{it:cond1} $\mathcal{X}$ has continuous paths a.s.
\item\label{it:cond2} $t\mapsto h_{t,\varepsilon}(\delta)$ is continuous for every $\varepsilon, \delta >0$.
\end{enumerate}
If additionally\/ $\mathcal{X}^n$ is\/ $\FMI$-valued almost surely for all\/ $n\in\NN$, \eqref{eq:pasgenforall} can be
replaced by existence of random measurable sets\/ $Y_{t,\varepsilon,\delta}^n \subseteq X^n_t$, in addition to
the\/ $h_{t,\varepsilon}\in\mathcal{H}$, satisfying the following two conditions for every\/ $\varepsilon>0$:
\begin{gather}
\limsup_{\delta\downarrow0}\, \limsup_{n\to\infty}\,
\bPs{\nu^n_t(X^n_t\setminus Y_{t,\varepsilon,\delta}^n) \le h_{t,\varepsilon}(\delta)\;\, \forall t\in J}
\ge 1-\varepsilon, \label{eq:pasforall} \\
\forall n\in\NN,\, t\in J,\, \delta>0,\, x,y\in Y_{t,\varepsilon,\delta}^n: r^n_t(x,y) < \delta \implies
d\(\kappa^n_t(x),\kappa^n_t(y)\) \le h_{t,\varepsilon}(\delta). \label{eq:modulust}
\end{gather}
\end{theorem}
\begin{proof}
Due to the Skorohod representation theorem, we may assume that $\mathcal{X}^n\to \mathcal{X}$ almost surely in the Skorohod
topology. For condition \itref{cond1} respectively \itref{cond2} we obtain
\begin{enumerate}
\item If $\mathcal{X}$ has continuous paths a.s., the convergence in the Skorohod topology implies that, almost surely,
$\mathcal{X}_t^n(\omega)$ converges to $\mathcal{X}_t(\omega)$ uniformly on compact subsets of $J$ with respect to $d_\mathrm{mGP}$.
Hence we have $\mathcal{X}^n_t \xrightarrow[\scriptscriptstyle n\to\infty]{\mathrm{mGw}} \mathcal{X}_t$ for all $t\in J$, almost surely, and we can proceed as in the proof
of \thmref{rnd-modulus}.
\item There are (random) continuous $w^n \colon J \to J$, converging to the identity uniformly on
compacta, such that $\mathcal{X}^n_{w^n(t)} \to \mathcal{X}_t$ for all $t\in J$, almost surely. We can use the moduli of
continuity $\hat{h}_{t,\varepsilon}(\delta) := h_{t,\varepsilon}(\delta) + \delta$ and proceed as in the proof of
\thmref{rnd-modulus}. Note here that, due to continuity of $h_{t,\varepsilon}(\delta)$ in $t$, there is for
every compact subinterval $\mathcal{J}$ of $J$ an $N_{\mathcal{J},\varepsilon,\delta}\in\NN$ such that
$\hat{h}_{t,\varepsilon}(\delta) \ge h_{w^n(t),\varepsilon}(\delta)$ for all $n\ge N_{\mathcal{J},\varepsilon,\delta}$ and
$t\in \mathcal{J}$.
The same arguments apply for left limits with $w^n_{-}$ such that $\mathcal{X}^n_{w^n_{-}(t)} \to \mathcal{X}_{t-}$.
\qedhere\end{enumerate}
\end{proof}
To use \thmref{pr-modulus}, we have to check in \eqref{eq:pasgenforall} or \eqref{eq:pasforall} a condition for
uncountably many $t$ simultaneously, which is often much more difficult than for every $t$ individually.
One situation where it is easy to pass from individual $t$ to all $t$ simultaneously is the case where the
moduli of continuity $h_{t,\varepsilon}$ depend neither on $t$ nor on $\varepsilon$ (see \corref{epsindep}).
The independence of $\varepsilon$, however, is a strong requirement.
Therefore, we relax it to the requirement that $h_\varepsilon$ does not blow up too fast as $\varepsilon\downarrow 0$,
where ``too fast'' is quantified by the following modulus of c\`adl\`agness of the limiting process.
\begin{definition}[modulus of c\`adl\`agness]
Let\/ $J$ be an interval, $(E,r)$ a metric space, and\/ $e=(e_t)_{t\in J}\in\DE$ a c\`adl\`ag\ path on $J$ with values in $E$.
Following \textup{\cite[(14.44)]{Bil68}}, set
\begin{equation}
w''(e,\delta) := \sup_{t,t_1,t_2 \in J: t_1 \leq t \leq t_2, t_2-t_1 \leq \delta}
\min\bigl\{ r(e(t),e(t_1)),\, r(e(t_2),e(t)) \bigr\}.
\end{equation}
We say that\/ $e$ \emph{admits $w\in\mathcal{H}$ as modulus of c\`adl\`agness} if\/ $w''(e,\delta) \le w(\delta)$ for all\/
$\delta>0$.
\end{definition}
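To make the definition concrete, the following sketch (illustration only; the names are ad hoc, and the path is sampled on a finite grid, so the supremum is only approximated) computes $w''(e,\delta)$ for real-valued paths. It confirms that a single jump is invisible to $w''$, whereas two jumps within time $\delta$ of each other are detected.

```python
# Illustration only: approximate Billingsley's modulus w''(e, delta)
# for a real-valued path given by (time, value) samples on a grid.
# Grid sampling only approximates the supremum in the definition.

def w2(e, delta):
    """Approximate w''(e, delta) over grid times t1 <= t <= t2 with
    t2 - t1 <= delta, taking the min of the two one-sided increments."""
    best = 0.0
    n = len(e)
    for j in range(n):
        t, x = e[j]
        for i in range(j + 1):            # candidate t1 <= t
            t1, x1 = e[i]
            for k in range(j, n):         # candidate t2 >= t
                t2, x2 = e[k]
                if t2 - t1 > delta:
                    break                 # t2 only grows from here
                best = max(best, min(abs(x - x1), abs(x2 - x)))
    return best

# A single jump is invisible to w'': at each t, one of the two
# one-sided increments vanishes.
one_jump = [(i / 100, 0.0 if i < 50 else 1.0) for i in range(101)]
print(w2(one_jump, 0.1))    # 0.0

# Two jumps less than delta apart are detected.
two_jumps = [(i / 100, 1.0 if 48 <= i < 52 else 0.0) for i in range(101)]
print(w2(two_jumps, 0.1))   # 1.0
```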
\begin{theorem}\label{t:modcadlag}
Fix an interval\/ $J\subseteq \RR_+$. Let\/ $\mathcal{X}=(\mathcal{X}_t)_{t\in J}$ and\/ $\mathcal{X}^n=(\mathcal{X}^n_t)_{t\in J}$,
$n\in\NN$, be\/ $\MMI$-valued c\`adl\`ag processes such that\/ $\mathcal{X}^n$ converges in distribution to\/ $\mathcal{X}$.
Furthermore, assume that there is a dense set\/ $Q\subseteq J$ and\/ $w_\varepsilon, h_\varepsilon\in \mathcal{H}$,
such that for all\/ $\varepsilon>0$
\begin{gather}
\limsup_{n\to\infty} \Ps{\mathcal{X}_t^n \in \Mdheps} \ge 1-\varepsilon \qquad \forall \delta>0,\,t\in Q, \label{eq:1}\\
\Ps{t\mapsto \mathcal{X}_t \text{ admits\/ $w_\varepsilon$ as modulus of c\`adl\`agness w.r.t.\ $d_\mathrm{mGP}$}} \ge 1-\varepsilon,\,
\text{ and} \label{eq:2}\\
\liminf_{\delta\downarrow 0} h_{\varepsilon\cdot\delta}\(2w_\varepsilon(\delta)\) = 0. \label{eq:3}
\end{gather}
Then\/ $\mathcal{X}$ is an\/ $\FMI$-valued c\`adl\`ag process, that is \eqref{eq:fmi-val} holds.
\end{theorem}
Recall the decomposition $\MMI\setminus \FMI = \bigcup_{m\in\NN} F_m$ with $F_m$ defined in \eqref{Bdelta}.
The basic idea of the proof is to use the following lemma about c\`adl\`ag\ paths to show that, almost surely, the
path of\/ $\mathcal{X}$ avoids $F_m$. The assertion of the lemma follows easily from the triangle inequality.
\begin{lemma}\label{lem:pathcontain}
Let\/ $J$ be an interval, $(E,r)$ a metric space, and\/ $e=(e_t)_{t\in J}\in\DE$ a c\`adl\`ag\ path
admitting modulus of \cadlag ness\ $w\in\mathcal{H}$. Let\/ $F\subseteq E$ be any set, $\delta>0$, and\/ $Q\subseteq J$ such
that for all $t\in J$ there are $t_1,t_2\in Q$ with $t_1\le t \le t_2 \le t_1+\delta$. Then
\begin{equation}
r(e_t, F) > w(\delta)\;\;\forall t \in Q \quad\implies\quad e_t \not\in F \text{ and\/ } e_{t-} \not\in F \;\;\forall t\in J.
\end{equation}
\end{lemma}
\begin{proof}[Proof of \thmref{modcadlag}]
Because $\Mh[h_\varepsilon]=\bigcap_{\delta>0}\Mdheps$ is closed by \lemref{Mhclosed}, the Portmanteau theorem and \eqref{eq:1} imply
\begin{equation}\label{eq:lim1}
\Ps{\mathcal{X}_t \not\in \Mh[h_\varepsilon]} < \varepsilon \qquad \forall t\in Q,\,\varepsilon>0.
\end{equation}
Due to the Skorohod representation theorem, we may assume that $\mathcal{X}^n\to \mathcal{X}$ almost surely in Skorohod
topology. In order to simplify notation, we assume $J=[0,1]$ and $Q=\bigcup_{k\in\NN} Q_k$
with $Q_k=\set{i2^{-k}}{i=0,\ldots,2^k}$. It is enough to show for every $\varepsilon>0,\, m\in\NN$ and $F_m$ as defined in \eqref{Bdelta} that
\begin{equation}\label{eq:goal}
p_m := \bPs{\exists t\in[0,1]: \mathcal{X}_t \mbox{ or } \mathcal{X}_{t-} \in F_m} \le 3\varepsilon.
\end{equation}
To show \eqref{eq:goal}, fix $\varepsilon>0$ and $m\in\NN$, and let $\mathcal{X}_t=(X_t,r_t,\mu_t)$.
Because $\mathcal{X}$ has c\`adl\`ag\ paths, we find $K=K(\varepsilon)<\infty$ such that
\begin{equation}\label{eq:K}
\bPs{\sup_{t\in[0,1]} \|\mu_t\| \ge K-3} < \varepsilon.
\end{equation}
According to \eqref{eq:3} and \eqref{eq:lim1}, we can choose $k\in \NN$ big enough such that for $h:=h_{\eps2^{-k}}$
we have
\begin{equation}\label{eq:hestim}
h\(2w_\varepsilon(2^{-k})\) < (Km)^{-1} - 2 w_\varepsilon(2^{-k}) \qquad\text{and}\qquad
\Ps{\mathcal{X}_t \not\in \Mh} < \varepsilon 2^{-k}.
\end{equation}
Assume without loss of generality that $w_\varepsilon(2^{-k}) \leq 1$.
Now \propref{betaestim}\itref{estimate-beta-3} implies that, whenever $\mathcal{X}_t\in\Mh$ and $\|\mu_t\| < K-3$, we have
\begin{equation}\label{eq:dmGPestim}
d_\mathrm{mGP}(\mathcal{X}_t, F_m) > w_\varepsilon(2^{-k}).
\end{equation}
Combining \eqref{eq:2} and \lemref{pathcontain}, we obtain
\begin{align}
p_m &\le \varepsilon + \bPs{\exists t\in Q_k: d_\mathrm{mGP}(\mathcal{X}_t, F_m) \leq w_\varepsilon(2^{-k})}. \\
\intertext{Using \eqref{eq:K}, \eqref{eq:dmGPestim}, and (in the last step) \eqref{eq:hestim}, we conclude}
p_m &\le 2\varepsilon + 2^k\sup_{t\in Q_k} \bPs{\|\mu_t\|<K-3,\, \mathcal{X}_t\not\in \Mh} \le 3\varepsilon.
\end{align}
Thus \eqref{eq:goal} holds for all $\varepsilon>0$, so $p_m=0$ for every $m\in\NN$, and $\Ps{\exists t\in [0,1]: \mathcal{X}_t\not\in \FMI \text{ or } \mathcal{X}_{t-}\not\in \FMI} \le \sum_{m\in\NN} p_m = 0$ follows.
\end{proof}
If, in \thmref{modcadlag}, we can choose the modulus of continuity $h_\varepsilon=h\in\mathcal{H}$, independent of
$\varepsilon$, such that \eqref{eq:1} holds, we do not need to check \eqref{eq:2} and \eqref{eq:3}.
\begin{corollary}[$\varepsilon$-independent modulus of continuity]\label{c:epsindep}
Assume that\/ $\mathcal{X}^n=(\mathcal{X}^n_t)_{t\in J}$ converges in distribution to an $\MMI$-valued c\`adl\`ag
process\/ $\mathcal{X}$, and\/ $Q\subseteq J$ is dense.
Then\/ $\mathcal{X}$ is an\/ $\FMI$-valued c\`adl\`ag process if, for some\/ $h\in \mathcal{H}$,
\begin{equation} \label{eq:var1}
\limsup_{n\to\infty} \Ps{\mathcal{X}_t^n \in \Mh} = 1 \qquad \forall t\in Q.
\end{equation}
\end{corollary}
\begin{proof}
Let $h\in\mathcal{H}$ be such that \eqref{eq:var1} is satisfied and set $h_\varepsilon:=h$.
Then \eqref{eq:3} is satisfied for every choice of $w_\varepsilon \in \mathcal{H}$, $\varepsilon>0$.
For every c\`adl\`ag process, in particular for $\mathcal{X}$, there exist moduli of \cadlag ness\ $w_\varepsilon$ such that \eqref{eq:2} holds (cf.\ \cite[(14.6),(14.8) and (14.46)]{Bil68}).
Thus, \thmref{modcadlag} yields the claim.
\end{proof}
\subsection{Examples}\label{sec:examples}
The (neutral) tree-valued Fleming-Viot dynamics is constructed in \cite{GPW13} using the formalism of metric
measure spaces. In \cite{DGP12}, (allelic) types -- encoded as marks of marked metric measure spaces -- are
included, in order to be able to model mutation and selection.
In \cite[Remark~3.11]{DGP12} and \cite[Theorem~6]{DGP13} it is stated that the resulting tree-valued Fleming-Viot
dynamics with mutation and selection (TFVMS) admits a mark function at all times, almost surely. The given
proof, however, contains a gap, because it relies on the criterion claimed in \cite[Lemma~7.1]{DGP13}, which is
wrong in general (see \exref{counter}).
The reason why the criterion may fail is a lack of homogeneity of the measure $\nu$, in the sense that
there are parts with high and parts with low mass density. Consequently, if we condition two samples to have
distance less than $\varepsilon$, the probability that they are from the high-density part tends to one as
$\varepsilon\downarrow 0$, and we do not ``see'' the low-density part. This phenomenon occurs if $\nu$ has an
atom but is not purely atomic.
We also give two non-atomic examples, one a subset of Euclidean space, and the other one ultrametric.
\begin{example}[counterexamples]\label{ex:counter}
In both examples, it is straightforward to see that $(X,r,\mu)$, with $\mu=\nu\otimes K$,
satisfies the assumptions of \cite[Lemma~7.1]{DGP13}, but does not admit a mark function.
The mark space is $I=\{0,1\}$.
\begin{enumerate}
\item Let $\lambda_A$ be Lebesgue measure of appropriate dimension on a set $A$.
Define $X:=[0,1]^2 \cup [2,3]$, where $[2,3]$ is identified with $[2,3]\times\{0\} \subseteq \RR^2$,
\begin{equation}
\nu:=\tfrac12({\lambda_{[0,1]^2} + \lambda_{[2,3]}}) \quad\text{and\/}\quad
K_x:=\begin{cases} \frac12(\delta_0+\delta_1), & x\in [0,1]^2,\\ \delta_0, & x\in [2,3].\end{cases}
\end{equation}
\item In this example, think of a tree consisting of a left part with ternary branching points and a right part with binary branching points. The leaves correspond to $X:=A\cup B$ with $A=\{0,1,2\}^\NN$ and $B=\{3,4\}^\NN$, and we choose as a metric
\begin{equation}
r\(\folge x, \folge y\) := \max_{n\in\NN} e^{-n}\cdot\mathds{1}_{x_n \ne y_n}.
\end{equation}
Note that $(X,r)$ is a compact, ultrametric space. The measure $\nu$ is constructed as follows: choose
the left respectively right part of the tree with probability $\tfrac12$ each. Going deeper in the tree,
at each branching point a branch is chosen uniformly. That is, let $\nu_A$ and $\nu_B$ be the Bernoulli
measures on $A$ and $B$ with uniform marginals on $\{0,1,2\}$ and $\{3,4\}$, respectively. Define
\begin{equation}
\nu:=\tfrac12(\nu_A + \nu_B) \quad\text{and\/}\quad
K_x:=\begin{cases} \frac12(\delta_0+\delta_1), & x\in A,\\ \delta_0, & x\in B.\end{cases}
\end{equation}
\end{enumerate}
\end{example}
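The failure mechanism described before the example can also be seen numerically. The following Monte Carlo sketch for example (a) (illustration only; the criterion of \cite[Lemma~7.1]{DGP13} is paraphrased, not quoted) samples pairs from $\nu$ and, conditioned on their distance being less than $\varepsilon$, records how often both points lie in the one-dimensional part $[2,3]$, where the mark kernel is deterministic. As $\varepsilon$ shrinks, this fraction tends to one, so the part of the space without a mark function becomes invisible to the criterion.

```python
# Monte Carlo illustration for counterexample (a): conditioned on
# distance < eps, both samples come from the 1-dimensional part [2,3]
# with probability tending to 1 as eps -> 0 (close pairs have mass
# of order eps there, versus order eps^2 in the square).
import random

random.seed(0)

def sample():
    # nu = (Lebesgue on [0,1]^2 + Lebesgue on [2,3] x {0}) / 2
    if random.random() < 0.5:
        return (random.random(), random.random())
    return (2.0 + random.random(), 0.0)

def frac_line_pairs(eps, n=100_000):
    close = line = 0
    for _ in range(n):
        (x1, y1), (x2, y2) = sample(), sample()
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 < eps ** 2:
            close += 1
            if x1 >= 2.0 and x2 >= 2.0:
                line += 1
    return line / close

for eps in (0.5, 0.1, 0.02):
    print(eps, round(frac_line_pairs(eps), 2))
```

For $\varepsilon=0.02$ the observed fraction is already close to one, while for $\varepsilon=0.5$ both parts are still visible.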
\subsection[Tree-valued Fleming-Viot with mutation and selection]{The tree-valued Fleming-Viot dynamics with mutation and selection}\label{sub:FV}
In the following, we prove the existence of a mark function for the TFVMS by verifying the assumptions of \thmref{pr-modulus}
for a sequence of approximating tree-valued Moran models.
Due to the Girsanov transform given in \cite[Theorem~2]{DGP12}, it is enough to consider the neutral case, that is
without selection.
We briefly recall the construction of the tree-valued Moran model with mutation (TMMM) with finite population
$U_N=\{1,\ldots,N\}$, $N \in \NN$, and types from the mark space $I$. For details and more formal definitions, see
\cite[Subsections~2.1--2.3]{DGP12}.
In the underlying Moran model with mutation (MMM), every pair of individuals ``resamples'' independently at rate
$\gamma>0$. Here, resampling means that one of the individuals (chosen uniformly at random among the two) is replaced by
an offspring of the other one, and the offspring gets the same type as the parent. Furthermore, every individual
mutates independently at rate $\vartheta\ge 0$, which means that it changes its type according to a fixed
stochastic kernel $\beta(\cdot,\cdot)$ on $I$. Denote the resulting type of individual $x\in U_N$ at time
$t\ge0$ by $\kappa^N_t(x)$.
To obtain the tree-valued dynamics, define the distance $r_t^N(x,y)$ between two individuals $x,y\in U_N$ at
time $t\ge 0$ as twice the time to the most recent common ancestor (MRCA) (cf.\ \cite[(2.7)]{DGP12}), provided
that a common ancestor exists, and as $2t+r_0^N(x,y)$ otherwise. The TMMM is the resulting process
$\mathcal{X}_t^N=(U_N,r_t^N,\nu_N,\kappa_t^N)$, with sampling measure $\nu_N=\tfrac{1}{N} \sum_{k=1}^N \delta_{k}$. It is
easy to check that, by definition, $(U_N, r_t^N)$ is an ultrametric space, provided that the initial metric
space $(U_N, r_0^N)$ is ultrametric. This explains the name \emph{tree-valued} (cf.\ \cite[Remark~2.7]{DGP12}).
Next recall the graphical construction of the MMM from \cite[Definition~2.2]{DGP12}. A resampling event is
modeled by means of a family of independent Poisson point processes $\{ \eta_\mathrm{res}^{k,\ell}: k, \ell \in
U_N\}$ on $\RR_+$, where each $\eta_\mathrm{res}^{k,\ell}$ has rate $\gamma/2$. If $t \in
\eta_\mathrm{res}^{k,\ell}$, draw an arrow from $(k,t)$ to $(\ell,t)$ to represent a resampling event at time
$t$, where $\ell$ is an offspring of $k$. Similarly, model mutation times by a family of independent Poisson point
processes $\{ \eta_\mathrm{mut}^k: k \in U_N\}$, where each $\eta_\mathrm{mut}^k$ has rate $\vartheta$.
If $t \in \eta_\mathrm{mut}^{k}$, draw a dot at $(k,t)$ to represent a mutation event changing the type of
individual $k$ (see Figure~\ref{pic:ex_4_1-1}).
Let $(M_t^{t_0,N})_{t \geq t_0}$, $M_t^{t_0,N} \subseteq U_N$ with $M_{t_0}^{t_0,N}=\emptyset$, be the process
that records the individuals of the population at time $t$ with an ancestor that was involved in a mutation
event at some time $s \in (t_0, t]$. By a coupling argument, this process can be constructed by means of the Poisson point
processes $(\eta_\mathrm{res}^{k,\ell}, \eta_\mathrm{mut}^k, k,\ell \in U_N)$ as follows (compare
Figures~\ref{pic:ex_4_1-1}--\ref{pic:ex_4_1-2}):
\begin{equation}\label{eq:MtN}
M_t^{t_0,N} =
\begin{cases}
M_{t-}^{t_0,N} \cup \{\ell\} & \mbox{ if there is a resampling arrow from $k \in M_{t-}^{t_0,N}$ to $\ell \in U_N$ at time $t$}, \cr
M_{t-}^{t_0,N} \cup \{k\} & \mbox{ if there is a mutation event at $k \in U_N$ at time $t$}, \cr
M_{t-}^{t_0,N} \backslash \{\ell\} & \mbox{ if there is a resampling arrow from $k \notin M_{t-}^{t_0,N}$ to $\ell \in U_N$ at time $t$}. \cr
\end{cases}
\end{equation}
\picturefig{0.7}{ex_4_1-1}{
Graphical construction of the MMM for $N=10$ for the time-period $[t_0,t]$, and the resulting process
$(M_s^{t_0,N})_{s \in [t_0,t]}$. Resampling arrows are drawn at points of $\eta_\mathrm{res}^{k,\ell}$, and
mutation dots at points of $\eta_\mathrm{mut}^k$.}{ex_4_1-1}
\picturefig{0.7}{ex_4_1-2}{Tracing the ancestor backwards in time in Figure~\ref{pic:ex_4_1-1}: This dual
construction is also known as the coalescent backwards in time. Reverse the arrows to see for instance that $3$
at time $t_0$ is an ancestor of $8$ at time $t$.
The elements of $M_t^{t_0,N} \subseteq U_N$ are highlighted by boxes in the right part of the picture.}{ex_4_1-2}
Let $\xi_t^N := \frac{1}{N} \#M_{t_0+t}^{t_0,N}$ be the proportion of individuals at time $t_0+t$, $t \geq 0$,
whose ancestors have mutated after the (for now fixed) time $t_0$.
\begin{lemma}\label{lem:mutbound}
Let\/ $C:=\frac12\vartheta(2\vartheta+\gamma)$. Then for all\/ $a,\delta>0$
\begin{equation} \lbeq{bd-on-Y-exp}
\limsup_{N \rightarrow \infty} \P\bigl( \sup_{t \in [0,\delta]} \xi_t^N \geq a \bigr)
\leq C a^{-2} \delta^2.
\end{equation}
\end{lemma}
\begin{proof}
By definition, $\bigl( \xi_t^N \bigr)_{t \geq 0}$ is a (continuous time) Markov jump process on $[0,1]$ with
$\xi_0^N=0$ and transitions
\begin{equation}
\begin{cases}
x \mapsto x-1/N & \mbox{ at rate } \frac{\gamma}{2} N^2 x (1-x), \cr
x \mapsto x+1/N & \mbox{ at rate } \frac{\gamma}{2} N^2 x (1-x) + \vartheta N (1-x). \cr
\end{cases}
\end{equation}
This process converges weakly with respect to the Skorohod topology to the solution $(Z_t)_{t \geq 0}$ of the
stochastic differential equation (SDE)
\begin{equation}
\lbeq{SDE-mark}
\d Z_t = \vartheta (1-Z_t) \d t + \sqrt{\gamma Z_t (1-Z_t)}\, \d B_t, \quad Z_{0}=0.
\end{equation}
Indeed, to establish tightness use \cite[Theorem~III.9.4]{EK}. Note that, as $[0,1]$ is compact, it
suffices to show the convergence of the generators applied to a set of appropriate test-functions. For existence
and uniqueness of solutions to \eqref{SDE-mark} reason as for the Bessel SDE in \cite[(48.1) and below]{RW2}.
Moreover, $Z_t \in [0,1]$ is a bounded non-negative right-continuous submartingale. Hence, with Doob's
submartingale inequality (see for instance \cite[Proposition~II.2.16(a)]{EK}), we obtain
\begin{equation}
\P\bigl( \sup_{t \in [0,\delta]} Z_t \geq a \bigr)
= \P\bigl( \sup_{t \in [0,\delta]} Z_t^2 \geq a^2 \bigr)
\leq a^{-2} \mathbb{E}[ Z_{\delta}^2 ].
\end{equation}
As $Z_t \in [0,1]$, we further deduce using It\^o's formula that for all $t \geq 0$,
\begin{align}
& \mathbb{E}[ Z_t ] \leq \vartheta t \quad\mbox{ and } \\
& \mathbb{E}[ Z_t^2 ] = \mathbb{E}\bigl[ \int_{0}^t 2 Z_s \vartheta (1-Z_s) + \gamma Z_s (1-Z_s) \,\d s \bigr]
\leq C t^2.
\end{align}
Then
\begin{equation}
\limsup_{N \rightarrow \infty} \P\bigl( \sup_{t \in [0,\delta]} \xi_t^N \geq a \bigr)
\leq \P\bigl( \sup_{t \in [0,\delta]} Z_t \geq a \bigr)
\leq C a^{-2} \delta^2
\end{equation}
follows.
\end{proof}
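The moment bounds in the proof are easy to check numerically. The following sketch (illustration only; the parameter values are arbitrary) simulates \eqref{SDE-mark} with an Euler--Maruyama scheme clipped to $[0,1]$ and compares the empirical first and second moments with the bounds $\mathbb{E}[Z_t]\le\vartheta t$ and $\mathbb{E}[Z_t^2]\le Ct^2$.

```python
# Numerical sanity check (illustration only) of the moment bounds
# E[Z_t] <= theta*t and E[Z_t^2] <= C*t^2 for the SDE
#   dZ = theta*(1 - Z) dt + sqrt(gamma*Z*(1 - Z)) dB,  Z_0 = 0,
# using an Euler-Maruyama scheme clipped to [0,1].
import math
import random

random.seed(1)
theta, gamma = 0.7, 1.3           # arbitrary illustrative parameters
C = 0.5 * theta * (2 * theta + gamma)
dt, t_end, n_paths = 1e-3, 0.5, 2000

m1 = m2 = 0.0
for _ in range(n_paths):
    z = 0.0
    for _ in range(int(t_end / dt)):
        dB = random.gauss(0.0, math.sqrt(dt))
        z += theta * (1.0 - z) * dt + math.sqrt(gamma * z * (1.0 - z)) * dB
        z = min(max(z, 0.0), 1.0)  # keep the discretized path in [0,1]
    m1 += z
    m2 += z * z
m1, m2 = m1 / n_paths, m2 / n_paths

print(m1, "<=", theta * t_end)     # empirical E[Z_t] vs bound
print(m2, "<=", C * t_end ** 2)    # empirical E[Z_t^2] vs bound
```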
As the construction of the TFVMS in \cite{DGP12} is only given for a compact type-space $I$, we make the same
assumption. Note, however, that our proof itself does not use compactness and is therefore valid for non-compact
$I$, provided that the TFVMS is the limit of the corresponding Moran models, and there exists a Girsanov
transform allowing us to reduce to the neutral case.
\begin{theorem}[the TFVMS admits a mark-function] \label{t:mark-FV}
Let\/ $I$ be compact and\/ $\mathcal{X}=(\mathcal{X}_t)_{t \geq 0}$ be the tree-valued Fleming-Viot dynamics with mutation and
selection as defined in \textup{\cite{DGP12}}. Then
\begin{equation}
\P( \mathcal{X}_t \in \FMI \text{ for all\/ } t>0 ) = 1.
\end{equation}
In particular, $(\mathcal{X}_t)_{t>0}$ is an $\FMI$-valued c\`adl\`ag process.
\end{theorem}
\begin{proof}
By \cite[Theorem~2]{DGP12}, there exists a Girsanov transform that enables us to assume without loss of
generality that selection is not present. In this case, according to \cite[Theorem~3]{DGP12}, $\mathcal{X}$ is the limit
in distribution of TMMMs $\mathcal{X}^N=(\mathcal{X}^N_t)_{t\ge0}$, as discussed above. Let $\mathcal{X}^N_t=(U_N, r^N_t, \nu_N, \kappa_t^N)$ with
$U_N=\{1,\ldots,N\}$ and $\nu_N$ the uniform distribution on $U_N$. Let $\delta>0$ be fixed for the moment, and
recall that the distance $r_t^N(x,y)$ between two individuals $x,y\in U_N$ at time $t\ge\delta/2$ is twice the
time to the MRCA. Hence, if $r_t^N(x,y) < \delta$, then $x$ and $y$ at time $t$ have a common ancestor at time
$t-\delta/2$.
Further recall that $(M_t^{t_0,N})_{t \geq t_0}$, with $M_t^{t_0,N} \subseteq U_N$ and
$M_{t_0}^{t_0,N}=\emptyset$, records the individuals of the population at time $t$ with an ancestor at a time
$s\in (t_0, t]$ involved in a mutation event (cf.\ \eqref{eq:MtN}).
Fix an arbitrary time horizon $T>0$ and $i \in \NN$, $i \leq 2T/\delta$.
Using the notation of \thmref{pr-modulus}, for $t \in [i \delta / 2,(i+1) \delta / 2)$, let
$Y_{t,\varepsilon,\delta}^N := U_N \backslash M_t^{(i-1)\delta / 2,N}$, independent of $\varepsilon>0$. Set
$Y_{t,\varepsilon,\delta}^N := \emptyset$ for $t<\delta/2$.
We claim that \eqref{eq:modulust} is satisfied for any choice of $h_{t,\varepsilon}\in\mathcal{H}$.
Indeed, if $x,y \in Y_{t,\varepsilon,\delta}^N$ satisfy $r_t^N(x,y) < \delta$, then they have a common ancestor at time
$t_0:= (i-1)\delta / 2 \le t-\delta/2$, and after this point in time no mutation occurred along their ancestral
lineages. In particular, $d(\kappa_t^N(x),\kappa_t^N(y))=0$, and \eqref{eq:modulust} is obvious.
Moreover, $\mathcal{X}^N$ is $\FMI$-valued by construction, and $\mathcal{X}$ has continuous paths by \cite[Theorem~1]{DGP12}.
According to \thmref{pr-modulus}, it is therefore enough to find moduli of continuity $h_{t,\varepsilon} \in \mathcal{H}$ such
that \eqref{eq:pasforall} holds for every $\varepsilon>0$.
By \lemref{mutbound}, we obtain a constant $C>0$ such that for every $a>0$,
\begin{equation}
\limsup_{N \rightarrow \infty} \P\bigl( \sup_{t \in [i \delta / 2,(i+1) \delta / 2)} \nu_N\bigl( U_N
\setminus Y_{t,\varepsilon,\delta}^N \bigr) \geq a \bigr)
\leq Ca^{-2}\delta^2.
\end{equation}
After summation over $i \in \{1,\ldots, \floor{2T/\delta}\}$, we obtain
\begin{equation}
\lbeq{mark-bound}
\limsup_{N \rightarrow \infty} \P\bigl( \sup_{t \in [\delta/2,T]}
\nu_N\bigl( U_N \backslash Y_{t,\varepsilon,\delta}^N \bigr) \geq a \bigr)
\leq 2TC\delta a^{-2}.
\end{equation}
For arbitrary $\varepsilon>0$, we use this inequality with $a := \sqrt{\varepsilon^{-1}2TC\delta}$, together with
$\|\nu_N\|\le 1$ for $t<\delta/2$, to see that \eqref{eq:pasforall} is satisfied for
$h_{t,\varepsilon}\in \mathcal{H}$ with
\begin{equation}
h_{t,\varepsilon}(\delta)\ge\sqrt{\varepsilon^{-1}2TC\delta} + \mathds{1}_{[2t, \infty[}(\delta). \qedhere
\end{equation}
\end{proof}
\subsection[Tree-valued $\Lambda$-Fleming-Viot]{The tree-valued $\Lambda$-Fleming-Viot process} \label{sub:TFV}
Let $\Lambda$ be a finite measure on $[0,1]$, and recall the $\Lambda$-coalescent, introduced in \cite{Pitman99}.
It is a coalescent process, where each $k$-tuple out of $N$ blocks merges independently at rate
\begin{equation}
\lambda_{N,k} := \int_0^1 y^{k-2} (1-y)^{N-k}\, \Lambda(\d y).
\end{equation}
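For instance, for $\Lambda$ the uniform distribution on $[0,1]$ (the Bolthausen--Sznitman coalescent), the defining integral is a Beta integral, so that $\lambda_{N,k} = \frac{(k-2)!\,(N-k)!}{(N-1)!}$. This closed form can be checked against a direct quadrature of the integral; the following throwaway script (illustration only) does so.

```python
# Illustration only: for Lambda = uniform on [0,1] (Bolthausen-Sznitman),
# lambda_{N,k} = Beta(k-1, N-k+1) = (k-2)! (N-k)! / (N-1)!.
# Compare a midpoint-rule quadrature of the defining integral with
# this closed form.
import math

def lam_quadrature(N, k, steps=100_000):
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** (k - 2) * (1.0 - (i + 0.5) * h) ** (N - k)
                   for i in range(steps))

def lam_closed(N, k):
    return math.factorial(k - 2) * math.factorial(N - k) / math.factorial(N - 1)

for N, k in [(5, 2), (10, 3), (20, 10)]:
    print(N, k, lam_quadrature(N, k), lam_closed(N, k))
```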
For fixed $N$, it is elementary to construct a finite, random (ultra-)metric measure space encoding the random
genealogy of the $\Lambda$-coalescent, where the distance is defined as the time to the MRCA (recall the construction of Figures~\ref{pic:ex_4_1-1}--\ref{pic:ex_4_1-2} and see Figure~\ref{pic:ex_4_1-3}).
\picturefig{0.5}{ex_4_1-3}{Tracing the ancestor backwards in time: The $\Lambda$-coalescent allows for one parent to have more than one child.}{ex_4_1-3}
In \cite[Theorem~4]{GPW09}, existence and uniqueness of a Gromov-weak limit in distribution, as $N\to\infty$, is
proven to be equivalent to the so-called ``dust-free''-property, namely $\int_0^1 y^{-1}\, \Lambda(\d y)= \infty$.
The resulting limit is called $\Lambda$-coalescent measure tree.
Now, replace the tree-valued Moran models considered in \subref{FV} and \cite{DGP12} by so-called tree-valued
$\Lambda$-Cannings models with $\Lambda$ satisfying the dust-free-property.
That is, leave the mutation- and selection-part as it is and change the resampling-part of the Moran models as follows:
For $k=2,\ldots,N$, at rate $\binom{N}{k} \lambda_{N,k}$ a block of $k$ individuals is chosen uniformly at
random among the $N$ individuals of the population. Upon such a resampling event, all individuals in this block
are replaced by an offspring of a single individual which is chosen uniformly from this block. Note that the genealogy
(disregarding types) of the resulting $\Lambda$\nobreakdash-Cannings model with $N$ individuals is dual to the
$\Lambda$-coalescent starting with $N$ blocks. We call any limit point (in path space) of the tree-valued
$\Lambda$-Cannings processes, as $N$ tends to infinity and $\Lambda$ is fixed, \emph{tree-valued
$\Lambda$-Fleming-Viot process} (TLFV). In the neutral case, existence and uniqueness of such a limit point
follows as a special case of the forthcoming work \cite{GrevenKlimovskyWinter}.
Here, we show that, whenever limit points exist, all of them admit mark functions.
\begin{theorem}[the TLFV admits a mark-function] \label{t:mark-Lambda}
Suppose there is no selection, that is $\alpha=0$, and\/ $\mathcal{X}=(\mathcal{X}_t)_{t \geq 0}$ is a tree-valued
$\Lambda$-Fleming-Viot process with mutation. Then
\begin{equation}
\P( \mathcal{X}_t \in \FMI \text{ for all\/ } t>0 ) = 1.
\end{equation}
\end{theorem}
\begin{proof}
By passing to a subsequence if necessary, we may assume that the $\Lambda$-Cannings models converge in
distribution to $\mathcal{X}$.
We proceed as in Subsection~\ref{sub:FV}. Again, let $(M_t^{t_0,N})_{t \geq t_0}$, $M_t^{t_0,N} \subseteq U_N$
with $M_{t_0}^{t_0,N}=\emptyset$, be the process that records the individuals of the population at time $t$ with
an ancestor that was involved in a mutation event at some time $s \in (t_0, t]$, and let $\xi_t^N := \frac{1}{N}
\#M_{t_0+t}^{t_0,N}$ be the proportion of individuals at time $t_0+t$, $t \geq 0$, whose ancestors have mutated
after the (for now fixed) time $t_0$. By definition, $\bigl( \xi_t^N \bigr)_{t \geq 0}$ is a (continuous
time) Markov jump process on $[0,1]$ with $\xi_0^N=0$ and generator
\begin{align}
\big(\Omega^N f\big)(x)
&= \vartheta N (1-x) \big( f(x+1/N)-f(x) \big) \\
&\phantom{{}={}} + \sum_{k=2}^N \lambda_{N,k}
\sum_{m=0}^{(N x) \wedge k} \binom{N x }{m} \binom{N (1-x) }{k-m} \nonumber\\
& \phantom{{}={}+{}} \times \Bigl( \frac{m}{k} \big( f(x+(k-m)/N)-f(x) \big) + \frac{k-m}{k}
\big(f(x-m/N)-f(x) \big) \Bigr), \nonumber
\end{align}
where $x \in [0,1]$ with $N x \in \mathbb{N} \cup \{0\}$, and $f \in \mathcal{C}_b^2([0,1])$. By Taylor's formula, there
are $x_{m,k,N}^+ \in [x,x+(k-m)/N]$ and $x_{m,k,N}^- \in [x-m/N,x]$ with
\begin{align}\lbeq{gen-lambda}
\big(\Omega^N f\big)(x)
&= \vartheta N (1-x) \big( f(x+1/N)-f(x) \big) \\
&\phantom{{}={}} + \sum_{k=2}^N \lambda_{N,k}
\sum_{m=0}^{(N x) \wedge k} \binom{N x }{m} \binom{N (1-x) }{k-m}
\Big( \frac{f''(x_{m,k,N}^+)}{2} \frac{m (k-m)^2}{k N^2} + \frac{f''(x_{m,k,N}^-)}{2} \frac{(k-m)
m^2}{k N^2} \Big) \nonumber\\
&= \vartheta (1-x) f'(x) + O(N^{-1}) + x (1-x) \sum_{k=2}^N \lambda_{N,k} \Delta_{N,k}(x), \nonumber
\end{align}
where, using $\binom{n }{i} = \frac ni \binom{n-1 }{i-1}$ for $i \geq 1$,
\begin{equation}
\Delta_{N,k}(x) = \sum_{m=1}^{(N x) \wedge (k-1)} \binom{Nx-1 }{m-1} \binom{N(1-x)-1 }{k-m-1}
\Big( f''(x_{m,k,N}^+) \frac{k-m}{2k} + f''(x_{m,k,N}^-) \frac{m}{2k} \Big).
\end{equation}
Recall that $\sum_{m=0}^k \binom{\ell}m \binom{N-\ell}{k-m} = \binom{N}{k}$ and
$\lambda_{N,k} = \int_0^1 y^{k-2} (1-y)^{N-k}\, \Lambda(\d y)$ with a finite measure $\Lambda$ on $[0,1]$ to see that
\begin{align}
\sum_{k=2}^N \left| \Delta_{N,k}(x) \right|
&\le \|f''\|_\infty \sum_{k=2}^N \lambda_{N,k} \sum_{m=0}^{k-2} \binom{Nx-1}{m} \binom{N(1-x)-1}{k-2-m} \\
&= \|f''\|_\infty \int_0^1 \sum_{k=2}^N \binom{N-2 }{k-2} y^{k-2} (1-y)^{N-k} \,\Lambda(\d y) \nonumber\\
&= \|f''\|_\infty\,\Lambda([0,1]). \nonumber
\end{align}
Therefore,
\begin{equation} \lbeq{gen-lambda-bd}
\big(\Omega^N f\big)(x) = \vartheta (1-x) f'(x) + O(N^{-1}) + x (1-x) O(1).
\end{equation}
Use $f(x)=x, x \in [0,1]$ in \eqref{gen-lambda} to see that $(\xi_t^N)_{t \geq 0}$ is a non-negative
right-continuous submartingale with $\xi_0^N=0$ and $\mathbb{E}[ \xi_t^N ] \leq \vartheta t$. Use $f(x)=x^2$ to deduce
from \eqref{gen-lambda-bd} that
\begin{equation}
\mathbb{E}\bigl[ (\xi_t^N)^2 \bigr] \leq C t^2 + O(N^{-1}) t.
\end{equation}
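For convenience, we spell out the short computation behind the submartingale property and the first-moment bound: for $f(x)=x$ the two jump increments $(k-m)/N$ and $-m/N$ cancel within each summand of the generator, so that
\begin{equation*}
\big(\Omega^N f\big)(x)
= \vartheta N (1-x) \cdot \frac{1}{N}
+ \sum_{k=2}^N \lambda_{N,k} \sum_{m=0}^{(N x) \wedge k} \binom{N x}{m} \binom{N (1-x)}{k-m}
\Big( \frac{m}{k} \cdot \frac{k-m}{N} - \frac{k-m}{k} \cdot \frac{m}{N} \Big)
= \vartheta (1-x) \leq \vartheta,
\end{equation*}
and Dynkin's formula yields $\mathbb{E}[ \xi_t^N ] = \mathbb{E} \int_0^t \vartheta (1-\xi_s^N) \,\mathrm{d}s \leq \vartheta t$.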
Now reason as for the TFVMS in the proofs of \lemref{mutbound} and \thmref{mark-FV} to complete the proof of the claim.
\end{proof}
\subsection[Future application: trait-dependent branching]{Future application: Evolving phylogenies of trait-dependent branching} \label{sub:FApp}
In \cite{KW2014} the results of the present paper will be applied in a context of evolving genealogies to
establish the existence of a mark function with the help of \thmref{pr-modulus}. These genealogies are random
marked metric measure spaces, constructed as the limit of approximating particle systems. The individual birth and
death rates in the $N^{\text{th}}$ approximating population depend on the present traits of the
individuals alive and are of order $O(N)$. At each birth event, mutation happens with a fixed probability. Each
individual is assigned mass $1/N$. The metric under consideration is genetic distance: in the
$N^{\text{th}}$ approximating population, genetic distance is increased by $1/N$ at each birth with mutation.
Hence, the genetic distance of two individuals is counted in terms of births with mutation backwards in time to the
MRCA rather than in terms of the time to the MRCA.
Because birth and death events are modeled with exponential times, this setup is non-ultrametric. The analysis of
the modulus of continuity of the trait history of a particle, in combination with the evolution of its genetic
age, therefore plays a major role in establishing tightness of the approximating systems and the existence of a
mark function. In \cite[Lemma~3.9]{K2014}, control on the modulus of
continuity is obtained by transferring the model to the context of historical particle systems. In a first step,
time is related to genetic distance by means of the modulus of continuity. The extent of the change of trait of
an individual in a small amount of time (recall \eqref{eq:pas} and \eqref{eq:modulus2}) can then be controlled
by means of the modulus of continuity of its trait path in combination with a control on the height of the
largest jump during this period of time. This can in turn be ensured by appropriate assumptions on the mutation
transition kernels of the approximating systems.
\begin{acknowledgements}
We are thankful to Anita Winter for discussions in the initial phase of the project, and to the referee
for helpful comments.
The research of Sandra Kliem was supported by the DFG through the SPP Priority Programme 1590.
\end{acknowledgements}
\section{Introduction}
\label{sect:intro}
Significant increases of the infrared (IR) flux in the vicinity of flaring methanol masers were recently reported for NGC~6334I~MM1 (\citealt{Hunter+etal+2017}), S255 (\citealt{Stecklum+etal+2016, Zinchenko+etal+2017}), and G107.298+5.639 (\citealt{Stecklum+etal+2018}).
Notably, in NGC~6334I the high activity of the water and methanol masers was contemporaneous (\citealt{Hunter+etal+2017, MacLeod+etal+2018}). However, Very Large Array observations of the water masers (\citealt{Brogan+etal+2018}) showed that the majority of the flaring water maser emission originated from the synchrotron source north along the jet driven by the source MM1, while the water maser emission toward the flaring IR source MM1 dropped. Meanwhile, the water and methanol maser flares in G107.298+5.639 were alternating (\citealt{Szymczak+etal+2016}). Further, no significant flares or dimming of emission were reported for the water maser in S255 during the strong methanol maser flare which took place in~2015 and~2016 (\citealt{Fujisawa+etal+2015, Szymczak+etal+2018}). This shows that the association between flares in the IR continuum and in maser lines of different molecules can have a different nature.
In this paper, we consider near-IR variability in the vicinity of the water maser source G025.65+1.05 which recently experienced strong flares (\citealt{Lekht+etal+2018, Volvach+Volvach+etal+2017a, Volvach+Volvach+etal+2017b, Ashimbaeva+etal+2017}). The vicinity of this maser contains the compact infrared source IRAS~18316-0602 (RAFGL 7009S) with a luminosity of about~{$3{\times}10^4 L_{\odot}$} (\citealt{McCutcheon+etal+1995}) and the ultracompact H{\small{II}}~region G025.65+1.05. The radio source, first identified at~3.6~cm by \cite{Kurtz+etal+1994}, coincides spatially with submillimeter emission at $350~\mu$m (\citealt{Hunter+etal+2000}), $450$ and~$850~\mu$m (\citealt{Walsh+etal+2003}). The region contains a massive young stellar object (YSO) which drives a CO bipolar outflow (\citealt{Shepherd+Churchwell+1996}). \cite{Zavagno+etal+2002} suggested that RAFGL~7009S is an embedded young stellar object ``associated with the ultracompact H{\small{II}} region G025.65+1.05, which may be excited by a B1V~star''. Kinematic distance estimation on the basis of ammonia line observations gives a value of about~3.2~kpc (e.g. \citealt{Molinari+etal+1996}).
A prominent bright rapid flare of the main feature of the water maser took place in September~2017 -- the flux density rose from less than 1~kJy to about 20~kJy in a few days (\citealt{Volvach+Volvach+etal+2017a}). This flare and the previous one were preceded by a moderate rise of the methanol maser emission which happened 3~months in advance of the water maser flare (\citealt{Sugiyama+etal+2017}). There were two more bright flares in October-November (\citealt{Volvach+Volvach+etal+2017b, Ashimbaeva+etal+2017}). The latter lasted only a couple of days, reached 76~kJy at the maximum and faded to 16~kJy within a day (\citealt{Ashimbaeva+etal+2017}).
We obtained K-band data of G025.65+1.05 on~2017-09-21 (soon after the peak of the first maser flare) at the Caucasian Mountain Observatory (CMO), Sternberg Astronomical Institute of Moscow State University. Variation of G025.65+1.05 K-band flux density distribution was noticed ``by eye" after comparison with an archive UKIDSS image (\citealt{Sobolev+etal+2017}). First photometric data were given in \cite{Stecklum+etal+2017}. In this paper, we present a new photometric study of G025.65+1.05 IR-variability.
\section{CMO observations}
\label{sect:Obs}
Observations of G025.65+1.05 were obtained at CMO on~2017-09-21 in the infrared K-band using ASTRONIRCAM (\citealt{Nadjip+etal+2017}). The instrumental photometric system is close to the standard MKO (Mauna Kea Observatories) photometric system. The camera was set to imaging mode. The final image is the sum of 50 separate images obtained with an exposure of 3.67~seconds and 3~arcsec dithering. For these separate images, bias, dark, and flat-field corrections were applied. The FWHM (CMO point-spread function) of the resulting image is about~1.1~arcsec.
At the date of the CMO observations, the flux density of the water maser was about 15~kJy (Volvach~A.~E., private communication). We also used archival IR data and data of previous observations from the literature. Data and references are listed in Table~1.
\begin{table}
\bc
\begin{minipage}[]{100mm}
\caption[]{Observational Data\label{tab1}}\end{minipage}
\setlength{\tabcolsep}{5pt}
\small
\begin{tabular}{cccccc}
\hline\noalign{\smallskip}
Date & MJD & Instrument & Filter &$\lambda_{central},$ & $\Delta\lambda$,\\
(YYYY-MM-DD)& & & & ${\mu}m$ & ${\mu}m$\\
\hline\noalign{\smallskip}
2003-06-16& 52806.4 &UFTI, UKIRT &K&2.20&0.34\\
2004-07-11 & 53197.7 &IRIS2, AAT &K$_s$&2.14&0.32\\
2007-08-27 & 54339.3 &WFCAM, UKIRT &K&2.20&0.34\\
2011-03-20 & 55640.4 &OSIRIS, SOAR &C2&2.14&0.05\\
2011-09-18 & 55822.3 &WFCAM, UKIRT&K&2.20&0.34\\
2017-09-21 & 58017.6 &CMO & K&2.19&0.32\\
\noalign{\smallskip}\hline
\end{tabular}
\ec
\tablecomments{0.86\textwidth}{UFTI data is reported in \cite{Varricatt+etal+2010}, AAT data in \cite{Longmore+etal+2006} and SOAR data in \cite{Navarete+etal+2015}}
\end{table}
\section{Data reduction}
\label{sect:data}
We analyzed the flux density distribution along a chosen line passing through the considered source. The flux density was integrated in the tangential direction within the breadth of the considered rectangle with a size of 75~arcsec~$\times$~6~arcsec (shown in Figure~\ref{Fig1}). It contains three IR sources close to the maser position and three isolated stars used for calibration and calibration control. The rectangle breadth was chosen in order to cover at least 90~per cent of the emission of these objects. We calibrated the data to a uniform scale assuming that the star marked~4 does not vary. Photometric analysis showed that this star varied only within 0.1~mag of its mean value across all our data. The extreme variations of this star were the following: the magnitude of star~4 increased by about 0.1~mag on~2007-08-27 and decreased by about the same value on~2011-09-17. Images on these dates were obtained by UKIDSS. So, we used the average signal to derive the UKIDSS data calibration parameters.
The numerical scale of the obtained flux density was derived from the 2MASS K-flux for star~4. The calibration coefficients were found by fitting the integrated observed signal of star~4 to its absolute flux density, calculated from the 2MASS point source catalog value $K=14.525\text{ mag}$. To facilitate comparison of the data we smoothed the images to the same angular resolution as in the CMO observations (a convolution process).
The convolution was conducted with the IRAF software. The resulting flux density profiles are shown in Figures~\ref{Fig2} and~\ref{Fig3}. We did not attempt to obtain exact absolute values of the flux densities because further considerations are based on analysis of the relative flux density changes.
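The relative calibration described above rests on the standard magnitude-flux relation $F = F_0 \cdot 10^{-0.4 m}$. A minimal sketch of such a calibration follows (this is not the actual reduction pipeline; the measured counts below are purely illustrative, and only the 2MASS magnitude $K=14.525$ of star~4 is taken from the text):

```python
def mag_to_relative_flux(mag, zero_point_flux=1.0):
    """Convert a magnitude to a flux in units of the zero-point flux."""
    return zero_point_flux * 10 ** (-0.4 * mag)

def calibration_coefficient(counts_ref, mag_ref):
    """Counts-to-flux coefficient derived from a reference star of known magnitude."""
    return mag_to_relative_flux(mag_ref) / counts_ref

# Calibrate an arbitrary measured signal against star 4 (K = 14.525 mag, 2MASS).
# The counts (12000, 8300) are hypothetical detector values, for illustration only.
coeff = calibration_coefficient(counts_ref=12000.0, mag_ref=14.525)
flux_source = 8300.0 * coeff  # measured counts of another source -> relative flux
```

A 5 mag difference corresponds to a factor of exactly 100 in flux, which is a quick sanity check on the conversion.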
\begin{figure}
\centering
\includegraphics[width=14.5cm, angle=0]{ms0119fig1.eps}
\caption{Vicinity of G025.65+1.05 in K-band. The water maser position from \cite{Jenness+etal+1995} is marked by a cross. The rectangle in the left panel shows the region in which flux density distribution is measured. The numbers~1 and~2 indicate the peak position of the IR sources nearest to the water maser position. Star~4 was used for calibration, stars 5 and 6 for calibration control. The two left panels show data obtained on 2003-06-16 in different spatial and brightness scales. The spatial scale is given at the top of each panel, the distance from the source is taken from \cite{Molinari+etal+1996}. The right panel shows the CMO data with the same brightness scale as the central image. The numerical scale in units $10^{-18} W/m^2/\text{micron}/\text{arcsec}^2$ is based on 2MASS catalog value for star~4. The marked positions of~1-3 sources in the right panel are taken from peak positions of the central image. The flux density decrease of the source~1 in~2017 is clearly seen.}
\label{Fig1}
\end{figure}
\section{Results and discussion}
\label{sect:discussion}
Figures~\ref{Fig2} and \ref{Fig3} show that the IR source nearest to the maser position (marked by~1 in the figures) was significantly fainter in September~2017 and March~2011 in comparison to other epochs. Note that the image from March~2011 was obtained with OSIRIS/SOAR using a narrow continuum filter centered at~2.14 ${\mu}$m (C2~in Table~1) while the other images were obtained with broad K-band filters. In September~2011 the emission of the source was slightly dimmer compared to the majority of the epochs. We think that the low flux density in the SOAR data is likely due to actual variability and not to calibration artifacts.
\begin{figure}
\centering
\includegraphics[width=13cm, angle=0]{ms0119fig2.eps}
\caption{The flux density distribution along the rectangle shown in Figure~\ref{Fig1}. The vertical scale is rough and was obtained from $K=14.525\text{ mag}$ 2MASS catalog value for star~4. The horizontal scale shows the angular distance (in arcseconds) from (J2000): $\text{RA}=18^\text{h}34^\text{m}20.952^\text{s}$ and $\text{DEC}=-5^\circ59'25''.375$. The solid line represents the data after smoothing to the CMO angular resolution, dotted line shows the data without angular smoothing (for the cases when the angular resolution of the data is different from the CMO data one). The dashed bar marked by~1 indicates peak position of the IR source nearest to the water maser. The positions of sources, marked by dashed bars in all panels, are taken from the peak positions of the first panel image (2003-06-16, data with better angular resolution). The source designations are the same as in the Figure~\ref{Fig1}. The K-band intensity of source~1 was significantly less in March~2011 and September~2017 with respect to other epochs.}
\label{Fig2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=13cm, angle=0]{ms0119fig3.eps}
\caption{The superposition of the flux density profiles along the rectangle shown in Figure~\ref{Fig1}. The designations are similar to those in Figure~\ref{Fig1} and Figure~\ref{Fig2}. Star~4 was used for calibration and stars~5 and~6 for calibration control. All data smoothed to the CMO angular resolution. Source~1 was significantly fainter in March~2011 and September~2017 with respect to the other epochs.}
\label{Fig3}
\end{figure}
The epoch of our CMO observation is close to the maser flare. Consideration of the results of the long-term water maser monitoring by \cite{Lekht+etal+2018} shows that the epochs of IR observations in~2011 correspond to periods of increased maser activity (with several kJy in March and about 400~Jy in September).
The other epochs presented in Table~1 correspond to a considerably lower state of maser activity with flux densities less than 200~Jy. Therefore, our analysis shows a possible connection between water maser flares and dips in IR emission.
The half-day rise and two-day duration of the recent maser flare reported by \cite{Ashimbaeva+etal+2017} imply that the flare is caused by changes in the radiative part of the pumping process, since geometrical changes and collisional events are expected to be slower: even in W49N with its extremely powerful (for interstellar objects) shocks, the duration of the fastest detected flare is considerably longer (\citealt{Liljestrom+Cwinn+2000}). Substantial influence of the radiative processes is very likely because they play a very important role in the water maser pumping (\citealt{Gray+2012, Gray+etal+2016}). Since water maser flares presumably happen during dips of the IR emission, it is not likely that the IR radiation makes the main contribution to the source of the maser pumping. This corresponds to results of theoretical considerations which show that the pumping of strong water masers by a radiative source is unlikely (\citealt{Strelnitskii+1984, Deguchi+1981, Shmeld+1976}) because the pump power in these cases is insufficient.
We have to note that an analysis of the role of IR radiation in the pumping is not complete without consideration of the maser sink. The sink of the pumping is often ignored but it can play a crucial role in the maser formation (\citealt{Strelnitskii+1981, Sobolev+Gray+2012}). For the water masers, theoretical considerations show that the sink of the IR photons due to cold dust absorption within the masing region can give rise to strong masers (see \citealt{Deguchi+1981} for the far-IR photon sink and \citealt{Strelnitskii+1977} for the near-IR photon sink). The alternating character of the IR radiation density and water maser activity finds observational support in the drop of the water maser flux accompanied by an increase of the radiation field recently observed in NGC~6334I-MM1 by \cite{Brogan+etal+2018}.
An increase of the water maser sink efficiency is also realized in the case when IR photons are able to escape from the masing region. This possibility is blocked when the radiation density in the surroundings of the masing region is high. Maser flares that are simultaneous with dips of the IR emission can then be explained by the following effect: the decrease of the IR radiation field increases the efficiency of the sink of the pumping mechanism by allowing more IR photons to escape from the masing region.
At present, we do not have data describing changes of the near- and far-IR radiation density during the G025.65+1.05 maser flare, which are necessary for a full analysis of the situation. Anyhow, the low level of the K-band intensity of the source during the flare can indicate that the radiation density in the masing region is probably reduced at longer IR wavelengths as well. Such a situation takes place in the intermediate mass YSO G107.298+5.639. Recent observations of this source by \cite{Stecklum+etal+2018} have shown that the periods of high water maser activity coincide with the periods when the near-IR K-band and mid-IR NEOWISE W1, W2, W3 and~W4 intensities of the source are reduced.
\section{Conclusions}
\label{sect:conclusion}
We report a variability study of the IR K-band flux density in the vicinity of G025.65+1.05. The IR source nearest to the water maser had significantly lower IR flux densities in March 2011 and September 2017 with respect to the other considered epochs of observations. These two epochs are close to the epochs when the water maser was flaring. So, the K-band dips may be related to the water maser flares. We suggest that this relation is explained by the alternating character of the water maser sink efficiency and the IR radiation field density in the vicinity of the maser source.
\begin{acknowledgements}
The authors are grateful to Navarete~F. for cooperation and providing the SOAR data, and to Volvach~A.~E. for the information about the maser activity. We thank the referee for helpful comments which helped to improve the quality of the paper.
A.~M.~Sobolev and S.~Yu.~Gorda were supported by the Russian Science Foundation grant 18-12-00193. A.~P.~Bisyarina was supported by Russian Foundation for Basic Research according to the research project 18-32-00314.
\end{acknowledgements}
\section{Conclusions}\label{sec:conclusions}
In this paper, we propose JITLine~approach, a machine learning-based JIT defect approach for predicting defect-introducing commits and identifying defective lines that are associated with that commit.
Then, we conduct our empirical study to demonstrate that our JITLine~approach is better (RQ1), more cost-effective (RQ2), faster (RQ3), and more fine-grained (RQ4) than the state-of-the-art JIT defect prediction approaches (i.e., EALR, DeepJIT, and CC2Vec).
Therefore, our JITLine~approach may help practitioners to better prioritize defect-introducing commits and better identify defective lines.
In addition, our results highlight the negative impact of excluding testing datasets in model training and the importance of exploring simple solutions (e.g., explainable AI approaches) first over complex and compute-intensive deep learning approaches.
\textbf{Acknowledgement.} Chakkrit Tantithamthavorn was supported by ARC DECRA Fellowship (DE200100941).
\bibliographystyle{IEEEtranS}
\section{JITLine: A JIT Defect Prediction Approach at the Commit and Line Levels}\label{sec:approach}
In this section, we present the implementation of our JITLine~approach.
The goal of our JITLine~approach is to predict defect-introducing commits and identify lines that are associated with that defect-introducing commit (i.e., defective lines).
The underlying intuition of our approach is that code tokens that frequently appeared in defect-introducing commits in the past are likely to be fixed in the future.
\noindent \underline{\textbf{Overview.}}
Our approach begins with extracting source code tokens of code changes as features (i.e., token features).
Since our JIT defect datasets are highly imbalanced (i.e., 8\%-13\% defective ratio), we apply a SMOTE technique that is optimized by a Differential Evolution (DE) algorithm to handle the class imbalance issue on a training dataset.
Then, we build commit-level JIT defect prediction model using the rebalanced training dataset.
Next, we generate a prediction for each commit in a testing dataset.
After that, we normalize the prediction score by the amount of code changes (i.e., churn) in order to consider the inspection effort when generating the ranking of defect-introducing commits.
For each commit in the testing dataset, we extract the importance score of each token features using a state-of-the-art model-agnostic technique, i.e., Local Interpretable Model-Agnostic Explanations (LIME).
Finally, we rank defective lines that are associated with a given commit based on the LIME's importance scores.
We describe each step in details below.
\textbf{(Step 1) Extracting Bag-of-Tokens Features.}
Following the underlying intuition of our approach, we represent each commit using Bag-of-Tokens features (i.e., the frequency of each code token in a commit).
To do so, for each commit, we first perform a code tokenization step to break each changed line into separate tokens.
Then, we parse its removed lines or added lines into a sequence of tokens.
We then apply a set of regular expressions to remove non-alphanumeric characters such as semi-colons (;) and equal signs (=).
As suggested by Rahman~{\em et al.}~\cite{rahman2019natural}, removing these non-alphanumeric characters ensures that the analyzed code tokens are not artificially repetitive.
We also replace the numeric literal and string literal with a special token (i.e., \texttt{$<$NUM$>$} and \texttt{$<$STR$>$} respectively) to reduce the vocabulary size.
Then, we extract the frequency of code tokens for each commit using the \texttt{Countvectorize} function of the Scikit-Learn Python library.
We neither perform lowercase, stemming, nor lemmatization (i.e., a technique to reduce inflectional forms) on our extracted tokens, since the programming language of our studied systems is case-sensitive.
Otherwise, the meaning of code tokens may be discarded if stemming and lemmatization are applied.
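The preprocessing of Step 1 can be sketched using only the Python standard library in place of Scikit-Learn's CountVectorizer (the regular expressions and the sample changed lines below are illustrative, not the exact ones used by the authors):

```python
import re
from collections import Counter

# Replace string literals first, then numeric literals, then strip punctuation.
STR_RE = re.compile(r'"[^"]*"' + r"|'[^']*'")
NUM_RE = re.compile(r"\b\d+(\.\d+)?\b")
NON_ALNUM_RE = re.compile(r"[^A-Za-z0-9_<>\s]")  # keeps <NUM>/<STR> markers intact

def tokenize_changed_line(line):
    """Replace literals with special tokens, strip punctuation, split into tokens."""
    line = STR_RE.sub(" <STR> ", line)
    line = NUM_RE.sub(" <NUM> ", line)
    line = NON_ALNUM_RE.sub(" ", line)
    # No lowercasing, stemming, or lemmatization: the studied languages
    # are case-sensitive, so token case carries meaning.
    return line.split()

def bag_of_tokens(changed_lines):
    """Token-frequency (Bag-of-Tokens) features for one commit's changed lines."""
    counts = Counter()
    for line in changed_lines:
        counts.update(tokenize_changed_line(line))
    return dict(counts)

# Two hypothetical changed lines of a commit.
features = bag_of_tokens(['int retries = 3;', 'log.warn("retrying");'])
```

Here the numeric literal `3` becomes `<NUM>`, the string literal `"retrying"` becomes `<STR>`, and punctuation such as `;` and `=` is removed before counting.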
\textbf{(Step 2) Handling class imbalance using an Optimized SMOTE technique.}
Since our JIT defect datasets are highly imbalanced (i.e., 8\%-13\% defective ratio), we apply a SMOTE technique that is optimized by a Differential Evolution (DE) algorithm to handle the class imbalance issue on a training dataset.
The training dataset is split into a new training set and a validation set.
The new training set is used to train DE+SMOTE, while the validation set is used to select the best hyper-parameter settings.
We select the SMOTE technique, as prior studies have shown that the SMOTE technique outperforms other class rebalancing techniques~\cite{tantithamthavorn2020impact,agrawal2018better}.
The SMOTE technique starts with a set of minority class (i.e., defect-introducing commits).
For each of the minority class of the training datasets, SMOTE calculates the $k$-nearest neighbors.
Then, SMOTE selects $N$ instances of the majority class (i.e., clean commits) based on the smallest magnitude of the Euclidean distances that are obtained from the $k$-nearest neighbors.
Finally, SMOTE combines the synthetic oversampling of the minority defect-introducing commits with the undersampling of the majority clean commits.
We use the implementation of \texttt{SMOTE} function provided by the \texttt{Imbalanced-Learn} Python library~\cite{Imblearn}.
However, prior studies pointed out that the SMOTE technique involves many parameter settings (e.g., $k$ the number of neighbors, $m$ the number of synthetic examples to create, $r$ the power parameter for the Minkowski distance metric), which often impact the accuracy of prediction models~\cite{fu2016tuning,agrawal2018better,tantithamthavorn2020impact,tantithamthavorn2016automated}.
To ensure that we achieve the best performance of the SMOTE algorithm, we optimize the SMOTE technique using a Differential Evolution (DE) algorithm (as suggested by Agrawal~{\em et al.}~\cite{agrawal2018better} and Tantithamthavorn~{\em et al.}~\cite{tantithamthavorn2020impact}).
DE~\cite{Rainer1997} is an evolutionary-based optimization technique, which is based on a differential equation concept.
Unlike a Genetic Algorithm technique that uses crossover as search mechanisms, a DE technique uses mutation as a search mechanism.
First, DE generates an initial population of candidate settings of SMOTE's $k$ nearest neighbors within a range of 1--20.
Then, DE generates new candidates by adding a weighted difference between two population members to the third member based on a crossover probability parameter.
Finally, DE keeps the best candidate SMOTE's parameter setting that is evaluated by a fitness function of maximizing an AUC value for the next generation.
We use the implementation of the differential evolution algorithm provided by Scipy Python library \cite{SciPy}.
As suggested by Agrawal~{\em et al.}~\cite{agrawal2018better}, we set the population size to 10, the mutation power to 0.7 and a crossover probability (or \texttt{recombination} parameter in Scipy) to 0.3.
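The core oversampling idea of SMOTE can be sketched in plain Python as follows. This is a toy version for illustration only: the actual implementation uses the Imbalanced-Learn SMOTE function with $k$ tuned by differential evolution, and the two-dimensional feature vectors below are hypothetical.

```python
import math
import random

def smote_oversample(minority, n_synthetic, k=5, rng=None):
    """Create synthetic minority samples by interpolating between a minority
    sample and one of its k nearest minority neighbours (core idea of SMOTE)."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    synthetic = []
    for _ in range(n_synthetic):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (excluding a itself).
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: math.dist(a, p),
        )[:k]
        b = rng.choice(neighbours)
        gap = rng.random()  # random point on the segment between a and b
        synthetic.append(tuple(ai + gap * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# Hypothetical feature vectors of defect-introducing (minority) commits.
defective = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2), (1.1, 2.1)]
new_points = smote_oversample(defective, n_synthetic=8, k=3)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside the region occupied by the original minority class.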
\textbf{(Step 3) Building commit-level JIT defect prediction models.}
We build a commit-level JIT defect model using both the Bag-of-Tokens features from Step 1 and the commit-level metrics from McIntosh and Kamei~\cite{McIntosh2018}.
The details of commit-level metrics are provided in the replication package.
Prior work found that different classification techniques often produce different performance measures.
Thus, we conduct an experiment on different classification techniques.
We consider the following well-known classification techniques~\cite{tantithamthavorn2016automated, agrawal2018better, agrawal2019dodge, tantithamthavorn2018impact}, i.e., Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), k-Nearest Neighbours (kNN), and AdaBoost.
For each project, we build the JIT model using the implementation provided by Python Scikit-Learn package.
We find that LR, kNN, and SVM cannot be built for the Qt project due to the high-dimensional feature space, and that the training time of such models (which takes a few hours) is considerably longer than that of RF (which takes a few minutes).
Therefore, we only select the Random Forest classification technique for our study.
After we experiment with different settings of the number of trees (a range of 50 to 1,000), we find that our approach is not sensitive to this parameter of random forest.
Thus, we set the number of trees of random forest to 300.
\textbf{(Step 4) Computing a defect density of each commit.}
We then generate the prediction probability for each commit in the testing dataset using the \texttt{predict\_proba} function provided by the Scikit-learn Python library.
Then, we compute the defect density as the probability score normalized by the total changed lines of code of that commit $c$, i.e., $\frac{\mathrm{Y}(c)}{\#\mathrm{LOC}(c)}$.
The use of defect density is suggested by prior studies \cite{mende2010effort,Kamei2013} who argued that the cost of applying quality assurance activities may not be the same for each code changes.
In other words, a prediction model that prioritizes the largest commit as most defect prone would have a very high recall, i.e., those commits likely contain the majority of defects, yet inspecting all those commits would take a considerable amount of time.
In contrast, a model that recommends slightly less defect-prone commits that are smaller to inspect would be more cost-effective~\cite{Kamei2013}.
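The effort-aware ranking of Step 4 can be sketched as follows (the commit ids, probabilities, and sizes are hypothetical):

```python
def rank_by_defect_density(commits):
    """Rank commits by predicted probability normalised by churn (#changed LOC),
    so that small, highly defect-prone commits are inspected first."""
    return sorted(commits, key=lambda c: c["prob"] / c["loc"], reverse=True)

# A huge commit with a high raw score drops below smaller commits with
# slightly lower scores once the inspection effort is accounted for.
commits = [
    {"id": "c1", "prob": 0.90, "loc": 900},
    {"id": "c2", "prob": 0.70, "loc": 40},
    {"id": "c3", "prob": 0.20, "loc": 10},
]
ranking = [c["id"] for c in rank_by_defect_density(commits)]  # -> ['c3', 'c2', 'c1']
```

Note that `c1` has the highest raw probability but the lowest defect density (0.90/900 = 0.001), so it is ranked last.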
\textbf{(Step 5) Generating a ranking of defective lines for a given commit.}
In our studied projects, we found that the average size of the commit varies from 73 to 140 changed lines, but the average ratio of actual defective lines is as low as 51\%-53\%.
Thus, developers still spend unnecessary effort on locating the actual defective lines of that commit~\cite{wattanakriengkrai2020predicting}.
To address this challenge, we propose to generate a ranking of defective lines for a given commit.
For each commit, we compute the importance score of token features using a Local Interpretable Model-agnostic Explanations (LIME) technique.
LIME~\cite{LIME} is a model-agnostic technique that aims to mimic the behavior of the predictions of the defect model by explaining the individual predictions.
Given a commit-level JIT defect prediction model and a commit in the testing dataset, LIME performs the following steps:
\begin{enumerate}
\item {Generate neighbor instances of a test instance $x$.} LIME randomly generates $n$ synthetic instances surrounding the test instance $x$ using a random perturbation method with an exponential kernel function on cosine distance.
\item {Generate labels of the neighbors using a commit-level JIT defect prediction model.} LIME uses the commit-level JIT defect prediction model to generate the predictions of the neighbor instances.
\item {Generates local explanations from the generated neighbors.}
LIME builds a local sparse linear regression model (K-Lasso) using the randomly generated instances and their generated predictions from the commit-level defect model.
The coefficients of the K-Lasso model indicate the importance score of each feature on the prediction of a test instance according to the K-Lasso model.
\end{enumerate}
The LIME's importance score of each token feature ranges from -1 to 1.
A positive LIME score of a token feature ($0<e\leq1$) indicates that the feature has a positive impact on the estimated probability of the test instance (i.e., \textbf{risky tokens}).
On the other hand, a negative LIME score of a token feature ($-1\leq e<0$) indicates that the token feature has a negative impact on the estimated probability (i.e., \textbf{non-risky tokens}).
Once the importance score of each token is computed, we generate the ranking of defect-prone lines using the summation of the importance score for all tokens that appear in that line.
We use the implementation of LIME provided by the \texttt{lime} Python package.
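Given the LIME importance score of each token, the line-level ranking of Step 5 reduces to a sum-and-sort. In the sketch below, the token scores and changed lines are hypothetical, and whitespace tokenization stands in for the tokenizer of Step 1:

```python
def rank_defective_lines(changed_lines, token_scores):
    """Rank the changed lines of one commit by the sum of the LIME importance
    scores of the tokens each line contains (higher sum = more defect-prone)."""
    def line_score(line):
        # Tokens absent from the explanation contribute a score of 0.
        return sum(token_scores.get(tok, 0.0) for tok in line.split())
    return sorted(changed_lines, key=line_score, reverse=True)

# Hypothetical LIME output: positive scores mark risky tokens,
# negative scores mark non-risky tokens.
token_scores = {"strcpy": 0.62, "buffer": 0.31, "return": -0.15, "log": -0.05}
lines = ["strcpy buffer input", "log message", "return result"]
ranked = rank_defective_lines(lines, token_scores)
```

The line containing the risky tokens (`strcpy`, `buffer`) accumulates the highest score and is ranked first for inspection.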
\section{Background}\label{sec:background}
Commits created by developers are often used to describe new features, bug fixes, refactoring, etc.
One commit contains three main pieces of information, i.e., a commit message, a code change, and their meta-data information (e.g., churn, author name).
The commit message is used to describe the semantics of the code changes, while the code change indicates changed lines (i.e., added/modified/deleted lines).
In large-scale software projects, there is a stream of commits that developers need to review and inspect.
However, due to the limited SQA resources, Just-In-Time defect prediction approaches have been proposed to help developers prioritize their limited SQA resources on the most risky commits~\cite{Kamei2013,Kim2008}.
Below, we discuss three state-of-the-art approaches for Just-In-Time defect prediction.
\emph{EALR}~\cite{Kamei2013} is an Effort-Aware JIT defect prediction method using a Logistic Regression model with traditional commit-level software metrics (e.g., churn).
EALR generates a rank of defect-introducing commits by considering the amount of inspection effort---i.e., the predicted probability is normalized by the commit size (i.e., churn).
However, such techniques often rely on handcrafted feature engineering.
\emph{DeepJIT}~\cite{hoang2019deepjit} is an end-to-end deep learning framework for Just-in-Time defect prediction.
DeepJIT automatically generates features using a Convolutional Neural Network (CNN) architecture.
Generally, DeepJIT takes the commit message and the code change as input into two CNN models in order to generate vector representations---i.e., one CNN for generating commit message vectors and another CNN for generating code change vectors.
Finally, the concatenation of both the commit message vector and the code change vector is input into the fully-connected layer to generate the probability of defect-introducing commit.
\emph{CC2Vec}~\cite{CC2Vec} is an approach to learn distributed representations of code commits.
Traditionally, a commit has a hierarchical structure---i.e., a commit consists of changed files, a changed file consists of changed hunks, a changed hunk consists of changed lines, and a changed line consists of changed tokens.
Unlike DeepJIT that ignores the information about the hierarchical structure of code commits, CC2Vec has been proposed to automatically learn the hierarchical structure of code commits using a Hierarchical Attention Network (HAN) architecture.
The goal of CC2Vec is to learn the relationship between the actual code changes and the semantics of those code changes (i.e., the first line of commit messages).
Then, in the feature extraction layer, HAN is used to build vector representations of changed lines; these vectors are then used to construct vector representations of hunks; and then these vectors are aggregated to construct the embedding vector of the removed or added code.
The embedding vectors of the removed and added code are then input into a fully-connected layer to generate a vector that represents the code change.
Recently, Hoang~{\em et al.}~\cite{CC2Vec} have shown that the combination of CC2Vec and DeepJIT outperforms the stand-alone DeepJIT approach.
In particular, they used CC2Vec to generate a vector representation of code changes.
Then, such code changes vector is concatenated with the commit message vectors and the code change vectors that are generated by DeepJIT to generate a final vector representation.
Finally, the concatenation vector is input into the fully-connected layer to generate the probability of defect-introducing commit.
\section{A Replication Study of the State-of-the-art Deep Learning Approach for JIT Defect Prediction}\label{sec:revisiting}
In this section, we present the motivation, approach, and results of our replication study (RS) of CC2Vec for Just-In-Time defect prediction.
\textbf{Motivation.}
One of the key principles of \emph{Just-In-Time} defect prediction models is to \emph{generate predictions as soon as possible} for a newly arrived commit.
Consider $T_1$ as the present (see Figure~\ref{fig:cc2vec}): all historical data up to $T_1$ is used to train a JIT model so that a prediction can be generated immediately for a newly arrived commit.
However, CC2Vec requires both training and unlabelled testing datasets for training CC2Vec models (i.e., the periods of $T_0$-$T_1$ and $T_1$-$T_2$), assuming that all unlabelled testing datasets would be available beforehand.
In particular, Hoang~{\em et al.}~(Section 3.3.3 of the original study~\cite{CC2Vec}) stated that \emph{``CC2Vec is first used to learn distributed representations of the code changes in the whole dataset.
All patches from the training and testing dataset are used since the log messages of the testing dataset are not part of the predictions of the task''}.
This indicates that the unlabelled testing dataset needs to be available beforehand for training CC2Vec models.
However, these assumptions of CC2Vec do not follow the key principles of the Just-In-Time defect prediction:
(1) the predictions of the CC2Vec approach cannot be made immediately for a newly arrived commit; and (2) it is unlikely that the unlabelled testing dataset would be available beforehand when training JIT models.
Thus, it remains unclear how CC2Vec performs for Just-In-Time defect prediction once the key principle of Just-In-Time defect prediction is respected (i.e., excluding the testing dataset from model training).
In addition, several other performance measures (e.g., F-measure) have not been evaluated in the original study.
Thus, we (RS1) perform a replication study to confirm the merit of the previous experimental findings and (RS2) extend their experiment by excluding testing datasets and evaluating with five additional evaluation measures.
\subsection*{\textbf{(RS1) Can we replicate the results of deep learning approaches for Just-In-Time defect prediction?}}
\smallsection{Approach}
To address RS1, we first download the replication package of Hoang~{\em et al.}~\cite{CC2Vec}.
We carefully study the replication package to understand all details.
Then, we execute the source code followed by the instructions and datasets provided by Hoang~{\em et al.}~\cite{CC2Vec}.
Finally, we compute a relative percentage between our results and the original paper as follows: $\% = (\frac{\mathrm{ours}-\mathrm{original}}{\mathrm{original}})\times 100\%$.
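The relative-percentage computation above is a one-liner; for concreteness:

```python
def relative_pct(ours, original):
    """Relative percentage difference between our replicated result
    and the originally reported result."""
    return (ours - original) / original * 100.0

# E.g., a replicated AUC of 0.80 against a reported AUC of 0.81:
diff = relative_pct(0.80, 0.81)  # roughly -1.23%
```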
\smallsection{Results}
\textbf{Similar to the original study~\cite{CC2Vec}, we are able to replicate the results of CC2Vec.}
Table \ref{tab:replication-study} (see the green cells) shows that, in our experiment, CC2Vec achieves an AUC of 0.80 for OpenStack and 0.84 for Qt, while the original paper reported an AUC of 0.81 for OpenStack and 0.82 for Qt.
Our results are only 1\%-2\% different when compared to the original paper.
This finding confirms that the results of CC2Vec are replicable for Just-In-Time defect prediction.
\subsection*{\textbf{(RS2) How does CC2Vec perform for Just-In-Time defect prediction after excluding testing datasets?}}
\smallsection{Approach} To address RS2, we repeat the experiment of Hoang~{\em et al.}~\cite{CC2Vec} in two settings---i.e., the original experiment with training and testing datasets and our experiment with training datasets only.
In addition, we extend their experiment by evaluating the CC2Vec approach using five additional evaluation measures (i.e., F-measure, False Alarm Rate, Distance-to-Heaven, Precision, and Recall).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/cc2vec.pdf}
\caption{The comparison between the workflow of JITLine that can immediately generate predictions and the workflow of CC2Vec+DeepJIT~\cite{CC2Vec} which requires testing dataset to be available beforehand for training CC2Vec+DeepJIT models.}
\label{fig:cc2vec}
\end{figure}
\smallsection{Results} \textbf{After excluding testing datasets when developing the JIT models, we find that the F-measure of CC2Vec is decreased by 38.5\% (0.35$\rightarrow$0.19) for OpenStack and by 45.7\% (0.39$\rightarrow$0.24) for Qt.}
Table \ref{tab:replication-study} (see the red cells) shows the results of the two experimental settings (i.e., training+testing vs. training only) with respect to AUC, F1, FAR, d2h, Precision, and Recall.
We find that the values of several performance measures (i.e, AUC, F-measure, FAR, d2h) are negatively impacted by the exclusion of the testing datasets.
We find that AUC is decreased by 3.9\% for OpenStack and 3.7\% for Qt, while False Alarm Rates (FAR) are increased by 234.62\% for OpenStack and 270.59\% for Qt.
Similarly, the d2h value is increased by 126.92\% for OpenStack and 80\% for Qt.
The higher FAR and d2h values of CC2Vec stem from the substantially increased Recall of 0.99 for OpenStack and 0.96 for Qt---i.e., CC2Vec predicts most of the commits as defect-introducing (higher Recall), but many of these predictions are incorrect (higher FAR, lower Precision).
These findings indicate that the exclusion of testing datasets in model training has a large negative impact on the performance of CC2Vec (i.e., producing higher False Alarm Rates).
Thus, developers have to waste unnecessary effort on inspecting clean commits that are incorrectly predicted as defect-introducing.
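The interplay between FAR and d2h reported above can be made concrete with a small sketch, assuming the common definition of d2h as the normalised Euclidean distance from the ideal point (Recall = 1, FAR = 0); the example values are illustrative:

```python
import math

def far(fp, tn):
    """False Alarm Rate: fraction of clean commits flagged as risky."""
    return fp / (fp + tn)

def d2h(recall, far_value):
    """Distance-to-heaven: normalised distance from the ideal point
    (Recall = 1, FAR = 0); lower is better."""
    return math.sqrt((1 - recall) ** 2 + far_value ** 2) / math.sqrt(2)

# A model that flags almost every commit gets high Recall but also high FAR:
high_recall_model = d2h(recall=0.99, far_value=0.87)
balanced_model = d2h(recall=0.70, far_value=0.20)
```

The balanced model ends up closer to "heaven" despite its lower Recall, which mirrors why CC2Vec's near-perfect Recall does not translate into a good d2h.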
\section{Related Work and Research Questions}
In this section, we discuss four main limitations of prior studies with respect to the literature in order to motivate our approach and research questions.
\textbf{First, several traditional machine learning-based JIT approaches have not been compared with the deep learning approaches for JIT defect prediction.}
Recently, researchers have found that simple approaches often outperform deep learning approaches in SE tasks (e.g., Hellendoorn~\cite{hellendoorn2017deep}, Fu and Menzies~\cite{fu2017easy}, and Liu~{\em et al.}~\cite{liu2018neural}).
Menzies~{\em et al.}~\cite{menzies2018500+} suggested that researchers should explore simple and fast approaches before applying deep learning approaches on SE tasks.
However, Hoang~{\em et al.}~\cite{CC2Vec} did not compare their CC2Vec approach with other simple approaches (e.g., logistic regression and random forest).
Therefore, we wish to investigate if our approach outperforms the deep learning approaches for Just-In-Time defect prediction.
\textbf{Second, the cost-effectiveness of deep learning approaches for JIT defect prediction has not been investigated.}
Prior work pointed out that different code changes often require different amounts of code inspection effort~\cite{mende2010effort,huang2017supervised}---i.e., large code changes often require a high amount of code inspection effort.
However, Hoang~{\em et al.}~\cite{CC2Vec} did not investigate the cost-effectiveness of their CC2Vec approach.
In addition, the CC2Vec approach does not take into consideration the effort required to inspect code changes when prioritizing defect-introducing commits.
Therefore, we wish to investigate if our approach is more cost-effective than the deep learning approaches for Just-In-Time defect prediction.
\textbf{Third, the computational time of deep learning approaches for JIT defect prediction has not been investigated.}
Several researchers raised concerns that deep learning approaches are often complex and very expensive in terms of GPU costs/CPU time.
For example, Jiang~{\em et al.}~\cite{jiang2017automatically}'s approach requires 38 hours for training their deep learning models on NVIDIA GeForce GTX 1070.
Menzies~{\em et al.}~\cite{menzies2018500+} found that a simple approach that is 500+ times faster achieves similar performance to deep learning approaches.
Therefore, we wish to investigate if our approach is faster than the deep learning approaches for Just-In-Time defect prediction.
\textbf{Finally, there exists no machine learning approaches for fine-grained Just-In-Time defect prediction at line level.}
Recently, Pascarella~{\em et al.}~\cite{PASCARELLA2019} proposed a fine-grained JIT defect prediction model based on handcrafted features to prioritize which changed files in a commit should be reviewed first.
However, this approach cannot identify defective lines of the changed files.
Recently, Yan~{\em et al.}~\cite{yan2020just} proposed a fine-grained JIT defect localization approach at the line level to help developers locate and address defects with less effort.
Yan~{\em et al.}~\cite{yan2020just} proposed a two-phase approach---i.e., an ML model trained on software metrics (e.g., \#added\_lines) is first used to identify which commits are the most risky, then an N-gram model trained on textual features is used to localize the riskiest lines.
On the other hand, a recent work by Wattanakriengkrai~{\em et al.}~\cite{wattanakriengkrai2020predicting} pointed out that a machine learning approach outperforms the n-gram approach.
However, their experiment focused solely on file-level defect prediction---not Just-In-Time defect prediction.
Therefore, we wish to investigate if our approach is more effective than the two-phase approach for Just-In-Time defect prediction.
Considering these limitations and the high impact of prior work, we propose JITLine---a machine learning-based Just-In-Time defect prediction approach that can predict both defect-introducing commits and their associated defective lines.
Then, we formulate the following research questions:
\begin{enumerate}[RQ1)]
\item Does our JITLine~\underline{outperform} the state-of-the-art JIT defect prediction approaches?
\item Is our JITLine~more \underline{cost-effective} than the state-of-the-art JIT defect prediction approaches?
\item Is our JITLine~\underline{faster} than the state-of-the-art JIT defect prediction approaches?
\item How effective is our JITLine~for prioritizing defective \underline{lines} of a given defect-introducing commit?
\end{enumerate}
\section{Discussion}
\subsection{Implications to Practitioners}
\emph{Our JITLine~approach may help practitioners to better prioritize defect-introducing commits and better identify defective lines,}
since we find that our JITLine~approach outperforms (RQ1), is more cost-effective (RQ2), is faster (RQ3), and is more fine-grained (RQ4) than the state-of-the-art approaches (i.e., EALR, CC2Vec, and DeepJIT).
Traditionally, Just-In-Time defect prediction methods only prioritize defect-introducing commits, saving a lot of code inspection effort.
However, we find that the average ratio of actual defective lines for each commit is 50\%.
Thus, developers still spend unnecessary effort on inspecting clean lines.
In addition to predicting defect-introducing commits, our JITLine~approach can also accurately predict defective lines within a defect-introducing commit, saving 17\%-20\% of the effort that developers need to spend when compared to the baseline approach~\cite{yan2020just}.
\subsection{Implications to Researchers}
\emph{Researchers should consider the key principles of Just-In-Time defect prediction models (i.e., to generate predictions as soon as possible),}
since the results of our replication study show that, when excluding testing datasets, the F-measure of CC2Vec approach is decreased by 38.5\% for OpenStack and 45.7\% for Qt.
In reality, it is unlikely that the unlabelled testing dataset would be available beforehand when training JIT models.
Thus, when conducting an experiment, testing data should be excluded when developing AI/ML models.
\emph{Researchers should explore simple solutions (i.e., Explainable AI approaches~\cite{jiarpakdee2021perception,jiarpakdee2020xai4se,tantithamthavorn2020explainable,wattanakriengkrai2020predicting,rajapaksha2021sqaplanner}) first over complex and compute-intensive deep learning approaches for SE tasks}, since we find that our JITLine~approach outperforms the deep learning approaches for Just-In-Time defect prediction.
This recommendation has been advocated by prior studies in other SE tasks~\cite{hellendoorn2017deep,fu2017easy,liu2018neural,menzies2018500+}.
For example, Menzies~{\em et al.}~\cite{menzies2018500+} suggested that researchers should explore simple and fast approaches before applying deep learning approaches on SE tasks.
Hellendoorn~\cite{hellendoorn2017deep} found that a careful implementation of NLP approaches outperforms deep learning approaches.
Liu~{\em et al.}~\cite{liu2018neural} found that a simple $k$-nearest neighbours approach outperforms neural machine translation approaches.
\subsection{Threats to Validity}\label{sec:threats}
\emph{Threats to construct validity} relate to the impact of the parameter settings of the techniques that our approach relies upon (i.e., SMOTE, DE, Random Forest, and LIME)~\cite{tantithamthavorn2016automated, fu2016tuning, tantithamthavorn2018impact}.
To mitigate this threat, we apply a Differential Evolution algorithm to optimize the parameter setting of the SMOTE technique.
We use the parameter settings of DE, suggested by Agrawal~{\em et al.}~\cite{agrawal2018better}.
We use the default settings of LIME (i.e., the number of samples = 5,000).
For the baseline approaches, we use the best parameter settings provided by the implementation of the DeepJIT~\cite{hoang2019deepjit} and CC2Vec approaches~\cite{CC2Vec}.
Prior work raised concerns that the collection of ground-truth data on defect-introducing commits could be delayed \cite{tan2015online,cabral2019class}.
Thus, it is possible that our studied JIT datasets are missing some defect-introducing commits whose defects have not yet been fixed (i.e., false negatives).
However, the goal of this paper is not to improve the data construction approach.
Instead, we use the same datasets that were used in the prior work for a fair comparison.
Thus, future work should consider addressing this concern.
\emph{Threats to external validity} relate to the limited number of studied datasets (i.e., OpenStack and Qt), which we use to ensure a fair comparison with the CC2Vec approach~\cite{CC2Vec}.
Thus, other commit-level datasets can be explored in future work.
\emph{Threats to internal validity} relate to the randomization of several techniques that our approach relies upon~\cite{liem2020run}.
After we repeat our experiments with different random seeds, we observe minor differences (e.g., $\pm0.01$ for AUC).
Nevertheless, our JITLine approach is still the best performer for all RQs.
The used random seed number is reported in our replication package at Zenodo: \url{http://doi.org/10.5281/zenodo.4433498}.
We follow the experimental setting of the original study~\cite{hoang2019deepjit,CC2Vec} (i.e., one single training/testing data split without cross-validation).
Therefore, statistical analysis and effect size analysis are not applied for RQ1, RQ2, and RQ3, since we have only one performance value for each project.
\section{Experimental Setup and Results}\label{sec:results}
In this section, we describe the studied datasets and present the experimental results with respect to our four research questions.
\textbf{Studied Datasets.}
In this paper, we select the dataset of McIntosh and Kamei~\cite{McIntosh2018} due to the following reasons.
First, we would like to establish a fair comparison, using the same training and testing datasets with previous work~\cite{hoang2019deepjit,CC2Vec}, where this dataset was used.
Second, we would like to ensure that our results rely on high quality datasets.
Recently, researchers raised concerns that the SZZ algorithm~\cite{SZZalgo} may produce many false positives and false negatives~\cite{rodriguez2018reproducibility}.
However, the datasets of McIntosh and Kamei~\cite{McIntosh2018} have been manually verified through many filtering steps (e.g., ignore comment updates, ignore white space/indentation changes, remove mislabelled defect-introducing commits).
Finally, we select the datasets of McIntosh and Kamei~\cite{McIntosh2018} with two open-source software systems, i.e., OpenStack and Qt.
OpenStack is an open-source software platform for cloud infrastructure services.
Qt is a cross-platform application development framework written in C++.
Table~\ref{tab:dataset} presents the statistics of the studied datasets.
Below, we present the approach and the results with respect to our four research questions.
\input{sections/RQ1.tex}
\input{sections/RQ2.tex}
\input{sections/RQ3.tex}
\input{sections/RQ4.tex}
\subsection*{\textbf{(RQ4) How effective is our JITLine~for prioritizing defective \underline{lines} of a given defect-introducing commit?}}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{results/rq4_final_change_width.pdf}
\caption{(RQ4) The results of our JITLine~at the line level when compared to the N-gram-based line-level JIT defect prediction approach of Yan~{\em et al.}~\cite{yan2020just} with respect to Top-10 Accuracy($\nearrow$), Recall@20\%LOC($\nearrow$), Effort@20\%Recall\textsubscript{line}($\searrow$), and IFA($\searrow$). The higher ($\nearrow$) or the lower ($\searrow$) the values are, the better the approach is.}
\label{fig:rq4}
\end{figure}
\smallsection{Approach}
To address this RQ, we first need to collect the line-level ground-truth data.
To do so, we start by cloning the git repositories of the studied projects.
Then, we use PyDriller~\cite{PyDriller}, a Python library for mining Git repositories, to identify the defect-fixing commits that are associated with each defect-introducing commit provided by McIntosh and Kamei~\cite{McIntosh2018}.
Once identified, we examine the diff (a.k.a. code changes) made by the defect-fixing commits to identify lines that are modified/deleted by defect-fixing commits.
Similar to prior work~\cite{da2016framework,rodriguez2018reproducibility}, the lines that were modified or deleted by defect-fixing commits are identified as defective lines, otherwise clean.
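The labelling step can be sketched without PyDriller: given the unified diff of a defect-fixing commit, the '-' lines (removed or modified by the fix) are the ones labelled defective. This is a simplified stand-in for the study's PyDriller-based pipeline; the diff below is hypothetical.

```python
def defective_lines_from_fix(diff_text):
    """Label as defective the lines removed/modified by a defect-fixing
    commit, i.e., the '-' lines of its unified diff (excluding the
    '---' file header); all other lines are treated as clean."""
    defective = []
    for line in diff_text.splitlines():
        if line.startswith("-") and not line.startswith("---"):
            defective.append(line[1:].strip())
    return defective

fix_diff = """\
--- a/util.py
+++ b/util.py
@@ -1,3 +1,3 @@
 def size(xs):
-    return len(xs) + 1
+    return len(xs)
"""
labels = defective_lines_from_fix(fix_diff)
```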
Then, we compare our JITLine~with the state-of-the-art line-level JIT defect prediction approach by Yan~{\em et al.}~\cite{yan2020just}.
We implement the N-gram approach using the implementation provided by Hellendoorn~{\em et al.}~\cite{hellendoorn2017deep}.
Since Yan~{\em et al.}~\cite{yan2020just} found that the Jelinek-Mercer (JM) smoothing method is the best choice, and the N-gram length has no substantial impact on the average performance, we followed their advice by using the Jelinek-Mercer (JM) smoothing method and the N-gram length of 6.
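For reference, Jelinek-Mercer smoothing interpolates the maximum-likelihood n-gram probability with a background distribution. The following is a minimal sketch (the interpolation weight and counts are illustrative; the study's actual implementation follows Hellendoorn~{\em et al.}):

```python
def jm_prob(token, history_counts, unigram_probs, lam=0.5):
    """Jelinek-Mercer smoothing: interpolate the maximum-likelihood
    probability of `token` after a given history with a background
    unigram probability, so unseen tokens never get probability zero."""
    total = sum(history_counts.values())
    p_ml = history_counts.get(token, 0) / total if total else 0.0
    return lam * p_ml + (1 - lam) * unigram_probs.get(token, 1e-6)

counts = {"a": 3, "b": 1}          # counts of tokens following some history
unigrams = {"a": 0.5, "b": 0.5}    # background unigram distribution
p_seen = jm_prob("a", counts, unigrams)
p_unseen = jm_prob("c", counts, unigrams)
```

Lines whose tokens receive low smoothed probabilities (high "unnaturalness") are the ones the N-gram approach ranks as risky.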
Finally, we evaluate our approach and Yan~{\em et al.}~\cite{yan2020just} using the following evaluation measures at the line level~\cite{yan2020just,wattanakriengkrai2020predicting}:
\begin{enumerate}
\item Top-10 Accuracy measures the proportion of actual defective lines that are ranked in the top-10 ranking.
Traditionally, developers may need to inspect all changed lines for a given commit---which is not ideal when SQA resources are limited.
A high top-10 accuracy indicates that many of the defective lines are ranked at the top, which is considered effective.
\item Recall@20\%LOC measures the proportion of defective lines that can be found (i.e., correctly predicted) given a fixed amount of effort (i.e., the top 20\% of changed lines of a given defect-introducing commit).
A high value of Recall@20\%LOC indicates that an approach can rank many actual defective lines at the top.
\item Effort@20\%Recall\textsubscript{line} measures the percentage of the amount of effort that developers have to spend to find the actual 20\% defective lines of a given defect-introducing commit.
A low value of Effort@20\%Recall\textsubscript{line} indicates that developers spend only a small amount of effort to find the 20\% actual defective lines.
\item Initial False Alarm (IFA) measures the number of clean lines that developers need to inspect until finding the first actual defective line for a given commit.
A low IFA value indicates that developers inspect only a few clean lines before finding the first actual defective line.
\end{enumerate}
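Three of the measures above can be sketched directly from a risk-ranked list of ground-truth labels for one commit (True = defective). This is an illustrative implementation under our reading of the definitions, not the evaluation scripts of the cited studies:

```python
import math

def recall_at_effort(ranked, effort=0.20):
    """Recall@20%LOC: fraction of all defective lines found when
    inspecting only the top `effort` fraction of ranked lines."""
    cutoff = max(1, int(len(ranked) * effort))
    total = sum(ranked)
    return sum(ranked[:cutoff]) / total if total else 0.0

def effort_at_recall(ranked, recall=0.20):
    """Effort@20%Recall: fraction of lines inspected until `recall`
    of all defective lines have been found."""
    target = math.ceil(sum(ranked) * recall)
    found = 0
    for i, is_defective in enumerate(ranked, start=1):
        found += is_defective
        if found >= target:
            return i / len(ranked)
    return 1.0

def ifa(ranked):
    """Initial False Alarm: clean lines inspected before the first hit."""
    for i, is_defective in enumerate(ranked):
        if is_defective:
            return i
    return len(ranked)

# Ranked ground truth for one commit, riskiest-first (True = defective):
ranked = [True, False, False, True, False, False, False, False, False, False]
```

With this ranking, inspecting the top 20% of lines (2 of 10) finds half of the defective lines, 20% recall is reached after inspecting 10% of the lines, and the first line inspected is already defective (IFA = 0).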
\smallsection{Results}
\textbf{Our JITLine~approach is 133\%-150\% more accurate than the baseline approach by Yan~{\em et al.}~\cite{yan2020just} for identifying actual defective lines in the top-10 recommendations.}
Figure~\ref{fig:rq4} shows that our approach achieves a median Top-10 Accuracy of 0.7 for OpenStack and 0.5 for Qt, while the baseline approach achieves a Top-10 Accuracy of 0.3 for OpenStack and 0.2 for Qt.
In addition, we find that our JITLine~approach can find 25\%-50\% more actual defective lines than the baseline approach, given the same amount of effort at 20\%LOC.
Figure~\ref{fig:rq4} shows that our approach achieves a median Recall@20\%LOC of 0.20 for OpenStack and 0.21 for Qt, while the baseline approach achieves a median Recall@20\%LOC of 0.16 for OpenStack and 0.14 for Qt.
Our Wilcoxon signed-rank test also confirms that the differences in Top-10 Accuracy and Recall@20\%LOC between our approach and the baseline are statistically significant ($p$-value $<$ 0.05) with a large Cliff's $|\delta|$ effect size ($|\delta|=0.49$-$0.67$) for both measures.
\textbf{Our JITLine~approach requires 17\%-27\% less effort than the baseline approach in order to find the same number of actual defective lines.}
Figure~\ref{fig:rq4} shows that our approach achieves a median Effort@20\%Recall\textsubscript{line} of 0.20 for OpenStack and 0.19 for Qt, while the baseline approach achieves a median Effort@20\%Recall\textsubscript{line} of 0.24 for OpenStack and 0.26 for Qt.
Similarly, our approach achieves a median IFA of 0 for OpenStack and 1 for Qt, while the baseline approach achieves a median IFA of 3 for OpenStack and 4 for Qt.
Our Wilcoxon signed-rank test also confirms that the differences in Effort@20\%Recall\textsubscript{line} and IFA between our approach and the baseline are statistically significant ($p$-value $<$ 0.05) with a large Cliff's $|\delta|$ effect size ($|\delta|=0.52$-$0.69$) for Effort@20\%Recall\textsubscript{line} and a medium Cliff's $|\delta|$ effect size ($|\delta|=0.36$-$0.39$) for IFA.
\section{Introduction}
Modern software development cycles tend to release software products in a short-term period.
Such short-term software development cycles often pose critical challenges to modern Software Quality Assurance (SQA) practices.
Therefore, continuous code quality tools (e.g., CI/CD, modern code review, static analysis) have been heavily adopted to early detect software defects.
However, SQA teams cannot effectively inspect every commit given limited SQA resources.
Just-in-time (JIT) defect prediction~\cite{Kamei2010, kim2007predicting} is proposed to predict if a commit will introduce defects in the future.
Such commit-level predictions are useful to help practitioners prioritize their limited SQA resources on the most risky commits during the software development process.
In the past decades, several machine learning approaches have been employed for developing JIT defect prediction models~\cite{Kim2008, Shivaji2013, Fukushima2014, Rajbahadur2017}.
However, these approaches often rely on handcrafted commit-level features (e.g., Churn).
Recently, several deep learning approaches have been proposed for Just-In-Time defect prediction (e.g., DeepJIT~\cite{hoang2019deepjit} and CC2Vec~\cite{CC2Vec}).
Hoang~{\em et al.}~\cite{CC2Vec} found that their CC2Vec approach outperforms DeepJIT for Just-In-Time defect prediction.
CC2Vec requires both training and unlabelled testing datasets for training CC2Vec models, assuming that all unlabelled testing datasets would be available beforehand.
However, these assumptions of CC2Vec do not follow the key principles of the Just-In-Time defect prediction:
(1) the predictions of the CC2Vec approach cannot be made immediately for a newly arrived commit; and (2) it is unlikely that the unlabelled testing dataset would be available beforehand when training JIT models.
Thus, we perform a replication study to confirm the merit of the previous experimental findings and extend their experiment by excluding testing datasets and evaluating with five additional evaluation measures.
\begin{enumerate}[{\bf RS1)}]
\item {\bf Can we replicate the results of deep learning approaches for Just-In-Time defect prediction?}\\
Similar to the original study~\cite{CC2Vec}, we are able to replicate the results of CC2Vec.
\item {\bf How does CC2Vec perform for Just-In-Time defect prediction after excluding testing datasets?}\\
After excluding testing datasets when developing the JIT models, we find that the F-measure of CC2Vec is decreased by 38.5\% for OpenStack and 45.7\% for Qt.
In addition, CC2Vec achieves a high False Alarm Rate (FAR) of 0.87 for OpenStack and 0.63 for Qt, indicating that 63\%-87\% of clean commits are incorrectly predicted as defect-introducing.
Thus, developers would still waste substantial effort inspecting clean commits that are incorrectly predicted as defect-introducing.
\end{enumerate}
In addition, Hoang~{\em et al.}~\cite{CC2Vec} did not compare their approach with simple JIT approaches, did not evaluate the cost-effectiveness, did not report the computational time, and cannot perform fine-grained predictions at the line level.
Thus, the practical value of the CC2Vec approach remains unclear when considering the amount of inspection effort required from developers.
In this paper, we propose JITLine---a machine learning-based Just-In-Time defect prediction approach that can both predict defect-introducing commits and identify defective lines that are associated with that commit.
We evaluate our JITLine~approach against the state-of-the-art commit-level JIT defect prediction approaches (i.e., EALR~\cite{Kamei2013}, DeepJIT~\cite{hoang2019deepjit}, and CC2Vec~\cite{CC2Vec}) with respect to six traditional measures (i.e., AUC, F-measure, False Alarm Rate, Distance-to-Heaven, Precision, and Recall) and three cost-effectiveness measures (i.e., PCI@20\%LOC, Effort@20\%Recall, and P\textsubscript{Opt}).
In addition, we also compare our approach with a baseline line-level JIT defect localization approach by Yan~{\em et al.}~\cite{yan2020just} using four line-level effort-aware measures (i.e., Top-10 Accuracy, Recall@20\%LOC, Effort@20\%Recall\textsubscript{line}, and Initial False Alarm).
Through a case study of 37,524 commits spanning two large-scale open-source software projects (i.e., OpenStack and Qt),
we address the following four research questions:
\begin{enumerate}[{\bf RQ1)}]
\item {\bf Does our JITLine~\underline{outperform} the state-of-the-art JIT defect prediction approaches?}\\
Our JITLine~approach achieves an F-measure 26\%-38\% higher than the state-of-the-art approaches (i.e., CC2Vec).
Our JITLine~achieves a False Alarm Rate (FAR) 94\%-97\% lower than the CC2Vec approach.
\item {\bf Is our JITLine~more \underline{cost-effective} than the state-of-the-art JIT defect prediction approaches?}\\
Our JITLine~is 17\%-51\% more cost-effective than the state-of-the-art approaches in terms of PCI@20\%LOC.
In addition, our JITLine~can reduce the amount of inspection effort by 89\%-96\% to find the same number of actual defect-introducing commits (i.e., 20\% Recall) when compared to the state-of-the-art approaches.
\item {\bf Is our JITLine~\underline{faster} than the state-of-the-art JIT defect prediction approaches?}\\
Our JITLine~is 70-100 times faster than the deep learning approaches for Just-In-Time defect prediction.
\item {\bf How effective is our JITLine~for prioritizing defective \underline{lines} of a given defect-introducing commit?}\\
Our JITLine~approach is 133\%-150\% more accurate than the baseline approach by Yan~{\em et al.}~\cite{yan2020just} for identifying actual defective lines in the top-10 recommendations.
Our JITLine~approach requires 17\%-27\% less effort than the baseline approach in order to find the same number of actual defective lines.
\end{enumerate}
\smallsection{Contributions} The contributions of this paper are as follows:
\begin{itemize}
\item We conduct a replication study of the state-of-the-art deep learning approach (CC2Vec~\cite{CC2Vec}) for JIT defect prediction and extend their experiment by excluding testing datasets with five evaluation measures (Section~\ref{sec:revisiting}).
\item We propose JITLine---a machine learning-based Just-In-Time defect prediction approach that can both predict defect-introducing commits and identify their associated defective lines (Section~\ref{sec:approach}).
\item We evaluate our JITLine~approach at the commit level with the state-of-the-art JIT defect prediction approaches with respect to six traditional measures, three cost-effectiveness measures, and at the line level with four effort-aware line-level measures (Section~\ref{sec:results}).
\item Our results show that our JITLine~approach outperforms (RQ1), is more cost-effective (RQ2), is faster (RQ3), and is more fine-grained (RQ4) than the state-of-the-art approaches.
\end{itemize}
\section{Case Study Design}\label{sec:design}
In this section, we describe the design of our case study experiment that we perform in order to address our research questions.
Figure~\ref{fig:overview} provides an overview of the approach that we apply to each studied system.
The crux of our approach is that we calculate a ground truth performance such that the performance estimates derived from model validation techniques can be compared against it.
We describe each step in the approach below.
\subsection{Studied Systems}
In selecting the studied systems, we identified two important criteria that needed to be satisfied:
\begin{itemize}
\item \textbf{Criterion 1 --- Sufficient EPV}:
Since we would like to study cases where EPV is low-risk (i.e., $\ge 10$) and high-risk ($< 10$), the systems that we select for analysis should begin with a low-risk EPV.
Our rationale is that we prefer under-sampling to over-sampling when producing our sample dataset.
For example, if we were to select systems with an initial EPV of 5, we would need to over-sample the defective class in order to raise the EPV to 10.
However, the defective class of a system with an initial EPV of 15 can be under-sampled in order to lower the EPV to 10.
\item \textbf{Criterion 2 --- Sane defect data}:
Since it is unlikely that more software modules have defects than are free of defects, we choose to study systems that have a rate of defective modules below 50\%.
\end{itemize}
We began our study using the 101 publicly-available defect datasets described in Section~\ref{sec:background}.
To satisfy criterion 1, we exclude the 78 datasets that we found to have an EPV value lower than 10 in Section~\ref{sec:background}.
To satisfy criterion 2, we exclude an additional 5 datasets because they have a defective ratio above 50\%.
Table~\ref{tb:studiedsystems} provides an overview of the 18 systems that satisfy our criteria for analysis.
To combat potential bias in our conclusions, the studied systems include proprietary and open source systems, with varying size, domain, and defective ratio.
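As a sketch of how Criterion 1 could be applied, the snippet below computes EPV (taken here as the ratio of events, i.e., defective modules, to the number of predictor variables) and under-samples the defective class down to a target EPV. The module records and the 10-variable setting are hypothetical, not taken from the studied datasets:

```python
import random

def epv(n_defective: int, n_variables: int) -> float:
    """Events Per Variable: defective modules (events) per predictor variable."""
    return n_defective / n_variables

def undersample_defective(modules, n_variables, target_epv=10, seed=0):
    """Randomly drop defective modules until the dataset reaches the target EPV."""
    defective = [m for m in modules if m["buggy"]]
    clean = [m for m in modules if not m["buggy"]]
    keep = int(target_epv * n_variables)
    random.Random(seed).shuffle(defective)
    return clean + defective[:keep]

# Hypothetical system: 150 defective out of 1,000 modules with 10 variables
# gives EPV = 15, which is under-sampled down to the low-risk threshold of 10.
modules = [{"buggy": True} for _ in range(150)] + [{"buggy": False} for _ in range(850)]
sample = undersample_defective(modules, n_variables=10, target_epv=10)
```

Under-sampling here only discards defective modules, which matches the stated preference over over-sampling (no synthetic defective modules are fabricated).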
\subsection*{\textbf{(RQ3) Is our JITLine~\underline{faster} than the state-of-the-art JIT defect prediction approaches?}}
\begin{table}[t]
\caption{(RQ3) The average CPU and GPU computational time ($\pm$95\% confidence interval) of the model training of JIT defect prediction approaches after repeating the experiment 5 times.}
\label{tab:RQ3}
\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{l|c|c|c|c|}
\cline{2-5}
& \multicolumn{2}{c|}{\textbf{CPU}} & \multicolumn{2}{c|}{\textbf{GPU}} \\
\cline{2-5}
& \textbf{Openstack} & \textbf{Qt} & \textbf{Openstack} & \textbf{Qt} \\
\hline
\multicolumn{1}{|l|}{\textbf{JITLine}} & 36$\pm$1 secs & 175$\pm$1 secs & - & - \\
\multicolumn{1}{|l|}{\textbf{DeepJIT}} & 70$\pm$7 mins & 143$\pm$7 mins & 2$\pm$0.01 mins & 5$\pm$0.01 mins \\
\multicolumn{1}{|l|}{\textbf{CC2Vec}} & 146$\pm$16 mins & 300$\pm$6 mins & 13$\pm$0.05 mins & 30$\pm$0.10 mins \\
\multicolumn{1}{|l|}{\textbf{EALR}} & 8$\pm$1 secs & 97$\pm$1 secs & - & - \\
\hline
\end{tabular}
}
\end{table}
\smallsection{Approach} To answer this RQ, we measure the CPU computational time of the model training of our approach, and the CPU and GPU computational time of the model training of deep learning approaches (i.e., DeepJIT and CC2Vec).
For our approach, we set the \texttt{n\_jobs} argument of the \texttt{RandomForestClassifier} function of the Scikit-Learn library to -1 to ensure that all available CPU cores are used in parallel.
For the deep learning baselines, we use the \texttt{cpu} function provided by the PyTorch deep learning library to ensure that all available CPU cores are used in parallel.
We perform the experiment using the following equipment: AMD Ryzen 9 5950X 16 Cores/32 Threads Processor, RAM 64GB, NVIDIA GeForce RTX 2080 Ti 11GB.
To ensure that our measurement is accurate and strictly controlled, we reserve the computing resources and ensure that the resources are idle with no other running tasks.
To combat the randomization bias, we repeat the experiment 5 times.
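The measurement procedure above can be sketched as follows. The stand-in workload replaces the actual \texttt{RandomForestClassifier} training so the sketch stays dependency-free, and the 95\% confidence half-interval uses a simple normal approximation (an assumption on our part, since the paper does not state how the interval is computed):

```python
import time

def time_training(fit_fn, repeats=5):
    """Run `fit_fn` `repeats` times; return per-run wall-clock seconds,
    their mean, and a 95% confidence half-interval (normal approximation)."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fit_fn()
        timings.append(time.perf_counter() - start)
    mean = sum(timings) / len(timings)
    var = sum((t - mean) ** 2 for t in timings) / (len(timings) - 1)
    ci95 = 1.96 * (var / len(timings)) ** 0.5
    return timings, mean, ci95

# In the actual experiment, `fit_fn` would be something like
#   lambda: RandomForestClassifier(n_jobs=-1).fit(X_train, y_train)
# a pure-Python stand-in workload keeps the sketch self-contained.
timings, mean, ci95 = time_training(lambda: sum(i * i for i in range(200_000)))
```

Using `time.perf_counter` (a monotonic, high-resolution clock) avoids distortion from system clock adjustments during long training runs.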
\smallsection{Results} \textbf{Our JITLine~approach is 70-100 times faster than the deep learning approaches for Just-In-Time defect prediction.}
Table~\ref{tab:RQ3} presents the average CPU and GPU computational time (minutes) of the model training of JIT defect prediction approaches after repeating the experiment 5 times.
We find that the model training time of our JITLine~approach takes approximately 1-3 minutes, while the model training time of the deep learning approaches for Just-In-Time defect prediction require 1-5 hours (70 to 300 minutes).
Given the same running cost (on CPU), this finding suggests that our approach is more cost-efficient than the deep learning approaches.
The computation time of the deep learning approaches can be accelerated by using a high-end GPU hardware.
However, while the model training time of the deep learning approaches on a GPU device is faster than on the CPU hardware, this speed-up comes at an additional GPU cost.
Nevertheless, the model training time of deep learning approaches on GPU (2-30 minutes) still takes relatively longer than the model training time of our approach on CPU (1-3 minutes).
\subsection*{\textbf{(RQ2) Is our JITLine~more \underline{cost-effective} than the state-of-the-art JIT defect prediction approaches?}}
\smallsection{Approach}
To answer this RQ, we evaluate our JITLine~approach and compare it with the state-of-the-art JIT defect prediction approaches (as mentioned in RQ1) using the following cost-effectiveness measures~\cite{mende2010effort,Kamei2013,agrawal2019dodge,huang2017supervised,yang2016effort}:
\begin{enumerate}
\item PCI@20\%LOC measures the proportion of actual defect-introducing commits that can be found given a fixed amount of effort, i.e., the Top 20\% LOC of the whole project.
A high value of PCI@20\%LOC indicates that an approach can rank many actual defect-introducing commits so developers will spend less effort to find actual defect-introducing commits.
\item Effort@20\%Recall measures the amount of effort (measured as LOC) that developers have to spend to find the actual 20\% defect-introducing commits divided by the total changed LOC of the whole testing dataset.
A low value of Effort@20\%Recall indicates that the developers will spend a little amount of effort to find the 20\% actual defect-introducing commits.
\begin{figure}[t]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=\columnwidth]{results/openstack_RQ2_new.pdf}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=\columnwidth]{results/qt_RQ2_new.pdf}
\end{subfigure}
\caption{(RQ2) The cost-effectiveness of our JITLine~approach compared to the state-of-the-art approaches for Just-In-Time defect prediction with respect to PCI@20\%LOC, Effort@20\%Recall, and P\textsubscript{opt}.}
\label{fig:rq2}
\end{figure}
\item P\textsubscript{opt} is defined as 1-$\Delta_{\mathrm{opt}}$, where $\Delta_{\mathrm{opt}}$ is the area of the effort-based (i.e., churn) cumulative lift chart between an optimal model and a prediction model.
The effort-based (i.e., churn) cumulative lift chart is the relationship between the cumulative percentage of defect-introducing commits from a prediction model ($y$-axis) and the cumulative percentage of the inspection effort ($x$-axis).
Similar to prior studies~\cite{mende2010effort,agrawal2019dodge,yang2016effort}, we use the normalized version of P\textsubscript{opt}, which is defined as $1-\frac{\mathrm{Area}(\mathrm{Optimal})-\mathrm{Area}(\mathrm{Our})}{\mathrm{Area}(\mathrm{Optimal})-\mathrm{Area}(\mathrm{Worst})}$.
For the \emph{optimal} model and the \emph{worst} model, all commits are ranked by the actual defect density in descending and ascending order, respectively.
For \emph{our} model, all commits are ranked by the estimated defect density $\left(\frac{\mathrm{Y}(c)}{\mathrm{\#LOC}(c)}\right)$ in descending order, where $\mathrm{Y}(c)$ is the predicted defect probability of a commit $c$.
\end{enumerate}
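The three cost-effectiveness measures above can be sketched in Python as follows. The commit records (\texttt{score} as the predicted defect probability, \texttt{loc} as the churned lines, \texttt{buggy} as the ground-truth label) are hypothetical stand-ins for the studied datasets:

```python
def pci_at_k_loc(commits, k=0.20):
    """PCI@k%LOC: proportion of actual defect-introducing commits found when
    inspecting commits, ranked by predicted defect density, up to k% of total LOC."""
    ranked = sorted(commits, key=lambda c: c["score"] / c["loc"], reverse=True)
    budget = k * sum(c["loc"] for c in commits)
    spent, found = 0, 0
    for c in ranked:
        if spent + c["loc"] > budget:
            break
        spent += c["loc"]
        found += c["buggy"]
    return found / sum(c["buggy"] for c in commits)

def effort_at_k_recall(commits, k=0.20):
    """Effort@k%Recall: fraction of total LOC inspected before finding
    k% of the actual defect-introducing commits."""
    ranked = sorted(commits, key=lambda c: c["score"] / c["loc"], reverse=True)
    target = k * sum(c["buggy"] for c in commits)
    spent, found = 0, 0
    for c in ranked:
        spent += c["loc"]
        found += c["buggy"]
        if found >= target:
            break
    return spent / sum(c["loc"] for c in commits)

def area(commits, key):
    """Trapezoidal area under the effort-based cumulative lift chart."""
    ranked = sorted(commits, key=key, reverse=True)
    total_loc = sum(c["loc"] for c in commits)
    total_bug = sum(c["buggy"] for c in commits)
    x = y = a = 0.0
    for c in ranked:
        nx, ny = x + c["loc"] / total_loc, y + c["buggy"] / total_bug
        a += (nx - x) * (y + ny) / 2
        x, y = nx, ny
    return a

def p_opt(commits):
    """Normalized Popt = 1 - (Area(Optimal)-Area(Our)) / (Area(Optimal)-Area(Worst))."""
    actual = lambda c: c["buggy"] / c["loc"]
    predicted = lambda c: c["score"] / c["loc"]
    opt = area(commits, actual)
    worst = area(commits, lambda c: -actual(c))
    ours = area(commits, predicted)
    return 1 - (opt - ours) / (opt - worst)

# Hypothetical test set of four commits.
commits = [
    {"score": 0.9, "loc": 10, "buggy": 1},
    {"score": 0.8, "loc": 10, "buggy": 1},
    {"score": 0.1, "loc": 40, "buggy": 0},
    {"score": 0.2, "loc": 40, "buggy": 0},
]
```

In this toy example the predicted ranking places both defect-introducing commits within the first 20\% of LOC, so PCI@20\%LOC is 1.0 and P\textsubscript{opt} matches the optimal ordering.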
\smallsection{Results}
\textbf{Our JITLine~approach is 17\%-51\% more cost-effective than the state-of-the-art approaches in terms of PCI@20\%LOC.}
Figure~\ref{fig:rq2} presents the cost-effectiveness of our JITLine~approach compared to the state-of-the-art approaches for Just-In-Time defect prediction with respect to the PCI@20\%LOC, Effort@20\%Recall and P\textsubscript{opt} measures.
We find that our JITLine~approach is more cost-effective than the state-of-the-art approaches for three cost-effectiveness measures.
We find that our JITLine~approach achieves a PCI@20\%LOC of 0.56 for OpenStack and 0.70 for Qt, while the state-of-the-art approaches achieve a PCI@20\%LOC of 0.06-0.37 for OpenStack and 0.06-0.60 for Qt.
This finding indicates that, given a fixed amount of inspection effort at 20\%LOC, our JITLine~approach can correctly identify 17\%-51\% more actual defect-introducing commits than the state-of-the-art approaches.
\textbf{Our JITLine~approach can save the amount of effort by 89\%-96\% to find the same number of actual defect-introducing commits (i.e., 20\% Recall) when compared to the state-of-the-art approaches.}
Our JITLine~approach achieves an Effort@20\%Recall of 0.04 for OpenStack and 0.02 for Qt, while the state-of-the-art approaches achieve an Effort@20\%Recall of 0.11-0.36 for OpenStack, and 0.03-0.53 for Qt.
Similarly, our JITLine~approach achieves a P\textsubscript{opt} of 0.82 for OpenStack and 0.89 for Qt, which is 116\% and 178\% higher than the state-of-the-art approaches for OpenStack and Qt, respectively.
In particular, our P\textsubscript{opt} is 7\% to 19\% higher than EALR, 116\% to 178\% higher than DeepJIT, and 105\% to 112\% higher than CC2Vec.
This finding suggests that, to find the same number of actual defect-introducing commits, our JITLine~approach can reduce the amount of effort by 89\%-96\% when compared to the state-of-the-art approaches, which may provide the best return on investment.
\section{Experimental Setup}
\begin{table}[t]
\caption{The statistics of our studied datasets.}
\label{tab:dataset}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|c|c|c|c|c|}
\cline{2-6}
\multicolumn{1}{c|}{} & \textbf{\#Commits} & \textbf{\begin{tabular}[c]{@{}c@{}}\%Defect-\\ Introducing \\ Commits\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}\#Unique \\ Tokens\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Avg. of \\Commit \\ Size\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Avg. of \\ \%Defective \\ Lines\end{tabular}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Openstack}} & 12,374 & 13\% & 32K & 73 LOC & 53\% \\ \hline
\multicolumn{1}{|l|}{\textbf{Qt}} & 25,150 & 8\% & 81K & 140 LOC & 51\% \\ \hline
\end{tabular}%
}
\end{table}
\subsection*{\textbf{(RQ1) Does our JITLine~\underline{outperform} the state-of-the-art JIT defect prediction approaches?}}
\begin{figure*}[t]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\columnwidth]{results/openstack_RQ1_new.pdf}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\columnwidth]{results/qt_RQ1_new.pdf}
\end{subfigure}
\caption{(RQ1) The evaluation result of our JITLine~approach compared with the state-of-the-art approaches for Just-In-Time defect prediction (i.e., CC2Vec(Train only), DeepJIT, and EALR).}
\label{fig:RQ1_result}
\end{figure*}
\smallsection{Approach}
To answer this RQ, we evaluate our JITLine~using the same training/testing datasets as prior studies~\cite{hoang2019deepjit, McIntosh2018, CC2Vec} to establish a fair comparison.
For training, we use 11,043 commits for OpenStack and 22,579 commits for Qt.
For testing, we use 1,331 commits for OpenStack and 2,571 for Qt.
Since our JIT defect datasets are time-wise, we do not perform cross-validation to avoid the use of testing data in the training data~\cite{jimenez2019importance}.
Then, we compare our JITLine~approach with the following three state-of-the-art JIT defect prediction approaches: EALR, DeepJIT, and CC2Vec.
The details of these state-of-the-art approaches are provided in Section~\ref{sec:background}.
Finally, we evaluate these approaches using the following six traditional evaluation measures~\cite{hoang2019deepjit, wattanakriengkrai2020predicting, CC2Vec, agrawal2018better}.
\begin{enumerate}
\item AUC is the Area Under the ROC Curve (i.e., the curve of the true positive rate against the false positive rate). AUC values range from 0 to 1, with a value of 1 indicating perfect discrimination and a value of 0.5 indicating random guessing.
\item F-measure is a harmonic mean of precision and recall, which is computed as $ \frac{2\times\mathrm{Precision}\times\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}}$.
We use the probability threshold of 0.5 for calculating precision and recall.
\item False Alarm Rate (FAR)~\cite{agrawal2019dodge} measures the ratio of incorrectly predicted defect-introducing commits and the number of actual clean commits $\frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}$. The lower the FAR value is, the fewer the incorrectly predicted defect-introducing commits that developers need to review.
In other words, a low FAR value indicates that developers will spend
less effort on reviewing the incorrectly predicted defect-introducing commits.
\item Distance-to-Heaven (d2h)~\cite{agrawal2019dodge} is a root mean square of recall and FAR values, which can be computed as $\sqrt{\frac{(1-\mathrm{Recall})^2 + (0-\mathrm{FAR})^2}{2}}$.
A d2h value of 0 indicates that an approach can correctly predict all defect-introducing commits without any false positive.
A high d2h value indicates that an approach is far from perfect, e.g., it achieves a high recall value but also has a high FAR value, or vice versa.
\item Precision measures the ability of an approach to correctly predict defect-introducing commits, which can be calculated as follows: $\mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP+FP}}$. The higher the precision, the better the model is at correctly predicting defect-introducing commits.
\item Recall measures the ability to correctly retrieve defect-introducing commits when making a prediction, which is calculated as $\mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP+FN}}$. A high recall value indicates that the model can retrieve many of the actual defect-introducing commits.
\end{enumerate}
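The confusion-matrix-based measures above can be computed as in the following sketch (AUC is omitted since it requires the full ranking of prediction scores rather than thresholded counts); the counts are illustrative, not taken from the experiments:

```python
import math

def traditional_measures(tp, fp, tn, fn):
    """RQ1 measures computed from a confusion matrix (threshold already applied)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    far = fp / (fp + tn)                                        # False Alarm Rate
    d2h = math.sqrt(((1 - recall) ** 2 + (0 - far) ** 2) / 2)   # Distance-to-Heaven
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "far": far, "d2h": d2h}

# Illustrative counts for a single testing dataset.
m = traditional_measures(tp=30, fp=10, tn=90, fn=20)
```

Because d2h combines recall and FAR, a model that inflates recall by flagging nearly every commit is penalized through its FAR term, which is exactly the CC2Vec behavior discussed in the results below.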
\smallsection{Results}
\textbf{Our JITLine~approach achieves an AUC that is 28\%-73\% higher and an F-measure that is 26\%-38\% higher than the state-of-the-art approaches.}
Figure~\ref{fig:RQ1_result} presents the experimental results of our approach and the state-of-the-art approaches with respect to the six evaluation measures, i.e., AUC, F-measure, FAR, d2h, precision, and recall, for OpenStack and Qt.
We find that our JITLine~approach achieves the highest AUC value of 0.83 for Openstack and 0.82 for Qt, which is 1\%-8\% higher than CC2Vec, 8\%-10\% higher than DeepJIT, and 28\%-73\% higher than EALR.
We also find that our JITLine~approach achieves the highest F-measure value of 0.33 for Openstack and 0.24 for Qt, which are 26\%-38\% higher than CC2Vec, 300\%-3,300\% higher than DeepJIT, and 500\%-3,200\% higher than EALR.
This finding indicates that our approach outperforms the state-of-the-art approaches in terms of AUC and F-measure.
\textbf{Our JITLine~approach achieves a False Alarm Rate (FAR) 94\%-97\% lower than the CC2Vec approach.}
We find that our JITLine~approach achieves a False Alarm Rate (FAR) of 0.05 for Openstack and 0.02 for Qt, which is similar to DeepJIT (FAR=0.01) and EALR (FAR=0).
Similarly, our JITLine~approach also achieves a d2h of 0.52 for OpenStack and 0.59 for Qt, which is lower than the state-of-the-art approaches (i.e., DeepJIT and EALR).
However, we observe that the d2h of our approach for Qt project is higher than the CC2Vec approach.
For the Qt project, we find that the lower d2h value of the CC2Vec approach has to do with its high recall value of 0.96---i.e., the CC2Vec approach predicts most of the commits as defect-introducing, but 63\%-87\% of them are incorrect (i.e., many of them are false positives)---indicating that developers may spend unnecessary effort to inspect actual clean commits that are incorrectly predicted as defect-introducing when the CC2Vec approach is used.
On the other hand, the high d2h value of our approach has to do with its low recall of 0.16; however, our approach achieves a low FAR of 0.02, indicating that our JITLine~approach is less likely to predict actual clean commits as defect-introducing.
After considering both the ability of identifying defect-introducing commits (i.e., Recall) and the additional costs (i.e., FAR), our approach still outperforms state-of-the-art approaches (i.e., CC2Vec (only for OpenStack), DeepJIT, and EALR).
\section{Related Work}\label{sec:background}
\section{Abstract}
The amalgamation of different generations of mobile cellular networks around the globe has resulted in diverse data speed experiences for end users. At present there are no defined mechanisms in place for a subscriber of one mobile network operator (MNO) to use the services of a WiFi provider. Cellular and Data Service providers also have no standardized procedures to securely interact with each other, and to allow their subscribers to use third party services on a pay-as-you-go basis. This paper proposes a blockchain-based offloading framework that allows a subscriber of a mobile operator to temporarily use another MNO or WiFi provider’s higher speed network. Smart contracts allow diverse entities such as MNOs, Brokers and WiFi Providers to automatically execute mutual agreements to enable the utilization of third party infrastructure in a secure and controlled manner. To test the proposed framework, the offloading of a subscriber from 3G/4G/4G-LTE/5G networks to a fixed broadband WiFi network was carried out and the results analyzed. The offloading framework was implemented using the ns-3 network simulator, and the Ethereum blockchain smart contract features were used for the settlement of invoices.
\section{Introduction}
The global rollout of 5G networks is now gathering momentum, as more and more countries are deploying this state-of-the-art broadband cellular network technology. However, at the same time many countries still have operational legacy mobile networks such as 4G, 4G-LTE, or even 3G. Even in the places where 5G networks are available, coverage is not always universal, and many pockets exist that still run older generation networks (e.g. 2G/3G/4G). A report published by the GSM Association (GSMA) \cite{gsmareport2020} suggests that at the end of 2019, 4G coverage was about 50\% of the total mobile Internet availability by geographical area. Due to several advantages such as lower cost and fixed broadband/fiber infrastructure, WiFi still provides higher bandwidth speeds to its users, and is popular among small and big organizations and also in retail and residential settings.
Today the world is using mobile cellular technologies such as 3G, 4G, 4G-LTE and 5G with varying data transfer speeds. We believe that an improved user experience can be gained by offloading such cellular network users to local, higher speed networks. For example, if WiFi providers could allow mobile subscribers to use their fixed broadband infrastructure and in return receive a monetary reward for their services from a MNO, then one can reduce the load on the cellular networks and simultaneously increase the data speed offered to users. The proposed framework allows independent private WiFi operators to be paid for their services by offloading users onto their networks, which in turn means less capital expenditure investment by the MNOs. Some studies, like the one released by OpenSignal \cite{opensignal}, suggest that in the future mobile Internet speeds will surpass WiFi speeds; in that case, offloading can also be performed from WiFi to mobile network infrastructure. The second reason for offloading can be the guarantee of services to subscribers by the MNO in areas where it does not have a license to operate, or where it has poor signal coverage; these subscribers can be offloaded to partner WiFi providers and enjoy enhanced data speeds. The third rationale for offloading is to ensure better service while roaming. For example, a subscriber who does not have roaming enabled on their device, but wants to use the Internet for short periods of time, can be offloaded to one of these high-speed networks.
\begin{figure*}[ht]
\centering
\includegraphics[width=5.3in]{Figure1.jpg}
\caption{System architecture of the proposed framework}
\label{fig:figure1}
\end{figure*}
To allow subscriber roaming between operators and the settlement of usage charges, MNOs at present have memoranda of understanding (MoUs) drawn up between them on a bilateral basis. However, MoUs are complex agreements which take time to negotiate, and therefore it would not be practical to negotiate MoUs with large numbers of small and medium sized WiFi providers on a per MNO basis. To enable the offloading process, MNOs can register with an intermediary/broker who can collectively negotiate MoUs with many WiFi providers on their behalf. Additionally, blockchain technologies can be used to allow the participating entities to trust each other by executing smart contracts on a blockchain. This append-only distributed ledger technology (DLT), along with a consensus mechanism allows the implementation of smart contracts in real terms, and also digitally facilitates, verifies and enforces the contract between two or more parties \cite{smartcontractpaper}. Smart contracts can act as a bridging gap between stake-holders, and can provide subscribers with a new level of user experience.
\section{Blockchain-based Subscriber Offloading Framework}
Our proposed offloading framework enables subscribers to temporarily offload their data usage from low-bandwidth to high-bandwidth channels, without changing their network operator. Fig. \ref{fig:figure1} represents the block diagram of the framework and consists of three primary entities, namely a Broker, MNOs and WiFi Providers. The Broker as the name suggests is the ingress of the whole process and is able to coordinate activities between all the entities, as every entity in the system is registered with the Broker. We believe that a GSMA \cite{gsmaofficial} like entity aptly fits the role of the Broker in our proposed system, and all MNOs must register with it. The Broker maintains a blockchain node along with a registration service. The MNO registration process includes setting up a blockchain node for storing smart contracts and transactional data. Organizations that wish to allow their high-speed wireless infrastructure to be used by MNO subscribers must also register with the Broker, along with setting up their corresponding blockchain node. The Broker has oversight during the settlement phase, and also acts as a mediator in the case of any disputes.
The second set of entities in the system are the MNOs that allow their subscribers to opt for offloading to a higher speed network. Reasons for offloading can include low-quality signal coverage, high-speed requirements, roaming to non-serviced areas, or even accessing services provided by particular Data Service providers. This entity includes various functionalities like a blockchain interface, authentication module, smart contract module, and a billing module. The blockchain interface allows various other blockchain nodes to interact with each other, and to update the ledger periodically, including, adding or executing new smart contracts or transactions. The MNO authentication module helps to identify and authenticate a subscriber from the MNOs subscriber database. Since there are many MNOs which are registered with the Broker, and the subscriber should be an active user of that particular MNO, this module identifies a subscriber from an open smart contract, and authenticates its status in order to verify that the user a valid subscriber and is authorized to offload.
The smart contract module interacts with open smart contracts to identify users, and sets the values of the parameters in the contracts based on the authentication status, such as success or failure. This module can access the smart contract's data based on the authorization allowed and helps in executing it. The billing module can access the transactions stored in the blockchain, and create a bill based on the executed smart contract which involves a particular MNO. This module then matches the billing amount from the invoice received from the WiFi Provider and authorizes the payment.
The third set of entities are the WiFi Providers which open up their infrastructure in a controlled manner to the subscribers of MNOs. To expedite the offloading process, a WiFi Provider maintains a blockchain node on its premises. This entity also operates on four modules, such as the blockchain interface, authentication module, smart contract module and billing module. The blockchain interface is responsible for maintaining an up-to-date smart contract and transaction data, along with the full blockchain. The local authentication module ensures the mobile number ownership of the subscriber by validating a one-time password (OTP). This local authentication of the mobile number also avoids the spamming of mobile users or blockchain data. For example, if the local authentication of a mobile number is not concluded successfully, spammers may create millions of offload requests using random mobile numbers, which in turn would create a corresponding number of open smart contracts, and force denial of service attacks against legitimate users. The smart contract module allows the WiFi Provider to create a new smart contract and set its variables based on the subscriber authentication mechanism. Once the subscriber is allowed to offload and subsequently terminates its connection, the billing module records the data usage and writes this as a new transaction to the blockchain. Fig. \ref{fig:figure2} details the various phases of the offloading framework.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Figure2.jpg}
\caption{Sequence Diagram Illustrating the Various Phases}
\label{fig:figure2}
\end{figure*}
\textbf{Phase 1 - One-time Registration:}
The registration process for a new entity such as a MNO or WiFi Provider wishing to join the system commences with the submission of their trusted third party (TTP) issued public key certificate to the Broker. The Broker which maintains its own blockchain node stores the certificate as a transaction on the blockchain. All the communication amongst the entities in the system is carried out using public-key cryptography, and backed by transparent logs to ensure auditing at a later stage \cite{certauth}. MNOs and WiFi Providers join the blockchain network by creating their corresponding blockchain nodes.
\textbf{Phase 2 - Offloading Initiation and Local Authentication:}
In phase 2, an offloading request is initiated by a user/subscriber. Once the subscriber is in the range of a WiFi Provider and wants to request an offload onto their network, the subscriber connects with the wireless access point (WAP) and accesses the landing page of the WiFi Provider. The landing page can be common for internal and external users. Providing their mobile number in both the username and password fields indicates that the user is external and wishes to initiate an offload procedure. The user also selects its associated MNO from the drop-down menu provided on the landing page. The WiFi Provider generates the random one-time password (OTP\_WP) and sends it to the mobile number entered into the landing page by the user. The user then enters OTP\_WP on the next field of the landing page. Once the WiFi Provider tests the validity of the mobile number, it generates a new smart contract and sets the value of Auth\_WP as 1. The WiFi Provider then encrypts the mobile number of the user with the public key (PK\_MNO) of the mobile operator which it obtains from the blockchain, and assigns this encrypted value to the field given in the smart contract. The WiFi Provider also includes the identity of the concerned MNO in the smart contract, so that all other entities know for whom the smart contract is intended.
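The framework does not prescribe an OTP format or validity window, but the WiFi Provider's local authentication step could be sketched as follows, assuming a 6-digit OTP\_WP and a 5-minute validity window (both assumptions on our part):

```python
import secrets
import time

OTP_TTL_SECONDS = 300  # assumed validity window; not specified by the framework

class OtpAuthenticator:
    """Minimal sketch of the WiFi Provider's local OTP check (Phase 2)."""

    def __init__(self):
        self._pending = {}  # mobile number -> (otp, issued_at)

    def issue(self, mobile_number: str) -> str:
        otp = f"{secrets.randbelow(10**6):06d}"  # 6-digit OTP_WP
        self._pending[mobile_number] = (otp, time.time())
        return otp  # in practice sent via SMS, never shown on the landing page

    def verify(self, mobile_number: str, submitted: str) -> bool:
        record = self._pending.pop(mobile_number, None)  # single-use
        if record is None:
            return False
        otp, issued = record
        fresh = time.time() - issued <= OTP_TTL_SECONDS
        return fresh and secrets.compare_digest(otp, submitted)

auth = OtpAuthenticator()
otp = auth.issue("+15551234567")
ok = auth.verify("+15551234567", otp)      # valid first attempt
replay = auth.verify("+15551234567", otp)  # replayed OTP is rejected
```

Making each OTP single-use, as above, also supports the spam-prevention goal described here: an attacker cannot open smart contracts for mobile numbers they do not control, and repeated submissions of an intercepted OTP fail.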
\textbf{Phase 3 - MNO Level Authentication and Smart Contract Execution:}
Phase 3 describes the process undertaken by a MNO. The open smart contracts stored on blockchain are searched by the MNO. Once it finds the intended contract by matching the subscriber’s operator name, it will fetch the encrypted identity of the subscriber and decrypt it using its private key. The decrypted data reveals the mobile number of the subscriber. The MNO tries to verify the identity of the subscriber against its subscriber database, and if successful sets the value of Auth\_MNO to 1 in the smart contract. The MNO also generates a random one-time password (OTP\_MNO) and sets the value of OTPCheckMNO to OTP\_MNO in the smart contract. OTP\_MNO is also forwarded to the subscriber’s mobile number for further processing. Once the subscriber receives the OTP\_MNO value, it enters it on the landing page of the WiFi Provider. The WiFi Provider in turn assigns the OTP\_MNO value to the OTPCheckUser field of the smart contract. Now, if the values of Auth\_WP and Auth\_MNO are both equal to 1, it deduces that both the WiFi Provider and MNO have validated the subscriber’s identity. If the OTPCheckMNO and OTPCheckUser are the same, it represents that the subscriber is authorized to offload, and the smart contract has been executed on the blockchain. The smart contract can also include other information like total time allowed to offload, or any other conditions with the associated offload which need to be honored by all the parties.
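On Ethereum the contract itself would be written in a language such as Solidity; the following Python sketch only mirrors the Phase 3 decision logic, with field names (\texttt{auth\_wp}, \texttt{auth\_mno}, \texttt{otp\_check\_mno}, \texttt{otp\_check\_user}) chosen to match the parameters described above:

```python
from dataclasses import dataclass

@dataclass
class OffloadContract:
    """Python mirror of the smart contract fields used in Phases 2-3."""
    mno_id: str
    encrypted_subscriber_id: bytes  # mobile number encrypted with PK_MNO
    auth_wp: int = 0         # set to 1 by the WiFi Provider after the local OTP check
    auth_mno: int = 0        # set to 1 by the MNO after the subscriber lookup
    otp_check_mno: str = ""  # OTP_MNO written by the MNO
    otp_check_user: str = "" # OTP_MNO re-entered by the subscriber

    def is_executed(self) -> bool:
        """The subscriber is authorized to offload only when both parties
        authenticated it and the two OTP_MNO values match."""
        return (self.auth_wp == 1
                and self.auth_mno == 1
                and self.otp_check_mno != ""
                and self.otp_check_mno == self.otp_check_user)

c = OffloadContract(mno_id="MNO-A", encrypted_subscriber_id=b"...")
c.auth_wp = 1                 # WiFi Provider validated mobile-number ownership
c.auth_mno = 1                # MNO validated the subscriber record
c.otp_check_mno = "482913"    # illustrative OTP_MNO value
c.otp_check_user = "482913"

half_done = OffloadContract(mno_id="MNO-A", encrypted_subscriber_id=b"...", auth_wp=1)
```

The empty-string guard on \texttt{otp\_check\_mno} prevents the contract from executing before the MNO has written its OTP, even if both authentication flags are already set.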
\textbf{Phase 4 - WiFi Access Facilitation:}
In phase 4, the WiFi Provider checks the status of the smart contract. If the contract is executed, the WiFi Provider allows access of its services to the subscriber provided by its infrastructure. Primarily this service is high-speed Internet, but it can also be a wide range of other services offered by WiFi providers. When the subscriber terminates the connection, the WiFi Provider records the data consumed by the subscriber in its logs.
\textbf{Phase 5 - Blockchain Transactions and Billing:}
Phase 5 deals with the transactions and billing-related procedures. The WiFi Provider creates a new transaction in relation to the data consumed by the subscriber and broadcasts it to the blockchain network. This transaction will be verified by all other blockchain nodes, and once verified, will be added to a block by utilizing the proof-of-authority (PoA) consensus mechanism by one of the authorized entities in the system. The PoA algorithm is able to provide faster transaction throughput using a ``identity-as-a-stake" based consensus mechanism. It significantly increases the speed of validating the transactions by generating blocks in a predictable sequence, and hence achieves a better transaction rate when compared with PoW or PoS. The invoice can be generated by the WiFi Provider after an agreed period of time such as a week or a month. The WiFi Provider will access all the transactions made by it from the blockchain data and prepare an invoice based on the mutually agreed per unit price. This invoice will also be verified by the associated MNO and can also be ratified by the broker. Once all parties verify the invoice, the bill will be paid using a mutually agreed out-of-bounds payment mechanism.
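The invoice preparation in the billing module could be sketched as below; the transaction schema, the mobile numbers, and the per-MB unit price are illustrative assumptions, not part of the framework:

```python
from collections import defaultdict

# Each confirmed blockchain transaction records one offloading session.
# Field names and values are illustrative.
transactions = [
    {"wifi_provider": "WP-1", "mno": "MNO-A", "subscriber": "+15551234567", "mb_used": 250},
    {"wifi_provider": "WP-1", "mno": "MNO-A", "subscriber": "+15557654321", "mb_used": 750},
    {"wifi_provider": "WP-1", "mno": "MNO-B", "subscriber": "+15550001111", "mb_used": 100},
]

def build_invoices(transactions, wifi_provider, price_per_mb):
    """Sum the billing-period usage per MNO at the mutually agreed unit price."""
    usage = defaultdict(int)
    for tx in transactions:
        if tx["wifi_provider"] == wifi_provider:
            usage[tx["mno"]] += tx["mb_used"]
    return {mno: {"mb_used": mb, "amount_due": mb * price_per_mb}
            for mno, mb in usage.items()}

invoices = build_invoices(transactions, "WP-1", price_per_mb=0.01)
```

Because every blockchain node holds the same verified transaction log, the associated MNO (and the Broker) can recompute the same invoice independently and flag any mismatch during the verification step.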
The Ethereum blockchain supports the scalability required for large-scale offloading requests/transactions using a combination of sharding and side-chains. To test the performance of the Ethereum blockchain, researchers measured 4 million transactions over 380 hours \cite{block_performance}. The experiment concluded that throughput decreases and latency increases linearly as the block period, which is fixed as per the difficulty level of proof-of-work (PoW), grows. Since our proposed framework uses the PoA strategy, this bottleneck should not affect the overall throughput and latency of the proposed system. To decrease the time required to complete the workload, the study suggests that powerful machines with large memory and fast CPUs should be used as blockchain nodes in PoA mode; for example, the computation time for the workload can be reduced by 25\% if the memory is increased from 4GB to 24GB. With regard to network size, it was found that in 90\%-100\% of cases matches are found for smaller network sizes, whereas the match ratio is merely 60\%-75\% for larger networks.
\section{Case Studies and Result Analysis}
The proposed framework was implemented using the ns-3 network simulator \cite{ns3official}. Various ns-3 nodes were created to simulate the different entities, e.g. Subscriber, MNO, WiFi Provider, Broker and a Data Server. To implement the blockchain functionality, the Ethereum \cite{ethereumofficial} blockchain is deployed on the Docker \cite{dockerofficial} platform. Each node in the ns-3 network is connected to a Docker container using the tap-bridge arrangement of ns-3 \cite{dockerconnection}. In the simulation environment, the subscriber is initially connected to the MNO node, and when the smart contract is executed the connection is switched over to the WiFi Provider. The simulation was carried out for 350 seconds on an Ubuntu Linux based computer running a virtual machine with 8GB RAM, an Intel i5 2.50 GHz processor, and 100 GB of allocated storage.
The experimentation was conducted over five separate case studies. The first case study takes the global average of Internet speed and latency for fixed broadband and mobile Internet. The bandwidth given for the fixed broadband is assigned to the WiFi Provider link, whereas the bandwidth given for the mobile Internet is assigned to the MNO link. In this case study, the subscriber is offloaded from mobile Internet to fixed broadband as per the speed suggested by the global average. In the subsequent case studies, the subscriber is offloaded from 3G, 4G, 4G-LTE, and 5G mobile Internet to fixed broadband i.e. the WiFi Provider. The various data transfer speeds and latencies considered for all case studies are provided in Table \ref{table:results}.
\begin{table}[ht]
\centering
\caption{Case Studies and Associated Parameters}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccc} \hline
Case Studies & Network Generation & Average Speed (Mbps) & Latency (ms) & Reference \\ \hline
\multirow{2}{*}{\parbox{2cm}{Case Study 1: Global Average}} & Fixed Broadband & 92 & 21 & \cite{speedtest} \\
& Mobile Internet & 46 & 36 & \cite{speedtest} \\ \hline
\multirow{5}{*}{\parbox{2cm}{Case Study 2: Comparative Average }} & Fixed Broadband & 241 & 13 & \cite{speedtest} \\
& 5G & 71 & 20 & \cite{speedtest} \\
& 4G-LTE & 50 & 50 & \cite{edn} \\
& 4G & 10 & 100 & \cite{edn} \\
& 3G & 1.5 & 500 & \cite{edn}
\\ \hline
\end{tabular}
}
\label{table:results}
\end{table}
Fig. \ref{fig:figure3} shows the packet delivery percentages for all case studies. The packet delivery is analyzed for all types of flows: non-offloaded subscriber packets, offloaded packets, packets transmitted through the WiFi link, and packets of other users transmitted through the MNO network.
\begin{figure}[ht]
\centering
\includegraphics[width=3in]{Figure3.jpg}
\caption{Packet Delivery Analysis for Various Types of Flows}
\label{fig:figure3}
\end{figure}
For the global average case, the packet delivery percentage is the same for all flow types except the MNO flows; however, this difference is insignificant. In this case, 99.96\% of packets are delivered for non-offloaded flows, offloaded flows, and WiFi flows, whereas 99.94\% of total packets are delivered for the MNO flows. For all other case studies, it is evident from Fig. \ref{fig:figure3} that 100\% of offloaded flows are delivered, primarily because of the enhanced data speed of the WiFi link. Slightly fewer packets are delivered in the global average case study compared to all other case studies, because the fixed broadband speed of the global average is lower than in the other case studies. As the data transfer speed decreases in the 4G and 3G case studies, we can see an increase in the packet drop ratio for non-offloaded and MNO flows. If these flows are offloaded to the WiFi network, the packet delivery ratio rises, better satisfying the quality of service requirements.
\begin{figure}[ht]
\centering
\includegraphics[width=3.5in]{Figure4.jpg}
\caption{Total Flow Duration Analysis For Offloading and No-offloading}
\label{fig:figure4}
\end{figure}
Fig. \ref{fig:figure4} shows the time taken to deliver a 500MB file; a total of 10 such requests are made over each of the WiFi and MNO networks. One request of transferring a 500MB file is then offloaded to the high-speed network. From this figure, it is evident that the time taken to transfer files is significantly reduced in the case of the offloaded flow. In the global average case study, the non-offloaded flow takes 10.23 seconds to transfer one file, whereas this is reduced to 0.91 seconds if the request is offloaded. Similarly, the graph shows a drastic reduction of delivery time in all other case studies. The longest time is taken by the 3G network, which needs 339.11 seconds to deliver the file, whereas this is reduced to only 0.49 seconds if the flow is offloaded to the WiFi network.
\begin{figure}[ht]
\centering
\includegraphics[width=3.5in]{Figure5.jpg}
\caption{Delay and Jitter Sum Analysis for Offloading and No-offloading}
\label{fig:figure5}
\end{figure}
Fig. \ref{fig:figure5} presents the analysis of the delay sum and jitter sum for all case studies. The delay sum is the sum of the delays of each packet over the full duration of a flow, whereas the jitter sum is the sum of the jitter of every packet in a particular flow. For the global average, the delay sum is calculated as 598.50 seconds. For the other case studies, 5G, 4G-LTE, 4G, and 3G, it is computed as 390.32, 806.39, 1749.86, and 8466.05 seconds respectively for the non-offloaded flows. Compared to the delay sums obtained for the offloaded flows, we can see a drastic reduction. The delay sums obtained for the offloaded flows are 45.34, 16.95, 16.75, 16.42, and 16.42 seconds for the global average, 5G, 4G-LTE, 4G and 3G case studies respectively.
The jitter sum is also improved in the case of offloading, whereas it is high if no offloading is performed. The jitter sum of 1.51 seconds is reduced to 0.53 seconds for the global average case study if offloading is performed. It is reduced to 0.17 seconds from 3.16 seconds for 5G, to 0.23 seconds from 1.41 seconds for 4G-LTE, to 0.28 seconds from 7.98 seconds for 4G, and to an impressive 0.28 seconds from 45.63 seconds for the 3G case study. The enhanced packet delivery ratio, together with the improved delivery time and the reduced delay and jitter sums, enhances the overall quality of experience provided by the MNOs to their subscribers.
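The improvement implied by these jitter sums can be expressed as simple before/after ratios; the short sketch below recomputes the factors directly from the values quoted above.

```python
# Jitter-sum improvement factors (before offloading / after offloading),
# using the values quoted in the text, in seconds.
jitter = {
    "global": (1.51, 0.53),
    "5G":     (3.16, 0.17),
    "4G-LTE": (1.41, 0.23),
    "4G":     (7.98, 0.28),
    "3G":     (45.63, 0.28),
}
improvement = {name: before / after for name, (before, after) in jitter.items()}
```

As expected, the gain grows as the underlying mobile network gets slower, with the 3G case improving by more than two orders of magnitude.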
\section{Conclusions}
In this paper, an offloading framework is presented along with its several advantages and infrastructural benefits. It can enable MNOs to allow their subscribers to benefit from a third-party operator's high-speed infrastructure for a particular time period. To the best of our knowledge, no mechanism currently exists that can support such inter-organizational arrangements securely and efficiently. Our proposed framework deploys smart contracts for authentication, thereby allowing the subscriber to offload, and utilizes blockchain transactions to record usage and generate invoices. The MNO and the private WiFi Provider both authenticate the subscriber and its mobile number to rule out any spamming of the system. All transactions are verified by each participating entity, and new blocks are added using a PoA consensus mechanism to minimize the mining effort.
The proposed framework was simulated using ns-3, and the Ethereum blockchain was integrated into the simulation environment using Docker containers. A total of five case studies, i.e. global average speed, 5G, 4G-LTE, 4G, and 3G to WiFi offloading, were tested. The final analysis shows that offloading results in improved packet delivery ratios, reduced total flow duration, total delay, and total jitter. These parameters suggest that offloading can help in enhancing the end users' quality of service experience. At present, Internet speeds vary geographically as well as amongst operators. A user has no choice but to switch operator if they need higher speeds, or services that cannot be provided by their operator. Our proposed offloading framework can be a great leap forward for subscribers, who can enjoy higher bandwidth speeds on an on-demand basis.
In the future, challenges such as the scalability of blockchain transactions, simultaneous subscriber load, etc. can be analyzed. Time lag analysis under a high load of subscriber offloading requests can also be carried out. In addition, the same offloading framework can be extended to investigate automated switching of network traffic from highly congested channels to less congested channels. This automated and agent-based framework could support load balancing and optimization of next-generation network infrastructure.
\section*{Introduction}
There are several reasons why semileptonic $B$ decays are of interest.
For one thing, there is only a small variety of decay products, namely
those which contain charm quarks ($D$, $D^*$, $D^{**}$, etc.)
and those which do not ($\pi$'s etc.). $V_{ub}/V_{cb}$ can be determined
from the relative number of these decays. In addition,
the heavy masses of the b and c quarks suggest that one
might be able to apply perturbative QCD to calculate the strong
corrections to these processes. This hope has been recently
formalised in Heavy Quark Effective Theory\cite{HQET}. Finally,
there's the fact that theoretical uncertainties are much smaller than in
non-leptonic decays which contain a wider variety of hadronic decay
products.
If we write out a parameterisation for the CKM Matrix, we see that it
depends on a complex phase which is responsible for CP violation
in the standard model. The magnitudes of the values for the CKM matrix
elements place limits on the size of this phase and, thus, on the amount
of CP violation in the standard model.
According to the particle data group,
$V_{ub}=.0035\pm.0015$ and $V_{cb}=.040\pm.008$. Recent
values of $V_{ub}$ and $V_{cb}$ in the literature fall in this range
[3-8].
In studying semi-leptonic decays, the first approximation is to neglect
QCD and use a spectator model in which the up quark of the $B$ meson is
not involved in the decay except to recombine with the charm quark. In
such a model, the decay of the $B$ meson into a $D$ meson reduces to that
of the decay of a bottom quark into a charm quark. Quantities that can be
determined directly from experimental data include the square of the
momentum transfer,
\begin{eqnarray}
Q^2&=&(B-D)^2\nonumber\\
&=&m_B^2+m_D^2-2E_BE_D+2\vec p_B\cdot\vec p_D,
\end{eqnarray}
and the lepton energy, $E_l$.
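As a concrete check of this expression, the sketch below evaluates $Q^2$ both from the four-vector difference and from the expanded form; the particular masses and momenta are illustrative test values, not fit results.

```python
import math

def minkowski_sq(p):
    """Invariant square E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

m_B, m_D = 5.279, 1.869  # GeV, illustrative meson masses

# B at rest; D given an arbitrary 1 GeV 3-momentum along z.
pD3 = 1.0
E_D = math.sqrt(pD3 ** 2 + m_D ** 2)
B = (m_B, 0.0, 0.0, 0.0)
D = (E_D, 0.0, 0.0, pD3)

# Q^2 = (B - D)^2 computed directly from the four-vectors ...
Q2_direct = minkowski_sq(tuple(b - d for b, d in zip(B, D)))
# ... and from the expanded form m_B^2 + m_D^2 - 2 E_B E_D + 2 p_B.p_D
# (here p_B = 0, so the last term vanishes).
Q2_expanded = m_B ** 2 + m_D ** 2 - 2.0 * m_B * E_D
```

The two evaluations agree identically, since the expanded form is just the four-vector square written out in components.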
\section*{Inclusive Case}
Now that we have a process involving quantities that can be determined
from experiment, we want a theory that relates
these quantities. One such model was devised by Altarelli, Cabibbo,
Corbo, Maiani and Martinelli in 1982\cite{Alt}. In this model, the
bottom quark was assumed to be on shell and, thus, given by
\begin{eqnarray}
m_b^2&=&(B-u)^2\nonumber\\
&=&m_B^2+m_u^2-2m_B\sqrt{p_u^2+m_u^2}
\end{eqnarray}
in the $B$ rest frame. This assumption has the advantage that it
avoids having the decay rate depend on an arbitrary overall $1/m_b^5$
that appears in a purely partonic treatment of these decays\cite{Bar}.
The up quark momentum was then assumed to
obey a Gaussian distribution,
\begin{equation}
\phi(p)={1\over\pi^{3\over2}p_f^3}exp\left({-p^2\over p_f^2}\right)
\end{equation}
which is normalised according to
\begin{equation}
\int d^3p \phi(p)=1
\end{equation}
and is thus fixed up to an adjustable parameter, $p_f$, known as
the Fermi momentum.
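The normalisation condition can be checked numerically: integrating $4\pi p^2\phi(p)$ over $p$ must give unity. The sketch below does this with a simple Riemann sum, using $p_f = 0.5$ GeV as an illustrative value.

```python
import math

def phi(p, p_f):
    """Gaussian Fermi-momentum distribution, normalised over d^3p."""
    return math.exp(-p * p / (p_f * p_f)) / (math.pi ** 1.5 * p_f ** 3)

p_f = 0.5   # GeV, illustrative value of the Fermi momentum
dp = 1e-4   # integration step in GeV
# Integrate 4*pi*p^2*phi(p) out to 10*p_f, beyond which the tail is negligible.
grid = [i * dp for i in range(1, int(10 * p_f / dp))]
norm = sum(4.0 * math.pi * p * p * phi(p, p_f) * dp for p in grid)
```

The sum comes out equal to one to numerical accuracy, independently of the chosen $p_f$, since $p_f$ cancels in the angular-plus-radial integration.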
In our case, we want to consider the up quark as being more than a mere
spectator: instead we write the effective
$\overline uBb$ vertex as $\gamma_5 V_B(p_u)$.
The decay rate in this model is
\begin{equation}
\Gamma(B\to\overline ucl\overline\nu)={N|V_{cb}|^2|V_B|^2\over2m_B(2\pi)^8}\int
{d^3p_c\over 2E_c}{d^3p_l\over 2E_l}
{d^3p_\nu\over 2E_\nu}{d^3p_u\over2E_u}\phi(p_u)|M|^2
\delta^4(B-u-c-l-\nu)
\end{equation}
where
\begin{equation}
|M|^2=G_F^2L_{\alpha\beta}H^{\alpha\beta}
\end{equation}
and
\begin{equation}
H^{\alpha\beta}=Tr[\gamma^\alpha(1-\gamma_5)(c\kern -5.5pt /+m_c)\gamma^\beta
(1-\gamma_5)(b\kern -5.5pt /+m_b)\gamma_5(-u\kern -5.5pt /+m_u)\gamma_5
(b\kern -5.5pt /+m_b)].
\end{equation}
It turns out that this integral is difficult to evaluate, the problem being
that the up and charm quarks are not being assumed to combine into a specific
meson. Instead, the quarks are combining to form a cluster $X$ with four
momentum $X=u+c$ and mass $m_X^2=(u+c)^2$. This $m_X$ is arbitrary save for
the fact that, experimentally, $m_X>m_D=1.8963$ GeV while energy conservation
requires that $m_X<m_B$. As a result, we will want to rewrite the hadronic
phase space so that $m_X$ is integrated over this range.
Using standard cluster decomposition techniques,
the decay rate becomes:
\begin{eqnarray*}
\Gamma(B\to X l\overline\nu)={N|V_{cb}|^2|V_B|^2\over2m_B}\int&|M|^2&\phi(p_u)
{1\over(2\pi)^2}{d^4p_c}{d^4p_u}\delta^4(X-u-c)\delta(c^2-m_c^2)
\delta(u^2-m_u^2)\nonumber\\
&\times&{1\over(2\pi)^2}{d^4p_l}{d^4p_\nu}\delta^4(Q-l-\nu)\delta(l^2)
\delta(\nu^2)\nonumber\\
&\times&{1\over(2\pi)^2}{d^4Q}{d^4X}\delta^4(B-Q-X)\delta(Q\cdot Q-Q^2)
\delta(X^2-m_X^2)\nonumber\\
&\times&{1\over(2\pi)^2}dQ^2dm_X^2
\end{eqnarray*}
Using\cite{Bar}
\begin{eqnarray}
\int d_2(X\to ab)&=&(\pi/2)\lambda^{1\over2}(1,a^2/X^2,
b^2/X^2){d\Omega\over 2\pi},
\end{eqnarray}
we get
\begin{equation}
{1\over(2\pi)^2}{d^4Q}{d^4X}\delta^4(B-Q-X)\delta(Q\cdot Q-Q^2)
\delta(X^2-m_X^2)={1\over2\pi}{p_Q\over 2m_B}
\end{equation}
where $p_Q=\frac{m_B}{2}\lambda^{1\over2}(1,m_X^2/m_B^2,Q^2/m_B^2)=p_X$.
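The Källén triangle function used here, $\lambda(a,b,c)=a^2+b^2+c^2-2ab-2bc-2ca$, and the corresponding two-body momentum can be coded directly. The sketch below uses the usual normalisation $p = \sqrt{\lambda(M^2,m_1^2,m_2^2)}/(2M)$; the numerical masses are illustrative.

```python
import math

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c); symmetric in its arguments."""
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def two_body_momentum(M, m1, m2):
    """Momentum of either daughter in the rest frame of a parent of mass M
    decaying to masses m1 and m2 (usual 1/(2M) normalisation)."""
    return math.sqrt(kallen(M * M, m1 * m1, m2 * m2)) / (2.0 * M)

m_B = 5.279  # GeV, illustrative B-meson mass
```

A useful limiting case: for two massless daughters, $\lambda$ reduces to $M^4$ and each daughter carries momentum $M/2$.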
The remaining delta functions are
\begin{eqnarray*}
\delta(c^2-m_c^2)&=&\delta((X-u)^2-m_c^2)\\
&=&\delta(X^2+m_u^2-2E_XE_u+2p_Xp_ucos\theta_{Xu}-m_c^2)
\end{eqnarray*}
and
\begin{eqnarray*}
\delta(\nu^2)&=&\delta((Q-l)^2)\\
&=&\delta(Q^2-2E_QE_l+2p_QE_lcos\theta_{Ql})
\end{eqnarray*}
These cancel with the cosine integrations in $d^3p_u$ and $d^3p_l$.
The final expression for the decay rate is, thus,
\begin{equation}
\Gamma(B\to X l\overline\nu)={N|V_{cb}|^2
V_B^2\over(2\pi)^6(2m_B)^2}\int|M|^2\phi(p_u)
{p_udp_udE_ld\phi\over16p_QE_u}dQ^2dm_X^2
\end{equation}
If we now compare this formula with data from ARGUS\cite{A1}\cite{A2} and
CLEO\cite{C1} then we find, after minimising with
respect to the parameters $m_u$, $m_c$, $m_b$, $p_f$ and $|V_{ub}|/|V_{cb}|$,
that we get a good fit for parameters in the ranges
\begin{eqnarray*}
m_u&=&.13\pm.38\ {\rm GeV}\\
m_c&=&1.4\pm.4\ {\rm GeV}\\
m_b&=&4.9\pm.3\ {\rm GeV}\\
p_f&=&.5\pm.1\ {\rm GeV}\\
{\rm and}\ V_{ub}/V_{cb}&=&.07\pm.05
\end{eqnarray*}
Note that the ARGUS and CLEO data include
contributions from $b\to c l\overline\nu$ and $b\to u l\overline\nu$
decays. In each case,
the measured electrons were separated into different categories
including electrons from non-$\Upsilon(4S)$ events,
$\psi$ or $\psi(2S)$ decay,
$\tau$ decay or semileptonic $D_s$ decay,
semileptonic $D$ decay,
$\pi^0\to e^+e^-$ decay and
semileptonic $B$ decay,
the latter being the ones that are used to make these plots.
Additional background comes from having hadrons misidentified as electrons.
Note that for $E_l > 2.4$ GeV, electrons from $B$ decay can only come
from charmless semi-leptonic decays.
Using these parameters and taking the areas under
the curves gives us the branching ratio $Br(B\to c
\overline u l\overline\nu)$=10.09\% and $Br(B\to u\overline u l\overline\nu)$
=.16\%. Using
\begin{equation}
\Gamma(B\to\overline ucl\overline\nu)=Br(B\to\overline ucl\overline\nu)/
\tau_B
\end{equation}
and knowing\cite{PDG} that $\tau_B=(1.52\pm.11)\times10^{-12}\ {\rm s}
=(1.52\pm.11)\times10^{-12}\times(1.52\times10^{24})\ {\rm GeV}^{-1}$,
$|V_{cb}|$ can be calculated to be $.034\pm.003$.
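This arithmetic can be checked explicitly: converting the lifetime to natural units with $\hbar \simeq 6.582\times10^{-25}$ GeV$\cdot$s (so that $1\ {\rm s} \simeq 1.52\times10^{24}\ {\rm GeV}^{-1}$) gives the semileptonic partial width.

```python
# Semileptonic partial width from the fitted branching ratio and the
# measured B lifetime, converted to natural units.
HBAR_GEV_S = 6.582e-25  # GeV * s

tau_B_s = 1.52e-12      # B lifetime in seconds
br_sl = 0.1009          # Br(B -> c ubar l nu) from the fit above

tau_B_gev = tau_B_s / HBAR_GEV_S   # lifetime in GeV^-1 (~2.3e12)
gamma_total = 1.0 / tau_B_gev      # total width in GeV (~4.3e-13)
gamma_sl = br_sl * gamma_total     # semileptonic partial width in GeV
```

The quoted value of $|V_{cb}|$ then follows from comparing this partial width with the model prediction for $\Gamma/|V_{cb}|^2$.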
\section*{Exclusive Case}
In the spectator model, one can differentiate between $D$ and $D^*$
mesons according to whether the daughter meson has spin 0 or 1.
That one can do this was overlooked in a recent paper by V.~Barger {\em
et al.} that attempted to differentiate between decay products
in the differential $m_X$ distribution\cite{Kim}.
Mahiko Suzuki\cite{Suz} used this observation to calculate exclusive
rates at zero Fermi momentum. In this frame,
\begin{equation}
H^{\alpha\beta}=M_0^{\alpha}M_0^{\beta}+M_1^{\alpha}M_1^{\beta}
\end{equation}
where
\begin{eqnarray}
M_0^\lambda&\propto&Tr\left[(c\kern-4.5 pt / +
m)\gamma^\lambda
(1-\gamma_5)(b\kern-5. pt / + M)\right]
/[4M\{2m(E_c+m)\}^{1\over2}]
\end{eqnarray}
and
\begin{eqnarray}
M_1^\lambda&\propto&Tr\left[(c\kern-4.5 pt / +
m)\gamma_5\epsilon\kern-5. pt /\gamma^\lambda
(1-\gamma_5)(b\kern-5. pt / + M)
\right]/[4M\{2m(E_c+m)\}^{1\over2}].
\end{eqnarray}
Here $\epsilon_\lambda$ represents the three polarisations
satisfying $\epsilon_\lambda c^\lambda=0$. In the rest frame of $c$,
$\epsilon_\lambda$ is, therefore, given by
\begin{eqnarray}
&\epsilon_\lambda^{(T)}=(0,1,0,0),&(0,0,1,0) \nonumber \\
&\epsilon_\lambda^{(L)}=(0,0,0,1).&
\end{eqnarray}
where $(T)$ and $(L)$ signify transverse and longitudinal polarisations,
respectively.
In the case where the $b$ is not at rest in the $B$ rest frame
$\epsilon$ is defined specifically in the $B$ rest frame.
Now, if instead of considering the light quark as a spectator, we
treat it
as an intermediate decay product in an effective theory involving a
$\overline ubc$ loop, then the relevant traces are
\begin{eqnarray}
&M_0^\lambda\propto Tr\left[(c\kern-4.5 pt / +
m_c)\gamma^\lambda
(1-\gamma_5)(b\kern-5. pt / + m_b)\gamma_5(-u\kern-5.5 pt / +
m_s)\gamma_5
\right]& \\
&M_1^\lambda\propto Tr\left[(c\kern-4.5 pt / +
m_c)\gamma^\lambda
(1-\gamma_5)(b\kern-5. pt / + m_b)\gamma_5(-u\kern-5.5 pt / +
m_s)\epsilon
\kern-4.5 pt /\right]&
\end{eqnarray}
The Suzuki matrix elements are reproduced
as $\vec p$ goes to zero.
Starting with the $B$ meson at rest,
\begin{equation}
\Gamma(B\to D l\overline\nu)={1\over{2m_B(2\pi)^5}}\int {d^3p_D\over
2E_D}{d^3p_l\over
2E_l}
{d^3p_\nu\over 2E_\nu}|S|^2\delta^4(B-D-l-\nu)
\end{equation}
where
\begin{equation}
S= {N^{1\over2}G_FV_{cb}V_BV_D\over2\pi}\int{d^3p_u\over2E_u}|\phi^*(p_u)
\psi(t_u)|^{1\over2}M
\end{equation}
and where $\vec p_u(\vec p_u^\prime)$ and $\vec t_u(\vec t_u^\prime)$
are the up quark momenta in the $B$ and $D$ rest frames, respectively,
$V_B$ and $V_D$ are the vertex constants and $N$
is a normalisation. The wavefunctions $\phi(p_u)$ and $\psi(t_u)$ are
\begin{eqnarray}
\phi(p_u)={1\over\pi^{3\over2}p_f^3}exp\left({-p_u^2\over p_f^2}\right)&
{\rm and}&\psi(t_u)={1\over\pi^{3\over2}t_f^3}exp\left({-t_u^2\over t_f^2}
\right)
\end{eqnarray}
where $p_f$ and $t_f$ are independent adjustable parameters. $t_u$ is
given by
\begin{eqnarray}
\vec t_u^2&=&E_t^2-m_u^2\nonumber\\
&=&[(E_uE_D-\vec p_u\cdot\vec p_D)/m_D]^2-m_u^2
\end{eqnarray}
where $E_t$ is the energy of the up quark in the $D$ rest frame.
The phase space simplifies as follows:
\begin{eqnarray}
&d_3(B\to D l\overline\nu)\propto{d^3p_D\over 2E_D}{d^3p_l\over 2E_l}
{d^3p_\nu\over 2E_\nu}{d^3p_u\over 2E_u}{d^3p_u^\prime\over 2E_u^\prime}
\delta^4(B-D-l-\nu)&\\
&={\pi\over8}{dp_Dp_D^2dp_lp_l^2\over E_DE_lE_uE_u^\prime}dcos\theta_l
d\phi_l\delta(\nu^2)dp_up_u^2dcos\theta_ud\phi_udp_u^\prime p_u^{\prime2}
dcos\theta_u^\prime d\phi_u^\prime.&
\end{eqnarray}
$B-D-l-\nu=0$ and $\nu^2=0$ implies
\begin{eqnarray}
&(B-D-l)^2=0&\\
&\Rightarrow 2p_DE_lcos\theta_l=2E_DE_l-2m_B(E_D+E_l)+(m_D^2+m_B^2)&
\end{eqnarray}
so the phase space becomes
\begin{equation}
d_s(B\to D l\overline\nu)\propto{\pi^2p_D\over8E_D}{dp_DdE_l\over E_u
E_u^\prime}dp_up_u^2dcos\theta_ud\psi_udp_u^\prime p_u^{\prime2}
dcos\theta_u^\prime d\psi_u^\prime.
\end{equation}
If we now insert into this model the parameters given in the previous section
we run into problems: it turns out that we only get agreement with ARGUS
\cite{A3} data for low values of $Q^2$.
This is presumably due to final-state interactions, which in a perturbative
QCD framework are expected to grow as one approaches the end-point of the $Q^2$
distribution.
In the exclusive case, QCD corrections are restricted to those which do
not create additional hadrons, that is quark propagator self-corrections
and vertex corrections. Corrections to the $cD\overline u$ vertex are of
particular interest because they provide a phenomenological explanation
for the discrepancy: the exchange of a gluon between the up and charm quark
can reduce their relative momentum, allowing them to combine to form a
$D$ or $D^*$ meson.
\section*{Conclusion}
This model effectively describes the dependence of
both inclusive and exclusive semileptonic $B$ decays on the Fermi
momentum of the constituent quarks. The parameters that arise naturally
in this model agree with those used in other models.
Given that this model describes both inclusive and exclusive decays,
we can estimate the rate of semileptonic $B$ decays into $D^{**}$
mesons or clusters consisting of $D$'s or $D^{*}$ and $\pi$'s
by subtracting the exclusive rates into $D$ and $D^{*}$ from the
inclusive semileptonic $B$ decays into charmed mesons. Experimentally,
this rate is found to be between 33\% and 41\% of the total semileptonic
rate\cite{who1}\cite{who2}. This model would appear to have
the best chance of accounting for
all possible semileptonic decay products of $B$ mesons.
\bigskip
Figure 1: ${dBr\over dE_l}$ for $m_u$=.13 GeV, $m_c$=1.4 GeV, $m_b$=4.9
GeV, $p_f$=.5 GeV and $V_{ub}/V_{cb}\approx$.07 with data from
ARGUS[11][12] and CLEO[13]
\bigskip
Figure 2: ${dBr\over dQ^2}$ for $m_u$=.13 GeV, $m_c$=1.4 GeV, $m_b$=4.9 GeV,
$p_f$=.5 GeV and $V_{ub}/V_{cb}\approx$.07 with data from ARGUS[17]
\section{Introduction}
\pagestyle{plain}
\setcounter{page}{1}
The existence of the trilinear gauge couplings (TGC) is a direct
consequence of the non-Abelian
$\mbox{SU}(2) \times \mbox{U}(1)$
gauge theory, but these couplings have not yet been studied in detail.
Precise measurements of these couplings make it possible to test the
standard model. Any deviation from the standard model would indicate
new physics. There are $2 \times 7$ coupling parameters in the
effective Lagrangian~\cite{ref:TGC}. Requiring C and P invariance,
together with $ g_{1}^{\gamma} =1$ from electromagnetic gauge invariance,
reduces the number of parameters to 5:
$ \Delta\gZ \equiv g_{1}^{\mathrm{Z}} -1,\ \Delta\kappa_{\gamma} \equiv \kappa_{\gamma} -1,\ \Delta\kappa_{\mathrm{Z}} \equiv \kappa_{\mathrm{Z}} -1,\ \lambda_{\gamma} $
and $ \lambda_{\mathrm{Z}} $ where all these parameters are vanishing in the standard model.
For $ {\mathrm{W}} ^+$ boson, these parameters can be related to
the electromagnetic charge:
$e_ {\mathrm{W}} = e\, g_{1}^{\gamma} $,
and the static moments~\cite{ref:moments} as,
magnetic dipole moment:
$\mu_ {\mathrm{W}} = \frac{e}{2m_ {\mathrm{W}} } ( g_{1}^{\gamma} + \kappa_{\gamma} + \lambda_{\gamma} )$,
and electric quadrupole moment:
$Q_ {\mathrm{W}} = -\frac{e}{m^2_ {\mathrm{W}} } ( \kappa_{\gamma} - \lambda_{\gamma} )$,
and also those associated with the weak boson Z.
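In the standard model ($ g_{1}^{\gamma} = \kappa_{\gamma} = 1$, $ \lambda_{\gamma} = 0$) these expressions reduce to $\mu_ {\mathrm{W}} = e/m_ {\mathrm{W}} $ and $Q_ {\mathrm{W}} = -e/m^2_ {\mathrm{W}} $. The sketch below evaluates the moments in units of $e$, with the W mass in GeV taken as an illustrative input.

```python
# W-boson static moments in units of the electric charge e,
# as functions of the trilinear couplings (m_W in GeV).

M_W = 80.4  # GeV, illustrative W mass

def mu_W(g1, kappa, lam, m_w=M_W):
    """Magnetic dipole moment mu_W / e = (g1 + kappa + lambda) / (2 m_W)."""
    return (g1 + kappa + lam) / (2.0 * m_w)

def Q_W(kappa, lam, m_w=M_W):
    """Electric quadrupole moment Q_W / e = -(kappa - lambda) / m_W^2."""
    return -(kappa - lam) / (m_w * m_w)

mu_sm = mu_W(1.0, 1.0, 0.0)   # standard model point: e / m_W
q_sm = Q_W(1.0, 0.0)          # standard model point: -e / m_W^2
```

An anomalous coupling shifts these moments linearly, which is why precise moment measurements translate directly into limits on $ \Delta\kappa_{\gamma} $ and $ \lambda_{\gamma} $.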
At LEP2, it became possible, for the first time at an $ {\mathrm{e}} ^+ {\mathrm{e}} ^-$ collider,
to perform the direct measurement of TGC. W pair production plays
a principal role in the study of $ {\mathrm{WW}\gamma} $ and $ {\mathrm{WWZ}} $ couplings~\cite{ref:Yellow}.
However, these two couplings cannot be separated from each other.
Single W production,
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \e \nu \W $~\cite{ref:single_W,ref:Kurihara},
or single gamma production,
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \nu \nu \gamma$~\cite{ref:single_gamma},
can be used to test the $ {\mathrm{WW}\gamma} $ coupling.
At hadron colliders, $ {\mathrm{W}} \gamma$ production has been studied to probe
the $ {\mathrm{WW}\gamma} $ vertex~\cite{ref:UA2,ref:CDF,ref:D0}
where the form factor $\Lambda$
is introduced to assure the unitarity~\cite{ref:Baur}. The TGC limits
derived at LEP are insensitive to the form factor scale and power.
To relate the $ {\mathrm{WWZ}} $ and $ {\mathrm{WW}\gamma} $ couplings, SU(2)$\times$U(1)
constraints, $ \Delta\gZ = \Delta\kappa_{\mathrm{Z}} + \Delta\kappa_{\gamma} \tan^2\theta_ {\mathrm{W}} $ and $ \lambda_{\mathrm{Z}} = \lambda_{\gamma} $,
are imposed.
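Under these relations, the $ {\mathrm{WWZ}} $ anomalous couplings are fixed once $ \Delta\gZ $, $ \Delta\kappa_{\gamma} $ and $ \lambda_{\gamma} $ are chosen. The sketch below applies the two constraints, using $\sin^2\theta_ {\mathrm{W}} \simeq 0.2315$ as an illustrative value of the weak mixing angle.

```python
# SU(2) x U(1) constraint relations between WWgamma and WWZ couplings:
#   dg1Z = dkappaZ + dkappa_gamma * tan^2(theta_W),   lambdaZ = lambda_gamma

SIN2_THETA_W = 0.2315  # illustrative weak mixing angle
TAN2_THETA_W = SIN2_THETA_W / (1.0 - SIN2_THETA_W)

def z_couplings(d_g1z, d_kappa_gamma, lam_gamma):
    """Derive (dkappaZ, lambdaZ) from (dg1Z, dkappa_gamma, lambda_gamma)
    under the SU(2) x U(1) relations quoted above."""
    d_kappa_z = d_g1z - d_kappa_gamma * TAN2_THETA_W
    lam_z = lam_gamma
    return d_kappa_z, lam_z

# Standard model point: all anomalous couplings vanish.
dkz_sm, lz_sm = z_couplings(0.0, 0.0, 0.0)
```

This is why LEP fits are usually quoted in terms of the three parameters $ \Delta\gZ $, $ \Delta\kappa_{\gamma} $ and $ \lambda_{\gamma} $ only.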
In the search for supersymmetric particles via chargino pair production
($ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \chi^+ \chi^-$), where charginos decay
predominantly into sneutrinos and leptons,
the analysis is experimentally difficult if the mass difference between the chargino
and the sneutrino is small. This is due to the huge background from the
two-photon process. Therefore the search for single W events in
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow {\mathrm{W}} ^+ {\mathrm{W}} ^-$ process where one W boson decays to
undetected chargino and neutralino and the other W to the standard model
particles has been proposed~\cite{ref:susy}.
A search for this scenario has been performed by ALEPH.
All results presented in this paper are preliminary except L3 results.
\section{$ \e \nu \W $ Production}
\subsection{Sensitivity to TGC\,($ {\mathrm{WW}\gamma} $)}
\begin{figure}[b]
\begin{center}
\vspace{-1cm}
\epsfig{figure=fig1.ps,height=5.0cm}
\vspace{-1.0cm}
\caption{Feynman diagrams for
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow {\mathrm{e}} ^- \overline{\nu}_ {\mathrm{e}} {\mathrm{W}} ^+$.}
\label{fig:Feynman_enw}
\end{center}
\end{figure}
The single W production, $ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \e \nu \W $, is the standard
model process~\cite{ref:single_boson} as shown in
Fig.\ref{fig:Feynman_enw}. The total cross section is
$\sigma_{ \e \nu \W } = 0.6\: {\mathrm{pb}} $ at the centre-of-mass energy of $183\: {\mathrm{GeV}} $
which is much smaller than WW production
$\sigma_{ {\mathrm{WW}} } = 15.7\: {\mathrm{pb}} $.
Contributions from Z boson exchange diagrams are negligible at LEP
energies. Thus this process offers almost pure sensitivity to the
$ {\mathrm{WW}\gamma} $ coupling~\cite{ref:Kurihara}.
The sensitivity for TGC parameters for the four fermion process of
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow {\mathrm{e}} ^{-} \overline{\nu}_ {\mathrm{e}} \mbox{u} \bar{\mbox{d}}$
is shown in Fig.~\ref{fig:enud}. The total cross section has been
calculated with the SU(2)$\times$U(1) constraints. While $ {\mathrm{WW}} $ production cross
section is minimum at $ \Delta\gZ = \Delta\kappa_{\gamma} = \lambda_{\gamma} = 0$, $ \e \nu \W $ production cross
section is minimum at $ \Delta\kappa_{\gamma} = -1$ and $ \lambda_{\gamma} = 0$. It can be seen that
single W production is sensitive to $ \kappa_{\gamma} $, while it has the modest
sensitivity to $ \lambda_{\gamma} $. This should be compared to the $ {\mathrm{W}} \gamma$
production at hadron colliders~\cite{ref:UA2,ref:CDF,ref:D0} which is
sensitive to $ \lambda_{\gamma} $ or to
$\mbox{b} \rightarrow \mbox{s} \gamma$~\cite{ref:CLEO,ref:ALEPH_Penguin}
which is sensitive to $ \kappa_{\gamma} $ in the $ {\mathrm{WW}\gamma} $ vertex.
\begin{figure}
\begin{center}
\vspace{-1.5cm}
\epsfig{file=fig2.ps,width=9cm}
\vspace{-1.1cm}
\caption{The total cross section for
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow {\mathrm{e}} ^{-} \overline{\nu}_ {\mathrm{e}} \mbox{u} \bar{\mbox{d}}$
as functions of 3 coupling parameters. The lower curves
show the cross sections for W pair production alone,
and the upper curves are for all four fermion diagrams.
For one plot, the other two parameters are fixed to the
standard model values.
Closed points indicate the standard model prediction.}
\label{fig:enud}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
a) $ {\mathrm{W}} \rightarrow e \nu_e$
& b) $ {\mathrm{W}} \rightarrow \mu \nu_\mu$ \\
\epsfig{figure=fig3.ps,width=4.0cm}
& \begin{sideways} \begin{sideways} \begin{sideways}
\epsfig{figure=fig4.ps,width=4.0cm}
\end{sideways} \end{sideways} \end{sideways} \\ \\
c) $ {\mathrm{W}} \rightarrow \tau \nu_\tau$
& d) $ {\mathrm{W}} \rightarrow {\mathrm{q}}{\bar{\mathrm{q}}'}$ \\
\epsfig{figure=fig5.ps,width=3.0cm}
& \epsfig{figure=fig6.ps,width=4.0cm} \\
\end{tabular}
\vspace{0.3cm}
\caption{Candidate events for $ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \e \nu \W $
observed by a)~DELPHI, b)~ALEPH, c)~L3 and d)~OPAL
experiments.}
\label{fig:events}
\end{center}
\end{figure}
\subsection{Single W signal}
Event characteristics of the single W production are as follows.
Due to its small momentum transfer, the outgoing electron escapes
in the beam direction. In the analysis, the electron is required
to be untagged. This is important to suppress the
contribution from W pair production. The associated neutrino may
carry a large transverse momentum; thus the signature of single W
production is characterized by large missing momentum.
\noindent
For the leptonic decay channel of the W boson ($ {\mathrm{W}} \rightarrow l \nu$),
an isolated high $P_t$ lepton with energy of about $40\: {\mathrm{GeV}} $
is the signal. The dominant backgrounds are from $ll\nu\nu$ ($ \e \e \Z $)
processes.
\noindent
For the hadronic W decay channel ($ {\mathrm{W}} \rightarrow {\mathrm{q}}{\bar{\mathrm{q}}'}$), the signature is
two acoplanar jets whose invariant mass is equal to the W mass.
The main background is W pair production
($ {\mathrm{W}} \W \rightarrow \tau \nu_\tau {\mathrm{q}}{\bar{\mathrm{q}}'} $). If the $\nu_\tau$ carries away
a large fraction of the energy, the $\tau$ becomes invisible.
It is practically impossible to distinguish this case from
single W production, and such events become an irreducible background.
The definitions of the single W signal are different among LEP
experiments. The ALEPH collaboration, for example, defines the signal
as~\cite{ref:ALEPH}:
\[ \left\{
\begin{array}{ll}
\theta_{\mbox{e}} < 34\ \mbox{mrad}, & \\
{\mathrm{E}} _{\ell} > 20\ \mbox{GeV and } |\cos\theta_{\ell}|<0.95
&\ \mbox{for}\ {\mathrm{W}} \rightarrow l \nu, \\
\M_{\mbox{qq}'} > 60\ \GeV/c^2 &\ \mbox{for}\ {\mathrm{W}} \rightarrow {\mathrm{q}}{\bar{\mathrm{q}}'},
\end{array}
\right. \]
where $\theta_{\mbox{e}}$ is the polar angle of the scattered electron,
$ {\mathrm{E}} _{\ell}$ and $\theta_{\ell}$ are the energy and polar angle
of leptons from the W decay, respectively. $\M_{\mbox{qq}'}$ is the
invariant mass of the quark pair. These cuts on W decay final states
are necessary to remove the non-resonant four fermion backgrounds.
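The signal definition above translates into a straightforward event-level selection. The sketch below is an illustrative encoding of the ALEPH-style cuts on a simplified event record; the field names are assumptions for illustration, not the experiment's actual data format.

```python
# Illustrative encoding of the ALEPH-style single-W signal definition.
THETA_E_MAX = 0.034   # rad, maximum scattered-electron polar angle
E_LEP_MIN = 20.0      # GeV, minimum decay-lepton energy
COS_LEP_MAX = 0.95    # |cos(theta)| cut on the decay lepton
M_QQ_MIN = 60.0       # GeV/c^2, minimum qq' invariant mass

def is_single_w(event):
    """Apply the single-W signal definition to a hypothetical event record:
      theta_e        : polar angle of the scattered electron (rad)
      mode           : "leptonic" or "hadronic" W decay
      e_lep, cos_lep : lepton energy / cos(polar angle) (leptonic mode)
      m_qq           : invariant mass of the quark pair (hadronic mode)
    """
    if event["theta_e"] >= THETA_E_MAX:
        return False  # electron is tagged -> rejected as single-W signal
    if event["mode"] == "leptonic":
        return (event["e_lep"] > E_LEP_MIN
                and abs(event["cos_lep"]) < COS_LEP_MAX)
    return event["m_qq"] > M_QQ_MIN
```

The forward-electron veto is what separates the single-W sample from ordinary W-pair events, while the lepton and mass cuts remove the non-resonant four-fermion background.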
The selected events are displayed in Fig.~\ref{fig:events} for the four
W decay modes.
Monte Carlo generators of GRC4F~\cite{ref:grc4f},
EXCALIBUR~\cite{ref:EXCALIBUR} and DELTGC~\cite{ref:DELTGC} have been
used to simulate the $ \e \nu \W $ production.
\subsection{Total cross section}
The summary of analyzed data and observed number of events is
given in Table~\ref{tab:smtab}. In addition to W decay to electron
or muon, the ALEPH and L3 collaborations have also analyzed the tau decay channel.
ALEPH~\cite{ref:ALEPH} has measured the total cross section of
$ \e \nu \W $ production at $183\, {\mathrm{GeV}} $ as
$\sigma_{ \e \nu \W } = 0.40 \pm 0.17 ( {\mathrm{stat.}} ) \pm 0.04 ( {\mathrm{syst.}} )\: {\mathrm{pb}} $
where the standard model predicts \ $0.41\: {\mathrm{pb}} $.
L3~\cite{ref:L3} has also measured the cross section at
183\,GeV as
$\sigma_{ \e \nu \W } = 0.62\ ^{+0.19}_{-0.18} ( {\mathrm{stat.}} ) \pm 0.04 ( {\mathrm{syst.}} )\: {\mathrm{pb}} $
where $0.50\: {\mathrm{pb}} $ is expected from the standard model.
All these results are consistent with the standard model
expectation.
In Fig.~\ref{fig:TGC_L3}, the cross section as a function of the
centre-of-mass energy is shown as measured by L3 experiment.
\begin{table*}
\begin{center}
\caption{Summary of single W measurement for leptonic and hadronic channels.
$\mbox{N}_{\mbox{obs}}$ is the number of selected data events.
$\mbox{N}_{\mbox{SM}}$ and $\mbox{N}_{ \e \nu \W }$
are the expected number of total events (signal plus backgrounds)
and $ \e \nu \W $ signal events, respectively.}
\label{tab:smtab}
\vspace{0.4cm}
\begin{tabular}{|l||r|r||r|rr||r|rr|} \hline
\raisebox{0pt}[12pt][6pt]{ } &
\raisebox{0pt}[12pt][6pt]{$ {\mathrm{E}} _{\mbox{CMS}}$} &
\raisebox{0pt}[12pt][6pt]{Lumi.} &
\multicolumn{3}{|c||}
{\raisebox{0pt}[12pt][6pt]{$ {\mathrm{W}} \rightarrow l \nu$ }} &
\multicolumn{3}{|c|}
{\raisebox{0pt}[12pt][6pt]{$ {\mathrm{W}} \rightarrow {\mathrm{q}}{\bar{\mathrm{q}}'}$ }} \\
\cline{2-9}
\raisebox{0pt}[12pt][6pt]{ } &
\raisebox{0pt}[12pt][6pt]{(GeV)} &
\raisebox{0pt}[12pt][6pt]{($ {\mathrm{pb}} ^{-1}$)} &
\raisebox{0pt}[12pt][6pt]{$\mbox{N}_{\mbox{obs}}$} &
\raisebox{0pt}[12pt][6pt]{$\mbox{N}_{\mbox{SM}}$} &
\raisebox{0pt}[12pt][6pt]{$(\mbox{N}_{ \e \nu \W })$} &
\raisebox{0pt}[12pt][6pt]{$\mbox{N}_{\mbox{obs}}$} &
\raisebox{0pt}[12pt][6pt]{$\mbox{N}_{\mbox{SM}}$} &
\raisebox{0pt}[12pt][6pt]{$(\mbox{N}_{ \e \nu \W })$} \\
\hline
\raisebox{0pt}[12pt][6pt]{ALEPH~\cite{ref:ALEPH}} &
\raisebox{0pt}[12pt][6pt]{161-183} &
\raisebox{0pt}[12pt][6pt]{78.9} &
\raisebox{0pt}[12pt][6pt]{11} &
\raisebox{0pt}[12pt][6pt]{11.1} &
\raisebox{0pt}[12pt][6pt]{(7.3)} &
\raisebox{0pt}[12pt][6pt]{21} &
\raisebox{0pt}[12pt][6pt]{21.5} &
\raisebox{0pt}[12pt][6pt]{(8.8)} \\
\hline
\raisebox{0pt}[12pt][6pt]{DELPHI~\cite{ref:DELPHI}} &
\raisebox{0pt}[12pt][6pt]{161-183} &
\raisebox{0pt}[12pt][6pt]{73.0} &
\raisebox{0pt}[12pt][6pt]{9} &
\raisebox{0pt}[12pt][6pt]{5.4} &
\raisebox{0pt}[12pt][6pt]{(5.2)} &
\raisebox{0pt}[12pt][6pt]{44} &
\raisebox{0pt}[12pt][6pt]{52.6} &
\raisebox{0pt}[12pt][6pt]{(19.9)} \\
\hline
\raisebox{0pt}[12pt][6pt]{L3~\cite{ref:L3}} &
\raisebox{0pt}[12pt][6pt]{130-183} &
\raisebox{0pt}[12pt][6pt]{88.5} &
\raisebox{0pt}[12pt][6pt]{12} &
\raisebox{0pt}[12pt][6pt]{10.2} &
\raisebox{0pt}[12pt][6pt]{(6.0)} &
\raisebox{0pt}[12pt][6pt]{109} &
\raisebox{0pt}[12pt][6pt]{103.3} &
\raisebox{0pt}[12pt][6pt]{(14.7)} \\
\hline
\raisebox{0pt}[12pt][6pt]{OPAL~\cite{ref:OPAL}} &
\raisebox{0pt}[12pt][6pt]{161-172} &
\raisebox{0pt}[12pt][6pt]{20.3} &
\raisebox{0pt}[12pt][6pt]{2} &
\raisebox{0pt}[12pt][6pt]{2.0} &
\raisebox{0pt}[12pt][6pt]{(0.8)} &
\raisebox{0pt}[12pt][6pt]{4} &
\raisebox{0pt}[12pt][6pt]{2.5} &
\raisebox{0pt}[12pt][6pt]{(1.3)} \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\begin{center}
\vspace{-0.3cm}
\epsfig{file=fig7.ps,width=6.5cm}
\vspace{-0.2cm}
\caption{The measured cross section of $ \e \nu \W $ production
as a function of the centre-of-mass energy by L3.}
\label{fig:TGC_L3}
\end{center}
\end{figure}
\subsection{Limits on TGC}
Since the WW background in the hadronic W decay channel is irreducible
(S/N=1/1 at best) and 2/3 of the W's decay hadronically,
the pure sensitivity of $ \e \nu \W $ production to the $ {\mathrm{WW}\gamma} $ vertex is lost.
This is because W pair production contains both $ {\mathrm{WWZ}} $ and $ {\mathrm{WW}\gamma} $
vertices that cannot be separated.
One is therefore obliged to either
a) fix the irreducible WW background in the hadronic W decay channel to
the standard model (ALEPH, OPAL),
or
b) vary the WW background simultaneously according to the TGC values
assuming SU(2)$\times$U(1) constraints (DELPHI, L3).
The former takes the conservative approach, while the latter benefits from
the information contained in the WW background.
The sensitivity to $ \kappa_{\gamma} $ of single W production, which is superior to
that of WW production, is demonstrated in Fig.~\ref{fig:TGC_DELPHI}.
However, for single W alone there are two minima, at $ \Delta\kappa_{\gamma} = 0$ (the
standard model) and at $-2$, because the total cross section takes the
same value at these two points. This double-minimum structure can be
resolved by combining the result with those from single gamma and/or
WW production.
\begin{figure}
\begin{center}
\vspace{-0.1cm}
\epsfig{file=fig8.ps,width=6cm}
\caption{The log-likelihood functions on $ \Delta\kappa_{\gamma} $ parameter measured
by DELPHI.
The results from W pair production (hadronic, semi-leptonic),
single W and single gamma are shown separately.}
\label{fig:TGC_DELPHI}
\end{center}
\end{figure}
\begin{table}[b]
\begin{center}
\caption{The $95\%\:$C.L. limits on TGC couplings.
Note that L3 gives $95\%\:$C.L. limits with
2 parameter fit, and the other experiments give
1 parameter fit result fixing the rest as
the standard model values.}
\label{tab:tgc}
\begin{tabular}{|l||c|c|} \hline
\raisebox{0pt}[12pt][6pt]{ALEPH} &
\raisebox{0pt}[12pt][6pt]{$-2.6 < \Delta\kappa_{\gamma} < 0.5$} &
\raisebox{0pt}[12pt][6pt]{$-1.6 < \lambda_{\gamma} < 1.6$} \\
\hline
\raisebox{0pt}[12pt][6pt]{DELPHI} &
\raisebox{0pt}[12pt][6pt]{$-0.4 < \Delta\kappa_{\gamma} < 0.9$} &
\raisebox{0pt}[12pt][6pt]{$-1.5 < \lambda_{\gamma} < 1.5$} \\
\hline
\raisebox{0pt}[12pt][6pt]{L3} &
\raisebox{0pt}[12pt][6pt]{$-0.46 < \Delta\kappa_{\gamma} < 0.57$} &
\raisebox{0pt}[12pt][6pt]{$-0.86 < \lambda_{\gamma} < 0.75$} \\
\hline
\raisebox{0pt}[12pt][6pt]{OPAL} &
\raisebox{0pt}[12pt][6pt]{$-3.6 < \Delta\kappa_{\gamma} < 1.6$} &
\raisebox{0pt}[12pt][6pt]{$-3.1 < \lambda_{\gamma} < 3.1$} \\
\hline
\end{tabular}
\end{center}
\end{table}
In Table~\ref{tab:tgc}, the limits on the TGC parameters are summarized.
The event yields have been analyzed with a Bayesian approach (ALEPH) or by
maximum likelihood fits to the event rate (OPAL) and to kinematical
distributions (DELPHI, L3).
One should note that the results on $ \lambda_{\gamma} $ obtained by the DELPHI and L3
experiments benefit from the information contained in W pair production.
The intrinsic sensitivity of single W production alone at
the current LEP statistics is $| \Delta\kappa_{\gamma} | < 0.5$ ($ \lambda_{\gamma} = 0$) and
$| \lambda_{\gamma} | < 1.6$ ($ \Delta\kappa_{\gamma} = 0$) at $95\%\:$C.L.,
provided that the double-minimum structure for $ \Delta\kappa_{\gamma} $ is resolved.
\section{$ \nu \bar{\nu} \gamma $ Production}
Amongst various physics opportunities, such as counting the number of
light neutrino species, the process $ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \nu \bar{\nu} \gamma $ is
also sensitive to the $ {\mathrm{WW}\gamma} $ coupling~\cite{ref:single_gamma}.
There are three types of diagrams which contribute to the $ \nu \bar{\nu} \gamma $ final
state as shown in Fig.~\ref{fig:Feynman_nng}.
The first diagram is the radiative return to the Z by emission of a hard
photon, the second is t-channel W boson exchange, and the last one
is the W boson fusion type which contains a $ {\mathrm{WW}\gamma} $ vertex.
The photon in the radiative return process has an energy spectrum peaked at
$\mbox{x}_\gamma = {\mathrm{E}} _\gamma/ {\mathrm{E}} _{\mbox{beam}} = 0.74\ \mbox{at}\ 183\: {\mathrm{GeV}} $.
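The position of this peak follows from two-body kinematics: in $ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow {\mathrm{Z}}\gamma$ the photon energy is $ {\mathrm{E}} _\gamma=(s-M_{\mathrm{Z}}^2)/(2\sqrt{s})$, so $\mbox{x}_\gamma = 1 - M_{\mathrm{Z}}^2/s$. The short check below is an illustration added here, not taken from the analyses; the Z mass value is an assumed PDG input.

```python
m_z = 91.19      # Z boson mass in GeV (PDG value, an assumption here)
sqrt_s = 183.0   # centre-of-mass energy in GeV
# Two-body kinematics of e+e- -> Z gamma: E_gamma = (s - m_Z^2)/(2 sqrt(s)),
# hence x_gamma = E_gamma / E_beam = 1 - m_Z^2 / s.
x_gamma = 1.0 - (m_z / sqrt_s) ** 2
print(f"x_gamma = {x_gamma:.3f}")
```

The tree-level value is about 0.75, close to the quoted peak position of 0.74.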
Monte Carlo programs based on KORALZ~\cite{ref:KoralZ} and
DELTGC~\cite{ref:DELTGC} are used.
\begin{figure}[h]
\begin{center}
\vspace{-0.5cm}
\epsfig{file=fig9.ps,width=9cm,height=4cm} \\
\vspace{-2.85cm} \hspace{4.7cm} {\tiny\bf $W$} \\
\vspace{0.5cm} \hspace{4.7cm} {\tiny\bf $W$} \\
\vspace{1.0cm}
\caption{Feynman diagrams for
$ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow \nu_e \bar{\nu}_e \gamma$.}
\label{fig:Feynman_nng}
\end{center}
\end{figure}
Isolated photons have been searched for in the analyses. The data are
found to be in good agreement with the standard model expectation.
\noindent
When extracting the coupling parameters with the maximum likelihood method,
the total yield of observed events and the energy and angular distributions
are used, as shown in Fig.~\ref{fig:NNG_ALEPH}. The photon energy
region of $\mbox{x}_\gamma \in [0.67,0.76]$ is not used in ALEPH's analysis.
ALEPH~\cite{ref:ALEPH_gam} has obtained the fitted results of
$ \Delta\kappa_{\gamma} = 0.05^{+1.2}_{-1.1} \pm 0.3\ ( \lambda_{\gamma} = 0)$ and
$ \lambda_{\gamma} = -0.05^{+1.6}_{-1.5} \pm 0.3\ ( \Delta\kappa_{\gamma} = 0)$,
where the first error is statistical and the second is systematic.
\noindent
DELPHI~\cite{ref:DELPHI} performs the binned likelihood fit to the
whole photon energy spectrum, and gets
$ \Delta\kappa_{\gamma} = 0.00^{+1.01}_{-1.01} \pm 0.36\ ( \lambda_{\gamma} = 0)$ and
$ \lambda_{\gamma} = 0.72^{+1.12}_{-1.12} \pm 0.36\ ( \Delta\kappa_{\gamma} = 0)$.
Both results are consistent with the coupling parameters equal to zero.
The sensitivity of $ \nu \bar{\nu} \gamma $ to the TGC parameters is $2 \sim 3$ times weaker
than that of $ \e \nu \W $, but it nevertheless contributes to resolving the
double-minimum structure for $ \Delta\kappa_{\gamma} $ in $ \e \nu \W $ production discussed above.
\begin{figure}
\begin{center}
\vspace{-1.2cm}
\epsfig{figure=fig10.ps,width=9cm}
\vspace{-0.7cm}
\caption{ALEPH measurements on
a)~photon energy (normalized to the beam energy)
and b)~angular distribution of photons for $ \nu \bar{\nu} \gamma $ production
at $183\, {\mathrm{GeV}} $.}
\label{fig:NNG_ALEPH}
\end{center}
\end{figure}
\section{Invisible W Decay}
The ALEPH collaboration~\cite{ref:ALEPH}
has performed the search for the invisible W decay
in $ {\mathrm{e}} ^+ {\mathrm{e}} ^- \rightarrow {\mathrm{W}} ^+ {\mathrm{W}} ^-$. The mixed supersymmetric/standard
model decay has been studied. One W boson decays to chargino and
neutralino, followed by the chargino decay to sneutrino and lepton.
The other W boson decays to the standard model particles.
The whole decay cascade can be illustrated as,
\[ {\mathrm{e}} ^+\ {\mathrm{e}} ^- \rightarrow
\begin{array}[t]{ll}
{\mathrm{W}} & {\mathrm{W}} \\
\hookrightarrow \mbox{\large SM} & \hookrightarrow
\begin{array}[t]{rl}
\chi^\pm \ \chi & \\
\hookrightarrow \ell &
\begin{array}[t]{ll}
\widetilde{\nu} & \\
\hookrightarrow \nu\ \chi. &
\end{array}
\end{array}
\end{array} \]
The supersymmetric decay of the W boson becomes practically invisible if
the mass difference between the chargino and the sneutrino
($ \Delta{\mathrm{M}} \equiv m_{\chi^\pm} - m_{\widetilde{\nu}}$) is less than about
$3\, \GeV/c^2 $. However, this process can still be tagged by the decay of
the other $ {\mathrm{W}} $ to standard model particles. Three event topologies of
the final state have been studied: a single lepton ($e/\mu$), an acoplanar
lepton pair (one of them soft), and hadrons (with missing mass equal to the
$ {\mathrm{W}} $ mass).
No excess of the signal has been observed and the
results are consistent with the standard model expectation.
The limits at 95\% C.L. on the W boson supersymmetric branching ratio
have been obtained as:
\[ \begin{array}{llcl}
{\cal B}_{susy} & ( \Delta{\mathrm{M}} \approx 0\, \GeV/c^2 ) & < & 1.3\,\%,\\[5pt]
{\cal B}_{susy} & ( \Delta{\mathrm{M}} = 3\, \GeV/c^2 ) & < & 1.9\,\%,\\
\end{array} \]
\noindent
assuming ${\cal B}(\chi^\pm\rightarrow\ell\tilde{\nu})=100\%$ and
$m_{\chi^{\pm}} = 45\, \GeV/c^2 $.
The degenerate case ($ \Delta{\mathrm{M}} \approx 0\, \GeV/c^2 $) gives a
quasi-model-independent limit on the invisible W decay width via
direct search. The result translates into
$\Gamma( {\mathrm{W}} \rightarrow \mbox{inv}) < 27\, {\mathrm{MeV}} $ at 95\% C.L.
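This number can be understood as the branching-ratio limit multiplied by the total W width; taking $\Gamma_{\mathrm{W}} \approx 2.085$\,GeV (the PDG value, an assumption introduced here for illustration) reproduces the quoted limit:

```python
gamma_w = 2.085   # total W width in GeV (PDG value, assumed here)
br_limit = 0.013  # 95% C.L. limit on the supersymmetric branching ratio at Delta M ~ 0
gamma_inv_limit = br_limit * gamma_w * 1000.0  # convert GeV -> MeV
print(f"Gamma(W -> inv) < {gamma_inv_limit:.0f} MeV")  # ~ 27 MeV
```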
\section{Conclusion}
Single W production has been studied at LEP. The production cross
section is consistent with the standard model expectation.
It has been shown that $ \e \nu \W $ production is sensitive to the $ {\mathrm{WW}\gamma} $
coupling, in particular to $ \kappa_{\gamma} $. However, the irreducible WW
background in the hadronic decay channel of $ \e \nu \W $ does not allow a
clean separation of the $ {\mathrm{WW}\gamma} $ and $ {\mathrm{WWZ}} $ couplings.
Single gamma production has also been studied. No deviation from
the standard model is found.
A search for invisible W decays has been performed by ALEPH,
and a stringent limit on the invisible W decay width of $27\: {\mathrm{MeV}} $
has been obtained at $95\,\%\:$C.L.
The current status and the future perspective on the $ {\mathrm{WW}\gamma} $ coupling
measurement are summarized in Table~\ref{tab:future}.
One sees that $ {\mathrm{W}} \gamma$ production at the Tevatron and
$ \e \nu \W $ production at LEP provide complementary information on the TGC.
It is anticipated that $ \e \nu \W $ production at LEP will reach a sensitivity of
$| \Delta\kappa_{\gamma} | \sim 0.1$ with $500\: {\mathrm{pb}} ^{-1}$ of data at higher energies.
In the future, one may combine the leptonic decay channel of $ \e \nu \W $ with
$ \nu \bar{\nu} \gamma $ production, which alone are purely sensitive to the $ {\mathrm{WW}\gamma} $
coupling. It is expected that the use of kinematical information and spin
analysis will further improve the limits.
\begin{table}[h]
\begin{center}
\vspace{-0.3cm}
\caption{The current and future TGC limits at $95\%\:$C.L.
per single experiment.}
\label{tab:future}
\vspace{-0.1cm}
\begin{tabular}{|l||l|r||l|l|} \hline
\raisebox{0pt}[12pt][6pt]{Tevatron~\cite{ref:D0}} &
\raisebox{0pt}[12pt][6pt]{$ {\mathrm{W}} \gamma$} &
\raisebox{0pt}[12pt][6pt]{$93\: {\mathrm{pb}} ^{-1}$} &
\raisebox{0pt}[12pt][6pt]{$\left| \Delta\kappa_{\gamma} \right| < 0.9$} &
\raisebox{0pt}[12pt][6pt]{$\left| \lambda_{\gamma} \right| < 0.3$} \\
\hline
\raisebox{0pt}[12pt][6pt]{LEP} &
\raisebox{0pt}[12pt][6pt]{$ \e \nu \W $} &
\raisebox{0pt}[12pt][6pt]{$80\: {\mathrm{pb}} ^{-1}$} &
\raisebox{0pt}[12pt][6pt]{$\left| \Delta\kappa_{\gamma} \right| < 0.5$} &
\raisebox{0pt}[12pt][6pt]{$\left| \lambda_{\gamma} \right| < 1.6$} \\
\hline \hline
\raisebox{0pt}[12pt][6pt]{LEP2000} &
\raisebox{0pt}[12pt][6pt]{$ \e \nu \W $} &
\raisebox{0pt}[12pt][6pt]{$500\: {\mathrm{pb}} ^{-1}$} &
\raisebox{0pt}[12pt][6pt]{$\left| \Delta\kappa_{\gamma} \right| < 0.1$} &
\raisebox{0pt}[12pt][6pt]{$\left| \lambda_{\gamma} \right| < 0.6$} \\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{-0.5cm}
\section*{Acknowledgements}
The author would like to thank following single W'ers and single
$\gamma$'ers at LEP for discussions and correspondences,
J.~Boucrot, J.B.~Hansen, A.~Jacholkowska and D.~Zerwas (ALEPH),
C.~Matteuzzi, R.L.~Sekulin and O.~Yushchenko (DELPHI),
P.~de Jong and A.~Kounine (L3),
G.~Bella and M.~Verzocchi (OPAL).
\vspace{-0.1cm}
\section*{References}
\section{Introduction}
The motivation for positive operator valued measures (POVMs) comes from quantum information
theory. The outcome statistics of a quantum measurement are described by (one or more) POVMs.
A sequence of measurements on copies of a system in an unknown state reveals the state.
This process is called quantum state tomography \cite{Paris}.
A POVM is a set $\{E_i\,:\, 1 \le i \le k\}$ of positive operators such that
$\sum_i E_i=I$. A quantum density matrix $\rho$ can be reconstructed from the probability
distribution $\{\mathrm{Tr}\, \rho E_i\,:\, 1 \le i \le k\}$. A density $\rho \in M_n(\bbbc)$ has $n^2-1$
real parameters, so $k\ge n^2$ must hold for the POVM to determine all of them. We can take
projections $P_i$, $1 \le i \le n^2$, such that
$$
\sum_{i=1}^{n^2}P_i=nI, \qquad \mathrm{Tr}\, P_iP_j=\frac{1}{n+1} \quad (i\ne j), \qquad E_i=\frac{1}{n}P_i
$$
and such a POVM is called a symmetric informationally complete POVM (SIC POVM),
a notion introduced by Zauner \cite{zauner} which is rather popular now \cite{App, App2, Col, Klapp, renes, zhu}.
Zauner showed the existence for $n \le 5$, and there have been further mathematical and
numerical arguments \cite{Fl, scott10}. The existence of a SIC POVM is not known for every
dimension. Another terminology for this object is a tight equiangular frame. We may also
consider fewer than $n^2$ projections with similar properties.
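For $n=2$ the SIC POVM is known explicitly: the four projections correspond to Bloch vectors forming a regular tetrahedron. The small pure-Python check below is an illustration added here, not part of the original text; it verifies the defining relations $\sum_i P_i = nI$ and $\mathrm{Tr}\, P_iP_j = 1/(n+1)$.

```python
import math

# 2x2 complex matrices as nested lists; Pauli matrices and the identity
I2 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def lin(*terms):
    """Linear combination sum_k c_k A_k of 2x2 matrices, terms = (c_k, A_k)."""
    return [[sum(c * A[i][j] for c, A in terms) for j in range(2)] for i in range(2)]

def trace_prod(A, B):
    """Tr(A B) for 2x2 matrices."""
    return sum(A[i][k] * B[k][i] for i in range(2) for k in range(2))

# Tetrahedral Bloch vectors; P = (I + v . sigma)/2 are rank-one projections
s = 1 / math.sqrt(3)
vecs = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
P = [lin((0.5, I2), (0.5 * x, SX), (0.5 * y, SY), (0.5 * z, SZ)) for x, y, z in vecs]

# sum_i P_i = n I with n = 2, since the four Bloch vectors sum to zero
total = lin(*[(1, p) for p in P])
assert all(abs(total[i][j] - 2 * I2[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Tr P_i P_j = (1 + v_i . v_j)/2 = 1/3 = 1/(n+1) for i != j
for a in range(4):
    for b in range(4):
        if a != b:
            assert abs(trace_prod(P[a], P[b]) - 1 / 3) < 1e-12
```

The off-diagonal overlaps all equal $1/3 = 1/(n+1)$, since the tetrahedral Bloch vectors satisfy $v_a\cdot v_b = -1/3$ for $a \ne b$.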
A SIC POVM $\{E_i\,:\, 1 \le i \le n^2\}$ of an $n$-level system is optimal for several
purposes. For example, the SIC POVM was optimal in our paper \cite{PR}, where the
minimization of the determinant of the average covariance matrix was studied. Actually,
this kind of optimization is rather complicated, and a different criterion was used in
\cite{scott}: minimization of the squared Hilbert-Schmidt distance between the estimate
and the true density. In the present paper the minimization of the squared
Hilbert-Schmidt distance will be used.
In this paper the subject is state estimation again, but a part of the $n^2-1$ parameters
is supposed to be known and we want to estimate only the unknown parameters. A POVM
$\{E_i\,:\, 1 \le i \le k\}$ with $k<n^2$ then suffices when $n^2-k$ parameters are known.
The optimal POVM obviously depends on the known parameters, and we use the term
conditional SIC POVM. This seems to be a new subject; the existence of such conditional
SIC POVMs can be a fundamental question in various quantum tomography problems. The
description of the conditional SIC POVM is the main result of Section 1; its existence,
however, is not at all clear. The formalism is in a finite dimensional Hilbert space ${\cal H}$,
and a state means a density matrix in $B({\cal H})$. The known parameters determine a traceless
part $B \subset B({\cal H})$, and the operators of the conditional SIC POVM are orthogonal to
$B$. In Section 2 a particular situation is studied: we assume that the diagonal entries
of the state are given. A mathematical subject from projective geometry, planar
difference sets, is used there.
\section{The optimality of conditional SIC-POVMs}
We examine the case of $M_n(\bbbc)$. Let us suppose that $\sigma_i$ is an orthonormal basis of
self-adjoint matrices, i.e.
$$
\sigma_i=\sigma_i^*,\quad \<\sigma_i, \sigma_j\>=\delta_{i,j}, \quad i,j \in \{0,1,2,\dots n^2-1\}.
$$
We fix $\sigma_0=\frac{1}{\sqrt{n}}I_n$. (The elements of this basis are often called
generalized Pauli matrices.)
A quantum state $\rho$ satisfies the conditions $\mathrm{Tr}\, \rho=1$ and $\rho\ge 0$. It can
be written in the form
$$
\rho=\sum_{i=0}^{n^2-1} \theta_i \sigma_i,
$$
where $\theta_0=\frac{1}{\sqrt{n}}$.
A necessary condition for the coefficients can be obtained:
\begin{equation}\label{square}
\sum_{i=0}^{n^2-1} \theta_i^2=\mathrm{Tr}\, \rho^2 \le 1.
\end{equation}
We decompose $M_n(\bbbc)$ to three orthogonal subspaces:
\begin{equation}\label{decomp}
M_n(\bbbc)=A \oplus B \oplus C,
\end{equation}
where $A:=\{ \lambda I_n: \lambda \in {\mathbb C}\}$ is one dimensional. Denote
the orthogonal projections to the subspaces $A,B,C$ by $\mathbf{A},\mathbf{B},\mathbf{C}$.
A density matrix $\rho \in M_n(\bbbc)$ has the form
$$
\rho=\frac{I_n}{n}+\mathbf{B} \rho+\mathbf{C} \rho.
$$
Assume that $\mathbf{B} \rho$ is the known traceless part of $\rho$ and $\mathbf{C} \rho$ is the
unknown traceless part of $\rho$. We use the notation $\rho_*=\rho -\mathbf{B} \rho$. The aim
of the state estimation is to cover $\rho_*$. If the dimension of $B$ is $m$, then the
dimension of $C$ is $n^2-m-1$.
For the state estimation we have to use a POVM with at least $N=n^2-m$ elements. To get
a unique solution we will use POVM with exactly $N$ elements: $\{F_1,F_2,\dots,F_N\}$.
For obtaining optimal POVM, we will use similar arguments to \cite{qubits} which was
a straightforward extension of the idea appeared in \cite{scott}.
If $\{Q_i\,:\, 1\le i \le N\}$ are self-adjoint matrices satisfying the following equation
$$
\rho_*=\frac{1}{n} I+ \sum_{\sigma_i \in C} \theta_i \sigma_i=\sum_{i=1}^N p_i Q_i, \qquad
p_i=\mathrm{Tr}\, \rho F_i,
$$
then $\{Q_i\,:\, 1\le i \le N\}$ is a {\bf dual frame} of $\{F_i\,:\, 1\le i \le N\}$. Then
the state reconstruction formula can be written as
$$
\hat{\rho_*}=\sum_{i=1}^N \hat p_i Q_i.
$$
We define the distance as
$$
\|\rho_*-\hat{\rho_*}\|^2_2 = \mathrm{Tr}\, (\rho_*-\hat{\rho_*})^2
= \sum_{i,j=1}^N\big(p(i)-\hat{p}(i)\big)\big(p(j)-\hat{p}(j)\big)\< Q_i,Q_j\>
$$
and its expectation value is
\begin{eqnarray*}
&& \sum_{i,j=1}^N \big(p(i)\delta(i,j)-p(i)p(j)\big)\<Q_i, Q_j\> \\ && \qquad
= \sum_{i=1}^N p(i)\<Q_i, Q_i\> - \<\sum_{i=1}^N p(i)Q_i, \sum_{j=1}^N p(j) Q_j\>
\\ && \qquad =
\sum_{i=1}^N p(i)\<Q_i, Q_i\> -\mathrm{Tr}\, (\rho_*)^2.
\end{eqnarray*}
We concentrate on the first term which is
\begin{equation}\label{intdo}
\sum_{i=1}^N(\mathrm{Tr}\, F_i \rho ) \<Q_i, Q_i\>
\end{equation}
and we take the integral with respect to the Haar measure on the unitaries $\mathrm{U}(n)$.
Note first that
$$
\int_{\mathrm{U}(n)}U P U^*\,\, d\mu (U)
$$
is the same operator $c$ for any rank-one projection $P$. If $\sum_{i=1}^n P_i=I_n$, then
$$
n c=\sum_{i=1}^n \int_{\mathrm{U}(n)}U P_i U^*\,\, d\mu (U)= I_n
$$
and we have $c=I_n/n$. Therefore for $A=\sum_{i=1}^n \lambda_i P_i$ we have
$$
\int_{\mathrm{U}(n)}U A U^*\,\, d\mu (U)= \sum_{i=1}^n \lambda_i c=\frac{I_n}{n}\mathrm{Tr}\, A
$$
and application to the integral of (\ref{intdo}) gives
$$
\int \mathrm{Tr}\, F_i (U\rho U^*)\, d\mu (U)=\frac{1}{n}\mathrm{Tr}\, F_i.
$$
So we get the following quantity for the error of the state estimation:
$$
T:=\int E\left( \|U\rho_* U^*-U\hat {\rho}_* U^*\|^2_2\right) d\mu (U)=
\frac{1}{n}\sum_{i=1}^N (\mathrm{Tr}\, F_i) \<Q_i, Q_i\>-\mathrm{Tr}\, (\rho_*)^2.
$$
This is to be minimized. Since the second part is constant, our task is to minimize
the first part:
\begin{equation}\label{minprob}
\sum_{i=1}^N (\mathrm{Tr}\, F_i) \<Q_i, Q_i\>
\end{equation}
We define the superoperator:
$$
\mathbf{F} =\sum_{i=1}^N |F_i\>\<F_i| (\mathrm{Tr}\, F_i)^{-1}.
$$
It has rank $N$, so if $N<n^2$ the inverse of $\mathbf{F}$ does not exist, but we can
use its pseudo-inverse $\mathbf{F}^-$, for which $\mathbf{F}^- |\sigma_i\>=0$ if $\sigma_i \in B$.
The canonical dual frame $R_i$ of $F_i$ is defined by
$$
|R_i\>=\mathbf{F}^- |P_i\>,
$$
where $P_i=(\mathrm{Tr}\, F_i)^{-1} F_i$.
\begin{lemma}
For a fixed $F_i$, (\ref{minprob}) is minimal if $Q_i=R_i$, i.e. if we use the canonical
dual frame.
\end{lemma}
\noindent{\it Proof.}
Let us use the notation $W_i=Q_i-R_i$. Then
\begin{eqnarray}
\sum_{i=1}^N \mathrm{Tr}\, F_i | R_i\>\<W_i| &= &
\sum_{i=1}^N \mathrm{Tr}\, F_i | R_i\>\<Q_i|-\sum_{i=1}^N \mathrm{Tr}\, F_i | R_i\>\<R_i| \cr &=&
\sum_{i=1}^N \mathrm{Tr}\, F_i \mathbf{F}^- |P_i\>\<Q_i|-\sum_{i=1}^N \mathrm{Tr}\, F_i \mathbf{F}^- |P_i\>\<P_i| \mathbf{F}^-
\cr & = &\mathbf{F}^- \sum_{i=1}^N \mathrm{Tr}\, F_i |P_i\>\<Q_i|-\mathbf{F}^- \bigg(\sum_{i=1}^N \mathrm{Tr}\, F_i |P_i\>
\<P_i| \bigg)\mathbf{F}^- \cr &
=& \mathbf{F}^- {\bf \Pi}-\mathbf{F}^- \mathbf{F} \mathbf{F}^- = \mathbf{F}^- {\bf \Pi} - \mathbf{F}^- {\bf \Pi}=0, \label{eR}
\end{eqnarray}
where ${\bf \Pi}=\mathbf{A}+\mathbf{C}$, and we use that from
$$
|\rho_*\>=\sum_{i=1}^N \<F_i |\rho\> |Q_i\>
$$
follows
$$
{\bf \Pi}=\sum_{i=1}^N |Q_i\> \<F_i|.
$$
So we have
\begin{eqnarray*}
\sum_{i=1}^N \mathrm{Tr}\, F_i \<Q_i, Q_i\>
&=&\sum_{i=1}^N \mathrm{Tr}\, F_i \<W_i, W_i\>+\sum_{i=1}^N \mathrm{Tr}\, F_i \<W_i, R_i\>\cr
&&
+
\sum_{i=1}^N \mathrm{Tr}\, F_i \<R_i, W_i\>+\sum_{i=1}^N \mathrm{Tr}\, F_i \<R_i, R_i\> \cr &
=&\sum_{i=1}^N \mathrm{Tr}\, F_i \<W_i, W_i\>+\sum_{i=1}^N \mathrm{Tr}\, F_i \<R_i, R_i\>
\cr &\ge& \sum_{i=1}^N \mathrm{Tr}\, F_i \<R_i, R_i\>.
\end{eqnarray*}
{\hfill $\square$}\medskip
We know the optimal dual frame for a fixed POVM $F_i$, and the following lemma provides a
property for the optimal POVM:
\begin{lemma}
The quantity in (\ref{minprob}) is minimal if
$$
\mathbf{F}=\mathbf{A} + \frac{n-1}{N-1}\mathbf{C}.
$$
\end{lemma}
\noindent{\it Proof.}
From (\ref{eR}) we have
$$
\sum_{i=1}^N (\mathrm{Tr}\, F_i) |R_i\>\<R_i|=\mathbf{F}^- {\bf \Pi}=\mathbf{F}^-,
$$
so we have the equation:
$$
\sum_{i=1}^N (\mathrm{Tr}\, F_i) \<R_i, R_i\>=\mathrm{Tr}\, (\mathbf{F}^-).
$$
Let $\nu_1\ge \nu_2 \ge \dots \ge \nu_{n^2}$ be the eigenvalues of $\mathbf{F}$. Since the rank of
$\mathbf{F}$ is $N$, we have $\nu_i=0$ for $i>N$. We want to minimize
$$
\mathrm{Tr}\, (\mathbf{F}^-)=\sum_{i=1}^N \frac{1}{\nu_i}.
$$
It is easy to check that $|I\>$ is an eigenvector of $\mathbf{F}$ with eigenvalue $\nu_1=1$:
$$
\mathbf{F} |I\>=\sum_{i=1}^N (\mathrm{Tr}\, F_i) |P_i\> \<P_i,I \>=\sum_{i=1}^N (\mathrm{Tr}\, F_i)
|P_i\>=\sum_{i=1}^N |F_i\>=|I\>
$$
and we have the following condition:
$$
\sum_{i=1}^N \nu_i=\mathrm{Tr}\, \mathbf{F} =\sum_{i=1}^N \<P_i,P_i\> \mathrm{Tr}\, F_i \le
\sum_{i=1}^N \mathrm{Tr}\, F_i=\mathrm{Tr}\, I= n.
$$
Combining these conditions we get that the measurement is optimal if $\nu_2=\nu_3=\dots
=\nu_N= \frac{n-1}{N-1}$.
{\hfill $\square$}\medskip
Now we can obtain that the optimal POVM is a conditional SIC-POVM:
\begin{thm}\label{T:cond}
If
\begin{equation}\label{E:As}
\mathbf{F}=\mathbf{A} + \frac{n-1}{N-1}\mathbf{C},
\end{equation}
then
$$
\sum_{i=1}^N P_i=\frac{N}{n}I,\qquad \mathrm{Tr}\, P_iP_j= \frac{N-n}{n(N-1)} \quad (i \ne j), \qquad
\mathrm{Tr}\, \sigma_k P_i=0 \quad(\sigma_k \in B).
$$
\end{thm}
\noindent{\it Proof.}
Let us use the notation $\lambda_i=\mathrm{Tr}\, F_i$; then (\ref{E:As}) takes the form:
$$
\sum_{i=1}^{N}\lambda_i |P_i\>\<P_i|=\mathbf{A} + \frac{n-1}{N-1}\mathbf{C}.
$$
Then, for any self-adjoint matrix $Q$, we have the following equation:
\begin{equation}\label{E:As7b}
\sum_{i=1}^{N}\lambda_i \<Q|P_i\>\<P_i|Q\>=\<Q|\mathbf{A} +\frac{n-1}{N-1}\mathbf{C} |Q\>
\end{equation}
which we apply with $Q := P_k - d\cdot I$, where $d$ is an arbitrary real parameter.
From $\<P_i|Q\>=\mathrm{Tr}\, P_i P_k- d$ the left hand side of (\ref{E:As7b}) becomes
$$
\sum_{i=1}^{N}\lambda_i \<Q|P_i\>\<P_i|Q\>=\lambda_k (1-d)^2
+\sum_{i\ne k}\lambda_i(\mathrm{Tr}\, P_i P_k- d)^2.
$$
We can compute the right hand side as well:
$$
\mathbf{A} (P_k - d I)=\mathbf{A} P_k - d I=\mathbf{A} (P_k-I/n)+I/n-d I=I(1/n-d),
$$
$$
\<Q|\mathbf{A} |Q\>=(1/n-d) \mathrm{Tr}\, (P_k - d I)=n (1/n-d)^2
$$
When $P_k=\sum_{i=0}^{n^2-1} c_i \sigma_i$, then
$$
\mathbf{C} |Q\>=\sum_{\sigma_i \in C} c_i \sigma_i,
\qquad
\<Q|\mathbf{C} |Q\>=\sum_{\sigma_i \in C} c_i^2.
$$
So (\ref{E:As7b}) becomes
\begin{equation}\label{cond1}
\lambda_k (1-d)^2
+\sum_{i\ne k}\lambda_i(\mathrm{Tr}\, P_i P_k- d)^2
=n (1/n-d)^2 +\frac{n-1}{N-1}\sum_{\sigma_i \in C} c_i^2.
\end{equation}
From (\ref{square}) we have
\begin{equation}\label{cond2}
\sum_{\sigma_i \in C} c_i^2 \le 1-c_0^2=1-1/n.
\end{equation}
This implies
$$
\lambda_k (1-d)^2
\le n (1/n-d)^2 +\frac{n-1}{N-1}(1-1/n),
$$
which is true for every value of $d$, so
$$
\lambda_k
\le \min_{d} \frac{n (1/n-d)^2 +\frac{n-1}{N-1}(1-1/n)}{(1-d)^2}
$$
By differentiating with respect to $d$ one obtains that the right hand side is minimal at
$$
d=\frac{N-n}{n (N-1)}
$$
and then we get
$$
\lambda_k \le \frac{n}{N}.
$$
Since $\sum_{k=1}^N \lambda_k=n$, we have $\lambda_1=\lambda_2=\dots
=\lambda_N=n/N$.
It follows that there is equality in (\ref{cond2}) as well, so we have
$$
\sum_{\sigma_i \in C} c_i^2 = 1-c_0^2 \quad \Rightarrow \quad c_i=0 \textrm{, if }
\sigma_i \in B \quad\Rightarrow\quad
\mathrm{Tr}\, \sigma_i P_k=0 \textrm{, if } \sigma_i \in B.
$$
On the other hand from (\ref{cond1}) we have
$$
\sum_{i\ne k}\frac{n}{N}\bigg(\mathrm{Tr}\, P_i P_k- \frac{N-n}{n (N-1)}\bigg)^2=0.
$$
So it implies
$$
\mathrm{Tr}\, P_i P_k=\frac{N-n}{n (N-1)} \quad \mbox{if}\quad i\ne k.
$$
{\hfill $\square$}\medskip
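The minimization step in the proof can be checked numerically for a concrete pair $(n,N)$. The sketch below is an illustration added here, with $n=4$ and $N=n^2-n+1=13$ chosen as an example; it confirms that the minimizer is $d=(N-n)/(n(N-1))$ and that the minimum value equals $n/N$.

```python
n, N = 4, 13   # example values; N = n^2 - n + 1

def g(d):
    # right hand side of the bound on lambda_k, as a function of d
    return (n * (1 / n - d) ** 2 + (n - 1) / (N - 1) * (1 - 1 / n)) / (1 - d) ** 2

d_star = (N - n) / (n * (N - 1))
grid = [i / 10000 for i in range(-5000, 9000)]  # scan d in [-0.5, 0.9)
d_min = min(grid, key=g)

assert abs(d_min - d_star) < 1e-3      # minimizer agrees with (N-n)/(n(N-1))
assert abs(g(d_star) - n / N) < 1e-12  # minimum value equals n/N
print(d_star, g(d_star))               # 0.1875 and 4/13 for n = 4, N = 13
```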
One has to be careful about this result, though, since we only consider the case
of linear state reconstruction, as in \cite{scott}. Finding the optimal
statistic in a more general setting requires complicated nonlinear optimization.
Now we look at some examples related to the previous theorem, taking different values of $N$.
\begin{pl}\label{ex1}
If we do not have any a priori information about the state ($m=0,N=n^2$), then
$$
\mathrm{Tr}\, P_iP_j= \frac{1}{n+1} \quad (i \ne j)
$$
so the optimal POVM is the well-known SIC-POVM (if it exists \cite{renes}).
\end{pl}
\begin{pl}\label{ex2}
If we know the off-diagonal elements of the state, and we want to estimate the diagonal
entries ($m=n^2-n,N=n$), then from Theorem \ref{T:cond} it follows that the optimal POVM has
the properties
$$
\mathrm{Tr}\, P_iP_j= 0 \quad (i \ne j), \quad \sum_{i=1}^n P_i=I,\quad \textrm{ and }\quad P_i
\textrm{ is diagonal.}
$$
So the diagonal matrix units form an optimal POVM. {\hfill $\square$}\medskip
\end{pl}
\begin{pl}\label{ex3}
If we know the diagonal elements of the state, and we want to estimate the off-diagonal
entries ($m=n-1, N=n^2-n+1$), then from Theorem \ref{T:cond} it follows that the optimal POVM
has the properties
$$
\mathrm{Tr}\, P_iP_j= \frac{n-1}{n^2} \quad (i \ne j), \quad
\sum_{i=1}^{N} P_i=\frac{n^2-n+1}{n}I
$$
and $P_i$ has a constant diagonal. More about this case is in the next section. {\hfill $\square$}\medskip
\end{pl}
\section{Existence of some conditional SIC-POVMs}
Theorem \ref{T:cond} tells us that conditional SIC-POVMs are the optimal measurements
if they exist, but nothing has been said about the existence of such POVMs.
The existence of SIC-POVMs for arbitrary dimension is not known, and they are a special
case of the conditional SIC-POVMs. We cannot expect to give a full description of
conditional SIC-POVMs, but this section contains a particular example. There are several
equiangular frames with fewer than $n^2$ projections \cite{etf}, but it is not clear
what parameters are spanned by their complementary part, i.e. what the known parameters are.
Intuition suggests that the case when the known part corresponds to a subalgebra of the
full matrix algebra is especially interesting.
Suppose we know the diagonal elements of an $n$-dimensional density matrix. We want to
construct the related conditional SIC-POVM, that is, subnormalized projections $P_i$
forming a symmetric POVM and complementary to the diagonal projections $E_i=|e_i\>\<e_i|
\in M_n(\bbbc)$ ($1 \le i \le n$). These projections generate a maximal abelian subalgebra.
Easy dimension counting shows that we have to construct $N=n^2-n+1$ such projections.
So $\{|e_i\>: 1 \le i \le n\}$ is an orthonormal basis in the space. We set
\begin{equation}\label{E:q}
|\phi\>= \frac{1}{\sqrt{n}} \sum_{i=1}^{n} |e_i\>, \qquad q=e^{2\pi\mathrm{i}/N}
\end{equation}
and a diagonal unitary
$$
U=\mbox{Diag}\,(q^{\alpha_1},q^{\alpha_2},q^{\alpha_3}, \ldots q^{\alpha_n}),
$$
where the integers $0 \le \alpha_i \le N-1$ are distinct. Another unitary $T$ permutes
the eigenvectors of $U$:
$$
T|e_i\>=\cases{ |e_{i+1}\> & if $1 \le i \le n-1$, \cr |e_1\> & if $i=n$.}
$$
Note that $T |\phi\>=T^*|\phi\>=|\phi\>$. We have
\begin{eqnarray*}
|\<U^k \phi,e_j\>|^2 &=&|\<\phi, (U^*)^k e_j\>|^2=|q^{-k\alpha_j}|^2 |\<\phi, e_j\>|^2
=|\<\phi, e_j\>|^2 \\
&=&|\<\phi, T^{j-1} e_1\>|^2=|\<(T^*)^{j-1}\phi, e_1\>|^2 =|\<\phi, e_1\>|^2
\end{eqnarray*}
and the projections $P_k:=|U^k \phi\>\<U^k \phi|$ are complementary to the diagonal projections:
$$
\mathrm{Tr}\, |U^k \phi\>\<U^k \phi|\left( |e_i\>\<e_i| -I/n\right)=0.
$$
It is easy to check that
$$
\sum_{k=1}^N \<e_i, U^k\phi\>\<U^k \phi, e_j\> = \frac{1}{n}\sum_{k=1}^N q^{-\alpha_i k}q^{\alpha_j k}
=\frac{1}{n}\sum_{k=1}^N q^{(\alpha_j-\alpha_i) k}=\frac{N}{n}\delta_{ij},
$$
so we obtain
$$
\sum_{k=1}^N P_k= \frac{N}{n}I
$$
and the sum is a multiple of $I$.
We need to choose the numbers $\alpha_1,\alpha_2,\dots, \alpha_n$ such that
$$
\mathrm{Tr}\, P_iP_j=|\<U^i \phi|U^j \phi\>|^2=\frac{1}{n^2}\left|\sum_{m=1}^n q^{(j-i)\alpha_m}\right|^2=
\frac{1}{n^2}t
$$
is constant when $i \neq j$. From the formulas
$$
\sum_j \mathrm{Tr}\, P_iP_j =(N-1)\frac{1}{n^2}t +1, \qquad \sum_j \mathrm{Tr}\, P_iP_j=\mathrm{Tr}\, \left(P_i \sum_j P_j\right)=
\frac{N}{n}
$$
we obtain $t=n-1$.
Next we use terminology from the paper \cite{Gordon}. The set $G:=\{0,1, \dots, N-1\}$
is an additive group modulo $N$. The subset $D:=\{\alpha_i : 1 \le i \le n\}$ is a
{\it difference set} with parameters $(N,n,\lambda)$ when the multiset of differences
$\alpha_i-\alpha_j$ contains every nonzero element of $G$ exactly $\lambda$ times. When
this holds, then we have
$$
\left|\sum_{i=1}^n q^{m\alpha_i}\right|^2=\sum_{i,j=1}^n q^{m(\alpha_i-\alpha_j)}
=n+\sum_{s=1}^{N-1} \lambda q^s=n-\lambda,
$$
where $q$ is from (\ref{E:q}); here $\lambda=1$. If the appropriate difference set exists,
then the conditional SIC-POVM exists. Similar constructions of tight equiangular frames
related to difference sets are examined in detail in \cite{diffset}.
The existence of difference sets with parameters $(N,n,1)$ is a known problem, named the
prime power conjecture \cite{Si, Gordon}, and we get the following result:
\begin{thm}
There exists a conditional SIC-POVM with respect to the diagonal part of a density matrix
if $n-1$ is a prime power. Then $N=n^2-n+1$ and the projections $P_i$ ($1 \le i \le N$)
have the properties
$$
\sum_{i=1}^N P_i=\frac{N}{n}I, \qquad \mathrm{Tr}\, P_iP_j=\frac{n-1}{n^2}\quad (i\ne j).
$$
\end{thm}
A few examples of the sets $M=\{\alpha_k\}$ are given here:
$$
n=2, \quad M=\{0,1\}, \qquad n=3,\quad M= \{0,1,3\},
$$ $$ n=4,\quad M=\{0,1,3,9\}, \qquad n=5, \quad M=\{0,1,4,14,16\}.
$$
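These sets can be verified directly: each is a planar $(N,n,1)$ difference set modulo $N=n^2-n+1$, and the identity above then gives $|\sum_m q^{k\alpha_m}|^2 = n-1$ for every $k \ne 0$, which is exactly the equiangularity condition $\mathrm{Tr}\, P_iP_j = (n-1)/n^2$. The short check below is an illustration added here, not part of the original text.

```python
import cmath

examples = {2: [0, 1], 3: [0, 1, 3], 4: [0, 1, 3, 9], 5: [0, 1, 4, 14, 16]}
for n, M in examples.items():
    N = n * n - n + 1
    # planar (N, n, 1) difference set: every nonzero residue mod N occurs
    # exactly once among the pairwise differences alpha_i - alpha_j
    diffs = sorted((a - b) % N for a in M for b in M if a != b)
    assert diffs == list(range(1, N)), (n, M)
    # equiangularity: |sum_m q^{k alpha_m}|^2 = n - lambda = n - 1 for k != 0,
    # hence Tr P_i P_j = (n - 1)/n^2 for i != j
    q = cmath.exp(2j * cmath.pi / N)
    for k in range(1, N):
        assert abs(abs(sum(q ** (k * a) for a in M)) ** 2 - (n - 1)) < 1e-8
print("all four example sets pass")
```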
\section{Introduction}
Hepatitis B is a major viral infectious disease that affects a third of the world population, with 240-350 million people having a chronic infection \cite{takk09,WHO04}, and over 129 million new infections having occurred since 2013 \cite{lancet386}. This disease is a significant public health burden, causing 750,000 deaths annually \cite{WHO04}, of which about 300,000 can be attributed to liver cirrhosis and hepatocellular carcinoma \cite{lancet385}. Whilst the prevalence of hepatitis B is relatively low (below 1\%) in Western Europe and North America, it remains significant in south-east Asia and sub-Saharan Africa, where 5-10\% of the adult population are chronically infected \cite{WHO04}.
The disease is caused by the hepatitis B virus (HBV), which is a hepatotropic noncytopathic DNA virus of the {\it Hepadnaviridae} family \cite{seeger00}. There are two main routes of transmission of the HBV virus. One is a vertical (perinatal) transmission from an infected mother to a child, resulting in subsequent infection, which in 90\% of cases becomes chronic \cite{liang,reher05}. The other possibility is a horizontal transmission between adults primarily through sexual contacts, intravenous drug use or poor sanitary habits. This type of transmission usually results in recovery, with only 5-10\% of adults developing chronic infections \cite{liang,reher05}. Multiple branches of the immune system are involved in mounting the response during different phases of the HBV infection. In many viral infections of humans, such as HIV, LCMV, Epstein-Barr, the main contribution to the immune response during the early stages of infection comes from the innate immune response, i.e. natural killer (NK) cells and antiviral cytokines, which aim at reducing the spread of the virus and facilitating the development of an adaptive immune response. Contrary to this general observation, early stages of HBV infection are characterised by a delayed viral production and the lack of production of IFN-$\alpha$/$\beta$ \cite{bert06}. Several potential suggestions have been proposed to explain this, including the possibilities that the initial replication of HBV is very slow, or that the virus does not immediately reach the liver and remains for a period of time in other organs \cite{bert06,wie05}, however, the exact mechanism is still largely unknown. 
Once the exponential phase of HBV expansion properly starts, it activates the innate response and the cytokines \cite{guido99}, which, in turn, induce an adaptive immune response, with cytotoxic T lymphocytes (CTLs) being responsible for killing infected cells, and antibodies against HBV surface antigen (HBsAg) neutralizing virus particles and preventing (re)infection of cells. Interestingly, besides killing HBV-infected hepatocytes, CTLs are able to induce a non-cytolytic ``cure'' of such cells \cite{abbas,guido99,guidotti1}. An important role in the dynamics of immune response against HBV is played by cytokines, which reduce viral replication \cite{devico, isaacs, kalvakolanu}, activate NK and CTL cells \cite{babiker,guidotti1,tamura}, and facilitate induction of immunity in uninfected target cells \cite{ramsay,wiah}.
A number of mathematical models have looked into various aspects of HBV dynamics and that of the immune response during infection. Ciupe et al. \cite{ciupe1,ciupe2} extended a standard model of immune response to study acute HBV infection and the role of time delay associated with activation and expansion of effector cells, and later they also looked into the role of pre-existing or vaccine-induced antibodies in controlling the HBV infection \cite{ciupe3}. Min et al. \cite{min_kuang} have used a standard incidence function rather than a mass action to account for a finite liver size and susceptibility to HBV infection, while Gourley et al. \cite{gourley} have developed a time-delayed extension of this model. Hews et al. \cite{hews} have used a logistic growth for hepatocyte population and a standard incidence to help the model better represent available data and achieve more realistic values for the basic reproduction number. Yousfi et al. \cite{yousfi} have analysed possible mis-coordination between different branches of adaptive immune response, more specifically, the CTLs and the antibodies, during HBV infection. In terms of the effects of cytokines on mediating immune response, Wiah et al. \cite{wiah} have studied a model that besides the CTLs and antibodies also includes $\alpha$- and $\beta$-interferons, whose role is taken to convert susceptible hepatocytes into infection-resistant cells. Kim et al. \cite{kim12} adapted an earlier model for hepatitis C to include cytokines implicitly through allowing effector cells to cause a non-cytolytic recovery of the infected cells, and a similar approach has also been used by other researchers \cite{dahari,lewin01,sypsa05} who considered a constant rate of non-cytolytic cure alongside treatment.
In this paper we focus on the interplay between various branches of the immune system during HBV infection, with particular emphasis on explicitly modelling the role of cytokines in mediating immune response and controlling viral replication. In the next section we discuss the details of underlying biological processes associated with the immune response against HBV and derive a corresponding mathematical model. Section 3 contains analytical and numerical studies of stability of various steady states. In Section 4 we perform numerical simulations of the model to illustrate different dynamical regimes, as well as to investigate the effects of different types of treatment. The paper concludes with the discussion of results and open questions.
\section{Model derivation}
In order to analyse various aspects of immune response to HBV infection, we build on the methodology of some earlier HBV models \cite{nowak,perelson,yongmei}. The host liver cells are divided into populations of uninfected cells $T(t)$, HBV-infected cells $I(t)$, and refractory cells $R(t)$. Healthy hepatocytes are assumed to be produced at a constant rate $\lambda$, die at a rate $d$, and they are infected by virions (free virus particles) at a rate $\beta$. New HBV virions $V(t)$ are produced by the infected cells at a rate $p$, and they are cleared at a rate $c$. Interactions between all cell populations are illustrated in Fig.~\ref{sys_dia}.
Adaptive immune response consists of HBsAg-specific antibodies $A(t)$ that destroy virions at a rate $k$, and HBV-specific CTLs, also referred to as effector cells, $E(t)$. After viral clearance, because of the long-lived plasma and memory B cells, the antibody population is kept at some homeostatic level \cite{ciupe3}. To model this, we assume that antibodies are produced at a constant rate $\lambda_a$, and die at a per capita rate $d_a$. During infection, antibodies are produced at a rate $q$ proportional to the viral load. Whilst antibodies are responsible for eliminating free virus, CTLs instead kill infected cells at a rate $\mu_2$. Some models assume a certain basal level of CTLs $s/d_e$ in the absence of infection, where $s$ is the source of CTLs, and $1/d_e$ is their average lifespan \cite{ciupe1, ciupe2}. We will instead assume the dynamics of effector cells in the absence of infection to have the form of logistic growth with the proliferation rate $r_e$ and the carrying capacity $E_{max}$. Upon infection, the immune response is activated, and the population of effector cells will expand at rate $\alpha IE$ \cite{ciupe1, ciupe2}. Similarly to effector cells, in the absence of infection, NK cells are assumed to obey logistic growth with the linear growth rate $r_n$ and the carrying capacity $N_{max}$.
Let us now focus on the role of cytokines in the immune dynamics. Type-1 interferons IFN-$\alpha/\beta$, to be denoted by $F_1(t)$, are produced by infected cells \cite{busca,guidotti1} at a rate $p_1$, and they are destroyed at a rate $\delta_1$.
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig1.png}
\caption{A diagram of immune response to HBV infection. Blue circles indicate host cells (uninfected, infected, and refractory cells), green circles denote adaptive immune response (antibodies, CTLs), yellow circles show cytokines (type-1 and type-2 interferon), red circle is the innate immune response (NK cells), and grey indicates virus particles (virions).
}\label{sys_dia}
\end{figure}
Type-2 interferons IFN-$\gamma$, denoted as $F_2(t)$, are produced by CTLs and NKs (natural killer cells) $N(t)$ \cite{guidotti1,devico, guilhot, herbein} at rates $p_2$ and $p_3$, respectively, and they are lost at a rate $\delta_2$. Both types of interferons have the capacity to render the uninfected cells protected from infection through making them resistant to infection \cite{wiah, julkunen,price}, or by turning them into refractory cells \cite{ramsay,ramshaw}. Therefore, the combined effect of interferons making uninfected cells refractory is taken to be $\varphi_1(F_1+F_2)$ per uninfected cell, and refractory cells can lose their viral resistance at a rate $\rho$ \cite{ciupe1}. During infection, IFN-$\alpha/\beta$ are able to activate NK cells \cite{pawelek}, while IFN-$\gamma$ induces protein-10 (CXCL-10) that recruits NK cells \cite{afzal,babiker} and can also activate NK cells \cite{guidotti1}. Hence, the combined effect of interferons on activating NK cells is taken to occur at a rate $q_1NF_1+q_2NF_2$. Besides positive contribution to the production of new NK cells, IFN-$\alpha/ \beta$ also increase the cytotoxicity of NK cells and CTLs \cite{abbas}. On the other hand, IFN-$\gamma$ increases the expression of MHC antigen acting to help CTLs destroy infected cells \cite{tamura}, and it also enhances the activity of NK cells \cite{schroder, carnaud}. Thus, both types of interferons increase cytolytic activity of NKs and CTLs, and hence, we will assume that NKs and CTLs destroy infected cells at rates $\mu_1(1+s_1F_1+s_2F_2)IN$ and $\mu_2(1+s_1^{\prime}F_1+s_2^{\prime}F_2)IE$, respectively. Moreover, antiviral cytokines, such as IFN-$\gamma$ and TNF-$\alpha$, can non-cytopathically purify viruses from infected cells \cite{guidotti1}, so that HBV-specific CTLs and NK cells can effectively ``cure" infected cells through a non-cytolytic antiviral activity mediated by IFN-$\gamma$ \cite{guidotti1, devico, guidotti2, biron}. 
Hence, infected cells can be lost due to non-cytolytic response of IFN-$\gamma$ at a rate $\varphi_2IF_2$. Studies have shown that IFN-$\gamma$ can activate a number of intracellular mechanisms that suppress viral replication \cite{devico, isaacs, kalvakolanu, stark}, while IFN-$\alpha/\beta$ can stimulate the activation of intracellular antiviral pathways to limit the development and spread of viral replication \cite{guidotti1}. Thus, both types of interferons help infected cells reduce production of new virus particles, so infected cells produce virions at a rate $p/(1+s_3F_1+s_4F_2)$.
With the above assumptions, the complete model for immune response to HBV infection takes the form
\begin{equation}\label{sys1}
\begin{array}{l}
\displaystyle{\frac{dT}{dt}=\lambda-dT-\beta VT+\rho R-\varphi_1T(F_1+F_2),}\\\\
\displaystyle{\frac{dI}{dt}=\beta VT-\delta I-\mu_1(1+s_1F_1+s_2F_2)IN-\mu_2(1+s_1^{\prime}F_1+s_2^{\prime}F_2)IE-\varphi_2IF_2,}\\\\
\displaystyle{\frac{dF_1}{dt}=p_1I-\delta_1F_1,}\\\\
\displaystyle{\frac{dF_2}{dt}=p_2E+p_3N-\delta_2F_2,}\\\\
\displaystyle{\frac{dN}{dt}=r_nN\left(1-\frac{N}{N_{max}}\right)+(q_1F_1+q_2F_2)N,}\\\\
\displaystyle{\frac{dE}{dt}=r_eE\left(1-\frac{E}{E_{max}}\right)+\alpha IE,}\\\\
\displaystyle{\frac{dR}{dt}=\varphi_1T(F_1+F_2)+\varphi_2IF_2-\rho R,}\\\\
\displaystyle{\frac{dV}{dt}=\frac{p}{1+s_3F_1+s_4F_2}I-cV-kAV,}\\\\
\displaystyle{\frac{dA}{dt}=\lambda_a-d_aA-kAV+qV.}\\
\end{array}
\end{equation}
To reduce the complexity of the model and the number of free parameters, we introduce the following rescaled parameters
\[
\begin{array}{l}
\displaystyle{\hat{d}=\frac{d}{r_n},\quad \hat{\beta}=\frac{\beta \lambda_a}{d_ar_n},\quad \hat{\rho}=\frac{\rho \lambda_ad}{r_n\lambda d_a},\quad \hat{\delta}=\frac{\delta}{r_n},\quad \hat{s}_i=s_i\frac{\lambda_a}{d_a},\quad i=1, 2, 3, 4,}\\\\
\displaystyle{\hat{\mu}_1=\frac{\mu_1N_{max}}{r_n},\quad \hat{\mu}_2=\frac{\mu_2E_{max}}{r_n},\quad \hat{\varphi}_i=\frac{\varphi_i\lambda_a}{d_ar_n},\quad \hat{p}_1=\frac{p_1}{r_n},\quad \hat{p}_2=\frac{p_2d_aE_{max}}{r_n\lambda_a},\quad \hat{p}_3=\frac{p_3d_aN_{max}}{r_n\lambda_a},}\\\\
\displaystyle{\hat{r}_e=\frac{r_e}{r_n},\quad \hat{\alpha}=\frac{\alpha \lambda_a}{r_nd_a},\quad \hat{p}=\frac{p}{r_n},\quad \hat{c}=\frac{c}{r_n},\quad \hat{k}=\frac{k\lambda_a}{r_nd_a},\quad \hat{d}_a=\frac{d_a}{r_n},\quad \hat{q}=\frac{q}{r_n},}\\\\
\displaystyle{\hat{s}_i^{\prime}=s_i^{\prime}\frac{\lambda_a}{d_a},\quad \hat{\delta}_i=\frac{\delta_i}{r_n},\quad \hat{q}_i=\frac{q_i\lambda_a}{r_nd_a},\quad i=1,2,}
\end{array}
\]
and new variables
\[
\begin{array}{l}
\displaystyle{\hat{t}=r_nt,\quad T=\frac{\lambda}{d}\hat{T},\quad I=\frac{\lambda_a}{d_a}\hat{I},\quad F_1=\frac{\lambda_a}{d_a}\hat{F}_1,\quad
F_2=\frac{\lambda_a}{d_a}\hat{F}_2,\quad N=N_{max}\hat{N},\quad E=E_{max}\hat{E},}\\\\
\displaystyle{R=\frac{\lambda_a}{d_a}\hat{R},\quad V=\frac{\lambda_a}{d_a}\hat{V},\quad A=\frac{\lambda_a}{d_a}\hat{A}.}
\end{array}
\]
Substituting these variables into the model (\ref{sys1}) and dropping all hats gives the following non-dimensionalised system of equations
\begin{equation}\label{sys2}
\begin{array}{l}
\displaystyle{\frac{dT}{dt}=d(1-T)-\beta V T+ \rho R- \varphi_1 T(F_1+F_2),}\\\\
\displaystyle{\frac{dI}{dt}=\beta VT- \delta I-\left[\mu_1(1+s_1 F_1+ s_2 F_2)N+\mu_2(1+s'_1 F_1+s'_2 F_2)E+\varphi_2F_2\right]I,}\\\\
\displaystyle{\frac{dF_1}{dt}=p_1I-\delta_1F_1,}\\\\
\displaystyle{\frac{dF_2}{dt}=p_2 E+p_3 N-\delta_2 F_2,}\\\\
\displaystyle{\frac{dN}{dt}=N(1-N)+(q_1F_1+q_2F_2)N,}\\\\
\displaystyle{\frac{dE}{dt}=r_e E(1-E)+\alpha IE,}\\\\
\displaystyle{\frac{dR}{dt}=\varphi_1 T(F_1+F_2)+\varphi_2 IF_2-\rho R,}\\\\
\displaystyle{\frac{dV}{dt}=\frac{p}{1+s_3 F_1+s_4 F_2} I-c V-kAV,}\\\\
\displaystyle{\frac{dA}{dt}=d_a(1-A)-kAV+qV.}
\end{array}
\end{equation}
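As an illustration of how the model can be explored numerically, the right-hand side of (\ref{sys2}) can be transcribed directly; the sketch below uses placeholder parameter values and initial conditions, rather than the baseline values of Table~\ref{parameter table}:

```python
import numpy as np
from scipy.integrate import solve_ivp

def hbv_rhs(t, y, par):
    """Right-hand side of the non-dimensionalised model (sys2).
    State ordering: T, I, F1, F2, N, E, R, V, A."""
    T, I, F1, F2, N, E, R, V, A = y
    # combined cytolytic (NK, CTL) and non-cytolytic loss of infected cells
    kill = (par['mu1'] * (1 + par['s1'] * F1 + par['s2'] * F2) * N
            + par['mu2'] * (1 + par['s1p'] * F1 + par['s2p'] * F2) * E
            + par['phi2'] * F2)
    return [
        par['d'] * (1 - T) - par['beta'] * V * T + par['rho'] * R
        - par['phi1'] * T * (F1 + F2),
        par['beta'] * V * T - par['delta'] * I - kill * I,
        par['p1'] * I - par['delta1'] * F1,
        par['p2'] * E + par['p3'] * N - par['delta2'] * F2,
        N * (1 - N) + (par['q1'] * F1 + par['q2'] * F2) * N,
        par['re'] * E * (1 - E) + par['alpha'] * I * E,
        par['phi1'] * T * (F1 + F2) + par['phi2'] * I * F2 - par['rho'] * R,
        par['p'] * I / (1 + par['s3'] * F1 + par['s4'] * F2)
        - par['c'] * V - par['k'] * A * V,
        par['da'] * (1 - A) - par['k'] * A * V + par['q'] * V,
    ]

# Placeholder parameter values (illustrative only, not the paper's baseline)
pars = dict(d=0.01, beta=1.0, rho=0.1, phi1=0.1, phi2=0.1, delta=0.05,
            mu1=1.0, mu2=1.0, s1=1.0, s2=1.0, s1p=1.0, s2p=1.0,
            p1=1.0, delta1=1.0, p2=1.0, p3=0.3, delta2=2.0,
            q1=0.1, q2=0.1, re=1.0, alpha=1.0,
            p=10.0, s3=1.0, s4=1.0, c=1.0, k=8.0, da=0.1, q=0.5)

# small initial infection of an otherwise healthy liver
y0 = [1.0, 1e-4, 0.0, 0.0, 1e-3, 1e-3, 0.0, 1e-3, 1.0]
sol = solve_ivp(hbv_rhs, (0.0, 200.0), y0, args=(pars,), rtol=1e-6)
```

A direct consistency check is provided by the steady state $S_1^{\ast}=(1,0,0,0,0,0,0,0,1)$ introduced below, at which the right-hand side must vanish for any parameter values.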
It is straightforward to show that this system is well-posed, i.e. its solutions with non-negative initial conditions remain non-negative for all $t\geq 0$.
\section{Steady states and their stability}
We begin our analysis of the system (\ref{sys2}) by looking at its steady states
\[
S^*=(T^{\ast},I^{\ast},F_1^{\ast},F_2^{\ast},N^{\ast},E^{\ast},R^{\ast},V^{\ast},A^{\ast}),
\]
that can be found by equating the right-hand sides of equations in (\ref{sys2}) to zero and solving the resulting system of algebraic equations. Due to the high dimensionality of the system (\ref{sys2}), it can admit a significant number of possible steady states. Hence, in order to systematically find and analyse all of them, we begin with steady states characterised by the absence of virus particles, i.e. $V^{\ast}=0$, which immediately implies $I^{\ast}=F_1^{\ast}=0$ and $T^{\ast}=A^{\ast}=1$. There are four such steady states,
\[
\begin{array}{l}
\displaystyle{S_1^{\ast}=(1,0,0,0,0,0,0,0,1),\quad S_2^{\ast}=\left(1,0,0,\frac{p_2}{\delta_2},0,1,\frac{\varphi_1p_2}{\rho\delta_2},0,1\right),}\\\\
\displaystyle{S_3^{\ast}=\left(1,0,0,\frac{p_3}{\delta_2-p_3q_2},\frac{\delta_2}{\delta_2-p_3q_2},0,\frac{\varphi_1p_3}{\rho(\delta_2-p_3q_2)},0,1\right),}\\\\
\displaystyle{S_4^{\ast}=\left(1,0,0,\frac{p_2+p_3}{\delta_2-p_3q_2},\frac{\delta_2-p_3q_2+p_2q_2+p_3q_2}{\delta_2-p_3q_2},1,\frac{\varphi_1(p_2+p_3)}{\rho(\delta_2-p_3q_2)},0,1\right).}
\end{array}
\]
Whilst the steady states $S_1^{\ast}$ and $S_2^{\ast}$ are feasible for any values of parameters, $S_3^{\ast}$ and $S_4^{\ast}$ are only biologically feasible, provided $\delta_2-p_3q_2>0$. Linearisation of the system (\ref{sys2}) near each of these steady states shows that $S_1^{\ast}$, $S_2^{\ast}$ and $S_3^{\ast}$ are always unstable, while $S_4^{\ast}$ is stable if the following condition holds
\begin{equation}\label{DF_stab}
K<K_c,\qquad K=\frac{p\beta(\delta_2-p_3q_2)^3}{(c+k)(p_2s_4-p_3q_2+p_3s_4+\delta_2)},
\end{equation}
with
\begin{equation}
\begin{array}{l}
K_c=\delta p_3^2 q_2^2+\mu_1p_2^2q_2s_2-\mu_1p_2p_3q_2^2+\mu_1p_2p_3q_2s_2-\mu_2p_2p_3q_2s'_2+\mu_2p_3^2q_2^2-\mu_2p_3^2q_2s'_2\\\\
-2\delta \delta_2p_3q_2+\delta_2\mu_1p_2q_2+\delta_2\mu_1p_2s_2-\delta_2\mu_1p_3q_2+\delta_2\mu_1p_3s_2+\delta_2\mu_2p_2s'_2-2\delta_2\mu_2p_3q_2\\\\
+\delta_2\mu_2p_3s'_2-p_2p_3q_2\varphi_2-p_3^2q_2\varphi_2+\delta\delta_2^2+\delta_2^2\mu_1+\delta_2^2\mu_2+\delta_2p_2\varphi_2+\delta_2p_3\varphi_2.
\end{array}
\end{equation}
When $K=K_c$, equilibrium $S_4^{\ast}$ undergoes a steady-state bifurcation, and for $K>K_c$, this steady state is unstable.
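Once parameter values are fixed, the threshold condition (\ref{DF_stab}) is straightforward to evaluate; a sketch with placeholder parameter values is given below. As a sanity check, setting $\mu_1=\mu_2=\varphi_2=0$ reduces $K_c$ to $\delta(\delta_2-p_3q_2)^2$.

```python
def K_value(par):
    """Left-hand side K of the stability condition for S4*."""
    num = par['p'] * par['beta'] * (par['delta2'] - par['p3'] * par['q2']) ** 3
    den = (par['c'] + par['k']) * (par['p2'] * par['s4'] - par['p3'] * par['q2']
                                   + par['p3'] * par['s4'] + par['delta2'])
    return num / den

def K_critical(par):
    """Threshold K_c: the disease-free state S4* is stable when K < K_c."""
    de, d2 = par['delta'], par['delta2']
    m1, m2 = par['mu1'], par['mu2']
    p2, p3, q2 = par['p2'], par['p3'], par['q2']
    s2, s2p, ph2 = par['s2'], par['s2p'], par['phi2']
    return (de * p3**2 * q2**2 + m1 * p2**2 * q2 * s2 - m1 * p2 * p3 * q2**2
            + m1 * p2 * p3 * q2 * s2 - m2 * p2 * p3 * q2 * s2p
            + m2 * p3**2 * q2**2 - m2 * p3**2 * q2 * s2p
            - 2 * de * d2 * p3 * q2 + d2 * m1 * p2 * q2 + d2 * m1 * p2 * s2
            - d2 * m1 * p3 * q2 + d2 * m1 * p3 * s2 + d2 * m2 * p2 * s2p
            - 2 * d2 * m2 * p3 * q2 + d2 * m2 * p3 * s2p
            - p2 * p3 * q2 * ph2 - p3**2 * q2 * ph2
            + de * d2**2 + d2**2 * m1 + d2**2 * m2
            + d2 * p2 * ph2 + d2 * p3 * ph2)

# Placeholder parameter values (illustrative only)
pars = dict(p=10.0, beta=1.0, delta2=2.0, p3=0.3, q2=0.1, c=1.0, k=8.0,
            p2=1.0, s4=1.0, delta=0.05, mu1=1.0, mu2=1.0,
            s2=1.0, s2p=1.0, phi2=0.1)
print('S4* stable:', K_value(pars) < K_critical(pars))
```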
For $V^{\ast}\neq 0$, one has to distinguish between two cases, $k=q$ and $k\neq q$. For $k=q$, one finds $A^{\ast}=1$, and there are four associated steady states with different combinations of $E^{\ast}=0$ or $E^{\ast}\neq 0$, and $N^{\ast}=0$ or $N^{\ast}\neq 0$. The first of these, $S_5^{\ast}$, characterised by the absence of CTLs and NKs, i.e. $E^{\ast}=0$ and $N^{\ast}=0$, has other components given by
\[
\begin{array}{l}
\displaystyle{T^{\ast}=\frac{(c+k)(dp_1s_3+\delta \delta_1)}{cdp_1s_3+dkp_1s_3+\beta p\delta_1},\quad I^{\ast}=\frac{d\delta_1(p\beta-c\delta-k\delta)}{\delta(cdp_1s_3+dkp_1s_3+\beta p\delta_1)},}\\\\
\displaystyle{F_1^{\ast}=\frac{dp_1(p\beta-c\delta-k\delta)}{\delta(cdp_1s_3+dkp_1s_3+\beta p\delta_1)},\quad F_2^{\ast}=0,}\\\\
\displaystyle{R^{\ast}=\frac{dp_1\varphi_1(c+k)(dp_1s_3+\delta \delta_1)(p\beta-c\delta-k\delta)}{\delta \rho (cdp_1s_3+dkp_1s_3+\beta p\delta_1)^2},
\quad V^{\ast}=\frac{d\delta_1(p\beta-c\delta-k\delta)}{\beta (c+k)(dp_1s_3+\delta \delta_1)},}
\end{array}
\]
and this steady state is always unstable. The steady state $S_6^{\ast}$ with $E^{\ast}=0$ and $N^{\ast}\neq 0$ has components given by
\[
\begin{array}{l}
\displaystyle{I^{\ast}=\frac{\delta_1 F_1^{\ast}}{p_1},\quad F_2^{\ast}=\frac{1+q_1F_1^{\ast}}{a}, \quad N^{\ast}=\frac{\delta_2F_2^{\ast}}{p_3},\quad V^{\ast}=\frac{pI^{\ast}}{(c+k)(1+s_3F_1^{\ast}+s_4F_2^{\ast})},}\\\\
\displaystyle{T^{\ast}=\frac{d+\varphi_2I^{\ast}F_2^{\ast}}{d+\beta V^{\ast}},\quad R^{\ast}=\frac{\varphi_1T^{\ast}(F_1^{\ast}+F_2^{\ast})+\varphi_2I^{\ast}F_2^{\ast}}{\rho},}
\end{array}
\]
where $F_1^{\ast}$ satisfies the cubic equation
\[
b_3(F_1^{\ast})^3+b_2(F_1^{\ast})^2+b_1F_1^{\ast}+b_0=0,
\]
where the coefficients $b_1$, $b_2$ and $b_3$ are always positive, and
\[
b_0=dp_1\left[-a^3pp_3\beta+(c+k)(a+s_4)(a^2p_3\delta+a\delta_2\mu_1+ap_3\varphi_2+s_2\delta_2\mu_1)\right],\quad a=\frac{\delta_2-p_3q_2}{p_3}.
\]
The steady state $S_6^{\ast}$ is also always unstable.\\
Similarly, the steady state $S_7^{\ast}$ with $E^{\ast}\neq 0$ and $N^{\ast}=0$ has its state variables given by
\[
\begin{array}{l}
\displaystyle{I^{\ast}=\frac{\delta_1 F_1^{\ast}}{p_1}, \quad F_2^{\ast}=\frac{p_2}{\delta_2}\left(1+\frac{\alpha \delta_1F_1^{\ast}}{r_ep_1}\right), \quad E^{\ast}=\frac{r_e+\alpha I^{\ast}}{r_e},\quad V^{\ast}=\frac{pI^{\ast}}{(c+k)(1+s_3F_1^{\ast}+s_4F_2^{\ast})},}\\\\
\displaystyle{T^{\ast}=\frac{d+\varphi_2I^{\ast}F_2^{\ast}}{d+\beta V^{\ast}},\quad R^{\ast}=\frac{\varphi_1T^{\ast}(F_1^{\ast}+F_2^{\ast})+\varphi_2I^{\ast}F_2^{\ast}}{\rho},}
\end{array}
\]
with $F_1^{\ast}$ satisfying the cubic equation
\[
m_3(F_1^{\ast})^3+m_2(F_1^{\ast})^2+m_1F_1^{\ast}+m_0=0,
\]
where $m_1$, $m_2$ and $m_3$ are positive, and
\[
m_0=dp_1^3r_e^3\left[-\beta p\delta_2^2+(c+k)(p_2s_4+\delta_2)(\mu_2p_2s_2^{\prime}+\delta \delta_2+\mu_2\delta_2+\varphi_2p_2)\right].
\]
This steady state is unstable for any parameter values.
The last steady state $S_8^{\ast}$ with $E^{\ast}\neq 0$ and $N^{\ast}\neq 0$ has components
\[
\begin{array}{l}
\displaystyle{I^{\ast}=\frac{\delta_1 F_1^{\ast}}{p_1},\quad E^{\ast}=\frac{r_e+\alpha I^{\ast}}{r_e},\quad F_2^{\ast}=\frac{\alpha p_2\delta_1F_1^{\ast}+r_e p_1\left[
p_2+p_3(1+q_1F_1^{\ast})\right]}{r_e p_1 (\delta_2-p_3q_2)},\quad N^{\ast}=\frac{\delta_2F_2^{\ast}-p_2E^{\ast}}{p_3},}
\displaystyle{V^{\ast}=\frac{pI^{\ast}}{(c+k)(1+s_3F_1^{\ast}+s_4F_2^{\ast})},\quad T^{\ast}=\frac{d+\varphi_2I^{\ast}F_2^{\ast}}{d+\beta V^{\ast}},\quad
R^{\ast}=\frac{\varphi_1T^{\ast}(F_1^{\ast}+F_2^{\ast})+\varphi_2I^{\ast}F_2^{\ast}}{\rho},}
\end{array}
\]
and $F_1^{\ast}$ satisfies a cubic equation. It does not prove possible to determine stability of this steady state in a closed form, so it has to be done numerically.
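One generic way to carry out such a numerical stability check is to compute the eigenvalues of a finite-difference Jacobian of the right-hand side at the steady state; the sketch below illustrates the procedure on a simple two-dimensional system with a known stable fixed point.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-7):
    """Central-difference Jacobian of a vector field f at the point x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (np.asarray(f(x + e)) - np.asarray(f(x - e))) / (2 * eps)
    return J

def is_linearly_stable(f, x_star):
    """A steady state is linearly stable when all Jacobian eigenvalues
    have negative real part."""
    eigvals = np.linalg.eigvals(numerical_jacobian(f, x_star))
    return np.max(eigvals.real) < 0

# Illustration: logistic growth coupled with linear decay;
# (1, 0) is a steady state with Jacobian eigenvalues -1, -1.
f = lambda x: [x[0] * (1 - x[0]), -x[1]]
print(is_linearly_stable(f, [1.0, 0.0]))
```

The same routine, applied to the right-hand side of (\ref{sys2}) at a numerically computed root of the steady-state equations, yields the stability results reported below.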
\newpage
\begin{figure}
\hspace{0.3cm}
\includegraphics[scale=0.52]{fig2.png}\vspace{-0.5cm}
\caption{Stability of the disease-free steady state $S_{4}^{\ast}$ with parameter values from Table~\ref{parameter table}. Black grid area indicates the region where there are no feasible steady states. Colour code denotes maximum real part of the largest characteristic eigenvalue for the disease-free steady state $S_4^{\ast}$ when it is feasible.}
\label{DF}
\end{figure}
For $k\neq q$, we again have four options, depending on whether $E^{\ast}=0$ or $E^{\ast}\neq 0$, and $N^{\ast}=0$ or $N^{\ast}\neq 0$. Similarly to the case $k=q$, the steady states $S_9^{\ast}$ with $E^{\ast}=N^{\ast}=0$, $S_{10}^{\ast}$ with $E^{\ast}=0$ and $N^{\ast}\neq 0$, and $S_{11}^{\ast}$ with $E^{\ast}\neq 0$ and $N^{\ast}=0$, are always unstable. The steady state $S_{12}^{\ast}$ with all components being positive cannot be found in a closed form.
The cases $k=q$ and $k\neq q$ have to be considered separately, since for $k\neq q$ one has a relation $V^{\ast}=d_a(1-A^{\ast})/(kA^{\ast}-q)$, which cannot be directly used in the case $k=q$ with $A^{\ast}=1$. However, it is straightforward to show that as $k\to q$, the steady states $S_9^{\ast}$, $S_{10}^{\ast}$, $S_{11}^{\ast}$ and $S_{12}^{\ast}$ converge to $S_5^{\ast}$, $S_6^{\ast}$, $S_7^{\ast}$ and $S_8^{\ast}$, respectively. Of these steady states, only $S_4^{\ast}$ and $S_{12}^{\ast}$ (or equivalently $S_8^{\ast}$ for $k=q$) can potentially change stability, as all other steady states are unstable for any parameter values.
To gain a better understanding of how stability of different steady states is affected by various parameters in the model, we perform numerical stability and bifurcation analysis.
\begin{figure}
\centering
\includegraphics[scale=0.58]{fig3.png}\vspace{-0.5cm}
\caption{Stability of the endemic steady state $S_{12}^{\ast}$ with parameter values from Table~\ref{parameter table}. White area shows the region where the endemic steady state is not feasible, but the disease-free steady state $S_{4}^{\ast}$ is feasible and stable. Black grid area indicates the region where there are no feasible steady states. Colour code denotes maximum real part of the largest characteristic eigenvalue for the endemic steady state $S_{12}^{\ast}$ when it is feasible.}
\label{endemic}
\end{figure}
Baseline values of parameters are given in Table~\ref{parameter table} in the Appendix, though one should note that at this stage it is only feasible to explore different qualitative scenarios, as the actual values of many of these parameters have not yet been measured, or significant variations in their values have been reported.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{fig4.png}\vspace{-0.7cm}
\caption{Bifurcation diagram (a) and periods of periodic solutions (b) with parameter values from Table~\ref{parameter table}. (a) In this figure $p_3=0.3$. The blue line shows the endemic steady state, and the red line shows the disease-free steady state, with solid (dashed) lines corresponding to stable (unstable) steady states. At $k=6.277$ and $k=10.74$ there is a Hopf bifurcation of the endemic steady state, and at $k=11.2389$ there is a transcritical bifurcation. Between the two Hopf bifurcation points there is a stable periodic solution; the minimum and maximum of $T$ are shown in green. (b) This figure shows the dependence of the period of periodic solutions on $k$ for $p_3=0.1$ (black), $p_3=0.3$ (blue), $p_3=0.5$ (red).}
\label{bifurcation}
\end{center}
\end{figure}
Figure~\ref{DF} shows regions of feasibility and stability of the disease-free steady state $S_4^{\ast}$. Our earlier analysis indicates that this steady state is only feasible, provided $\delta_2-p_3q_2>0$, which means that this steady state can only exist if the rate $p_3$ of production of IFN-$\gamma$ by NK cells, and the rate $q_2$ at which IFN-$\gamma$ in turn upregulates the production of new NK cells, are not too large, as illustrated in Fig.~\ref{DF}(a) and (b). Stability of the disease-free steady state $S_4^{\ast}$ is determined by the value of $K$ defined in (\ref{DF_stab}), and Figs.~\ref{DF}(a) and (b) suggest that increasing $p_3$ can stabilise this equilibrium if it were previously unstable, which should be expected, as increasing the number of NK cells and the amount of IFN-$\gamma$ leads to a more effective eradication of the viral population. Similarly, increasing the rate of clearance of virions by antibodies $k$, the rate at which IFN-$\gamma$ inhibits production of new virus particles $s_4$, or the rate of IFN-$\gamma$-induced conversion from infected cells to refractory cells $\varphi_2$, all lead to the stabilisation of the disease-free steady state. At the same time, comparison of Fig.~\ref{DF}(a) with (c) and (d) indicates that if antibodies are not very effective, i.e. if $k$ is small, it is easier to clear the infection, i.e. achieve stability of the disease-free steady state, by increasing production of IFN-$\gamma$ by NK cells, since both $s_4$ and $\varphi_2$ have to be increased very significantly before the stability can be achieved.
Figure~\ref{endemic} illustrates how regions of feasibility and stability of the endemic steady state $S_{12}^{\ast}$ depend on system parameters. Comparison of Fig.~\ref{endemic}(a) with Fig.~\ref{DF}(a) suggests that as the disease-free steady state loses its stability, the endemic steady state becomes biologically feasible and stable. However, for very small values of $p_3$, there is a certain range of $k$ values, for which the endemic steady state is also unstable, and one could expect the appearance of periodic solutions.
\begin{figure}
\centering
\includegraphics[scale=0.56]{fig5.png}\vspace{-0.2cm}
\caption{Stability of the endemic steady state $S_{12}^{\ast}$ with parameter values from Table~\ref{parameter table}. White area shows the region where the endemic steady state is not feasible, but the disease-free steady state $S_{4}^{\ast}$ is feasible and stable. Colour code denotes maximum real part of the largest characteristic eigenvalue for the endemic steady state $S_{12}^{\ast}$ when it is feasible.}
\label{fig_extra}
\end{figure}
This is illustrated in more detail in the bifurcation diagram shown in Fig.~\ref{bifurcation}(a), which indicates that when one fixes some small value of $p_3$ and increases $k$, the endemic steady state does indeed lose its stability via a supercritical Hopf bifurcation, and then regains it at a subcritical Hopf bifurcation for a yet higher value of $k$. In the range of $k$ values where the endemic steady state $S_{12}^{\ast}$ is unstable, one observes a stable periodic orbit, whose period increases with $k$ but reduces with $p_3$, as shown in Fig.~\ref{bifurcation}(b). The effects of varying $s_4$ and $\varphi_2$ on stability of $S_{12}^{\ast}$ are similar to those of varying $p_3$, with the exception that for small $k$, increasing $s_4$ or $\varphi_2$ does not make this steady state infeasible, i.e. biologically irrelevant. Figures~\ref{endemic}(b) and (f) are quite similar to each other in that for each value of $k$, there is some minimal value of the infection rate $\beta$ or production rate of new virions $p$, above which the endemic steady state $S_{12}^{\ast}$ becomes biologically feasible and stable. If $k$ is small, then further increases of $\beta$ or $p$ do not have an effect on stability, and $S_{12}^{\ast}$ remains stable, whilst for higher $k$, increasing either $\beta$ or $p$ results in the loss of stability through a supercritical Hopf bifurcation. A very interesting behaviour is observed in Fig.~\ref{endemic}(d), which shows that for $k$ small or very large, the stability of $S_{12}^{\ast}$ is unaffected by changes in the rate of production of new antibodies $q$, whereas for an intermediate range of $k$, $S_{12}^{\ast}$ is unstable for small $q$ but gains stability as $q$ is increased. This is quite counter-intuitive, as one would normally expect that if more antibodies are produced for the same viral load, this would help clear the infection.
Since $k$ is also the rate at which antibodies are binding free virus and, hence, are removed, this means that it is the balance between $k$ and $q$ that determines whether the infection is maintained at a steady level, i.e. $S_{12}^{\ast}$ is stable, or if periodic oscillations appear in the dynamics. Similar behaviour can be observed in Fig.~\ref{endemic}(e), which shows that the endemic steady state $S_{12}^{\ast}$ is unstable for small $\rho$, i.e. for long periods of viral resistance, but it stabilises as the duration of viral resistance reduces, i.e. for higher values of $\rho$.
In order to better understand the role of cytokines in system's dynamics, we present in Fig.~\ref{fig_extra} stability of the endemic steady state depending on cytokine-related parameters. Figures~\ref{fig_extra}(a) and (b) suggest that increasing the rates $s_1$ and $s_2$ at which IFN-$\alpha/\beta$ and IFN-$\gamma$ enhance cytolytic activity of NK cells, or the rates $s_3$ and $s_4$ at which these interferons inhibit production of new virions, results in stabilisation of the endemic steady state $S_{12}^{\ast}$. One should note, however, that while increasing the rates $s_1$ or $s_3$, associated with IFN-$\alpha/\beta$ only acts to make the endemic steady state more stable, increasing the rates $s_2$ or $s_4$ associated with IFN-$\gamma$ can actually make the endemic steady state biologically irrelevant, thus helping clear the infection by moving the system to a stable disease-free steady state. This suggests the profoundly different effects of IFN-$\alpha/\beta$ and IFN-$\gamma$ on viral dynamics. A similar phenomenon is observed when one investigates the role of cytokines in producing refractory cells from either uninfected or infected cells. Increasing the rate $\varphi_1$ of conversion of uninfected cells into refractory cells, which involves contributions from both types of interferon, results in destabilisation of the endemic steady state. On the other hand, increasing the rate $\varphi_2$ of non-cytolytic cure of infected cells by IFN-$\gamma$ initially stabilises the endemic steady state, but subsequent increase makes the endemic steady state infeasible, thus leading to clearance of infection, as shown in Fig.~\ref{fig_extra}(c). We have also looked into the effects of both types of interferon on enhancing cytotoxic activity of CTLs, as represented by parameters $s'_1$ and $s'_2$. 
In this case, numerical calculations suggest that the stability of the endemic steady state is not sensitive to $s'_1$, implying that this particular contribution from IFN-$\alpha/\beta$ does not help clear the infection. In this respect, IFN-$\gamma$ plays a more important role, since increasing $s'_2$ above a certain level makes the endemic steady state biologically irrelevant, so the system reverts to a stable disease-free state. Finally, Figure~\ref{fig_extra}(d) shows that increasing the rates $q_1$ and $q_2$ of cytokine-related activation of NK cells leads to stabilisation of the endemic steady state, however, increasing the rate $q_2$ associated with IFN-$\gamma$ beyond certain level results in this steady state becoming biologically irrelevant, thus eradicating the viral infection.
\section{Numerical simulations}
To demonstrate different types of dynamical behaviour that can be exhibited by the model (\ref{sys2}) in various parameter regimes, we solve this system numerically using the baseline values of parameters given in Table~\ref{parameter table} in the Appendix, and the results are shown in Figs.~\ref{NS1}, \ref{NS2}, \ref{NS3}. In all these figures, the free virus $V(t)$ exhibits the behaviour that is qualitatively similar to that of the number of infected cells, hence, we plot instead the dynamics of the population of refractory cells $R(t)$. Figure~\ref{NS1} illustrates the dynamics of immune response when the condition (\ref{DF_stab}) holds. In this case, the initial viral growth leads to an increase in the numbers of NKs and CTLs, as well as both types of interferons, which results in the successful clearance of the HBV infection, upon which type-1 interferons are also destroyed, and the system settles on a stable disease-free steady state $S_4^*$. Figure~\ref{NS2} shows the dynamics in the case where the endemic steady state $S_{12}^*$ is feasible and stable. One observes that the initial viral growth is suppressed by the combined effects of different branches of the immune system. However, the approach to the endemic steady state is oscillatory with the amplitude of oscillations decaying, with each subsequent viral peak being smaller than the previous one. In the case when the endemic state is unstable due to Hopf bifurcation, one observes stable oscillations, as shown in Fig.~\ref{NS3}. Biologically, these would correspond to the so-called ``flare-ups" \cite{chang14,per01}, where the infection is never completely cleared, but through the interactions between the virus and the immune system, there are periods of very low viral activity followed by the periods of acute viral growth. 
This situation is reminiscent of the infection-induced autoimmune reaction, where initial viral infection can lead to a breakdown of immune tolerance, so that even in the absence of any exogenous factors or subsequent infections, patients exhibit periods of remission and relapses \cite{BN12,BN15}. It is worth noting that the behaviour shown in Fig.~\ref{NS3} has the hallmarks of slow-fast dynamics, or relaxation oscillations, that are not uncommon in models of immune response \cite{Len00,Mer78}. At every ``flare-up", there is a significant growth in the number of infected cells that triggers the proliferation of both types of interferon, as well as the growth in the populations of CTLs and natural killer cells. All of them are growing very quickly, resulting in a fast immune response that reduces the infection, but as the number of infected cells subsides, so do all the various populations associated with the immune response. Hence, the infection is not completely cleared but rather is kept in check at a very small level. Now, as the population
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{fig6.png}
\caption{Numerical solution of the model (\ref{sys2}) with parameter values from Table~\ref{parameter table}, and $p_3=3$, $k=8$. In this case, the disease-free steady state $S_4^{\ast}$ is stable, so the immune system is able to clear the initial infection.}
\label{NS1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig7.png}
\caption{Numerical solution of the model (\ref{sys2}) with parameter values from Table~\ref{parameter table}, and $p_3=0.3$, $k=0.3$. In this case, the system approaches a stable endemic steady state $S_{12}^{\ast}$.}
\label{NS2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{fig8.png}
\caption{Numerical solution of the model (\ref{sys2}) with parameter values from Table~\ref{parameter table}, and $p_3=0.3$, $k=8$. In this case, both the disease-free $S_{4}^{\ast}$ and the endemic steady state $S_{12}^{\ast}$ are unstable, and the system exhibits a periodic solution.}
\label{NS3}
\end{center}
\end{figure}
\noindent of susceptible cells recovers, which is happening on a much longer time-scale, more of these cells become the target of free virus, resulting in a new episode of high viral load, and the cycle repeats.
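For readers who wish to reproduce this qualitative behaviour, a minimal sketch of the numerical procedure is given below. It integrates only a reduced target-cell-limited submodel of (\ref{sys2}) (the $T$, $I$, $V$ equations with the immune populations dropped), and the parameter values are purely illustrative, not those of Table~\ref{parameter table}; the full model is solved in exactly the same way with all state variables included.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced target-cell-limited submodel (illustrative parameters only):
#   dT/dt = d*(1 - T) - beta*V*T     rescaled susceptible hepatocytes
#   dI/dt = beta*V*T - delta*I       infected hepatocytes
#   dV/dt = p*I - c*V                free virus
d, beta, delta, p, c = 0.1, 0.5, 1.0, 0.5, 1.0   # R0 = beta*p/(delta*c) = 0.25

def rhs(t, y):
    T, I, V = y
    return [d * (1.0 - T) - beta * V * T,
            beta * V * T - delta * I,
            p * I - c * V]

# Start from the uninfected state T = 1 with a small viral inoculum.
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 1.0], rtol=1e-8, atol=1e-10)
T_end, I_end, V_end = sol.y[:, -1]
# Since R0 < 1 here, the infection dies out and T recovers towards 1,
# the analogue of approaching the stable disease-free steady state.
```

With these illustrative parameters the virus is cleared; the oscillatory and relapsing regimes of Figs.~\ref{NS2}, \ref{NS3} require the full immune compartments of (\ref{sys2}).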
\begin{figure}
\centering
\includegraphics[scale=0.6]{fig9.png}
\caption{Effects of NAs and interferon therapy on the dynamics of HBV with parameter values from Table~\ref{parameter table}, and $k=7$, $\beta=30$. (a) Stability plot for the endemic steady state $S_{12}^{\ast}$, with the colour code denoting the maximum real part of the characteristic eigenvalues of the endemic steady state when it is feasible. White area shows the region where the endemic steady state $S_{12}^{\ast}$ is not feasible, and the disease-free steady state $S_{4}^{\ast}$ is stable. (b) Dependence of the critical drug efficacy (${\epsilon}_c$) on $k$, with the disease being cleared for $\epsilon_{tot}>\epsilon_c$, with $p_3=0.1$ (black line), $p_3=0.9$ (blue line), $p_3=2$ (red line).}
\label{treatment}
\end{figure}
As a next step, we look into the effects of antiviral treatments on HBV. There are two main types of drugs used to treat HBV infection: nucleot(s)ide analogues (NAs), such as lamivudine, adefovir, entecavir, tenofovir, telbivudine, famciclovir and clevudine, and IFN-based therapy, which includes stand-alone IFN-$\alpha$ (roferon, intron) or pegylated interferon peg-IFN-$\alpha$2a/2b \cite{dahari,kim12,packer,sypsa05,takk09}. These treatments, individually \cite{min,nowak} and in combinations \cite{colo06,lewin01}, result in either reducing the production of new virus particles, or in blocking {\it de novo} infections. Mathematically, one can represent these two effects by a modified viral production rate $(1-\epsilon)p$ and a modified transmission rate $(1-\eta)\beta$, where $0\leq\epsilon\leq 1$ and $0\leq\eta\leq 1$ are drug efficacies associated with inhibiting viral production and preventing new infections, respectively. In order to characterise the overall effectiveness of treatment, it is helpful to consider a cumulative parameter describing the total drug effectiveness ${\epsilon}_{tot}$, which is defined as $1-\epsilon_{tot}=(1-\eta)(1-\epsilon)$ \cite{dahari}. This allows one to determine a critical drug efficacy, $\epsilon_c$, corresponding to the stability boundary of the disease-free steady state $S_4^{\ast}$, so that this steady state is stable for $\epsilon_{tot}>\epsilon_c$. With these modifications, the new equations for the numbers of healthy and infected cells, as well as the free virus, have the form
\begin{equation}\label{treat}
\begin{array}{l}
\displaystyle{\frac{d{T}}{d{t}}={d}(1-{T})-\beta (1-\eta) VT+{\rho} {R}-{\varphi}_1 {T}({F}_1+{F}_2),}\\\\
\displaystyle{\frac{d{I}}{d{t}}={\beta}(1-\eta){V}{T}-{\delta} {I}-{\mu}_1(1+{s}_1{F}_1+{s}_2{F}_2){I}{N}-{\mu}_2(1+{s}_1^{\prime}{F}_1+{s}_2^{\prime}{F}_2){I}{E}-{\varphi}_2{I}{F}_2,}\\\\
\displaystyle{\frac{d{V}}{d{t}}=\frac{{p}(1-\epsilon)}{1+{s}_3{F}_1+{s}_4{F}_2}{I}-{c}{V}-{k}{A}{V},}
\end{array}
\end{equation}
with the rest of the equations remaining the same as in the main model (\ref{sys2}).
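The cumulative efficacy is a one-line computation; the sketch below (with hypothetical helper names) illustrates how the two drug efficacies combine and why only the product $(1-\eta)(1-\epsilon)$ matters:

```python
def combined_efficacy(eta, eps):
    """Total drug effectiveness: 1 - eps_tot = (1 - eta)(1 - eps)."""
    return 1.0 - (1.0 - eta) * (1.0 - eps)

# Sub-optimal treatment values used in Fig.~\ref{num_sim_treat}(a):
# eta = 0.6 (blocking de novo infection), eps = 0.5 (inhibiting viral
# production) give eps_tot = 1 - 0.4*0.5 = 0.8.
eps_tot = combined_efficacy(0.6, 0.5)

# Because only the product (1 - eta)(1 - eps) enters, the two monotherapies
# are interchangeable at equal efficacy:
symmetric = combined_efficacy(0.7646, 0.0) == combined_efficacy(0.0, 0.7646)

def clears_infection(eta, eps, eps_c):
    """Disease-free steady state is stable when eps_tot exceeds eps_c."""
    return combined_efficacy(eta, eps) > eps_c
```

In particular, the successful treatment of Fig.~\ref{num_sim_treat}(b), $(\eta,\epsilon)=(0.9,0.6)$, gives $\epsilon_{tot}=0.96$, well above the critical values seen in Fig.~\ref{treatment}(b).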
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{fig10.png}
\caption{Numerical solution of the model (\ref{sys2}) with treatment (\ref{treat}) and parameter values from Table~\ref{parameter table} and $(k=0.3$, $p_3=0.3)$ in (a-b), and $(k=8$, $p_3=0.3)$ in (c-d). In all plots, blue colour denotes a rescaled number of uninfected cells $T(t)$, and red colour denotes a rescaled number of infected cells $I(t)$. (a)-(b) Treatment of the chronic infection with ${\epsilon}_{tot}<\epsilon_c$ $(\eta=0.6$, $\epsilon=0.5)$ (a), and ${\epsilon}_{tot}>\epsilon_c$ $(\eta=0.9$, $\epsilon=0.6)$ (b). (c)-(d) Treatment of the relapsing infection with ${\epsilon}_{tot}<\epsilon_c$ $(\eta=0.2$, $\epsilon=0.1)$ (c), and ${\epsilon}_{tot}>\epsilon_c$ $(\eta=0.2$, $\epsilon=0.4)$ (d).}
\label{num_sim_treat}
\end{center}
\end{figure}
Figure \ref{treatment}(a) shows that for parameter values from Table~\ref{parameter table}, if $\eta > 0.7646$, then NAs therapy alone is sufficient to destabilise the endemic steady state and thus clear the infection; similarly, if $\epsilon > 0.7646$, then IFN therapy alone can make the disease-free steady state $S_4^{\ast}$ stable. This figure also suggests that disease clearance can be achieved if the combined efficacy ${\epsilon}_{tot}$ exceeds some critical value $\epsilon_c$. Figure \ref{treatment}(b) illustrates how this critical combined efficacy $\epsilon_c$ varies with the rate $k$ of clearance of free virus by antibodies and the rate $p_3$ of production of type-2 interferons by NK cells. One observes that the critical combined efficacy $\epsilon_c$ decreases with $k$, implying that the faster the free virus is cleared by antibodies, the less stringent is the requirement on the efficacy of treatment to clear the infection, and for sufficiently high $k$ the disease clearance can be achieved even in the absence of treatment. Surprisingly, for the same value of $k$, a higher rate of production of type-2 interferons by NK cells requires a higher combined efficacy $\epsilon_c$ for viral clearance.
Figure~\ref{num_sim_treat} illustrates the effect of using combined NAs and interferon therapy on chronic and relapsing HBV infections. In both regimes, application of treatment with sub-optimal efficacy, i.e. with ${\epsilon}_{tot}<\epsilon_c$, does not cause qualitative change in the system dynamics but results in an increased number of uninfected cells and a decreased number of infected cells. On the contrary, for ${\epsilon}_{tot}>\epsilon_c$, in both cases the number of infected cells is reduced to zero, and the system approaches a stable disease-free steady state $S_4^{\ast}$, which corresponds to a successful clearance of infection.
\section{Discussion}
In this paper we have derived and analysed a new model for HBV infection with particular emphasis on interactions between different branches of the immune system, including the innate immune response as exemplified by NK cells, the adaptive immune response represented by HBV-specific cytotoxic T cells and antibodies, and various cytokines. During infection the cytokines play an important role in the recruitment of innate and adaptive immune factors, and they also help them to be more effective, as well as facilitate the non-cytolytic cure of infected cells.
Stability analysis of the steady states has shown how various parameters affect the dynamics of immune response, with some of the results being intuitively clear, and others being quite unexpected. Naturally, increasing the number of NK cells, the rate of clearance of free virus by antibodies, the rate of inhibition of viral production by IFN-$\gamma$, or the rate of conversion from infected to refractory cells, all facilitate a more efficient clearance of infection, making the disease-free steady state stable. Once the disease-free steady state loses its stability, the endemic equilibrium becomes biologically feasible and stable. For sufficiently small values of the rate of production of IFN-$\gamma$ by NK cells, the endemic steady state can lose its stability via Hopf bifurcation, giving rise to stable periodic solutions. We have found that for a very small or a very large rate of free virus clearance by antibodies, the stability of the endemic steady state is unaffected by how quickly the new antibodies are produced, whereas for an intermediate range of virus clearance rate, this steady state is unstable for low production of antibodies, and gains stability as the rate of antibody production is increased. This is a very surprising result, as normally one would expect that a higher rate of production of antibodies for the same viral load leads to a clearance of infection, rather than stabilisation of a chronic state. The implication of this observation is that it is not the individual rates of production of antibodies and viral clearance, but rather the balance between them that determines whether the system maintains a chronic infection or exhibits periodic oscillations.
In terms of the role of cytokines on mediating various branches of immune response, a surprising result of the analysis is that increasing the rates at which IFN-$\alpha/\beta$ and IFN-$\gamma$ increase cytolytic activity of NK cells or inhibit production of free virus, actually leads to stabilisation of the endemic steady state. The major difference in the effects of cytokines IFN-$\alpha/\beta$ and IFN-$\gamma$ lies in the observation that whilst increasing the rates associated with IFN-$\alpha/\beta$ just results in the stabilisation of an otherwise unstable endemic steady state, increasing the same rates for IFN-$\gamma$ can result in making the endemic steady state biologically irrelevant, thus qualitatively changing the dynamics. The same result holds for IFN-$\gamma$-facilitated non-cytolytic cure of infected cells. If the production of IFN-$\gamma$ by NK cells is too high, this makes all steady states of the system unstable, leading to persistent oscillations, thus maintaining the infection.
We have also looked into modelling the dynamics of HBV treatment with nucleot(s)ide analogues and/or stand-alone or pegylated interferons. Since these treatments are known to act by reducing the appearance of new infections and blocking the production of free virus, we have looked at how the combined drug efficacy depends on these two properties. Numerical studies have shown the existence of a minimum drug efficacy required to clear the infection, and, unexpectedly, this critical drug efficacy actually increases with the rate of production of IFN-$\gamma$ by NK cells.
There are several directions in which the model presented in this paper can be extended. One important aspect of the immune dynamics is the non-instantaneous nature of several important processes, such as the lag between infection and recruitment of CTLs, the production of new virus particles once a cell becomes infected, the time required for viral cell entry, etc.~\cite{beau08,nelson00}. Mathematically, this can be represented by including discrete or distributed time delays for each of the associated processes, which would make the model more realistic but would also make the analysis much more involved. Furthermore, it is known that antibodies do not kill the virus particles directly, but rather stick to them, creating a virus-antibody complex \cite{ciupe3}. These complexes are not permanently stable and can dissociate; hence, explicitly including them in the model can provide better insights into the dynamics.
\section*{Acknowledgements.} FFC acknowledges the support from Chancellor's Studentship from the University of Sussex.
\bibliographystyle{ieeetr}
\section{Introduction} {\label{sec:I}}
In a pioneering work, Anderson (1967) showed that, in the thermodynamic limit, adding an impurity as a perturbation to a many-particle system consisting of a free electron gas results in a vanishing overlap between the initial unperturbed ground state and the final ground state, leading to a phenomenon known as the orthogonality catastrophe (OC)\cite{anderson1967}.
This idea was extended by Nozieres and De Dominicis into a dynamical theory of the absorption process by studying the long-time behavior of the core-hole Green's function\cite{noz1969}.
A number of studies have uncovered drastic modifications to the fermionic system due to its interaction with a single impurity. Some of the examples include, the x-ray edge effect~\cite{Mahan1967,schotte1969,gunnar1978,ohtaka1983,leggett1987}, the Kondo problem~\cite{yuval1970,Kondo1983,Kagan1992,latta2011}, impurity in a semiconductor quantum dot~\cite{hakan2011,wolfgang2012}, impurity interacting with a Luttinger
liquid~\cite{gogolin1993,pusti2006}, heavy particle in a fermionic bath~\cite{OhtakaRMP,Prokofev1993,rosch1995,rosch1998} and impurity interaction with the fermionic many-body environment in ultracold atomic systems~\cite{demler2011,fukuhara2011,knap2012,fukuhara2013,Schmidt_2018}.
Lately, a new class of materials, the 3D and 2D topological insulators (TI), having unusual surface states, has generated tremendous interest~\cite{HK2010,ZQ2011}. The TIs exhibit insulating behavior in the bulk but have surface states which are metallic
and are described by the relativistic Dirac equation~\cite{kane2005,bernevig2006,bernevigN2006,roy2006,fu2007,fuprb2007, moore2007,qi2008}.
They have an odd number of gapless Dirac cones in which spin and momentum are locked together into a spin-helical state protected by time-reversal symmetry (TRS). The physics of 3D TIs has been studied in considerable detail; some of the questions addressed include the magnetoelectric response~\cite{Essin2009,Qi2009,Maciejko2010}, the integer quantum Hall effect~\cite{DHLee2009}, the competition between localization and anti-localization~\cite{Hai-Zhou2011,Garate2012}, the effects of phonons and disorder on transport~\cite{Qiuzi2012}, bulk-surface coupling~\cite{Kush2014}, and impurity dynamics at the particle-hole symmetric point~\cite{Caracanhas}. The role of magnetic and nonmagnetic impurities, both through their effect on the local charge/spin density of states and on the Dirac surface states themselves, has likewise been an extremely active area of research~\cite{liu2009,chen2010,biswas2010,he2011,zhu2011,annicaR2012,annica2012,qaium2012,lee2013,annica2014,ochoa2015,zyuzin2016,bobkova2016}. At the same time, a number of studies of the interacting surface states of 2D TIs, which form a helical Luttinger liquid, have been carried out. These include studies of the Kondo effect in the helical edge liquid~\cite{Maciejko2009}, Coulomb drag~\cite{Zyuzin2010}, spin susceptibility~\cite{Klinovaja2013,Meng2013}, transport~\cite{Schmidt2013,Meng2014}, the structure factor~\cite{suh2014}, and the role of inelastic scattering channels in transport~\cite{Schmidt2012,Jan2012,Maestro2013}.
In this work, we consider the interaction of Dirac fermions with a nonmagnetic mobile impurity.
The unusual surface states of TI provide an intriguing new scenario for the study of the phenomenon of orthogonality catastrophe in these systems.
We find that, as in a fermionic bath~\cite{Prokofev1993,rosch1995,rosch1998}, the physics of the heavy particle in a bath of Dirac fermions is strongly influenced by the presence or absence of an infrared singularity. In $D=2$, the interaction between the bath and an infinitely heavy (recoilless) particle generates an infinite number of low-energy particle-hole pairs, resulting in incoherent behavior of the heavy particle, i.e., the quasiparticle weight vanishes. The spectral function, in turn, exhibits a power-law divergence at the renormalized energy.
In contrast, the recoil of the heavy particle suppresses the phase space available for particle-hole generation, resulting in a non-zero quasiparticle weight and, consequently, a $\delta$-function peak in the spectral function.
However, part of the spectral weight is transferred to the incoherent part, which exhibits a square-root singularity. We find that the Maxwell-Boltzmann distribution of the mobile impurity governs the typical momentum transfer between the impurity and the Dirac fermions, resulting in a $T^{-3/2}$ temperature dependence of the impurity mobility.
The study of interaction effects between the mobile impurity and a 1D helical Luttinger liquid reveals that the quasiparticle weight vanishes, except in the scenario when the momentum of the mobile impurity with mass $M$ exceeds $Mv$, where $v$ is the sound velocity. This result is in agreement with an earlier study of a single spin-down fermion interacting with a bath of spin-up fermions~\cite{Kantian2014}. As for the mobility, in the absence of a magnetic field it is limited by forward-scattering processes only and diverges exponentially.
However, turning on the magnetic field results in a power-law divergence at low temperatures.
The paper is organized as follows: Section~\ref{sec:II} includes
a general description of our model along with the Green's function of the Dirac fermions in TI for the 2D case.
In section~\ref{sec:III} the linked cluster technique has been used to develop a formalism for obtaining the Green's function of impurity interacting with the Dirac fermions. In addition, the long-time behavior of the impurity Green's function for the recoilless and the recoil case and the corresponding spectral-function have been studied. In section~\ref{sec:IV} the temperature dependence of the mobility of the impurity has been obtained. In section~\ref{sec:V} we establish the model for the 1D case and discuss impurity Green's function for the recoilless and the recoil case. The mobility of impurity interacting with 1D helical liquid is discussed in detail in section~\ref{sec:VI} followed by a section on the summary of our results.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Model.pdf}
\caption{(Color online) Schematic picture of our model where we consider a semiconductor placed on top of the surface of a 3D TI with a conventional insulator in between them. We assume a very large energy barrier for the electrons to hop on either side. A single mobile electron in the conduction band of the semiconductor (with completely filled valence band) behaves like an impurity in the environment of Dirac fermions.}
\label{model_fig}
\end{figure}
\section{Model for the 2D Case} {\label{sec:II}}
We consider the motion of a heavy particle with mass $M$ having a parabolic dispersion (for example, in a 2D semiconductor) and constrained to move in two dimensions. The semiconductor is placed on top of the surface of a 3D topological insulator, separated by a thin insulating layer (see Fig.~\ref{model_fig}). We make the following three assumptions: the bulk is insulating and does not influence the physics; there is no tunneling between the TI and the semiconductor; and the heavy particle interacts with the Dirac fermions
via a contact potential. The low energy effective Hamiltonian of the 2D Dirac fermions has the following form~\cite{liu2014,Zhou2017} $H_{D} =\hbar v_F(\hat{z}\times\hat{k})\cdot \hat{\tau}$, where the $\tau$'s are Pauli matrices. Performing a simple unitary transformation, the Hamiltonian of
this composite system can be written as
\begin{eqnarray}
H=\frac{p^2}{2M}+\hbar v_F(\vec{\sigma}\cdot\vec{k}) +\sum\limits_{q} V(q)\rho(q)n(-q),
\label{ini_H1}
\end{eqnarray}
where $p$ is the momentum of the particle and the second term represents the transformed low-energy effective Hamiltonian of the Dirac fermions.
Henceforth we will work in the $\hbar=1$ and $v_F=1$ units unless specified otherwise. The third term is the interaction term,
where the potential $V(q)=U/A$ is momentum independent and the density operators $\rho(q)$ and $n(q)$ correspond to the Dirac fermions and the impurity particle, respectively.
The second quantized form of the Hamiltonian in Eq.(\ref{ini_H1}) acquires the following form
\begin{eqnarray}
&&H=\sum_{p}\epsilon_p\hat{a}^{\dagger}_p\hat{a}_p + \sum_{k,\alpha,\beta}\hat{c}_{k\alpha}^\dagger (\vec{\sigma}\cdot\vec{k})_{\alpha\beta}\hat{c}_{k\beta} + V, \notag
\label{ini_H2}
\end{eqnarray}
where $\epsilon_p=p^2/2M$ is the energy of the particle and the interaction potential in the second quantized notation is
\begin{eqnarray}
V= U\sum_{\sigma, k_1,k_2,q} \hat{a}_{k_2-q}^{\dagger}\hat{a}_{k_2} \hat{c}_{k_1+q,\sigma}^{\dagger}\hat{c}_{k_1,\sigma} .
\end{eqnarray}
The corresponding zero temperature Matsubara Green's function for the Dirac fermions on the surface of a 3D TI has the following form,
\begin{equation}
\mathcal{G}(k,i\omega)=\frac{1}{2}\sum_{\eta=\pm 1}\left[\frac{\hat{I}+\eta(\vec{\sigma}\cdot \vec{\bar{k}})/\xi_k}{i\omega-\eta\,\xi_k+\mu_F}\right],
\end{equation}
where $\vec{\bar{k}}=k_x\hat{e}_1+k_y\hat{e}_2+\Delta \hat{e}_3$ and $\xi_k=\sqrt{k^2 +\Delta^2}$ is the dispersion relation of Dirac fermions and $\Delta$ is the mass term which opens up a gap in the TI. Considering the Dirac fermions in the upper band only,
the expression for the Green's function in the momentum-time representation is given by
\begin{eqnarray}
\hat{\mathcal{G}}(k,t)=\frac{\Big[\hat{I}+\frac{ (\vec{\sigma}\cdot\vec{\bar{k}})_{\alpha \beta}}{\xi_k}\Big]}{2i}\bigg[\theta(t)(1-n_k)-\theta(-t)n_k\bigg]e^{-i\bar{\xi}_k t},\qquad
\end{eqnarray}
where $\bar{\xi}_k=\xi_k-\mu_F$.
\section{Impurity Green Function}{\label{sec:III}}
In the following, we will utilize the linked cluster method to obtain the expression for the Green's function of an impurity particle interacting with the surface states of a 3D TI.
The approach is similar to the one used for the polaron problem; here, instead, we incorporate the interaction between the impurity particle and the Dirac fermions. The expression for the impurity Green's function to all orders in the interaction has the following form:
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Link_C.pdf}
\caption{(Color online) (i) The bare propagator and the interaction vertex. (ii) First order, and (iii) second order diagrams. Only the connected diagrams are relevant for the impurity Green's function calculation.}
\label{Diag}
\end{figure}
\begin{eqnarray}
G(p,t) =\sum_{n=0}^{\infty} \mathcal{M}_n (p,t),
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{M}_{n}(p,t)= \frac{(-i)^{n+1}}{n!} \int_{0}^{t}dt_1 \cdot\cdot\cdot\int_{0}^{t}dt_n\mathcal{C}_n,\quad
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{C}_n= \langle\hat{a}_p (t) V(t_1)\cdot\cdot\cdot V(t_n)\hat{a}_{p}^\dagger (0)\rangle.
\end{eqnarray}
In terms of the cumulants, $\mathcal{S}_n$, the Green's function can be re-expressed as
\begin{eqnarray}
G(p,t)=G_0\exp\big[\sum_{n=1}^{\infty} \mathcal{S}_n (p,t)\big],
\end{eqnarray}
where
$G_0 (p,t)= -i\Theta(t)\exp(-i\epsilon_p t)$ is the free Green's function of the impurity particle. The dominant contributions to the Green's function are already contained in the first two cumulants, given by\cite{Mahan1967,hart1971,Prokofev1993,rosch1995,rosch1998}
\begin{eqnarray}
\mathcal{S}_1=G_0^{-1}(p,t)\mathcal{M}_1,~\text{and}~ \mathcal{S}_2=G_0^{-1}(p,t)\mathcal{M}_2-\frac{1}{2!} \mathcal{S}_1 ^2.\notag
\end{eqnarray}
We first evaluate the $\mathcal{M}_1$ term,
\begin{eqnarray}
&&\mathcal{M}_1=(-i)^2\int_{0}^{t}dt_1 \mathcal{C}_1,\notag
\end{eqnarray}
where
\begin{eqnarray}
&&\mathcal{C}_1
=U \sum_{k_1,k_2,q} \big\langle\mathcal{T}
\hat{a}_p(t)\hat{a}_{k_1+q}^\dagger(t_1)\big\rangle\big\langle\mathcal{T}
\hat{a}_{k_1}(t_1)\hat{a}_{p}^\dagger(0)\big\rangle\notag\\
&&\times~\big\langle\mathcal{T} \hat{c}_{k_2-q,\sigma}^{\dagger}(t_1)\hat{c}_{k_2,\sigma} (t_1)\big\rangle.\label{C1}
\end{eqnarray}
In the above expression for $\mathcal{C}_1$, we employ Wick's theorem to decompose the averages involving more than two fermionic operators into products of bare Green's functions.
Note that only the connected diagram has been included (see Fig.~\ref{Diag}). The first two averages in Eq.~(\ref{C1}) represent the bare impurity Green's function, and the last average gives the occupation number of the Dirac fermions. Performing the integration over time we obtain,
$\mathcal{M}_1=-iU\sum_{k_1}G_0 (p,t)n_{k_1}t$, where $n_{k_1}$ is the Fermi distribution function.
Thus the first cumulant is
\begin{eqnarray}
\mathcal{S}_1=\frac{\mathcal{M}_1}{G_0}=-i\frac{U}{A}\sum_{k_1}n_{k_1}t.
\end{eqnarray}
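As a quick consistency check, the first cumulant can be evaluated in closed form; e.g., at zero temperature, with the occupied upper band, the momentum sum is just the Dirac-fermion density:

```latex
\frac{1}{A}\sum_{k}n_{k}
=\int\!\frac{d^{2}k}{(2\pi)^{2}}\,
\theta\!\left(\mu_F-\sqrt{k^{2}+\Delta^{2}}\right)
=\frac{\mu_F^{2}-\Delta^{2}}{4\pi}
=\frac{k_F^{2}}{4\pi},
```

so that $\mathcal{S}_1=-iU(k_F^{2}/4\pi)\,t$ is a pure mean-field energy shift $Un_D$, with $n_D$ the Dirac-fermion density; it reappears as the first-order term $(U/A)\sum_k n_k$ in the renormalized energy $\tilde{\epsilon}_p$ below.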
The second cumulant is obtained from the $\mathcal{M}_2$ term which involves scattering at two different times
\begin{eqnarray}
\mathcal{M}_2=\frac{(-i)^3}{2!}\int_{0}^{t}dt_1\int_{0}^{t}dt_2~ \mathcal{C}_2,\notag
\end{eqnarray}
where,
\begin{widetext}
\begin{eqnarray}
\mathcal{C}_2 =\frac{U^2}{A^2}\sum
\Big\langle\mathcal{T}\Big\{a_p(t)a^{\dagger}_{k_2 -q}(t_1)a_{k_2}(t_1)c^{\dagger}_{k_1 +q,\sigma_1}(t_1) c_{k_1,\sigma_1}(t_1)a^{\dagger}_{k_4 -\bar{q}}(t_2)a_{k_4}(t_2)c^{\dagger}_{k_3 +\bar{q},\sigma_2}(t_2)
c_{k_3,\sigma_2}(t_2)a_p^\dagger(0)\Big\}\Big\rangle.
\end{eqnarray}
As before, we will use Wick's theorem to simplify the above expression.
At the outset we disregard the disconnected diagrams and also the terms which are obtained from squaring the first cumulant (see Fig.~\ref{Diag}). The Dirac fermion contractions yield
\begin{eqnarray}
\big\langle\mathcal{T}\left\{c^{\dagger}_{k_1 +q,\sigma_1}(t_1)c_{k_1,\sigma_1}(t_1)c^{\dagger}_{k_3 +\bar{q},\sigma_2}(t_2)c_{k_3,\sigma_2}(t_2)\right\}\big\rangle\notag\hspace{0.25cm}
=\delta_{k_1,k_3+\bar{q}}\delta_{k_3,k_1+q}\text{Tr}\Big[\hat{\mathcal{G}}(k_1,t_1-t_2)\hat{\mathcal{G}}(k_3,t_2-t_1)\Big],\notag
\end{eqnarray}
and from the impurity creation and annihilation operators we obtain:
\begin{eqnarray}
\mathcal{Z}= \Big\langle\mathcal{T}\Big\{a_p(t)a^{\dagger}_{k_2 -q}(t_1)a_{k_2}(t_1)a^{\dagger}_{k_4 -\bar{q}}(t_2)a_{k_4}(t_2)a_p^\dagger(0)\Big\}\Big\rangle=e^{-i\epsilon_pt}\sum_{\eta=\pm}\Theta[\eta(t_1-t_2)]\exp\Big[ i\eta(\epsilon_p-\epsilon_{p+\eta q})(t_1 -t_2) \Big]
\notag.
\end{eqnarray}
Thus $\mathcal{S}_2$ is given by
\begin{eqnarray}
\mathcal{S}_2=\frac{G_0^{-1}(p,t)}{2}\frac{U^2}{A^2}\sum_{k_1,k_2}\int dt_1 dt_2\,\mathcal{Z}\,\,\text{Tr}\Big[\mathcal{G}_{k_1}(t_1-t_2)\mathcal{G}_{k_2}(t_2-t_1)\Big]\notag.
\end{eqnarray}
Performing the integration on time, we obtain
\begin{eqnarray}
\mathcal{S}_2=\frac{U^2}{A^2}\sum_{k_1 k_2}\Big[1+\hat{k}_1\cdot\hat{k}_2\Big](1-n_{k_2})n_{k_1}\Bigg[\frac{it}{\tilde{\Delta}}
-\frac{1-e^{-i\tilde{\Delta}t}}{\tilde{\Delta}^2}\Bigg],\label{cumu2}
\end{eqnarray}
where, $\tilde{\Delta}(k_1,k_2) = \epsilon_{p+k_1-k_2}-\epsilon_p +\xi_{k_2}-\xi_{k_1}$.
Note that the chiral form in~(\ref{cumu2}) is a feature of the particle-hole pairs in the Dirac sea.
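This factor is simply the squared overlap of the upper-band helical spinors (sketched here in the massless limit $\Delta\to 0$):

```latex
\big|\langle\chi^{+}_{k_2}|\chi^{+}_{k_1}\rangle\big|^{2}
=\frac{1+\hat{k}_1\cdot\hat{k}_2}{2}
=\cos^{2}\!\frac{\theta_{12}}{2},
```

which vanishes for exact backscattering, $\hat{k}_2=-\hat{k}_1$: spin-momentum locking suppresses the phase space for particle-hole pairs with large momentum transfer.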
Putting together $\mathcal{S}_1$ and $\mathcal{S}_2$ we obtain the following expression for the impurity Green's function
\begin{eqnarray}
iG(p,t)=\Theta(t) \exp \big[ -i\tilde{\epsilon}_p t+ \mathcal{X}(t)\big],\hspace{0.95cm}
\label{green_I}
\end{eqnarray}
where the renormalized energy $\tilde{\epsilon}_p$ is given by
$$\tilde{\epsilon}_p=\epsilon_p+\frac{U}{A}\sum_{k}n_k - \frac{U^2}{A^2}\sum_{k_1,k_2}\Big[1+\hat{k}_1\cdot\hat{k}_2\Big]\frac{(1-n_{k_2})n_{k_1}}{\tilde{\Delta}(k_1,k_2)},$$
while the function $\mathcal{X}(t)$ which will be our object of interest encodes the non-trivial $t$ dependence and is given by
\begin{eqnarray}
\mathcal{X}(t) = -\frac{U^2}{A^2}\sum_{k_1 k_2}\Big[1+\hat{k}_1\cdot\hat{k}_2\Big](1-n_{k_2})n_{k_1}\frac{1-e^{-i\tilde{\Delta}t}}{\tilde{\Delta}^2}.\hspace{0.95cm}
\label{Xt}
\end{eqnarray}
The following change of variables: $k_1 \rightarrow k$ and $k_2 \rightarrow k+q=k_q$,
allows us to rewrite $\mathcal{X}(t)$ in the following compact form
\begin{eqnarray}
\mathcal{X}(t) = \frac{U^2}{ A}\sum_q\int \frac{d\omega}{\pi}~\text{Im}\Pi(q,\omega-\epsilon_{p+k-k_q}+\epsilon_p ) \frac{1-e^{-i\omega t}}{\omega^2},
\label{Xt1}
\end{eqnarray}
where the imaginary part of the zero-temperature polarization operator $\text{Im}\Pi(q,\omega) $ is given by
\begin{eqnarray}
{\text{Im}}\Pi(q,\omega)&=&-\frac{\pi}{A}\sum_{k} \left[1+\hat{k}\cdot \hat{
k}_q\right]n_{k}\left[1-n_{k_q}\right]\delta(\omega-\xi_{k+q}+\xi_{k}).\label{ImPi}
\end{eqnarray}
We will make use of the expressions given in Eqs.~(\ref{Xt1})
and (\ref{ImPi}) to evaluate the behavior of Green's function in the limiting case of infinite mass and for the finite mass scenario.
\end{widetext}
\subsection{Infinite mass and Non-recoil of Impurity}
In the limit of heavy mass, $\tilde{\Delta}$ can be approximated as $\xi_{k_q}-\xi_{k}$ and the polarization term in~(\ref{Xt1}) as $\text{Im}\Pi(q,\omega)$.
Since our primary goal is to obtain the behavior of the Green's function in the long-time limit,
it suffices to consider the momentum integration $\rho(\omega)= \int d^2q/(2\pi)^2\text{Im}\Pi(q,\omega)$, arising from the low-frequency regime of the polarization function.
We will split the integration into three regions and explicitly compare their contributions. In the regions $q_0<q<q_1$ and $q_2<q<q_3$ shown in Fig.~\ref{allowed_region}, the polarization operator
has the following form~\cite{Sarkar}
\begin{eqnarray}\label{ImPiB}
\text{Im}\Pi (q,\omega)=-\frac{1}{2\pi\sqrt{q^2-\omega^2}}\bigg[\mathcal{F}(2\mu_F+\omega)-\mathcal{F}(\zeta)\bigg], \qquad
\end{eqnarray}
where $\mathcal{F}(x)$ and $\zeta$ are given in Appendix~\ref{pol_func}.
The evaluation of the integral in the first region yields $-2 \omega^{5/2}\Delta/\pi^2\sqrt{k_F}$ while the $q_2<q<q_3$
region yields a larger contribution $ -(16\omega^{3/2}\sqrt{k_F}/3\pi^2) \text{max}[\Delta^2 /\sqrt{2}k_F,\omega/5]$.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Im_pi_non_recoil.pdf}
\caption{(Color online) Non-recoil scenario: the allowed particle-hole regions in the $(q,\omega)$ plane.~(a)~For gapless excitation.~(b)~For $\Delta\ne 0$ and $\Delta\ll\mu_F$. Here $q_{0/1}=\mp k_F\pm\sqrt{(\mu_F\pm\omega)^2-\Delta^2}$ and $q_{2/3}=k_F+\sqrt{(\mu_F\mp\omega)^2-\Delta^2}$. (c) The imaginary part of charge susceptibility $\text{Im}\Pi(q,\omega)$ as a function of $q$. The inset shows $\text{Im}\Pi(q,\omega)$ for $\omega = 0.1$ and small values of $q$. (d) Linear behavior of $\rho(\omega)$ as a function of $\omega$ for a range of $\mu_F$ and fixed $\Delta$.}
\label{allowed_region}
\end{figure}
The largest contribution is obtained from the $q_1<q<q_2$ region wherein the polarization function is given by
\begin{eqnarray}\label{ImPiA}
\text{Im}\Pi_A (q,\omega)=- \frac{1}{2\pi\sqrt{q^2-\omega^2}}\sum_{\eta=\pm}\eta\mathcal{F}(2\mu_F+ \eta\omega).\qquad
\end{eqnarray}
The leading term obtained upon the momentum integration is linear in $\omega$ and is given by
\begin{eqnarray}
&&\int \frac{qdq}{2\pi}\text{Im}\Pi(q,\omega) \approx -\frac{\omega}{2\pi^2}\int_{q_1}^{q_2} qdq\frac{\partial \mathcal{F}(x)}{\partial x }\Big{|}_{x=2\mu_F}\notag\\
&&=-\frac{1}{\pi^2} \int_{q_1}^{q_2} qdq\frac{\omega (4\mu_F^2-q^2)}{\sqrt{4\mu_F^2 (q^2-\omega^2) -q^2 (4 \Delta^2 + q^2 -\omega^2) }}\notag\\
&&\approx-\frac{k_{F}^2\omega}{\pi}.
\end{eqnarray}
In addition, we obtain a second term linear in $\omega$, which, however, is smaller by a factor of $\Delta^2/\mu_F^2$. Keeping the dominant term in $\mathcal{X}(t)$, in the long-time limit we obtain~\cite{hart1971}
\begin{eqnarray}
\mathcal{X}(t)
\approx -\frac{k_{F}^2 U^2}{\pi^2}\log(1+it\omega_c),
\end{eqnarray}
\end{eqnarray}
where $\omega_c$ is the bandwidth, taken to be of the order of the Fermi energy.
Thus the behavior of the Green's function~(\ref{green_I}) in the long-time limit is determined by the $t$ and $\log t$ terms, both of which appear in the exponent.
The latter term leads to a power-law decay of the Green's function $\propto 1/t^{\nu}$, where $\nu=k_F^2 U^2/\pi^2$, and is responsible for the orthogonality catastrophe.\\
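The linear-in-$\omega$ behavior of $\rho(\omega)$ derived above can be checked directly. The sketch below (pure Python, illustrative units $v_F=1$ and $\Delta=0$, so $k_F=\mu_F$; the substitution $q^2=\omega^2+u^2$ removes the integrable endpoint singularity at $q_1=\omega$) evaluates the region-A momentum integral numerically and recovers $\rho(\omega)\approx -k_F^2\omega/\pi$ to better than a percent:

```python
import math

def rho_region_A(mu=1.0, omega=1e-2):
    """Midpoint evaluation of the region-A momentum integral at Delta = 0
    (units v_F = 1, so k_F = mu).  Substituting q^2 = omega^2 + u^2 turns
    the integrand into the smooth function omega*sqrt(4mu^2 - omega^2 - u^2)."""
    q2 = 2 * mu - omega                      # upper limit q_2 for Delta = 0
    u_max = math.sqrt(q2**2 - omega**2)
    n = 20000
    du = u_max / n
    total = 0.0
    for i in range(n):                       # midpoint rule
        u = (i + 0.5) * du
        total += omega * math.sqrt(4 * mu**2 - omega**2 - u * u) * du
    return -total / math.pi**2               # overall prefactor -1/pi^2

mu, omega = 1.0, 1e-2                        # illustrative values
numeric = rho_region_A(mu, omega)
analytic = -mu**2 * omega / math.pi          # the quoted  -k_F^2 * omega / pi
```

For $\omega/\mu_F\sim 10^{-2}$ the numerical and analytic values agree at the sub-percent level, confirming the leading linear behavior.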
Besides the Green's function, the spectral function of the heavy particle acquires drastic modification as compared to the free case. The spectral function is given by
\begin{eqnarray}
\mathcal{A}(\epsilon)=-2\,{\text{Im}}\bigg[\int_{-\infty}^{\infty}
dt e^{i\epsilon t}G(t)\bigg]=\frac{e^{-\tilde{\epsilon}}}{i\mu_F}\int_{1-i \infty}^{1+i\infty}dz \frac{e^{{z\tilde{\epsilon} }}}{z^{\nu}}\notag,
\end{eqnarray}
where $\tilde{\epsilon}=(\epsilon-\tilde{\epsilon}_p)/\mu_F$. First consider the case $\tilde{\epsilon}<0$: since $e^{{z\tilde{\epsilon} }}/z^{\nu}$ is analytic everywhere for $\text{Re}(z) >1$,
the contour of integration can be pushed to $\text{Re}(z) >1$ and $|z|\rightarrow\infty$. The integrand vanishes everywhere on the modified contour; therefore $\mathcal{A}(\epsilon)=0$ for $\tilde{\epsilon}<0$.
On the other hand, for $\tilde{\epsilon}>0$, the integrand is analytic everywhere except for the negative real axis where it has a branch cut.
Therefore, the contour can be deformed on to the negative real axis and we obtain
\begin{eqnarray}
\mathcal{A}(\epsilon)=\frac{2}{\mu_F}\text{Im}\big[\int_0^\infty dr r^{-\nu} e^{-r\tilde{\epsilon}} e^{i\pi\nu}\big]=\Theta(\tilde{\epsilon})\frac{2\pi}{\mu_F}\frac{e^{-\tilde{\epsilon}}\tilde{\epsilon}^{\nu-1}}{\Gamma(\nu)}.\notag\\
\end{eqnarray}
Thus the spectral function is no longer a delta-function peaked at the renormalized energy $\tilde{\epsilon}_p$; instead,
owing to the large number of particle-hole excitations, it exhibits a
power-law singularity, $\mathcal{A}(\epsilon) \propto \Theta(\epsilon-\tilde{\epsilon}_p)/(\epsilon-\tilde{\epsilon}_p)^{1-\nu}$. Thus the localized impurity acts as an incoherent excitation due to its interaction with the Dirac electrons and decays with time. \\
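The quoted spectral function can also be checked against the sum rule $\int d\epsilon\,\mathcal{A}(\epsilon)/2\pi=1$: with $x=(\epsilon-\tilde{\epsilon}_p)/\mu_F$ it reduces to $\int_0^\infty e^{-x}x^{\nu-1}dx=\Gamma(\nu)$. A minimal numerical sketch (the value of $\nu$ is illustrative, not taken from the text; the substitution $x=u^{1/\nu}$ removes the integrable singularity at $x=0$):

```python
import math

# Sum rule:  (1/2pi) * int A(eps) d eps = int_0^inf e^{-x} x^{nu-1} dx / Gamma(nu) = 1.
# After x = u^(1/nu) the integrand becomes the smooth function exp(-u^(1/nu))/nu.
nu = 0.3                         # illustrative exponent nu = k_F^2 U^2/pi^2 < 1
n, u_max = 100000, 10.0          # x = u^(1/nu) is enormous at u_max, so the tail is negligible
du = u_max / n
integral = sum(math.exp(-((i + 0.5) * du) ** (1.0 / nu)) / nu * du
               for i in range(n))            # midpoint rule
weight = integral / math.gamma(nu)           # should equal 1 (total spectral weight)
```

The total spectral weight comes out equal to one, as it must for a normalized spectral function.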
\subsection{Recoil Case: Suppression of Orthogonality Catastrophe}
The above-discussed scenario is significantly modified when considering an impurity with finite mass. In a typical scattering event involving an impurity atom and a particle-hole pair
with momentum $q$ and energy $\omega$ (where $q v_F\gtrsim \omega$) the impurity momentum changes by $q\sim\sqrt{2M \omega}$.
Thus for $\sqrt{2M \omega} \ll 2k_F$, the phase-space available for low-energy scattering is severely restricted.
This, in turn, is reflected in the deviation of $\rho(\omega)$ from the linear behavior and results in a modified Green's function.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Im_pi_recoil.pdf}
\caption{(Color online)
Recoil scenario: (a) The particle-hole excitation regions
for gapped excitation of Dirac fermions and $\tilde{\omega} =\omega -q^2/2M$. (b) Plot of $\text{Im}\Pi(q,\tilde{\omega})$ as a function of $q$ for fixed values of $\omega$. (c) The $\omega^{3/2}$ behavior of $\rho(\omega)$ for different $\mu_F$ and fixed $\Delta$. }
\label{allowed_region2}
\end{figure}
Following the earlier discussion, $\rho(\omega)$ for the recoil case is given by,
\begin{eqnarray}\label{recoil_DOS1}
\rho(\omega)=\int\frac{d^2 q}{4\pi^2}\,\,\text{Im}\Pi(q,\omega- \epsilon_{\vec{p} + \vec{q}} +\epsilon_{ \vec{p}} ).
\end{eqnarray}
In the limit of small frequency and vanishingly small momentum of the impurity, the limits of integration (see Fig.~\ref{allowed_region2}) are from $\omega$ to $\sqrt{2M\omega}$, where $\sqrt{2M\omega} \ll 2k_F $ and we have assumed $\Delta\ll \mu_F$. Using the expression for the polarization operator given in Eq.~(\ref{App_ImPi}) and replacing $\omega\rightarrow \omega -q^2/2M$, $\rho(\omega)$ acquires the following form,
\begin{eqnarray}\label{recoil_DOS2}
\rho(\omega)=-\frac{1 }{\pi^2}\int_{\omega}^{\sqrt{2M\omega}} dq \frac{(\omega-\frac{q^2}{2M}) q}{\sqrt{q^2-(\omega-\frac{q^2}{2M})^2}}\Bigg[\frac{4\mu_F^2-q^2}{\sqrt{4\mu_F^2-\zeta^2}}\Bigg],\notag
\end{eqnarray}
where the leading order result is given by
$\rho(\omega)= -g \omega^{3/2}$, with the proportionality constant being $g= \frac{4\sqrt{2M}}{3\pi^2}k_F$. Thus, compared to the infinite-mass scenario, recoil of the impurity suppresses
the particle-hole excitations and, as will be shown below, the impurity quasiparticle weight remains non-zero.
The quasiparticle weight $\text{Z}_0$ is obtained from evaluating the time independent part of $\mathcal{X}(t)$~(\ref{Xt1}), i.e.,
$$U^2\int \frac{d\omega}{\pi}~ \frac{\rho(\omega)}{\omega^2},$$
yielding $\text{Z}_0\approx \exp[-2gU^2\sqrt{\omega_c}/\pi] $.
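The frequency integral fixing $\text{Z}_0$ reduces to $\int_0^{\omega_c}\omega^{-1/2}d\omega=2\sqrt{\omega_c}$, which can be verified by a quick numerical sketch (all parameter values below are illustrative):

```python
import math

# Quasiparticle weight:  Z_0 ~ exp[-(U^2/pi) * int_0^{omega_c} g*omega^{3/2}/omega^2 d omega]
#                           = exp[-2 g U^2 sqrt(omega_c)/pi].
omega_c = 1.5                        # illustrative cutoff, of order the Fermi energy
g, U = 0.2, 0.1                      # illustrative coupling constants
n = 200000
h = omega_c / n
# midpoint rule handles the integrable omega^{-1/2} singularity at the origin
integral = sum(h / math.sqrt((i + 0.5) * h) for i in range(n))
Z0_closed = math.exp(-2 * g * U**2 * math.sqrt(omega_c) / math.pi)   # quoted form
Z0_num = math.exp(-g * U**2 * integral / math.pi)
```

The numerically evaluated weight matches the quoted closed form, and $\text{Z}_0<1$ for any non-zero coupling.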
As in the infinite mass case the linear in time term in the exponential, $\exp(-i\epsilon_p t)$,
gets trivially renormalized to
$\bar{\epsilon}_p$.
However, unlike the log term which is responsible for the
strong suppression of the Green's function of the infinite mass, here the time-dependent integral of $\mathcal{X}(t)$ results in a $t^{-1/2}$ term, specifically
$$-U^2\int \frac{d\omega}{\pi}~ \frac{\rho(\omega)}{\omega^2}e^{-i\omega t}\approx \frac{gU^2e^{-i\pi/4}}{\sqrt{\pi}}t^{-1/2}. $$
Therefore the long-time behavior of the Green's function acquires the form
\begin{align}
G(p,t)= -i\Theta(t)\text{Z}_0\exp\left(-i\bar{\epsilon}_p t+\frac{gU^2e^{-i\pi/4}}{\sqrt{\pi}}t^{-1/2}\right).
\end{align}
We note that for $t\rightarrow\infty$ the second term in the exponential vanishes and
we are left with a Green's function describing a well-defined quasiparticle excitation with $Z_0<1$.
As before, an insightful perspective into the nature of excitations is revealed from the behavior of the spectral function $\mathcal{A}(\epsilon)$.
The small contribution to the Green's function due to the $t^{-1/2}$ term allows for a perturbative treatment of the spectral function. Therefore, the spectral function can be split into a coherent and incoherent part. The coherent part is given by $\mathcal{A}^{\text{Coh.}}(\epsilon) \approx \text{Z}_0\delta(\epsilon-\bar{\epsilon}_p)$.
On the other hand, the incoherent part has a square-root singularity with the following expression,
\begin{eqnarray}
\mathcal{A}^{\text{Incoh.}}(\epsilon) \approx 2 g U^2 \text{Z}_0 \frac{\Theta(\epsilon-\bar{\epsilon}_p)}{\sqrt{\epsilon-\bar{\epsilon}_p}},\label{incoh}
\end{eqnarray}
and is obtained by performing a partial series expansion of the above Green's function and taking the imaginary part of the Fourier transform of $\delta G \propto -i\Theta(t)e^{-i\bar{\epsilon}_pt}t^{-1/2}$, where we have made use of the following result: $\int_0^\infty e^{i\alpha t}dt/\sqrt{t}=\sqrt{\pi}e^{i\text{sgn}(\alpha)\pi/4}/\sqrt{|\alpha|}$.
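The transform quoted above can be verified numerically by keeping a small damping $\eta>0$: the substitution $t=s^2$ turns $\int_0^\infty e^{-(\eta-i\alpha)t}dt/\sqrt{t}$ into the exact Gaussian result $\sqrt{\pi/(\eta-i\alpha)}$, and the stated identity is its $\eta\to0^+$ limit. A sketch with illustrative values:

```python
import cmath, math

# Check:  2 * int_0^inf exp[-(eta - i*alpha) s^2] ds  =  sqrt(pi/(eta - i*alpha)),
# which tends to sqrt(pi) e^{i sgn(alpha) pi/4}/sqrt(|alpha|) as eta -> 0+.
alpha, eta = 1.0, 0.05            # illustrative frequency and damping
S, n = 25.0, 50000                # exp(-eta*S^2) ~ e^{-31} is negligible at the cutoff
ds = S / n
num = sum(2.0 * cmath.exp(-(eta - 1j * alpha) * ((i + 0.5) * ds) ** 2) * ds
          for i in range(n))      # midpoint rule on the damped Gaussian
exact = cmath.sqrt(math.pi / (eta - 1j * alpha))
```

The midpoint sum reproduces the closed form to high accuracy; sending $\eta\to0^+$ rotates the phase onto $e^{i\pi/4}$ as used in Eq.~(\ref{incoh}).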
The non-zero quasiparticle weight and the delta-function in the spectral function attest to the well-behaved quasiparticle like excitation. At the same time, the weaker square-root singularity in the incoherent part is indicative of the remnants of the orthogonality physics that is significantly subdued due to the relatively fewer number of particle-hole excitations generated in the recoil process.
\section{Mobility of Impurity}{\label{sec:IV}}
In this section we will obtain the low temperature behavior of the DC mobility which is given by $\mu =e \tau/M $, where $\tau$ is the transport time~\cite{rosch1998}. We estimate $\tau$ by first calculating the inverse quasiparticle lifetime for a mobile impurity with momentum $p$ using the Fermi's golden rule \cite{Pines1}
\begin{eqnarray}
\frac{1}{\tau_p}=- \int&& \frac{d\omega d^2q}{2\pi^2}U^2(q)
\frac{1}{e^{\beta \omega}-1}
\text{Im}\Pi(q,\omega)\notag \\
&&\times~
\delta(\omega+\epsilon_{p}-\epsilon_{p+q}) ,
\end{eqnarray}
where the above expression is a modified version of the standard formula for the lifetime of fermions, which contains an additional factor $[1-n_F(\epsilon_{p+q})]$.
This factor represents the probability that the scattered state is unoccupied; in our case it is simply set to unity, as the corresponding impurity state remains unoccupied. An identical expression is obtained from the on-shell imaginary part of the self-energy of the mobile impurity.
The statistical average of $1/\tau_p$ is performed with respect to the Boltzmann weight factor. We denote the average as $\langle 1/\tau_p \rangle$ given by
\begin{eqnarray}
\Big\langle\frac{1}{\tau_p}\Big\rangle=\frac{\beta}{2\pi M }\int d^2p~\frac{1}{\tau_p} e^{-\beta\epsilon_p}.\hspace{0.5cm}
\label{avglt}
\end{eqnarray}
For our purpose the above expression is useful, as the time scale obtained from it has the same order of magnitude and temperature dependence as the transport time.
The energy scale in the integral of Eq.~(\ref{avglt}) is set by the temperature. Therefore, the contributions to the integral are dominated by the regions $p,q\sim \sqrt{2MT}$ and $\omega\sim T$.
In the low temperature regime ($T\ll k_F^2/M$) the typical momentum transferred $q$ satisfies $q\ll k_F$, moreover, $\omega/qv_F\ll 1$ which implies the polarization operator can be expanded in the ratio $\omega/qv_F$ yielding
$\text{Im}\Pi(q,\omega)\approx -(4/\pi)\mu_F\omega/qv_F$.
Performing the angular integration removes the $\delta$-function and yields
\begin{eqnarray}
\Big\langle\frac{1}{\tau_p}\Big\rangle=&&\frac{4\mu_F U^2_0}{\pi^3 v_F }(MT^3)^{1/2}\int_{0}^{\infty} \tilde{p}~d\tilde{p} e^{-\tilde{p}^2/2} \int_{0}^{\infty} d\tilde{q}\notag \\&& \int_{-\tilde{p}^2/2}^{\infty} d\tilde{\omega}
\frac{1}{e^{\tilde{\omega}}-1}
~\frac{\tilde{\omega}}{\sqrt{(\tilde{p}\tilde{q})^2-(\tilde{\omega} -\tilde{q}^2/2)^2}},\hspace{0.5cm}
\end{eqnarray}
where we have used dimensionless variables
$\tilde{p}=p/\sqrt{MT}$, $\tilde{q}=q/\sqrt{MT}$ and $\tilde{\omega}=\omega/T$. The lower cut-off on the frequency integration is imposed by the $\delta$-function, which forbids the frequency range $\omega < -\epsilon_p$. We note that the dimensionless integral is of order $\mathcal{O}(1)$, while the change of variables allows us to extract the $T^{3/2}$ temperature dependence of the inverse scattering time. The above result emphasizes that the mobility of an impurity interacting with Dirac fermions on the surface of a TI diverges with decreasing temperature in the low-temperature region as $\mu \propto T^{-3/2}$.\\
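The static-limit form $\text{Im}\Pi\approx-(4/\pi)\mu_F\omega/qv_F$ used above can be cross-checked against the region-A expression read off from the momentum integral of the non-recoil analysis. A numerical sketch (illustrative units $v_F=1$, $\Delta=0$):

```python
import math

def im_pi_A(q, omega, mu, delta=0.0):
    """Region-A Im Pi (v_F = 1), as read off from the momentum integral:
    Im Pi = -(2/pi) * omega * (4 mu^2 - q^2) / sqrt(4 mu^2 (q^2 - omega^2)
                                                    - q^2 (4 delta^2 + q^2 - omega^2))."""
    den = math.sqrt(4 * mu**2 * (q**2 - omega**2)
                    - q**2 * (4 * delta**2 + q**2 - omega**2))
    return -(2.0 / math.pi) * omega * (4 * mu**2 - q**2) / den

mu, q, omega = 1.0, 0.05, 1e-4                # regime omega << q v_F and q << k_F
full = im_pi_A(q, omega, mu)
approx = -(4.0 / math.pi) * mu * omega / q    # static-limit expansion used in the text
```

In the regime $\omega\ll qv_F\ll k_Fv_F$ the two expressions agree to well below a percent, justifying the expansion.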
\section{Interaction of impurity with the helical edge state}{\label{sec:V}}
So far we have considered the interaction of an isolated impurity with that of the surface states of a 3D TI. Similar to a 3D TI, a 2D TI
has an insulating bulk and metallic edge states.
The pair of gapless-edge states have specific chirality (also called helical edge-states) and are time-reversed partners of each other. These are the 1D helical modes in which backscattering due to the nonmagnetic impurities is forbidden.
A gap in the spectrum can be introduced by breaking time-reversal symmetry which is typically achieved by an external magnetic field.
In this section, we will first develop the formalism to describe the interaction of an isolated mobile impurity with that of an interacting helical liquid followed by the study of Green's function in the non-recoil and recoil case.
The non-interacting Hamiltonian of a helical liquid in the presence of a magnetic field has the following form
\begin{align}\label{HL}
H_{\text{HL}}^0=\int dx\psi^\dagger(x)(-i\hbar \partial_x\sigma_z +B\sigma_x -\epsilon_F)\psi(x),
\end{align}
where $B$ is the Zeeman field applied along the $x$-direction (taken to be perpendicular to the spin-quantization axis) and
the dispersion
is given by $\epsilon_{\pm}=\pm\sqrt{v^2p_x^2+B^2}$, where $\hat{u}$ and $\hat{l}$ denote the eigenspinors of the upper and lower bands, respectively.
We consider the scenario wherein the lower band is completely filled (henceforth it will be ignored) whereas the upper band is filled up to the Fermi momenta $\pm k_F$. Thus the field operator $\psi(x)$
has the following form
\begin{align}
\psi(x)=\big[\hat{u}(k_F)\psi_R(x)e^{ik_Fx} +\hat{u}(-k_F)\psi_L(x)e^{-ik_Fx}\big],\nonumber
\end{align}
where $\psi_R(x)$ and $\psi_L(x)$
are the slow degrees of freedom about the points $k_F$ and $-k_F$, respectively, and the fermion spin texture is given by
\begin{align}\hat{u}(p) = \frac{1}{2}\big\{ a_{+}+\frac{p}{|p|} a_{-},a_{+}-\frac{p}{|p|} a_{-}\big\},\end{align}
where $a_{\pm}=\sqrt{1\pm\frac{B}{ \sqrt{B^2+p^2}}}$.
We express $\psi_R(x)$ and $\psi_L(x)$ in terms of the slowly varying bosonic fields $\phi(x)$ and $\theta(x)$ as follows
\begin{align}\label{Ffields}
\psi_R(x)=\frac{1}{\sqrt{2\pi a_0}}e^{i(\theta-\phi)},~~\psi_L(x)=\frac{1}{\sqrt{2\pi a_0}}e^{i(\phi+\theta)},
\end{align}
where $a_0$ is the short-distance cutoff and the bosonic fields satisfy the commutation relation $[\phi(x),\theta(y)]=-i\pi\,\text{sign}(x-y)/2 $. Plugging~(\ref{Ffields}) into~(\ref{HL}), the Hamiltonian acquires the standard quadratic form in terms of the bosonic fields~\cite{giamarchi}
\begin{align}\label{Hfree}
H_{\text{HL}}^0=v_F\int \frac{dx}{2\pi} [(\partial_x \phi)^2 + (\partial_x \theta)^2].
\end{align}
The Hamiltonian~(\ref{Hfree}) is modified by including the interaction term $\frac{1}{2}\int dx\, dx'\, U_e(x-x')\rho(x)\rho(x')$, where the density operator is given by
\begin{eqnarray}
&&\rho(x) =\psi_R^{\dagger}(x)\psi_R(x) +\psi_L^{\dagger}(x)\psi_L(x)+\frac{B}{\sqrt{B^2+k_{F}^{2}}}{}\nonumber\\ && \times[\psi_R^{\dagger}(x)\psi_L(x)e^{-i2k_Fx} +\psi_L^{\dagger}(x)\psi_R(x)e^{i2k_Fx}].
\end{eqnarray}
It is worth noting that the $2k_F$ component of the density in a helical liquid is allowed due to the presence of the magnetic-field.
The interaction corrections
arising from the forward-scattering terms:
$\psi_{R/L}^{\dagger}(x)\psi_{R/L}(x)\psi_{R/L}^{\dagger}(y)\psi_{R/L}(y)$ and $\psi_{R/L}^{\dagger}(x)\psi_{R/L}(x)\psi_{L/R}^{\dagger}(y)\psi_{L/R}(y)$ yield a $\frac{\tilde{U}_e(0)}{2\pi^2} \int dx (\partial_x \phi)^2$ term in the Hamiltonian, where $\tilde{U}_e(k)$ is the Fourier component of the potential $U_e$ at momentum $k$. On the other hand, from the
back-scattering terms $$\psi_{R/L}^{\dagger}(x)\psi_{L/R}(x)\psi_{L/R}^{\dagger}(y)\psi_{R/L}(y)e^{\mp i2k_F(x-y)},$$ one obtains a correction to the Hamiltonian that is proportional to the square of the field strength
and given by~\cite{Zyuzin2010}
$$ -\frac{B^2}{B^2+k_F^2}\frac{\tilde{U}_e(2k_F)}{2\pi^2} \int dx (\partial_x \phi)^2.$$
The interaction modified Hamiltonian thus acquires the following form
\begin{align}\label{HInt}
H_{\text{HL}}=v\int \frac{dx}{2\pi} \Big[\frac{1}{K}(\partial_x \phi)^2 + K[\pi \Pi(x)]^2\Big],
\end{align}
where $\Pi(x)= \partial_x\theta(x)/\pi$, $v=\sqrt{v_F(v_F+r)}$, $K=\sqrt{v_F/(v_F+r)}$ and $r= [\tilde{U}_e(0)-B^2/(B^2+k_F^2)\tilde{U}_e(2k_F)]/\pi$. In terms of the bosonic annihilation operator
\begin{align}\label{ann}
b_p=\frac{1}{\sqrt{2|p|K}} [-\frac{|p|\phi_p}{\sqrt{\pi}} + i\frac{K}{\sqrt{\pi}} \Pi_p],
\end{align}
the potential term due to the interaction of the mobile impurity with the bosonic excitation, $V=U\int dx a^\dagger (x)a(x) \rho(x)$, is
given by
\begin{eqnarray}\label{Eq:Int1D}
V&=& U'\sum_{k,q}i\text{sgn}(q)\sqrt{\frac{|q|}{2\pi L}}a^{\dagger}_{k+q}a_k (b_q+b^\dagger_{-q}),
\end{eqnarray}
where $U'=UK$. We have neglected the large-momentum-transfer terms, as we consider the simpler scenario in which the $B$-field is switched off.\\
Thus the full Hamiltonian with the impurity interaction term acquires the form
$$ H=\sum_{k}\epsilon_k a^\dagger _k a_k +v \sum_p |p| b^\dagger_p b_p +V.$$
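As a consistency check, the Luttinger parameters defined above satisfy $vK=v_F$ and $v/K=v_F+r$, so repulsive interactions ($r>0$) give $K<1$ and a reduced impurity vertex $U'=UK$. A sketch with illustrative parameter values:

```python
import math

# Luttinger parameters:  v = sqrt(v_F (v_F + r)),  K = sqrt(v_F/(v_F + r)),
# with r = [U_e(0) - B^2/(B^2 + k_F^2) * U_e(2 k_F)]/pi.
v_F, U = 1.0, 0.3                       # illustrative bare velocity and impurity coupling
U0, U2kF, B, k_F = 0.8, 0.5, 0.2, 1.0   # illustrative potential modes and Zeeman field
r = (U0 - B**2 / (B**2 + k_F**2) * U2kF) / math.pi
v = math.sqrt(v_F * (v_F + r))
K = math.sqrt(v_F / (v_F + r))
U_prime = U * K                         # renormalized vertex entering V
```

The identities $vK=v_F$ and $v/K=v_F+r$ hold exactly; the magnetic field weakens the back-scattering contribution to $r$ through the factor $B^2/(B^2+k_F^2)$.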
With this expression for the Hamiltonian, we will employ the linked cluster expansion technique to describe the modifications to the impurity Green's function~\cite{Kantian2014}.
As before, the interaction-modified impurity Green's function has the form $ G(k,t)= G_0(k,t)e^{\sum_i \mathcal{S}_{i}}$, where $ G_0(k,t)=-i \theta(t)e^{-i\epsilon_p t}$. It suffices to keep terms up to the second cumulant. The first cumulant, $\mathcal{S}_1 =-i\int dt_1 \langle|a_k(t)V(t_1)a^\dagger _k (0)|\rangle/G_0$, vanishes as it involves averaging over a single boson
operator. The non-vanishing contribution arises from the second cumulant:
$\mathcal{S}_2(t)=G_0^{-1}\mathcal{M}_2-\mathcal{S}_1^2/2$, where
$$\mathcal{M}_2(t)=\frac{(-i)^3}{2}\int dt_1 \int dt_2 \langle|a_k(t)V(t_1)V(t_2)a_k^\dagger (0)|\rangle.$$
As in the 2D case
only the connected diagrams need be considered.
In terms of the unperturbed Green's function the second cumulant has the following form,
\begin{eqnarray}
&& \mathcal{S}_2(t)=(-i)^3\sum_{q}V^2(q)\int_0^t dt_1 \int_0^{t_1} dt_2 G_0(k,t-t_1)\notag \\
&& G_0(k+q,t_1-t_2)G_0(k ,t_2)D_0(q,t_1-t_2)/G_0(k,t),
\end{eqnarray}
where $D_0(q,t_1-t_2)=-i\theta(t_1-t_2)e^{-iv|q| (t_1-t_2)}-i\theta(t_2-t_1)e^{iv|q| (t_1-t_2)}$ is the zero-temperature time-ordered bosonic Green's function. Performing the integration over $t_2$ and $t_1$ we obtain
\begin{eqnarray}\label{S2_1D}
\mathcal{S}_2(k,t)=-\int d\omega \rho(\omega,k)\bigg[-\frac{it }{\omega}+\frac{1 - e^{-it\omega }}{\omega^2}
\bigg],
\end{eqnarray}
where
\begin{eqnarray}\label{rho-1D}\rho(\omega,k)=\frac{U'^2}{2\pi}\int \frac{dq}{2\pi} |q|\delta(\omega-\epsilon_{k+q}+\epsilon_k-v|q|).
\end{eqnarray}
We note that, similar to the 2D case, the first term (linear in time) in~(\ref{S2_1D}) renormalizes the impurity energy, whereas it is again the second term that determines the long-time asymptotics of the impurity Green's function.
\subsection{Non-recoil case}
For the non-recoil case, which corresponds to $M=\infty$, the impurity energy terms drop out of the $\delta$-function; therefore $\rho(\omega)$
acquires the simple form
\begin{equation}
\rho(\omega)=\frac{U'^2}{2\pi }\int \frac{dq}{2\pi}~|q|\delta(\omega-v|q|)=\frac{U'^2}{2\pi^2 v^2 }\omega,
\end{equation}
where $\omega>0$.
The long-time asymptotics, in particular the decay of the impurity Green's function, is determined by the following term of $\mathcal{S}_2$:
$$ -\frac{U'^2}{2\pi^2 v^2 }\int \frac{d\omega (1-e^{-i\omega t})}{\omega}\approx -\frac{U'^2}{2\pi^2 v^2 } \log(t\omega_c).$$
The Green's function thus has a power-law decay given by
\begin{equation}\label{GF:decay1}G(t) \propto t^{-\frac{U'^2}{2\pi^2v^2}},
\end{equation}
resulting in a non-Lorentzian spectral function.
The above calculation confirms the well-known fact that in a 1D system the introduction of heavy impurity leads to orthogonality catastrophe.\\
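The logarithm quoted above can be checked numerically: the real part of the decay integral is $\int_0^{\omega_c}(1-\cos\omega t)\,d\omega/\omega=\mathrm{Cin}(t\omega_c)\to\log(t\omega_c)+\gamma_E$ for $t\omega_c\gg1$. A sketch (the dimensionless value of $t\omega_c$ is chosen for illustration):

```python
import math

# Real part of the decay integral, after substituting x = omega*t:
#   int_0^{t*wc} (1 - cos x)/x dx = Cin(t*wc) ~ log(t*wc) + gamma_E  for t*wc >> 1.
t_wc = 200.0                    # illustrative dimensionless t*omega_c
n = 200000
dx = t_wc / n
cin = sum((1.0 - math.cos((i + 0.5) * dx)) / ((i + 0.5) * dx) * dx
          for i in range(n))    # midpoint rule; integrand -> x/2 near x = 0, so it is smooth
gamma_E = 0.5772156649015329    # Euler-Mascheroni constant
estimate = math.log(t_wc) + gamma_E
```

The numerical value tracks $\log(t\omega_c)$ up to the additive constant $\gamma_E$ and a bounded oscillatory remainder, which is the logarithmic growth driving the power-law decay~(\ref{GF:decay1}).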
\subsection{Recoil case}
Consider first the scenario of small impurity momentum, in particular $k\ll Mv$. Unlike the 2D case, where the impurity exhibits quasiparticle behavior even at very low momenta, in 1D the decay behavior of the Green's function remains unchanged and is given by Eq.~(\ref{GF:decay1}), implying non-quasiparticle behavior.
Consider next the scenario $k< Mv$, but $(Mv-k)/k\sim 1$. The long-time behavior of the impurity is determined by
$\rho$ near the small frequencies and the corresponding $\omega$ expansion of $\rho$ yields the following form
\begin{eqnarray}
\rho(\omega)&=&\frac{U'^2M}{2\pi }\int \frac{dq}{2\pi}|q|\sum_{i=1}^{2}[\frac{\delta(q-\frac{M\omega}{Mv+(-1)^ik})}{Mv+(-1)^ik}]\notag\\
&=&\frac{U'^2M^2\omega}{2\pi^2} \Big[ \frac{M^2v^2+k^2}{(M^2v^2-k^2)^2}\Big].
\end{eqnarray}
The Green's function, therefore, exhibits power-law decay given by
\begin{eqnarray}
G(k,t) \propto t^{-\frac{U'^2}{2\pi^2}\frac{v^2+k^2/M^2}{(v^2-k^2/M^2)^2}},
\end{eqnarray}
where the exponent is now $k$-dependent and the $k/Mv\ll 1$ limit (\ref{GF:decay1}) is recovered from the above equation. In spite of the decay behavior, for $k\gg \sqrt{2M/\tau_0}$ (where $\tau_0= e^{2\pi^2v^2/U'^2}/\omega_c$) quasiparticle-type behavior is expected up to times $t\sim\tau_0$.
Finally consider the scenario wherein the initial impurity momentum is large, i.e., $k> M v$.
In this case, the main contribution from the $\delta$-function integration yields a frequency independent term, $\rho(\omega)= U'^2 M/2\pi^2$, arising from the $q \approx 2(k-Mv)$ region.
Thus from Eq.~(\ref{S2_1D}), it is easy to deduce that the decay term of the Green's function results in a conventional Fermi-liquid type term~\cite{Kantian2014}, i.e., $e^{-t/\tau}$
where the lifetime is given by $1/\tau \approx U'^2M/4\pi $ (thus for $v\gg U'$ the excitation is well defined). The oscillatory term, on the other hand, acquires a contribution from a rather unusual term given by $(U'^2 M/2\pi^2)t\log t\omega_c$, which can be neglected in comparison to $k^2/2M$ for $t<\tau$ as long as $v\gg U'\sqrt{\log (\omega_c/U'^2M)}$.
This criterion on $v$ also implies that the subleading contribution from the second $q-$region ($\approx M\omega/(Mv+k)$ where the $\delta$-function is non-zero) can be neglected.\\
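The $\delta$-function evaluation behind the momentum-dependent $\rho(\omega)$ of this subsection rests on the algebraic identity $1/(Mv-k)^2+1/(Mv+k)^2=2(M^2v^2+k^2)/(M^2v^2-k^2)^2$, which can be checked numerically (all values illustrative, with $k<Mv$):

```python
import math

# Collecting the two delta-function contributions at q_i = M*omega/(M*v + (-1)^i k)
# gives rho(omega) = U'^2 M^2 omega/(4 pi^2) * [1/(Mv-k)^2 + 1/(Mv+k)^2],
# which the identity below turns into the quoted closed form.
M, v, k = 2.0, 1.0, 0.7          # illustrative values satisfying k < M*v
Up, omega = 0.3, 0.01            # illustrative coupling U' and frequency
lhs = sum(1.0 / (M * v + s * k) ** 2 for s in (-1.0, 1.0))
rhs = 2.0 * (M**2 * v**2 + k**2) / (M**2 * v**2 - k**2) ** 2
rho = Up**2 * M**2 * omega / (4 * math.pi**2) * lhs
rho_quoted = (Up**2 * M**2 * omega / (2 * math.pi**2)
              * (M**2 * v**2 + k**2) / (M**2 * v**2 - k**2) ** 2)
```

Both routes give the same $\rho(\omega)$, confirming the momentum-dependent exponent of the Green's function.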
\section{Mobility of Impurity in 1D}{\label{sec:VI}}
The temperature dependence of the mobility of impurity constrained to move in 1D and interacting with the helical edge modes exhibit contrasting behavior in the presence and the absence of a magnetic field.
We will again focus our attention on the low-temperature regime $T\ll k_F^2/M$.
Consider first the scenario without the magnetic field, as discussed earlier the back-scattering processes will be absent and
only the forward scattering processes governed by the interaction term
~(\ref{Eq:Int1D}) are allowed.
Focusing on the weak-coupling limit, we utilize the Boltzmann equation approach to analyze the temperature dependence of the mobility. In the presence of an external electric field $E$, the steady-state Boltzmann equation for the momentum distribution function $f_{p,t}$ is given by
\begin{eqnarray}\label{Eq:Boltz}
eE \frac{\partial f_{p,t}}{\partial p}=\sum_{k} [f_{k,t}\Gamma(k; p)-f_{p,t}\Gamma(p; k)],
\end{eqnarray}
where $e$ is taken to be the charge of the heavy particle. The effects of the scattering processes are encoded in the RHS, which is the collision integral. As in the 2D case, the equilibrium distribution function of the impurity is given by the Maxwell-Boltzmann distribution $f^0_k = N e^{-\beta k^2/2M }$, where the normalization constant is $N = \sqrt{ 2\pi \beta/M}$. Indeed, in equilibrium the LHS vanishes;
therefore, the detailed balance equation $ f^0_{k}\Gamma(k; p)=f^0_{p}\Gamma(p; k)$ is necessarily satisfied.
The scattering rate $\Gamma$ obtained using the Fermi-Golden rule has the form
\begin{eqnarray}
\Gamma(k; p) =\frac{U^2}{vL}\Big[\omega_q (n_q+1)\delta(\frac{p^2}{2M}-\frac{k^2}{2M} +\omega_q)+\notag\\
\omega_q n_q\delta(\frac{p^2}{2M}-\frac{k^2}{2M} -\omega_q)\Big],\quad
\end{eqnarray}
where $q=p-k$ and $n_q$ is the equilibrium bosonic distribution function.
Consider the first term of~(\ref{Eq:Boltz}). The summation over $k$ implies that the typical impurity momentum is $k\sim\sqrt{MT}$, while the energy conservation criterion forces phonons with momentum $q\sim Mv\approx Mv_F$ to take part in the scattering process; however, this is an exponentially rare process since $T\ll Mv_F^2$.
Thus the contribution to friction due to this term, and by similar arguments due to the second term, is exponentially suppressed.
Consequently, the mobility diverges exponentially.
Turning on the magnetic field opens up the back-scattering channel, thus these processes can in principle yield finite contributions to the mobility. The interaction term now has an additional term given by
\begin{eqnarray}
\label{eq:BS}
\frac{U}{2\pi a_0}\frac{B}{\sqrt{B^2+ k_F^2}}\int dx a^\dagger(x) a(x) \cos(2\phi-2k_F x).
\end{eqnarray}
Consider the possibility of $2k_F$ momentum transfer to the impurity particle with momentum $k\sim \sqrt{MT}$, in this case the energy transferred will be $\sim k_F^2/M$. Since the temperature regime we are considering is much smaller than this energy scale,
it is again an exponentially suppressed process.
Thus, unsurprisingly, this process also does not impede the impurity flow.\\
It turns out that even though the second-order process arising from~(\ref{eq:BS}) is perturbatively weaker than
the first-order back-scattering process, it yields the dominant contribution to the scattering rate at low temperatures. The interaction term for a second-order process can be written as~\cite{NetoF}
$$V_{2}=\mathcal{V}_2 \int dx a^\dagger (x) a(x)\psi^\dagger_R\psi_R\psi^\dagger_L\psi_L,$$
where $\mathcal{V}_2=U^2B^2/[(B^2+k_F^2)\epsilon_{2k_F}]$. In terms of the bosonic annihilation operator~(\ref{ann}) the interaction term is given by,
\begin{eqnarray}\label{NetoV}
&&V_{2}=-\frac{\mathcal{V}_2}{8 \pi L}\sum_{p,k_1,k_2}\sqrt{|k_1| |k_2|}~a^\dagger_{p+k_1+k_2}a_{p}
\Big[\Big(\frac{k_1 k_2}{|k_1 k_2|} K- K^{-1}\Big){}\nonumber\\
&&\times b^\dagger_{-k_1}b^\dagger_{-k_2}+\Big(\frac{k_1 k_2}{|k_1 k_2|} K+ K^{-1}\Big) b_{k_1}b^\dagger_{-k_2}\Big]+h.c.
\end{eqnarray}
The first term of the above equation represents a scattering process which involves a mobile impurity with an initial momentum $p$ getting scattered into the state $p+k_1+k_2$ via the creation of two phonons with momentum $-k_1$ and $-k_2$. This process requires the initial energy of the mobile impurity to be $\sim Mv_F^2$ and hence an unfavorable process.
Similar argument holds for its hermitian conjugate pair.
We can therefore approximate~(\ref{NetoV}) as
\begin{eqnarray}\label{NetoV1}
V_{2}\approx-\frac{\mathcal{V}_2}{4 \pi L}\sum_{p,k_1,k_2}&&\sqrt{|k_1| |k_2|}
\Big(\frac{k_1 k_2}{|k_1 k_2|} K+ K^{-1}\Big){}\nonumber\\
&& a^\dagger_{p+k_1+k_2}a_{p}b_{k_1}b^\dagger_{-k_2}.
\end{eqnarray}
The interaction term now represents the scattering of the mobile impurity via the destruction and creation of phonons. The requirement for this scattering process to be relevant is that both the initial and final energies of the mobile impurity and the phonons are $\sim T$. As will be discussed below this requirement is satisfied.
The collision integral i.e., the RHS of~(\ref{Eq:Boltz}) with this new interaction term is given by
\begin{eqnarray}\label{Eq:coll}
I(p)= \sum_{q} [-f_{p}\Gamma_2(p; p+q)+f_{p+q}\Gamma_2(p+q; p)],
\end{eqnarray}
where $\Gamma_2$ is the scattering rate. Defining the non-equilibrium distribution function as $f_{p}=f^0_{p} h_p$ and using a similar detailed balance equation as discussed earlier one obtains,
$I(p)= \sum_{q} f^0_{p+q} (h_{p+q}-h_{p}) \Gamma_2(p+q; p).$ Using Fermi's golden rule the full expression for the collision integral can be written as
\begin{widetext}
\begin{eqnarray}\label{Eq:coll2}
I(p)= \frac{\mathcal{V}_2^2}{32 \pi^3}
\int dq\, d\bar{q}
|k_1k_2|(h_{p+q}-h_{p})
\Big(K^2+ K^{-2}+2\frac{k_1 k_2}{|k_1 k_2|} \Big) f^0_{p+q}n_{k_1}(n_{k_2}+1) \delta\Big(\frac{p^2}{2M}-\frac{(p+q)^2}{2M}+\omega_{k_2}
-\omega_{k_1}\Big),
\end{eqnarray}
\end{widetext}
where $k_2= -(q+\bar{q})/2$ and $k_1= (\bar{q}-q)/2$.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{FEY1.pdf}
\caption{(Color online) The scattering process can be divided into two regions shown by the shaded and the unshaded region in the $(q,\bar{q})$ plane. The solid lines represent the scattering of the impurity and the wiggly lines represent the phonons.}
\label{FEY1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{ENG.pdf}
\caption{(Color online)~(a)~Represents phonon scattering in the $q>0$ and $q>\bar{q}$ regions. The momentum transferred is $q$, however, the energy transfer is $v\bar{q}$.~(b) The energy change of the mobile impurity: $p^2/2M-(p+q)^2/2M$. (c) ~Represents phonon scattering in the $\bar{q}>0$ and $\bar{q}>q$ regions. The momentum transferred is $q$, and the energy transferred is $vq$.(d) The energy change of the mobile impurity has formally the same expression $p^2/2M-(p+q)^2/2M$. However, the energy-momentum constraint is satisfied for $q\sim Mv_F$ and with the corresponding energy of the phonon $\sim Mv_F^2$. An alternate description is given in the main text.}
\label{ENG}
\end{figure}
The evaluation of the $\delta$-function can be divided into the following two cases: $\omega_{\frac{|q+\bar{q}|}{2}}-\omega_{\frac{|\bar{q}-q|}{2}} =\pm v q$ and $\omega_{\frac{|q+\bar{q}|}{2}}-\omega_{\frac{|\bar{q}-q|}{2}}=\pm v \bar{q}$. These are the unshaded and the shaded regions of Fig.~\ref{FEY1}, respectively. The former scenario is irrelevant, since the $\delta$-function imposes a constraint similar to the one discussed before, i.e., the requirement that the phonons have energy $\sim Mv_F^2$. The latter case, on the other hand, is realized for the range $q^2 > \bar{q}^2$, where the momentum transfer $ q$ reverses the direction of the phonons, i.e., $k_1$ and $-k_2$ point in opposite directions, while the energy of the phonon hardly changes (see~Fig.~\ref{ENG}). This is reflected in the $\delta$-function constraint, which fixes the energy transfer to $|v\bar{q}|=|\xi_{p+q}-\xi_{p}|$, where $|\bar{q}/q|\sim\sqrt{T/Mv_F^2} \ll 1$. Therefore~(\ref{Eq:coll2}) reduces to
\begin{eqnarray}\label{Eq:coll3}
I(p)= \frac{\mathcal{V}_2^2 (K+ K^{-1})^{2}}{128 \pi^3}
\int dq &&q^2 f^0_{p+q}(h_{p+q}-h_{p})\nonumber\\
&& \times n_{\frac{q}{2}}(n_{\frac{q}{2}}+1).
\end{eqnarray}
We next consider the limit of a weak electric field $E$. Following Feynman~\emph{et al.}~\cite{Feynman}, $h_p$ is expanded to linear order in $E$
as $h_p=1+ pE{\mathcal{H}}$, where $\mathcal{H}$ is a weakly varying even function of $p$.
The integral is evaluated to yield
\begin{align}\label{Eq:coll4}
I(p)=\frac{2\pi T^5}{15}\mathcal{V}_2^2 (K+ K^{-1})^{2} E{\mathcal{H}}\frac{\partial f^0_p}{\partial p}.
\end{align}
Under the steady-state condition, the LHS of~(\ref{Eq:Boltz}) is simply given by $eE\,\partial_p f_p \approx eE(1+ pE{\mathcal{H}})\,\partial_p f^0_p$. Thus, comparing it with~(\ref{Eq:coll4}), we obtain
$$\mathcal{H} \approx \frac{15}{2\pi T^5} \frac{e }{\mathcal{V}_2^2(K+ K^{-1})^{2}}. $$
With the non-equilibrium distribution determined, the mobility, $\mu$, of impurity in the presence of electric-field $E$ can be easily calculated and is
given by
\begin{align}
\mu = \frac{\int dp p^2 f_0(p) E{\mathcal{H}}/M}{E \int dp f_0(p)}= \mathcal{H} T\propto \frac{1}{T^4B^4}.
\end{align}
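The last step uses the 1D equipartition result $\langle p^2/M\rangle=T$ for the Maxwell-Boltzmann weight, which converts the ratio of integrals into $\mathcal{H}T$. A quick numerical sketch (illustrative $M=T=1$):

```python
import math

# Check  <p^2/M> = T  for the 1D Maxwell-Boltzmann weight f_0 ~ exp(-p^2/2MT),
# which gives  mu = <p^2 f_0 E H / M> / <E f_0> = H * T.
M, T = 1.0, 1.0                  # illustrative mass and temperature
n, pmax = 200000, 12.0           # the Gaussian weight is negligible beyond |p| ~ 12
dp = 2 * pmax / n
num = den = 0.0
for i in range(n):               # midpoint rule over the symmetric interval
    p = -pmax + (i + 0.5) * dp
    w = math.exp(-p * p / (2 * M * T))
    num += p * p / M * w * dp
    den += w * dp
avg = num / den                  # should equal T
```

With $\mathcal{H}\propto 1/(T^5\mathcal{V}_2^2)$ and $\mathcal{V}_2\propto B^2$, the factor of $T$ from this average yields the quoted $\mu\propto 1/(T^4B^4)$ scaling.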
Thus, with the help of the above ansatz, it is easy to deduce that the mobility diverges as $T^{-4}$. A similar conclusion
was reached in an earlier work~\cite{NetoF} using
an alternate approach.
It is worth noting that here the power-law divergent behavior is achieved only in the presence of a magnetic field. In the absence of a magnetic field the mobility diverges exponentially at low temperatures.
\section{Summary}
To summarize, we have presented a detailed study of the Green's function and the mobility of a single nonmagnetic impurity interacting with the bath of 2D and 1D Dirac fermions.
In the 2D scenario, the impurity Green's function exhibits different behavior in the non-recoil and recoil cases.
A crucial ingredient for the analysis is the density of particle-hole excitations evaluated by the momentum integration of the imaginary part of the polarization function of the Dirac fermions.
The non-recoil case results in the generation of a large number of particle-hole excitations whose density varies linearly with
$\omega$ and this, in turn, results in a power-law decay of the Green's function $\propto 1/t^\nu$, where $\nu =k_F^2U^2/\pi^2$.
The impurity can no longer be described in terms of the quasiparticle picture; in particular, the spectral function is modified from a $\delta$-function and manifests a sharp cut-off for energies less than the renormalized impurity energy, while exhibiting a power-law suppression for energies greater than it, given by $A(\epsilon)\propto \Theta (\epsilon - \tilde{\epsilon}_p)/(\epsilon - \tilde{\epsilon}_p)^{1-\nu}$. In contrast, the energy-momentum constraint in the recoil case implies a reduced phase-space for the particle-hole excitations, resulting in an $\omega^{3/2}$ dependence of the density of states. The resulting Green's function has a purely oscillatory part, implying a non-zero quasiparticle weight, and, in addition, an oscillatory part multiplied by a decaying $t^{-1/2}$ term. While the former is responsible for a delta-function peak in the spectral function, the latter yields an incoherent part that exhibits a square-root singularity.
The temperature dependence of the mobility of the impurity has been estimated by performing a statistical average on the inverse quasiparticle lifetime with respect to the Boltzmann weight factor.
The mobile impurity interacts with a particle-hole excitation having typical energy $\omega\sim T$ and momentum $q\sim \sqrt{2MT}$. In this regime, the polarization function acquires a particularly simple form and the temperature dependence of the mobility is revealed to be $T^{-3/2}$.
For the case of a mobile impurity interacting with the 1D helical modes,
similar to the Green's function behavior in 2D, the Green's function in the non-recoil case exhibits power-law suppression at long times with $G\sim t^\nu$, where $\nu=-U'^2/2\pi^2v^2$. Unlike the 2D case, this behavior persists even for the recoil scenario, albeit for a finite range of momentum. In particular, for $k< Mv$, the long-time decay exponent of the Green's function acquires a momentum dependence in the exponent given by $$\nu =-\frac{U'^2}{2\pi^2}\frac{v^2+k^2/M^2}{(v^2-k^2/M^2)^2},$$ whereas for $k>Mv$
the Green's function has a conventional Fermi-liquid type of decay with the decay time given by $\tau^{-1}=U'^2M/4\pi$.
The temperature dependence of the mobility of the impurity interacting with the 1D helical modes exhibits contrasting behavior with or without the magnetic field. In the absence of a magnetic field, only the forward-scattering process is allowed, and the energy and momentum constraints force an exponential divergence of the mobility as the temperature is lowered.
Turning on the magnetic field opens up the back-scattering channel, nevertheless, at the lowest order in interaction, the mobility retains the exponential divergence. However,
the second-order back-scattering process allows a scattering process in which the energy transferred between the mobile impurity and the phonons is negligible compared to the temperature. Using the
Feynman's ansatz we solve the Boltzmann equation to obtain $T^{-4}$ divergence of the mobility which also diverges with respect to the magnetic field as $B^{-4}$.
\section{ACKNOWLEDGMENTS}
We would like to thank B. Braunecker and V. Zyuzin for useful discussions. S.G. is grateful
to SERB for the support via the grant number
EMR/2016/002646.
\section{Appendix}\label{pol_func}
The noninteracting generalized polarization function for the Dirac fermion in the TI is given by
\begin{eqnarray}
\Pi(q,\omega_n)=-\int _{\text{K}} \text{Tr}\Big[\sigma_0\mathcal{G}_{\text{K}}\sigma_0\mathcal{G}_{{\text{K+Q}}}\Big],\label{eq:PF}
\end{eqnarray}
where $\text{Tr}$ denotes the trace, ${\text{K}}=(\vec{k},\Omega)$ and ${\text{Q}}=(\vec{q},\omega)$.
The corresponding zero temperature single particle Matsubara Green's function used in the above equation has the following form
\begin{equation}
\mathcal{G}(k,i\Omega)=\frac{1}{2}\sum_{\alpha=\pm 1}\left[\frac{\hat{I}-\alpha(\vec{\sigma}\cdot \vec{\bar{k}})/\xi_k}{i\Omega_n+\alpha\,\xi_k+\mu_F}\right],
\end{equation}
where $\alpha= \pm 1$ represents valence and conduction bands respectively, $\vec{\bar{k}}=k_x\hat{e}_1+k_y\hat{e}_2+\Delta\hat{e}_3$,
and $\xi_k= \sqrt{k^2+\Delta^2}$.
The Pauli matrices $\vec{\sigma}$ act on the spin degrees of freedom.
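As a quick consistency check, the band decomposition above should reproduce the direct matrix inverse $\big(i\Omega_n+\mu_F-\vec{\sigma}\cdot\vec{\bar{k}}\big)^{-1}$. The following minimal sketch verifies this numerically; all parameter values are hypothetical test values, not taken from the paper.

```python
# Hypothetical test values: momentum, gap, chemical potential, Matsubara frequency
kx, ky, Delta, mu, Omega = 0.6, -0.3, 0.4, 0.25, 0.9
xi = (kx**2 + ky**2 + Delta**2) ** 0.5        # xi_k = sqrt(k^2 + Delta^2)

# sigma . kbar with kbar = (kx, ky, Delta), written as an explicit 2x2 matrix
H = [[Delta, kx - 1j*ky],
     [kx + 1j*ky, -Delta]]
I2 = [[1, 0], [0, 1]]

def lincomb(A, B, ca, cb):
    """Elementwise ca*A + cb*B for 2x2 matrices."""
    return [[ca*A[i][j] + cb*B[i][j] for j in range(2)] for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]

# Band decomposition: sum over alpha = +1, -1 of
# (I - alpha sigma.kbar/xi)/2 divided by (i Omega + alpha xi + mu)
G_bands = [[0, 0], [0, 0]]
for alpha in (+1, -1):
    proj = lincomb(I2, H, 0.5, -0.5*alpha/xi)
    G_bands = lincomb(G_bands, proj, 1, 1/(1j*Omega + alpha*xi + mu))

# Direct inverse (i Omega + mu - sigma.kbar)^{-1}
G_direct = inv2(lincomb(I2, H, 1j*Omega + mu, -1))
```

The two $2\times 2$ complex matrices agree to machine precision, confirming that the projectors $(\hat I-\alpha\,\vec{\sigma}\cdot\vec{\bar k}/\xi_k)/2$ resolve the identity band by band.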
Following the standard frequency summation and the analytical continuation $i\omega\rightarrow \omega+i0^+$, we obtain the following form of the polarization function,
\begin{eqnarray}
\Pi(q,\omega) = -\int \frac{d^2k}{(2\pi)^2}\sum_{\alpha,\alpha'=\pm1}\Bigg[1+ \alpha \alpha'\frac{\vec{k}\cdot\big(\vec{k}+\vec{q} \big)}{\xi_k\xi_{k+q}} \Bigg]\notag\\
\times \frac{n_F(-\alpha \xi_k)-n_F(-\alpha'\xi_{k+q})}
{\Big(\alpha\xi_k-\alpha'\xi_{k+q}-\omega - i0^+\Big)}.\qquad
\end{eqnarray}
The nonzero contribution to the imaginary part of the polarization function from the upper to upper band $(u\rightarrow u)$ transitions is as follows,
\begin{widetext}
\begin{eqnarray}
\text{Im}\Pi(q,\omega)=-\pi\int \frac{d^2k}{(2\pi)^2}\Bigg[1+\frac{\vec{k}\cdot\big(\vec{k}+\vec{q}\big)}{\xi_k\xi_{k+q}}\Bigg]\Big(n_F(\xi_k)-n_F(\xi_{k+q})\Big)\delta(\omega+\xi_k-\xi_{k+q})
\end{eqnarray}
After performing the delta-function integration, we obtain
\begin{eqnarray}
\text{Im}\Pi(q,\omega)=-\frac{1}{2\pi}\int_{{\text{Max}}[(\mu_F-\omega),\Delta]}^{\mu_F} \frac{d\xi_k}{\sqrt{q^2-\omega^2}}\Bigg[\frac{(2\xi_k+\omega)^2-q^2}{\sqrt{(2\xi_k+\omega)^2-\zeta^2}}\Bigg]
\end{eqnarray}
\[
\text{Im}\Pi(q,\omega) =- \frac{1}{2\pi\sqrt{q^2-\omega^2}}\,\,\,\times \left \{
\begin{tabular}{ccc}
$\mathcal{F}\big(2\mu+\omega\big)-\mathcal{F}\big(2\text{max}[\mu-\omega,\Delta]+\omega\big) \hspace{1.0cm}:1A$ \\
$\mathcal{F}\big(2\mu+\omega\big)-\mathcal{F}\big(\zeta\big) \hspace{0.5cm} \hspace{3.35cm}:2A$
\end{tabular}
\right \},
\]
\begin{eqnarray}
{\text{where}}\,\,\zeta=\sqrt{q^2 +4q^2\Delta^2/(q^2-\omega^2)},\quad{\text{and}}\quad \mathcal{F}(x)=\frac{1}{4}\Bigg\{\Big[\zeta^2-2q^2\Big]\log\big(\sqrt{x^2-\zeta^2}+x\big) +x\sqrt{x^2-\zeta^2}\Bigg\}.\label{App_ImPi}
\end{eqnarray}
The allowed regions for the transitions are
\begin{eqnarray}
&&1A:\omega<\mu-\sqrt{(q-k_F)^2+\Delta^2}\notag\\
&&2A:\pm\mu\mp \sqrt{(q-k_F)^2+\Delta^2}<\omega<-\mu+\sqrt{(q+k_F)^2+\Delta^2}.\hspace{4.95cm}\notag
\end{eqnarray}
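As a sanity check of the closed form above, one can compare it against a direct numerical integration of the $\xi_k$ integral in region $1A$. The sketch below does this with hypothetical parameter values ($\Delta$, $\mu$, $q$, $\omega$ chosen to satisfy the region-$1A$ constraint); it is only a verification aid, not part of the derivation.

```python
import math

# Hypothetical parameters satisfying the region-1A constraint
Delta, mu, q, omega = 0.2, 1.0, 0.8, 0.3

zeta = math.sqrt(q**2 + 4*q**2*Delta**2/(q**2 - omega**2))

def F(x):
    """Antiderivative in the closed form: dF/dx = (x^2 - q^2) / (2 sqrt(x^2 - zeta^2))."""
    r = math.sqrt(x**2 - zeta**2)
    return 0.25*((zeta**2 - 2*q**2)*math.log(r + x) + x*r)

lo = max(mu - omega, Delta)
pref = -1.0/(2*math.pi*math.sqrt(q**2 - omega**2))
closed = pref*(F(2*mu + omega) - F(2*lo + omega))

# Direct Simpson integration of the xi_k integral, with x = 2 xi + omega
def integrand(xi):
    x = 2*xi + omega
    return (x**2 - q**2)/math.sqrt(x**2 - zeta**2)

n = 2000                                      # even number of Simpson panels
h = (mu - lo)/n
acc = integrand(lo) + integrand(mu)
for k in range(1, n):
    acc += (4 if k % 2 else 2)*integrand(lo + k*h)
numeric = pref*(h/3)*acc
```

The closed form and the quadrature agree to high precision, and both give a negative $\text{Im}\,\Pi$, as expected for this retarded response.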
Next, the similar contribution from the lower to upper band $(l\rightarrow u)$ transitions is
\begin{eqnarray}
\text{Im}\Pi(q,\omega)=-\frac{1}{2\pi}\int_{\Delta}^{\omega-\mu_F} \frac{d\xi_k}{\sqrt{\omega^2-q^2}}\Bigg[\frac{-(2\xi_k-\omega)^2+q^2}{\sqrt{-(2\xi_k-\omega)^2+\zeta^2}}\Bigg]
\end{eqnarray}
\[
\text{Im}\Pi(q,\omega)=- \frac{1}{2\pi\sqrt{\omega^2-q^2}}\,\,\,\times \left \{
\begin{tabular}{ccc}
$\mathcal{F}'\big(\omega-2 \mu\big)-\mathcal{F}'\big(-\zeta\big) \hspace{1.8cm}:1B$ \\
$\mathcal{F}'\big(\zeta\big)-\mathcal{F}'\big(-\zeta\big) \hspace{2.75cm}:2B$ \\
$\mathcal{F}'\big(\zeta\big)-\mathcal{F}'\big(-\zeta\big) \hspace{2.75cm}:3B$\notag
\end{tabular}
\right \},
\]
%
\begin{eqnarray}
{\text{where}}\qquad \mathcal{F}'(x)=\frac{1}{4}\left[\big(2q^2-\zeta^2\big) \tan^{-1}\bigg(\frac{x}{\sqrt{\zeta^2-x^2}}\bigg) +x\sqrt{\zeta^2-x^2}\right],
\end{eqnarray}
similarly, the allowed regions in the $(q,\omega)$ plane are
\begin{eqnarray}
&&1B : \mu+\sqrt{(q-k_F)^2+\Delta^2}<\omega<\mu+\sqrt{(q+k_F)^2+\Delta^2}\notag\\
&&2B:\omega>\mu+\sqrt{(q+k_F)^2+\Delta^2}\notag\\
&&3B : \omega>(2k_{F});\,\, \& \,\,\sqrt{q^2+4\Delta^2} <\omega< \mu+\sqrt{(q-k_F)^2+\Delta^2}.\notag
\end{eqnarray}
\end{widetext}
Nowadays, many machine learning problems in computer vision require processing spherical data found in various applications; for instance, omnidirectional RGB-D images such as Matterport \cite{chang2017matterport3d:}, 3D LiDAR scans from self-driving cars \cite{dewan2016motion-based} and molecular modelling \cite{boomsma2017spherical}. Unfortunately, naively mapping spherical signals to $\mathbb{R}^2$ and then using planar convolutional neural networks (CNNs) is destined to fail, because this projection results in space-varying distortions and makes shift equivariance ineffective.
Actually, the success of planar CNNs is mainly attributed to their shift equivariance \cite{cohen2016group}: shifting an image and then feeding it through multiple layers is the same as feeding the original image and then shifting the resulting feature maps. Since there are no translation symmetries in the spherical domain, a good principle for modifying planar CNNs into spherical CNNs is to convert the shift equivariance property into 3D rotation equivariance in the spherical domain. Motivated by this, \cite{cohen2018spherical} and \cite{esteves2018learning} propose spherical CNNs that are rotation equivariant. However, these methods represent the sphere using the spherical coordinates, which over-sample near the poles and cause significant distortion.
To avoid the impact of distortion, many recent works process spherical data using much more uniform representations. Among these methods, \cite{cohen2019gauge} and \cite{zhang2019orientation-aware} approximate the sphere using the icosahedron and propose Icosahedral CNN and orientation-aware CNN, respectively. Specifically, Icosahedral CNN \cite{cohen2019gauge} is rotation equivariant while orientation-aware CNN \cite{zhang2019orientation-aware} is beneficial for some orientation-aware tasks, such as semantic segmentation with preferred orientation. However, these methods need to project spherical data onto the icosahedron, resulting in inaccurate representations.
Actually, there exist some discretizations of the sphere that are both uniform and accurate, like the icosahedral spherical mesh \cite{baumgardner1985icosahedral} and the HealPIX \cite{gorski2005healpix}. However, these representations are non-Euclidean structured grids \cite{bronstein2017geometric}, which have no uniform locality, thus conventional convolutions defined in the Euclidean case (e.g., square lattices) cannot work on them. Accordingly, \cite{jiang2019spherical} propose MeshConvs, which use orientable parameterized partial differential operators (PDOs) to process spherical signals represented by non-Euclidean structured grids. However, MeshConvs are not rotation equivariant.
In order to address the above problems, we combine the advantages of \cite{cohen2019gauge} and \cite{jiang2019spherical} together, and propose PDO-e{$\text{S}^\text{2}$}CNN, which is an orientable rotation equivariant spherical CNN based on PDOs. The distinction from \cite{cohen2019gauge} is that our model is orientation-aware and can work on much more accurate non-Euclidean structured representations instead of the icosahedron, and the difference from \cite{jiang2019spherical} is that ours is rotation equivariant.
Our contributions are as follows:
\begin{itemize}
\item We use PDOs to design an orientable spherical CNN that is exactly rotation equivariant in the continuous domain.
\item The equivariance of PDO-e{$\text{S}^\text{2}$}CNN becomes approximate after discretization, and we provide, for the first time, a theoretical analysis of the equivariance error for the approximately equivariant case in the spherical domain.
\item PDO-e{$\text{S}^\text{2}$}CNNs show greater parameter efficiency and perform very competitively on spherical MNIST classification, 2D-3D-S image segmentation and QM7 atomization energy prediction tasks.
\end{itemize}
The paper is organized as follows. In Related Work, we review some works related to spherical CNNs. In Prior Knowledge, we introduce some prior knowledge to make our work easy to understand. In PDO-e{$\text{S}^\text{2}$}CNN, we use orientable parameterized PDOs to design PDO-e{$\text{S}^\text{2}$}CNN, which is exactly equivariant over $SO(3)$ in the continuous domain. In Implementation, we use Taylor's expansion to estimate PDOs accurately, implement PDO-e{$\text{S}^\text{2}$}CNN in the discrete domain, and provide the equivariance error analysis. In Experiments, we evaluate our method on multiple tasks.
\section{Related Work \label{section2}}
The most straightforward method to process spherical signals is mapping them into the planar domain via the equirectangular projection \cite{su2017learning}, and then using 2D CNNs. However, this projection will result in severe distortion. \cite{coors2018spherenet:} and \cite{zhao2018distortion-aware} implement CNNs on the tangent plane of the spherical image to reduce distortions. Even though, such methods are not equivariant in the spherical domain.
Actually, many works \cite{cohen2016group,cesa2019general,shen2020pdo,sosnovik2019scale,weiler20183d,ravanbakhsh2017equivariance} focus on incorporating equivariance into networks. For spherical data, some works \cite{bruna2014spectral,frossard2017graph-based,perraudin2019deepsphere:,defferrard2020deepsphere:} represent the sampled sphere as a graph connecting pixels according to distance between them and utilize graph-based methods to process it. \cite{perraudin2019deepsphere:} propose DeepSphere using isotropic filters, and achieve rotation equivariance. \cite{defferrard2020deepsphere:} improve DeepSphere and achieve a controllable tradeoff between cost and equivariance. However, the isotropic filters they use significantly restrict the capacity of models.
Also, there exist some works \cite{cohen2018spherical,esteves2018learning,kondor2018clebsch-gordan} using anisotropic filters to achieve rotation equivariance. Specifically, \cite{cohen2018spherical} extend the group equivariance theory into the spherical domain and use a generalized Fourier transform for implementation. However, these methods only work on nonuniform grids which over-sample near the poles. \cite{cohen2019gauge} further extend group equivariance to gauge equivariance, which is automatically $SO(3)$ equivariant in the spherical domain. However, their theory cannot show how the feature maps transform w.r.t. rotation transformations explicitly whereas ours can, which makes our theory more transparent and explainable.
\cite{cohen2019gauge} implement gauge equivariant CNNs on the surface of the icosahedron. The icosahedron is not an accurate discretization of the sphere, so their equivariance is weak. By contrast, our method can be applied on accurate discretizations of the sphere, achieving much better equivariance consequently.
Particularly, empirical results \cite{jiang2019spherical,zhang2019orientation-aware} show that orientation-aware CNNs can be beneficial for some tasks with orientation information. \cite{zhang2019orientation-aware} use north-aligned filters to achieve orientation-awareness, while \cite{jiang2019spherical} use orientable PDOs. In addition, \cite{jiang2019spherical} can process spherical signals on non-Euclidean structured grids easily using PDOs. However, their models are not rotation equivariant. Our PDO-e{$\text{S}^\text{2}$}CNN furthermore incorporates equivariance into the model, and introduces a new weight sharing scheme across filters, which brings greater parameter efficiency.
\section{Prior Knowledge\label{section3}}
\subsection{Parameterization of $\mathcal{S}^2$ and $SO(3)$}
We use $\mathcal{S}^2$ and $SO(3)$ to denote a sphere and a group of 3D rotations, respectively. Formally,
\begin{align*}
&\mathcal{S}^2 =\{(x_1,x_2,x_3)|\|x\|_2=1\},\\
&SO(3) = \{R\in \mathbb{R}^{3\times 3}|R^TR=I,\det(R)=1\}.
\end{align*}
We use the ZYZ Euler parameterization for $SO(3)$. An element $R\in SO(3)$ can be written as
\begin{equation*}
R=Z(\alpha_R)Y(\beta_R)Z(\gamma_R),
\end{equation*}
where ZYZ-Euler angles $\alpha_R \in [0,2\pi),\beta_R \in [0,\pi]$ and $\gamma_R \in [0,2\pi)$, and $Z(\alpha)$ and $Y(\beta)$ are rotations around $z$ and $y$ axes, respectively. To be specific,
\begin{scriptsize}
\begin{align*}
Z(\alpha)=\left[
\begin{array}{ccc}
\cos\alpha & -\sin\alpha & 0\\
\sin\alpha & \cos\alpha & 0\\
0 & 0 & 1\\
\end{array}
\right],Y(\beta)=\left[
\begin{array}{ccc}
\cos\beta & 0 & \sin\beta\\
0 & 1 & 0\\
-\sin\beta & 0 & \cos\beta\\
\end{array}
\right].
\end{align*}
\end{scriptsize}
Accordingly, we have a related parameterization for the sphere. An element $P\in \mathcal{S}^2$ can be written as $P(\alpha,\beta)=Z(\alpha)Y(\beta)n$, where $n$ is the north pole, i.e., $n=(0,0,1)^T$. Conversely, we can also calculate $\alpha$ and $\beta$ if $P=(x_1,x_2,x_3)^T$ is given. To be specific, if $P=(0,0,1)^T$, we take $\alpha=\beta=0$; if $P=(0,0,-1)^T$, we take $\alpha=0$ and $\beta=\pi$; otherwise, we have
\begin{small}
\begin{align*}
\alpha =
\begin{cases}
\arccos \left(\frac{x_1}{\sqrt{x_1^2+x_2^2}}\right) & \text{$x_2\geq 0$}\\
2\pi-\arccos \left(\frac{x_1}{\sqrt{x_1^2+x_2^2}}\right) &\text{$x_2< 0$}
\end{cases},\,\,\beta =\arccos(x_3).
\end{align*}
\end{small}
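The parameterization and its inverse can be checked with a few lines of code; the minimal sketch below (with arbitrary test angles) builds $P(\alpha,\beta)=Z(\alpha)Y(\beta)n$ and recovers $(\alpha,\beta)$ using the case analysis above.

```python
import math

def Z(a):  # rotation about the z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Y(b):  # rotation about the y axis
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

north = [0.0, 0.0, 1.0]                       # the north pole n

def sphere_point(alpha, beta):
    """P(alpha, beta) = Z(alpha) Y(beta) n."""
    return matvec(matmul(Z(alpha), Y(beta)), north)

def angles(P):
    """Recover (alpha, beta) from P, following the case analysis in the text."""
    x1, x2, x3 = P
    if abs(abs(x3) - 1.0) < 1e-12:            # the poles
        return 0.0, (0.0 if x3 > 0 else math.pi)
    a = math.acos(x1/math.sqrt(x1**2 + x2**2))
    if x2 < 0:
        a = 2*math.pi - a
    return a, math.acos(x3)

alpha, beta = 1.2, 0.7                        # arbitrary test angles
P = sphere_point(alpha, beta)
a, b = angles(P)
```

The recovered angles match the inputs to machine precision, and $P$ lies on the unit sphere.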
\begin{figure}
\centering
\includegraphics[scale=0.25]{pic/fig1.pdf}
\caption{(a) $\mathcal{S}^2\simeq SO(3)/SO(2)$. $SO(3)$ can be viewed as a bundle of circles over the sphere; (b) Group equivariance on $SO(3)$. Transforming an input by a transformation $g\in SO(3)$ and then passing it through the mapping $T$ is equivalent to first mapping it through $T$ and then transforming the representation. }
\label{figure1}
\end{figure}
This parameterization makes explicit the fact that the sphere is a quotient $\mathcal{S}^2\simeq SO(3)/SO(2)$\footnote{Given a group $\mathcal{G}$ and its subgroup $\mathcal{H}$, the left cosets $g\mathcal{H}$ of $\mathcal{H}$ partition $\mathcal{G}$, where $g\in \mathcal{G}$. We denote the set of left cosets as $\mathcal{G}/\mathcal{H}$. $E\simeq F$ denotes that $E$ is homeomorphic to $F$.}, where $SO(2)$ is the subgroup of $SO(3)$ and contains the rotations around the $z$ axis. Elements of the subgroup $SO(2)$ leave the north pole invariant, and have the form $Z(\gamma)$. The point $P(\alpha,\beta)\in \mathcal{S}^2$ is associated with the coset representative $\bar{P}=Z(\alpha)Y(\beta)\in SO(3)$. This element represents the left coset $\bar{P}\cdot SO(2)=\{\bar{P}Z(\gamma)|\gamma\in[0,2\pi)\}$. Intuitively, $SO(3)$ can be viewed as a bundle of circles ($SO(2)$) over the sphere, as we show in Figure \ref{figure1}(a). In this way, $\forall R\in SO(3)$, $R\in \bar{P}_RSO(2)$, where $\bar{P}_R=Z(\alpha_R)Y(\beta_R)$. As a result, we can parameterize $R$ as $(P_R,A_R)$, where $P_R=\bar{P}_Rn\in \mathcal{S}^2$ and $A_R\in SO(2)$. Specifically, $A_R$ is a 2D rotation matrix, which is a simplification of $Z(\gamma_R)$, i.e.,
\begin{equation*}
A_R=\left[
\begin{array}{p{1.1cm}<{\centering} p{1.1cm}<{\centering}}
$\cos\gamma_R$ & $-\sin\gamma_R$ \\
$\sin\gamma_R$ & $\cos\gamma_R$ \\
\end{array}
\right].
\end{equation*}
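The decomposition $R=(P_R,A_R)$ can likewise be verified numerically: $P_R=Rn$ is simply the third column of $R$, and peeling off the coset representative $\bar P_R$ leaves the residual rotation $Z(\gamma_R)$. The sketch below uses arbitrary test Euler angles.

```python
import math

def Z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

# Build R from arbitrary ZYZ Euler angles
alpha, beta, gamma = 0.9, 1.4, 2.1
R = matmul(matmul(Z(alpha), Y(beta)), Z(gamma))

# P_R = R n is the third column of R; it determines (alpha_R, beta_R)
P_R = [R[0][2], R[1][2], R[2][2]]
beta_R = math.acos(P_R[2])
alpha_R = math.atan2(P_R[1], P_R[0]) % (2*math.pi)

# Peel off the coset representative: Z(gamma_R) = Y(beta_R)^T Z(alpha_R)^T R
Zg = matmul(transpose(Y(beta_R)), matmul(transpose(Z(alpha_R)), R))
gamma_R = math.atan2(Zg[1][0], Zg[0][0]) % (2*math.pi)
```

All three Euler angles are recovered, and the residual matrix is indeed a rotation about the $z$ axis.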
\subsection{Group Actions on Spherical Functions}
Inputs and feature maps can be naturally modeled as functions in the continuous domain. Specifically, we model the input $s$ as a smooth function on $\mathcal{S}^2$ and the intermediate feature map $so$ as a smooth function on $SO(3)$. Particularly, the smoothness of $so$ means that if we use the parameterization of $SO(3)$ mentioned above, the feature map $so(P,A)$ is smooth w.r.t. $P$ when $A$ is fixed. So $so$ can also be viewed as a smooth spherical function with infinite channels indexed by $A\in SO(2)$. We use $C^{\infty}(\mathcal{S}^2)$ and $C^{\infty}(SO(3))$ to denote the function spaces of $s$ and $so$, respectively.
In this way, rotation transformations acting on inputs and feature maps can be mathematically formulated as follows. \\
\textbf{Actions on Inputs}\quad Suppose that $s\in C^\infty(\mathcal{S}^2)$ and $\widetilde{R} \in SO(3)$, then $\widetilde{R}$ acts on $s$ in the following way:
\begin{align*}
\forall P \in \mathcal{S}^2,\quad \pi^{S}_{\widetilde{R}}[s](P)=s\left({\widetilde{R}}^{-1}P\right).
\end{align*}
\textbf{Actions on Feature Maps}\quad Suppose that $so \in C^\infty(SO(3))$ and $\widetilde{R} \in SO(3)$, then $\widetilde{R}$ acts on $so$ in the following way:
\begin{align}
\forall R \in SO(3),\quad \pi^{SO}_{\widetilde{R}}[so](R)=so\left({\widetilde{R}}^{-1}R\right).
\label{31}
\end{align}
If we use the parameterization of $SO(3)$, (\ref{31}) is of the following more intuitive form:
\begin{align*}
\pi^{SO}_{\widetilde{R}}[so](P_R,A_R)&=so\left(P_{\widetilde{R}^{-1}R},A_{\widetilde{R}^{-1}R}\right)\\
&=so\left(\widetilde{R}^{-1}P_R,A_{\widetilde{R}^{-1}R}\right),
\end{align*}
where $(P_R,A_R)$ is the representation of $R$ and $P_{\widetilde{R}^{-1}R}=\widetilde{R}^{-1}Rn=\widetilde{R}^{-1}P_R$.
\subsection{Group Equivariance}
Equivariance measures how the outputs of a mapping transform in a predictable way with the transformation of the inputs. To be specific, let $T$ be a mapping, which could be represented by a deep neural network from the input feature space to the output feature space, and $\mathcal{G}$ is a transformation group. $T$ is called group equivariant if it satisfies
\begin{align*}
\forall g \in \mathcal{G},\quad T[\pi_g[f]]=\pi^{\prime}_g[T[f]],
\end{align*}
where $f$ can be any input feature map in the input feature space, and $\pi _g$ and $\pi^{\prime}_g$ denote how the transformation $g$ acts on input features and output features, respectively.
In our theory, we take the group $\mathcal{G}$ as $SO(3)$, and then focus on utilizing PDOs to design a neural network equivariant to $SO(3)$, as shown in Figure \ref{figure1}(b).
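Before specializing to $SO(3)$, the definition can be illustrated with the simplest discrete analogue, the cyclic shift group acting on 1D signals: a circular convolution $T$ satisfies $T[\pi_g[f]]=\pi'_g[T[f]]$ exactly. The signal and kernel below are arbitrary toy values.

```python
# T is a circular convolution, the group elements g are cyclic shifts, and
# pi_g, pi'_g both act by shifting: T[pi_g[f]] == pi'_g[T[f]] holds exactly.
def shift(f, g):
    return f[-g:] + f[:-g]

def conv(f, k):
    n = len(f)
    return [sum(k[j]*f[(i - j) % n] for j in range(len(k))) for i in range(n)]

f = [0.0, 1.0, 3.0, 2.0, -1.0, 0.5]   # arbitrary signal
k = [1.0, -2.0, 1.0]                  # arbitrary kernel
g = 2                                 # a cyclic shift by two positions

lhs = conv(shift(f, g), k)            # transform the input, then map
rhs = shift(conv(f, k), g)            # map, then transform the output
```

Both orders produce the same list, element for element; this is exactly the commuting diagram of Figure \ref{figure1}(b), with shifts in place of rotations.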
\begin{figure}
\centering
\includegraphics[scale=0.3]{pic/fig2.pdf}
\caption{For any $P\in \mathcal{S}^2$, a homeomorphism $\varphi_P$ maps the chart $U_P\subset \mathcal{S}^2$ to an open subset $\widetilde U_P\subset \mathbb{R}^2$. The sphere is represented by a level-$3$ icosahedral mesh. }
\label{figure2}
\end{figure}
\section{PDO-e{$\text{S}^\text{2}$}CNNs \label{section4}}
\subsection{Chart-based PDOs }
We introduce an atlas to help define PDOs acting on spherical functions uniformly. To be specific, an atlas for $\mathcal{S}^2$ is a collection of charts whose domains cover $\mathcal{S}^2$. We denote the atlas as $\{(U_P,\varphi_P)|P\in \mathcal{S}^2\}$, where $U_P$ is an open subset of $\mathcal{S}^2$ containing $P$ and $\varphi_P:U_P\rightarrow \widetilde{U}_P$ is a homeomorphism from the chart $U_P$ to an open subset $\widetilde{U}_P=\varphi_P(U_P)\subset \mathbb{R}^2$ with $\varphi_P(P)=0$. The form of $\varphi_P$ is given by
\begin{equation}
\varphi_P^{-1}(x_1,x_2)=\bar{P}
\left(x_1,x_2,\sqrt{1-|x|^2}\right)^T.
\label{phi}
\end{equation}
In this way, as shown in Figure \ref{figure2}, for any point $P\in \mathcal{S}^2$ (except the poles), the $x_1$ and $x_2$ axes point along the north-south and east-west directions in the chart $U_P$, respectively, and the homeomorphisms $\varphi_P$ are uniformly defined over the sphere, which leads to orientable and uniform PDOs over the sphere.
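A minimal numerical sketch of the chart maps (with an arbitrary chart centre $P(\alpha,\beta)$) checks that $\varphi_P^{-1}$ in (\ref{phi}) maps $0$ to $P$, maps every point of $\widetilde U_P$ onto the unit sphere, and is inverted by projecting $\bar P^{-1}Q$ onto its first two components.

```python
import math

def Z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

alpha, beta = 0.8, 1.1                  # arbitrary chart centre P = P(alpha, beta)
Pbar = matmul(Z(alpha), Y(beta))        # coset representative of P

def phi_inv(x1, x2):
    """phi_P^{-1}(x1, x2) = Pbar (x1, x2, sqrt(1 - |x|^2))^T."""
    return matvec(Pbar, [x1, x2, math.sqrt(1 - x1*x1 - x2*x2)])

def phi(Q):
    """phi_P(Q): the first two components of Pbar^{-1} Q (Pbar^{-1} = Pbar^T)."""
    q = matvec(transpose(Pbar), Q)
    return q[0], q[1]

P = phi_inv(0.0, 0.0)                   # must equal the chart centre P itself
Q = phi_inv(0.2, -0.1)                  # a nearby point in the chart
```

$Q$ has unit norm, $\varphi_P(Q)$ returns $(0.2,-0.1)$, and $\varphi_P(P)=0$, as required.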
In order to use PDOs, we suppose that the spherical function $s$ is smooth and denote it as $s\in C^{\infty}(\mathcal{S}^2)$. $s$ can always be extended to a smooth function $\bar{s}$ defined on $\mathbb{R}^3$, and we denote it as $\bar{s}\in C^{\infty}(\mathbb{R}^3)$. We emphasize that we need not obtain $\bar{s}$ explicitly from the given $s$, whereas we only use this notation for ease of derivation. Then the PDOs $\partial/\partial x_i$ and $\partial^2/\partial x_i\partial x_j(i,j=1,2)$\footnote{We only consider the PDOs up to the second order in this work.} act on the spherical function $s$ in the way that these PDOs act on the composite function $\bar{s}\cdot \varphi_P^{-1}\in C^{\infty}(\mathbb{R}^2)$\footnote{We use $[\cdot]$ to denote that an operator acts on a function.}. Formally, $\forall P\in \mathcal{S}^2$,
\begin{align*}
\frac{\partial}{\partial x_i}[s](P)&=\frac{\partial}{\partial x_i}\left[\bar{s}\cdot\varphi_P^{-1}\right](0),\\
\frac{\partial^2}{\partial x_i\partial x_j}[s](P)&=\frac{\partial^2}{\partial x_i\partial x_j}\left[\bar{s}\cdot\varphi_P^{-1}\right](0).
\end{align*}
By contrast, \cite{jiang2019spherical} define PDOs based on the spherical coordinates, which have high resolution near the poles and low resolution near the equator, so the scales of their PDOs depend on the latitude. The scales of our chart-based PDOs are independent of location, resulting in much more uniform feature extraction. Our definition of PDOs is also different from that in conventional manifold calculus in that we can deal with second-order PDOs without defining a smooth vector field. Actually, it is impossible to define a nonvanishing smooth vector field over the sphere due to the hairy ball theorem \cite{milnor1978analytic}.
\subsection{Rotated Parameterized Differential Operators \label{rotated}}
Following \cite{jiang2019spherical,ruthotto2018deep,shen2020pdo}, we parameterize convolution kernels using a linear combination of PDOs. Specifically, we refer to $H$ as a parameterized second-order polynomial of $2$ variables, i.e.,
\begin{equation}
H(u,v;\bm{w}) = w_1 + w_2u+ w_3v + w_4u^2+w_5uv+ w_6 v^2,\label{poly}
\end{equation}
where $\bm{w}$ are learnable parameters. If we take $u=\partial/\partial x_1$ and $v=\partial/\partial x_2$, then $H(\partial/\partial x_1,\partial/\partial x_2;\bm{w})$ becomes a linear combination of PDOs. For example, if $H(u,v;\bm{w})=u^2+uv$, then $H(\partial /\partial x_1,\partial/\partial x_2;\bm{w})=\partial^2/\partial x_1^2+\partial^2/\partial x_1\partial x_2$.
We rotate these PDOs with a $2\times2$ rotation matrix $A\in SO(2)$, and obtain the following rotated parameterized differential operators:
\begin{align}
\chi^{(A)}=H\left(\frac{\partial}{\partial x_1^{(A)}},\frac{\partial}{\partial x_2^{(A)}};\bm{w}\right),
\label{aa}
\end{align}
where
\begin{equation}
\left(\frac{\partial}{\partial x_1^ {(A)}},\frac{\partial}{\partial x_2^ {(A)}}\right)^T= A^{-1} \left(\frac{\partial}{\partial x_1},\frac{\partial}{\partial x_2}\right)^T.
\label{21}
\end{equation}
As a compact form, we can also rewrite (\ref{21}) as
\begin{equation}
\nabla_x^{(A)} = A^{-1}\nabla_x,\label{gradient}
\end{equation}
where $\nabla_x=(\partial/\partial x_1,\partial/\partial x_2)^T$ is the gradient operator. (\ref{21}) is equivalent to first rotating the coordinate system by $A$ and then calculating gradients. In addition, it is easy to get that
\begin{align}
\left(\nabla_x^{(A)}\right)^2&\coloneqq
\left[
\begin{array}{cc}
\frac{\partial}{\partial x_1^ {(A)}}\frac{\partial}{\partial x_1^ {(A)}} & \frac{\partial}{\partial x_1^ {(A)}}\frac{\partial}{\partial x_2^ {(A)}}
\vspace{5pt}\\
\frac{\partial}{\partial x_1^ {(A)}}\frac{\partial}{\partial x_2^ {(A)}} & \frac{\partial}{\partial x_2^ {(A)}}\frac{\partial}{\partial x_2^ {(A)}}
\end{array}
\right]\label{nabla2}\\
&= A^{-1} \left[
\begin{array}{cc}
\frac{\partial^2}{\partial x_1^2} & \frac{\partial^2}{\partial x_1\partial x_2}\\
\vspace{-8pt}\\
\frac{\partial^2}{\partial x_1\partial x_2} & \frac{\partial^2}{\partial x_2^2}\\
\end{array}
\right]A=A^{-1}\nabla_x^2A. \notag
\end{align}
To make it more explicit, we emphasize that by the definition in (\ref{aa}), $\chi^{(A)}$'s are identical polynomials w.r.t. $\partial/\partial x_1^{(A)}$'s and $\partial/\partial x_2^{(A)}$'s, but different polynomials w.r.t. $\partial/\partial x_1$ and $\partial/\partial x_2$. To be specific,
\begin{scriptsize}
\begin{align}
\chi^{(A)}=&w_1+(w_2,w_3)\nabla_x^{(A)}+
\bigg\langle\left[
\begin{array}{cc}
w_4 & \frac{w_5}{2}\\
\frac{w_5}{2} & w_6\\
\end{array}
\right],
\left(\nabla_x^{(A)}\right)^2
\bigg\rangle\notag\\
=&w_1+(w_2,w_3)A^{-1}\nabla_x+
\bigg\langle\left[
\begin{array}{cc}
w_4 & \frac{w_5}{2}\\
\frac{w_5}{2} & w_6\\
\end{array}
\right],
A^{-1}\nabla_x^2A \bigg\rangle\notag\\
=&w_1+(w_2,w_3)A^{-1}\nabla_x+
\bigg\langle A\left[
\begin{array}{cc}
w_4 & \frac{w_5}{2}\\
\frac{w_5}{2} & w_6\\
\end{array}
\right]A^{-1},
\nabla_x^2\bigg\rangle,
\label{coef}
\end{align}
\end{scriptsize}
where $\langle\cdot,\cdot \rangle$ denotes the inner product. Particularly, these differential operators $\chi^{(A)}$'s share parameters $\bm{w}$, indicating great parameter efficiency.
From another point of view, the rotation of differential operators can also be viewed as changing the coefficients of PDOs (see (\ref{coef})), without changing the orientations of PDOs. Consequently, the rotated parameterized differential operators, $\chi^{(A)}$'s, and the subsequent PDO-e{$\text{S}^\text{2}$}CNN are still orientable. By contrast, some rotation equivariant spherical CNNs, such as Icosahedral CNNs \cite{cohen2019gauge}, assume no preferred orientation, so they are not orientable.
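The equality between the two readings of $\chi^{(A)}$, as an identical polynomial in the rotated derivatives and as transformed coefficients acting on the un-rotated derivatives as in (\ref{coef}), can be verified numerically. The sketch below uses a quadratic test function (so that central differences are exact up to rounding) and arbitrary weights $\bm w$ and rotation angle.

```python
import math

# Quadratic test function f(x) = 3 + a.x + x^T B x; its gradient at 0 is a
# and its Hessian is 2B, so second-order central differences are exact.
a = [0.7, -0.4]
B = [[0.3, 0.1], [0.1, -0.2]]
w = [0.5, 1.0, -2.0, 0.8, 0.6, -1.1]          # w1..w6 in H(u, v; w)

def f(x):
    return 3.0 + a[0]*x[0] + a[1]*x[1] + sum(
        B[i][j]*x[i]*x[j] for i in range(2) for j in range(2))

g = 0.9                                        # arbitrary rotation angle of A
A = [[math.cos(g), -math.sin(g)], [math.sin(g), math.cos(g)]]
u = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # rotated axes: the columns of A

h = 0.5
def d1(i):       # first derivative along rotated axis i (exact for quadratics)
    return (f([h*u[i][0], h*u[i][1]]) - f([-h*u[i][0], -h*u[i][1]]))/(2*h)

def d2(i, j):    # second derivative along rotated axes i, j (exact for quadratics)
    return sum(s*t*f([h*(s*u[i][0] + t*u[j][0]), h*(s*u[i][1] + t*u[j][1])])
               for s in (1, -1) for t in (1, -1))/(4*h*h)

# (i) chi^(A) as an identical polynomial in the rotated derivatives
chi_direct = (w[0]*f([0, 0]) + w[1]*d1(0) + w[2]*d1(1)
              + w[3]*d2(0, 0) + w[4]*d2(0, 1) + w[5]*d2(1, 1))

# (ii) chi^(A) via transformed coefficients acting on un-rotated derivatives
grad = [a[0], a[1]]
hess = [[2*B[0][0], B[0][1] + B[1][0]], [B[0][1] + B[1][0], 2*B[1][1]]]
W = [[w[3], w[4]/2], [w[4]/2, w[5]]]
Ainv = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # A^{-1} = A^T
g_rot = [sum(Ainv[i][k]*grad[k] for k in range(2)) for i in range(2)]
AWAi = [[sum(A[i][k]*W[k][l]*Ainv[l][j] for k in range(2) for l in range(2))
         for j in range(2)] for i in range(2)]
chi_coef = (w[0]*f([0, 0]) + w[1]*g_rot[0] + w[2]*g_rot[1]
            + sum(AWAi[i][j]*hess[i][j] for i in range(2) for j in range(2)))
```

The two evaluations of $\chi^{(A)}[f](0)$ coincide, illustrating that rotating the operator is the same as transforming its coefficients while keeping the PDO orientations fixed.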
\subsection{Equivariant Differential Operators}
We define two mappings, $\Psi$ and $\Phi$, using the above-mentioned differential operators, $\chi^{(A)}$'s. To be specific, we use $\Psi$ to deal with inputs, which maps an input $s$ to a feature map defined on $SO(3)$: $\forall R\in SO(3)$,
\begin{align}
\Psi [s](R) = \Psi [s](P_R,A_R)=\chi^{(A_R)}[s](P_R).
\label{psi}
\end{align}
Then, we use $\Phi$ to deal with the resulting feature maps, which maps one feature map defined on $SO(3)$ to another feature map defined on $SO(3)$: $\forall R\in SO(3)$,
\begin{align}
\Phi [so](R) &= \Phi [so](P_R,A_R)\notag\\
&=\int_{SO(2)} \chi^{(A_R)}_{A}\,\,[so](P_R,A_RA) d\nu(A)\label{phi2},
\end{align}
where $\nu$ is a measure on $SO(2)$. As for $\chi_A^{(A_R)}$, we use the subscript $A$ to distinguish the differential operators parameterized by different $\bm{w}_A$'s. The $so$ on the right hand side should be viewed as a spherical function indexed by $A_RA$ when the operator $\chi^{(A_R)}_A$ acts on it.
Finally, we prove that the above two mappings, $\Psi$ and $\Phi$, are equivariant under arbitrary rotation transformation $\widetilde{R}\in SO(3)$ and show how the outputs transform w.r.t. the transformation of inputs. The proofs of theorems can be found in the Supplementary Material.
\begin{theorem}
If $s \in C^{\infty}(\mathcal{S}^2)$ and $so \in C^{\infty}(SO(3))$, $\forall \widetilde{R}\in SO(3)$, we have
\begin{align}
\Psi \left[\pi^{S}_{\widetilde R}[s]\right]&=\pi^{SO}_{\widetilde R}\left[\Psi [s]\right],\label{equi1}\\
\Phi \left[\pi^{SO}_{\widetilde R}[so]\right] &= \pi^{SO}_{\widetilde R}\left[\Phi [so]\right].\label{4}
\end{align}
\label{theorem1}
\end{theorem}
\subsection{Equivariant Network Architectures}\label{general}
It is easy to use the above-mentioned two equivariant mappings, $\Psi$ and $\Phi$, to design an equivariant network. To be specific, according to the working spaces, we set $\Psi$ as the first layer, followed by multiple $\Phi$'s, interleaved with pointwise nonlinearities $\sigma(\cdot)$, e.g., ReLUs, which do not disturb the equivariance. Finally, we get an equivariant network architecture $T[s]=\Phi^{(L)}\left[\cdots\sigma\left(\Phi^{(1)}\left[\sigma(\Psi[s])\right]\right)\right]$.
\begin{theorem}
If $ s \in C^{\infty}(\mathcal{S}^2)$, $\forall \widetilde{R}\in SO(3)$, we have
\begin{align*}
T\left[\pi^{S}_{\widetilde R}[s]\right]= \pi^{SO}_{\widetilde R}\left[T[s]\right].
\end{align*}
\label{theorem3}
\end{theorem}
That is, transforming an input $s$ by a transformation $\widetilde R$ (forming $\pi^{S}_{\widetilde R}$) and then passing it through the network $T$ gives the same result as first mapping $s$ through $T$ and then transforming the representation.
As discussed above, we have only considered the case where the input $s$ and the intermediate feature map $so$ over $SO(3)$ consist of a single channel. In fact, our theory can be easily extended to the more general case where inputs and feature maps consist of multiple channels; we only need to use multiple $\Psi$'s and $\Phi$'s to process inputs and generate outputs.
Besides, in conventional CNNs, we always use $1\times 1$ convolutions to change the numbers of channels without introducing too many parameters. In PDO-e{$\text{S}^\text{2}$}CNN, this can be easily achieved by taking $\bm{w}$ as a one-hot vector. The details are given in the Supplementary Material. We can also incorporate equivariance into other architectures, e.g., ResNets, because shortcut connections do not disturb equivariance.
\section{Implementation \label{section5}}
\subsection{Icosahedral Spherical Mesh}
In practice, spherical data are always given on a discrete domain, instead of a continuous one. The icosahedral spherical mesh \cite{baumgardner1985icosahedral} is among the most uniform and accurate discretizations of the sphere. Specifically, a spherical mesh can be obtained by progressively subdividing each face of the unit icosahedron into four triangles and reprojecting each node to unit distance from the origin. We start with the unit icosahedron as the level-0 mesh, and each progressive mesh resolution is one level above the previous. The level-3 icosahedral mesh is shown in Figure \ref{figure2}. The subdivision scheme for triangles also provides a natural coarsening and refinement scheme for the grid, which allows for easy implementations of downsampling and upsampling routines associated with CNN architectures. We emphasize that our method is not limited to the icosahedral spherical mesh, but can also use other discrete representations of the sphere easily, like the HealPIX \cite{gorski2005healpix}. In this work, we use the icosahedral spherical mesh for ease of implementation.
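One refinement step of the mesh can be sketched in a few lines: each edge midpoint becomes a new vertex and is reprojected onto the unit sphere, so a level-$l$ mesh has $10\cdot 4^l+2$ vertices ($12$ at level $0$, $42$ at level $1$). The icosahedron coordinates below are the standard construction based on the golden ratio.

```python
import math

phi = (1 + math.sqrt(5))/2                     # the golden ratio
raw = [(0, 1, phi), (0, 1, -phi), (0, -1, phi), (0, -1, -phi),
       (1, phi, 0), (1, -phi, 0), (-1, phi, 0), (-1, -phi, 0),
       (phi, 0, 1), (phi, 0, -1), (-phi, 0, 1), (-phi, 0, -1)]

def normalize(p):
    n = math.sqrt(sum(c*c for c in p))
    return tuple(c/n for c in p)

verts = [normalize(p) for p in raw]            # the level-0 mesh: 12 vertices

def dist(p, q):
    return math.sqrt(sum((x - y)**2 for x, y in zip(p, q)))

# Edges join vertex pairs at the minimal pairwise distance (5 neighbours each)
dmin = min(dist(verts[i], verts[j]) for i in range(12) for j in range(i + 1, 12))
edges = [(i, j) for i in range(12) for j in range(i + 1, 12)
         if dist(verts[i], verts[j]) < dmin*1.001]

# Refine: one new vertex per edge midpoint, reprojected onto the unit sphere
level1 = verts + [normalize(tuple((x + y)/2 for x, y in zip(verts[i], verts[j])))
                  for i, j in edges]
```

The icosahedron has $30$ edges, so one refinement step yields $12+30=42$ unit-norm vertices, matching $10\cdot 4^1+2$.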
\subsection{Estimation of Partial Derivatives \label{espdo}}
We view the input spherical data $\bm{I}$ as a discrete function sampled from a smooth spherical function $s$ on the icosahedral spherical mesh vertices $\Omega\subset \mathcal{S}^2$, where $\bm{I}(P)=s(P),\forall P\in \Omega$, and use a numerical method to estimate partial derivatives at $P\in \Omega$ in the discrete domain. Firstly, we use $\varphi_P$ to map $P$ and $Q_i(i=1,2,\cdots,m)$ into an open set $\widetilde U_P\subset \mathbb{R}^2$, where $Q_i\in \Omega$ are the neighbor nodes of $P$ (see Figure \ref{figure2})\footnote{We only consider the neighbor nodes of $P$, in analogy with the commonly-used $3\times 3$ convolutions in planar CNNs.}. As a result, we get $\varphi_P(P)=0$, and $\varphi_P(Q_i)=(x_{i1},x_{i2})$, where $\forall i=1,2,\cdots,m$,
\begin{equation*}
\left(x_{i1},x_{i2},\sqrt{1-x_{i1}^2-x_{i2}^2}\right)^T
=\bar{P}^{-1}Q_i.
\end{equation*}
We denote $f_P=\bar s\cdot \varphi_P^{-1}$, so $f_P(0)=s(P)=\bm{I}(P)$ and $f_P(x_{i1},x_{i2})=s(Q_i)=\bm{I}(Q_i)$. Expanding $f_P$ in a Taylor series about the origin, we have that $\forall i=1,2,\cdots,m$,
\begin{small}
\begin{align}
f_P(x_{i1},x_{i2})=&f_P(0,0) + x_{i1}\frac{\partial f_P}{\partial x_1}+x_{i2}\frac{\partial f_P}{\partial x_2}+\frac{1}{2}x_{i1}^2\frac{\partial^2 f_P}{\partial x_1^2}\notag\\
&+x_{i1}x_{i2} \frac{\partial^2 f_P}{\partial x_1\partial x_2} +\frac{1}{2}x_{i2}^2\frac{\partial^2 f_P}{\partial x_2^2} + O(\rho_i^3)\label{fp}
\end{align}
\end{small}
where all above partial derivatives are evaluated at $(0,0)$, and $\rho_i=\sqrt{x_{i1}^2+x_{i2}^2}$. Thus we have
\begin{footnotesize}
\begin{equation*}
\left[
\begin{array}{p{2.7cm}<{\centering}}
$\vdots$ \\
$f_P(x_{i1},x_{i2})-f_P(0)$ \\
$\vdots$\\
\end{array}
\right]
\approx
\left[
\begin{array}{p{0.2cm}<{\centering} p{0.2cm}<{\centering}p{0.2cm}<{\centering}p{0.55cm}<{\centering}p{0.3cm}<{\centering}}
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$x_{i1}$ & $x_{i2}$ & $\frac{x_{i1}^2}{2}$ & $x_{i1}x_{i2}$ & $\frac{x_{i2}^2}{2}$\\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
\end{array}
\right]D_P,
\label{approx}
\end{equation*}
\end{footnotesize}
where $D_P$ is a partial derivatives matrix:
\begin{equation*}
D_P=\left(\frac{\partial f_P}{\partial x_1},\frac{\partial f_P}{\partial x_2},\frac{\partial^2 f_P}{\partial x_1^2},\frac{\partial^2 f_P}{\partial x_1x_2},\frac{\partial^2 f_P}{\partial x_2^2}\right)^T\bigg|_{x_1=x_2=0}.
\end{equation*}
We denote the above approximate equations as $F_P\approx V_PD_P$, and use the least square method to estimate $D_P$:
\begin{equation*}
{\hat D_P}=\mathop{\arg\min}_{D} \|V_PD-F_P\|_2= (V_P^TV_P)^{-1}V_P^TF_P.
\end{equation*}
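As an illustration, the least-squares estimate $\hat D_P=(V_P^TV_P)^{-1}V_P^TF_P$ can be computed directly from the neighbor values (a self-contained sketch; the variable names are ours):

```python
import numpy as np

def estimate_derivatives(f0, xy, fvals):
    """Least-squares estimate of D_P = (f_x1, f_x2, f_x1x1, f_x1x2, f_x2x2)
    at the origin, from neighbor values fvals at chart coordinates xy."""
    x1, x2 = xy[:, 0], xy[:, 1]
    V = np.stack([x1, x2, x1**2 / 2, x1 * x2, x2**2 / 2], axis=1)  # V_P
    F = fvals - f0                                                 # F_P
    D, *_ = np.linalg.lstsq(V, F, rcond=None)
    return D
```

For a function that is exactly quadratic, the fit recovers the derivatives exactly, since the third-order remainder $O(\rho_i^3)$ vanishes.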
In fact, we can estimate any partial derivative by the same method, so long as we employ the appropriate Taylor expansion. By contrast, \cite{jiang2019spherical} can only deal with a limited set of PDOs, namely $\partial/\partial x_1$, $\partial/\partial x_2$, and the Laplacian operator.
\subsection{Discretization of $SO(2)$}
Since it is impossible to go through all $A\in SO(2)$ in (\ref{psi}) and (\ref{phi2}), we need to discretize $SO(2)$. To be specific, we discretize the continuous group $SO(2)$ as the $N$-ary cyclic group $C_N$, where $C_N=\{e=A_0,A_1,\cdots,A_{N-1}\}$, and
\begin{equation*}
A_i=\left[
\begin{array}{p{1.1cm}<{\centering} p{1.1cm}<{\centering}}
$\cos\frac{2\pi i}{N}$ & $-\sin\frac{2\pi i}{N}$ \\
$\sin\frac{2\pi i}{N} $ & $\cos\frac{2\pi i}{N}$ \\
\end{array}
\right].
\end{equation*}
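For concreteness, the group $C_N$ can be generated and its closure under composition checked as follows (an illustrative sketch):

```python
import numpy as np

def cyclic_group(N):
    """The N rotation matrices A_0, ..., A_{N-1} of C_N, shape (N, 2, 2)."""
    t = 2 * np.pi * np.arange(N) / N
    return np.stack([np.stack([np.cos(t), -np.sin(t)], axis=1),
                     np.stack([np.sin(t),  np.cos(t)], axis=1)], axis=1)
```

The closure property $A_iA_j = A_{(i+j)\bmod N}$ is what makes the modulo-$N$ index arithmetic in the discretized convolutions below well-defined.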
Correspondingly, (\ref{psi}) should be discretized as: $\forall P\in \Omega$ and $i=0,1,\cdots,N-1$,
\begin{scriptsize}
\begin{align*}
&\widetilde \Psi [\bm{I}](P,i)=\widetilde \chi^{(A_i)}[\bm{I}](P)\\%= \widetilde \chi^{(Z_i)}\left[\bar s\cdot \varphi_P^{-1}\right](0)\\
=&\left(w_1+(w_2,w_3)A_i^{-1}\hat\nabla_x+\bigg\langle A_i\left[
\begin{array}{cc}
w_4 & \frac{w_5}{2}\\
\frac{w_5}{2} & w_6\\
\end{array}
\right]A_i^{-1},
\hat\nabla_x^2\bigg\rangle\right)\left[f_P\right](0)\notag\\
=&w_1f_P(0)+(w_2,w_3)A_i^{-1}\hat\nabla_x\left[f_P\right](0)\\
&+\bigg\langle A_i\left[
\begin{array}{cc}
w_4 & \frac{w_5}{2}\\
\frac{w_5}{2} & w_6\\
\end{array}
\right]A_i^{-1},
\hat\nabla_x^2\left[f_P\right](0)\bigg\rangle\notag,
\end{align*}
\end{scriptsize}
where the partial derivatives are estimated using $\bm{I}$. In this way, when viewed as a spherical function, the output $\widetilde\Psi[\bm{I}]$ consists of $N$ channels, instead of infinitely many channels indexed by $A\in SO(2)$. Similarly, (\ref{phi2}) is discretized as: $\forall P\in \Omega$ and $i=0,1,\cdots,N-1$,
\begin{align*}
&\widetilde\Phi [\bm{F}](P,i)=\frac{\nu(SO(2))}{N}\sum_{j=0}^{N-1} \widetilde\chi^{(A_i)}_{A_j}\,\,[\bm{F}](P,i\textcircled{+}j),
\end{align*}
where the intermediate feature map $\bm{F}$ is an $N$-channel discrete function sampled from the smooth function $so\in C^{\infty}(SO(3))$, i.e., $\bm{F}(P,i)=so(P,A_i)$, and $\textcircled{+}$ denotes modulo-$N$ addition. As a result, $\widetilde \Psi$ and $\widetilde \Phi$ become discretized PDO-e{$\text{S}^\text{2}$}Convs. In particular, batch normalization \cite{ioffe2015batch} should be implemented with a single scale and a single bias per PDO-e{$\text{S}^\text{2}$}Conv feature map in order to preserve equivariance.
\subsection{Equivariance Error Analysis}
As shown in Theorem \ref{theorem1}, the equivariance of PDO-e{$\text{S}^\text{2}$}Convs $\Psi$ and $\Phi$ is exact in the continuous domain, and it becomes approximate because of discretization in implementation. In (\ref{fp}), it is easy to verify that $O(\rho_1)=O(\rho_2)=\cdots =O(\rho_m)$ from the definition of icosahedral spherical mesh, and we write $O(\rho_i)=O(\rho)$ for simplicity. Then, we have the following equivariance error analysis.
\begin{theorem}
$\forall \widetilde R\in SO(3),$
\begin{align}
&\widetilde\Psi\left[\pi_{\widetilde R}^S[\bm{I}]\right]=\pi_{\widetilde R}^{SO}\left[\widetilde\Psi [\bm{I}]\right]+O(\rho),\label{app1}\\
&\widetilde\Phi\left[\pi_{\widetilde R}^{SO}[\bm{F}]\right]=\pi_{\widetilde R}^{SO}\left[\widetilde\Phi [\bm{F}]\right]+O(\rho)+O\left(\frac{1}{N^2}\right),\label{app2}
\end{align}
where transformations acting on discrete inputs and feature maps are defined as $\pi_{\widetilde R}^S[\bm{I}](P)=\pi_{\widetilde R}^S[s](P)$ and $\pi_{\widetilde R}^{SO}[\bm{F}](P,i)=\pi_{\widetilde R}^{SO}[so](P,A_i)$, respectively.
\label{theorem4}
\end{theorem}
In particular, we note that \cite{shen2020pdo} use PDOs to design an equivariant CNN over the Euclidean group, and achieve a quadratic-order equivariance approximation for 2D images in the discrete domain. However, their method only deals with data in Euclidean space. In effect, we extend their theory to a non-Euclidean geometry, i.e., the sphere. The price is that we can only achieve a first-order equivariance approximation w.r.t. the grid size $\rho$, as the representation of the sphere we use is non-Euclidean structured.
\section{Experiments \label{section6}}
We evaluate our PDO-e{$\text{S}^\text{2}$}CNNs on three datasets. The data preprocessing, model architectures and training details for each task are provided in the Supplementary Material for reproducing our results.
\subsection{Spherical MNIST Classification}
We follow \cite{cohen2018spherical} in the preparation of the spherical MNIST, and prepare non-rotated training and testing (N/N), non-rotated training and rotated testing (N/R) and rotated training and testing (R/R) tasks. The training set and the test set include 60,000 and 10,000 images, respectively. We randomly select 6,000 training images as a validation set, and choose the model with the lowest validation error during training. Inputs are on a level-4 icosahedral spherical mesh. For fair comparison with existing methods, we evaluate our method using a small and a large model, respectively.
\begin{table*}[t]
\centering
\begin{tabular}{l|c|ccc|c}
\hline
Model & R.E. & N/N & N/R & R/R & \#Params\\
\hline
S2CNN \cite{cohen2018spherical} & \cmark & $96$ & $94$ & $95$ & 58k \\
UGSCNN \cite{jiang2019spherical} &\xmark & $99.23$ & $35.60$ & $94.92$ & 62k \\
HexRUNet-C \cite{zhang2019orientation-aware} & \xmark & $99.45$ & $29.84$ & $97.05$ & 75k \\
\hline
\textbf{PDO-e{$\text{S}^\text{2}$}CNN} & \cmark &$99.44\pm 0.06$ &$90.14\pm 0.58$ &$98.93\pm 0.08$ & 73k \\
\hline\hline
SphereNet \cite{coors2018spherenet:} & \xmark & $94.4$ &- & - & 196k\\
FFS2CNN \cite{kondor2018clebsch-gordan} & \cmark & $96.4$ & $\bm{96}$ & $96.6$ & 286k\\
Icosahedral CNN \cite{cohen2019gauge}&\cmark & $99.43$ & $69.99$ & $99.31$ & 182k\\
\hline
\textbf{PDO-e{$\text{S}^\text{2}$}CNN} & \cmark &$\bm{99.60\pm 0.04}$ &$94.25\pm 0.29$ & $\bm{99.45\pm 0.05}$ & 180k\\
\hline
\end{tabular}
\caption{Results on the spherical MNIST dataset with non-rotated (N) and rotated (R) training and test data. The second column marks whether these models are rotation-equivariant (R.E.) in the spherical domain.}
\label{tab1}
\end{table*}
As shown in Table \ref{tab1}, when using the small model (73k), our method achieves $99.44\%$ test accuracy on the N/N task. The result decreases to $90.14\%$ on the N/R task, mainly because of the equivariance error introduced by discretization. HexRUNet-C achieves comparable results using slightly more parameters, but it performs significantly worse on the N/R and R/R tasks for lack of rotation equivariance. S2CNN performs better on the N/R task because it is nearly exactly equivariant. However, it cannot perform well on the two more important tasks, N/N and R/R, because of the distortion from nonuniform sampling. We argue that these two tasks are more important because, for most tasks, the training and test sets follow identical distributions.
When using the large model (180k), our method achieves new SOTA results on the N/N and R/R tasks (99.60\% and 99.45\%, respectively), improving significantly on the previous SOTA results (99.45\% and 99.31\%). Note that the previous SOTA results were already very competitive even for planar MNIST, and our method reduces the error rates further by more than 20\%.
Also, we obtain a more competitive result (94.25\%) on the N/R task. By contrast, Icosahedral CNN only achieves $69.99\%$ test accuracy because it is only equivariant over the icosahedral group, which contains merely $60$ rotational symmetries. FFS2CNN performs the best on this task because it is also nearly exactly equivariant and uses many more parameters, but it performs significantly worse on the other tasks (N/N and R/R) because of the distortion in representation from nonuniform sampling.
\subsection{Omnidirectional Image Segmentation}
Omnidirectional semantic segmentation is an orientation-aware task since the natural scene images are always up-right due to gravity. We evaluate our method on the Stanford 2D-3D-S dataset \cite{2017arXiv170201105A}, which contains 1,413 equirectangular images with RGB+depth channels, and semantic labels across $13$ different classes. The input and output spherical signals are at the level-5 resolution. We use the official 3-fold cross validation to train and evaluate our model, and report the mean intersection over union (mIoU) and pixel accuracy (mAcc).
\begin{table}[t]
\centering
\small
\begin{tabular}{l|cc|c}
\hline
Model & mAcc & mIoU & \#Params \\
\hline
UNet & 50.8 & 35.9 & - \\
Icosahedral CNN\iffalse \cite{cohen2019gauge} \fi& 55.9 & 39.4 &- \\
\hline
\cite{eder2020tangent} & 50.9 & 38.3 & -\\
UGSCNN \iffalse\cite{jiang2019spherical}\fi & 54.7 & 38.3 & 5.18M\\
HexRUNet \iffalse\cite{zhang2019orientation-aware}\fi & 58.6 & 43.3 & 1.59M \\
\hline
\textbf{PDO-e{$\text{S}^\text{2}$}CNN} & $\bm{60.4\pm 1.0}$ & $\bm{44.6\pm 0.4}$ & 0.86M\\
\hline
\end{tabular}
\caption{mAcc and mIoU comparison on 2D-3D-S at the level-5 resolution.}
\label{tab3}
\end{table}
We report our main result in Table \ref{tab3}. As pointed out in \cite{zhang2019orientation-aware}, the 2D-3D-S dataset is acquired with a preferred orientation, so an orientation-aware system can be beneficial. Our model significantly outperforms Icosahedral CNN, mainly because our model is orientation-aware, while the latter assumes no preferred orientation. Compared with HexRUNet, an orientation-aware model, our method still performs significantly better, because we can process spherical data inherently, whereas HexRUNet can only process icosahedron data, which makes a big difference. In addition, we use far fewer parameters (0.86M vs. 1.59M), showing great parameter efficiency from weight sharing across rotated filters. Detailed per-class statistics for this task are given in the Supplementary Material.
\subsection{Atomization Energy Prediction}
Finally, we apply our method to the QM7 dataset \cite{blum2009970,rupp2012fast}, where the goal is to regress over atomization energies of molecules given atomic positions $p_i$ and charges $z_i$. This dataset contains 7,165 molecules, and each molecule contains up to $23$ atoms of $5$ types (H, C, N, O, S). We use the official 5-fold cross validation to train and evaluate our model, and report the root mean square error (RMSE).
\begin{table}[t]
\centering
\begin{tabular}{l|c|c}
\hline
Model &RMSE & \#Params \\
\hline
MLP/Random CM & $5.96\pm 0.48$ & -\\
S2CNN & 8.47 & 1.4M\\
FFS2CNN & 7.97 & 1.1M \\
\hline
\textbf{PDO-e{$\text{S}^\text{2}$}CNN} &$\bm{3.78\pm 0.07}$ & 0.4M\\
\hline
\end{tabular}
\caption{Experimental results on the QM7 task.}
\label{tab7}
\end{table}
As shown in Table \ref{tab7}, compared with other spherical CNNs, including S2CNN and FFS2CNN, our model halves the RMSE using far fewer parameters (0.4M vs. 1M+), showing greater performance and parameter efficiency. Our method also significantly outperforms a very competitive model, the MLP trained on randomly permuted Coulomb matrices (CM) \cite{montavon2012learning}. In addition, this MLP method is unlikely to scale to large molecules, as it needs a large sample of random permutations, which grows exponentially with the number of molecules.
\section{Conclusions}
In this work, we define chart-based PDOs and then use them to design rotation-equivariant spherical CNNs, PDO-e{$\text{S}^\text{2}$}CNNs. PDO-e{$\text{S}^\text{2}$}CNNs are easy to implement on non-Euclidean structured representations, and we analyze the equivariance error from discretization. Extensive experiments verify the effectiveness of our method.
One drawback of our work is that the equivariance cannot be preserved as well as S2CNN and FFS2CNN do in the discrete domain. In future work, we will explore more representations of the sphere and better numerical calculation methods to improve the equivariance in the discrete domain.
\section*{ Acknowledgements}
This work was supported by the National Key Research and Development Program of China under grant 2018AAA0100205. Z. Lin is supported by NSF China (grant no.s 61625301 and 61731018), Major Scientific Research Project of Zhejiang Lab (grant no.s 2019KB0AC01 and 2019KB0AB02), Beijing Academy of Artificial Intelligence, and Qualcomm.
\section{Computational Interpretation of the Model}
\label{sec:computational-significance}
To illustrate the full computational significance of our reformulation (especially the bidirectional version), we first need to digress slightly, and explain Eilenberg's (\citeyear{Eil74}) \emph{$X$-machine} model of computation. This is an extremely powerful computational model, which easily captures (and extends) the power of the Turing machine. We will then show that a particle's trajectory can be regarded as an \Xm drawn in spacetime, and that (a minor variant of) this machine computes its own amplitude (as a trajectory).
\subsection{{\Xm}s}
\label{sec:xms}
An \Xm $M = F^\Lambda$ (where $X$ is a data type) is a finite state machine $F$ over some alphabet $A$, together with a \emph{labelling} function $\Lambda \colon a \mapsto a^\Lambda \colon A \to R(X)$, where $R(X)$ is the ring of relations of type $X \leftrightarrow X$.
Each word $w = a_1 \dots a_n$ in the language $\Mod{F}$ recognised by the machine $F$ can be transformed by $\Lambda$ into a relation $w^\Lambda$ on $X$, using the scheme
\[
w^\Lambda = {a_1}^\Lambda \circ \dots \circ {a_n}^\Lambda
\]
and taking the union of these relations gives the relation $\Mod{F^\Lambda}$ computed by the machine,
\[
\Mod{F^\Lambda} = \bigcup \Set{ w^\Lambda | w \in \Mod{F} } \enspace .
\]
If we want to model a relation of type $Y \leftrightarrow Z$, for data types $Y \neq Z$, we equip the machine with encoding and decoding relations, $E: Y \to X$ and $D: X \to Z$. Then the behaviour computed by the extended machine is the relation $E \circ \Mod{F^\Lambda} \circ D$.
Although the language $\Mod{F}$ is necessarily regular, the computational power of the \Xm model is unlimited. For, given any set-theoretic relation $\zeta \colon Y \to Z$, we can compute it using the trivial (2-state, 1-transition)-machine with $X = Y \times Z$, by picking any $\Fixed{z} \in Z$, and using the encoder $y^E = \Tuple{y,\Fixed{z}}$, the decoder $\Tuple{y,z}^D = z$, and labelling $a^\Lambda = \overline{\zeta}$, where $\Tuple{y, \Fixed{z}}^{\overline{\zeta}} = \Tuple{y, \zeta(y)}$. Then, given any $y \in Y$, we have $\Mod{F^\Lambda} = \bigcup \Set{a^\Lambda} = \overline{\zeta}$, and
\[
y^{(E \circ \Mod{F^\Lambda} \circ D)}
= y^{(E \circ \zeta \circ D)}
= \Tuple{y, \Fixed{z}}^{(\overline{\zeta} \circ D)}
= \bigcup \Tuple{y, \zeta(y)}^D
= \zeta(y) \enspace .
\]
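For illustration, an \Xm over a finite alphabet can be executed directly, representing each labelling relation $a^\Lambda$ as a function from a value to a set of values (a minimal sketch with hypothetical names, omitting the encoder and decoder for brevity; accepted words are enumerated only up to a length bound):

```python
def behaviour(transitions, start, finals, labelling, x, max_len=4):
    """Union over accepted words w (|w| <= max_len) of x^(w^Lambda),
    where each a^Lambda is a function X -> set of X."""
    results = {x} if start in finals else set()
    frontier = [(start, {x})]           # (machine state, current image of x)
    for _ in range(max_len):
        new_frontier = []
        for state, image in frontier:
            for s, a, t in transitions:
                if s != state:
                    continue
                img = set()
                for v in image:
                    img |= labelling[a](v)
                new_frontier.append((t, img))
                if t in finals:         # the word read so far is accepted
                    results |= img
        frontier = new_frontier
    return results
```

With the trivial 2-state, 1-transition machine above and, say, $\zeta(y)=y^2$, the machine computes $\zeta$ directly.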
\subsection{Computation by admissible machines}
In our case, all of the path relations we consider will be constant multipliers of the form $k_c \colon z \mapsto zc$, where $c, z \in \Cset$. The resulting machine behaviour will therefore be a set of such multipliers, and we can meaningfully form their sum (which is again a multiplier). For reasons that will shortly become clear, however, we will restrict attention to those paths which visit each state of the machine at least once. We therefore define the \emph{additive behaviour} of such a machine $M = F^\Lambda$ to be the function $\Mod{M}^{+}$ on $\Cset$ given by
\[
\Mod{M}^{+}(z) = \sum \Set{ w^\Lambda(z) | w \in \Mod{F}, \text{ $w$ visits each state of $F$ at least once } }
\]
If $M$ is a machine of this form, we will declare the behaviour of $M$ to be the function $\Mod{M}^{+}$, and speak of $M$ as an \emph{additive $X$-machine}. Any finitary path $\Path{q} = q_I \to q_1 \to \dots \to q_\df \to q_F$ generates an additive \Xm $M_{q}$ with state set $\Set{q_I, q_1, \dots, q_\df, q_F}$, alphabet $A = \Set{h_0, \dots, h_\df}$, and transitions $\{ q_n \xrightarrow{h_n} q_{n+1} \;|\; n = 0, \dots, \df \}$. Each transition in the machine is a hop along the path, and is naturally associated with the function ${h_n}^\Lambda = \lambda z . (z . \HA{q_{n+1}|q_n}) : \Cset \to \Cset$ that multiplies any input amplitude $z$ by the hop amplitude $\HA{q_{n+1}|q_n}$. If $M_q$ is an additive \Xm generated by some path $\Path{q}$ with initial state $q_I$, final state $q_F$, and intermediate states in $R$, we shall say that $M$ is \emph{admissible}, and that $\Path{q}$ \emph{generates} $M$. We claim that each path computes its own amplitude, when considered as the machine it generates.
\paragraph{Computation by the unidirectional model.}
For unidirectional machines, each hop $h_n$ involves a jump forward in time, so the states $\Set{q_n}$ must all be distinct, and the path $\Path{q}$ forms a future-pointing chain through spacetime. Consequently, the machine $M_{q}$ recognises precisely one string, and the additive and standard behaviours of the \Xm are identical. The function computed by this path maps each $z \in \Cset$ to
\begin{equation}
z^{\left[ ({h_0}^\Lambda) \circ \dots \circ ({h_\df}^\Lambda) \right]}
= z \times \HA{q_{n+1}|q_n} \HA{q_n|q_{n-1}} \dots \HA{q_1|q_0}
= z \times \psi\Path{q} \enspace .
\label{eq:path-comp}
\end{equation}
As claimed, therefore, each (unidirectional) trajectory directly computes its own contribution to the amplitude of any path containing it.
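In code, the machine generated by a unidirectional path is just iterated multiplication by hop amplitudes (a sketch; Python complex numbers stand in for the amplitudes):

```python
import numpy as np

def path_amplitude(hop_amps, z=1.0 + 0.0j):
    """Apply h_0^Lambda, ..., h_d^Lambda in turn: each hop multiplies the
    running amplitude by <q_{n+1}|q_n>, so z is mapped to z * psi(path)."""
    for a in hop_amps:
        z = z * a
    return z
```

The output is the product of the hop amplitudes times the input, exactly as in (\ref{eq:path-comp}).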
\paragraph{Computation by the bidirectional model.}
Equation (\ref{eq:path-comp}) holds also for unidirectional paths in bidirectional machines, but the general physical interpretation is more complicated, because of the possibility of loops. Essentially, we need to distinguish carefully between two related questions, viz.
\begin{itemize}
\item what is the amplitude that the path $\Path{q}$ is traversed?
\item what is the amplitude that the path $\Path{q}$ is \emph{observed} to have been traversed?
\end{itemize}
To see why, let us suppose that the path $\Path{q}$ contains only one loop, and that $m$ is minimal such that $q_{m+1} = q_{n+1}$ for some $n$ satisfying $m < n$; write the associated sequence of hops as a concatenation of three segments, viz. $h_0 \dots h_\df = u.v.w$, where $u = h_0 \dots h_m$, $v = h_{m+1} \dots h_n$ and $w = h_{n+1} \dots h_\df$. Since $v$ represents a spacetime loop from $q_{m+1}$ back to $q_{n+1} = q_{m+1}$, there is no observable difference between any of the paths $u.v^j.w$, for $j \geq 1$. Consequently, while the amplitude for the path $\Path{q}$ is just $\psi\Path{q}$, the amplitude that this path is \emph{observed} is instead the amplitude $\psi^*\Path{q} = \sum_{j=1}^{\infty}{ \psi\Path{u} \times \left(\psi\Path{v}\right)^j \times \psi\Path{w}}$.
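Assuming $|\psi\Path{v}| < 1$, the series above is geometric and sums to $\psi\Path{u}\,\psi\Path{w}\,\psi\Path{v}/(1-\psi\Path{v})$; the convergence assumption and the closed form are our own gloss, but they are easy to check against partial sums (illustrative values only):

```python
def observed_amplitude(psi_u, psi_v, psi_w, terms=200):
    """Partial sum of psi* = sum_{j>=1} psi(u) * psi(v)**j * psi(w)."""
    return sum(psi_u * psi_v**j * psi_w for j in range(1, terms + 1))
```

For $|\psi\Path{v}|$ well below 1, a few hundred terms already agree with the closed form to machine precision.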
More generally, given the machine $F$ generated by any bidirectional trajectory $\Path{q}$, and any two strings $\alpha$, $\beta$ which are recognised by $F$, \emph{and which visit each state at least once}, there will be no observable difference between $\alpha$ and $\beta$. Consequently, if we define
\[
F^{+} = \Set{ w^\Lambda | w \in \Mod{F}, \text{ $w$ visits each state at least once } }
\]
then the amplitude $\psi^{+}$ that $\Path{q}$ is \emph{observed} to have been the path traversed will satisfy, for $z \in \Cset$,
\[
z . \psi^{+}
= \sum \Set{ w^\Lambda(z) | w \in F^{+} }
= \Mod{F^\Lambda}^{+}(z)
\]
and once again, if we think of $\Path{q}$ as an additive \Xm, it computes its own contribution to the amplitude of any path containing it.
\section{Concluding Arguments}
\label{sec:conclusions}
Recall that an additive \Xm $M$ is \emph{admissible} provided there is some finitary bidirectional path $\Path{q}$ that generates it. Say that two paths $\Path{q}_1$ and $\Path{q}_2$ are \emph{equivalent}, provided they generate precisely the same admissible machine $M$. Clearly, this \emph{is} an equivalence relation, and given any path $\Path{q}$, there will be some equivalence class $\widetilde{q}$ containing it. Moreover, the amplitude $\Mod{M}^{+}$ is given by summing the amplitudes of the various paths in $\widetilde{q}$. Consequently, summing over all paths is the same as summing over all admissible machines, so that (regarding $\psi(q_F,q_I)$ as a multiplier),
\[
\psi(q_F,q_I) = \sum \Set{ \Mod{M}^{+} | \text{ $M$ is admissible } } \enspace ,
\]
and $\psi(q_F, q_I)$ can be regarded as integrating all of the admissible machine amplitudes. In the bidirectional formulation, then, the nature of motion in quantum theory reveals itself to be inherently computational. It is not that trajectories can be computed; rather, they \emph{are} computations. As a particle hops through spacetime, it simultaneously \emph{constructs} and \emph{executes} a computational state machine, and the amplitude computed by this machine is precisely the amplitude of the trajectory that constructed it.
In section \ref{sec:models:digital-physics}, we noted how digital physics assumes the existence of a computation that computes each universe's history, which suggests that the \Quote{computer} which executes the computation is somehow external to the universes being constructed. In contrast, the bidirectional model is telling us that each universe is a \emph{process}, in which each trajectory is a sub-process which computes its own amplitude. Moreover, all of these sub-processes interact with one another non-locally, because hop amplitudes are based on the classical action, and this in turn depends on the ever-changing spacetime distribution of the other particles. In other words, as we have argued elsewhere, quantum theory is best thought of, not in terms of computation, but in terms of \emph{interactive formal processes} \citep{Sta07}.
Clearly, this idea has echoes of \ItBit, and indeed the bidirectional model helps explain Wheeler's delayed-choice experiment. The apparent paradox relies on two assumptions concerning the experimental set-up. First, the photon must pass through the barrier in order to be observed on the other side; and second, we can reliably identify a time by which the photon has travelled beyond the barrier (we need to make our delayed choice after this time). Both of our reformulations refute the first assumption (the discontinuous nature of hop-based motion means that the Intermediate Value Theorem cannot be invoked to prove that the trajectory necessarily passes through the barrier), while the bidirectional model also refutes the second assumption, since there is no reliable sense in which the decision can be said to have been made \Quote{after} the trajectory intersects the barrier. Thus the delayed-choice experiment contains no paradox, and there is nothing to explain.
We should also be clear as to what our reformulation does \emph{not} say. Throughout this discussion we have focussed on the computational nature of trajectories, but it should be stressed that there is an important distinction to be drawn between what a process \emph{does}, and how that process is \emph{structured}. This is the same distinction as that highlighted in section \ref{sec:models:digital-physics} between Schmidhuber's and Tegmark's versions of the computational universe hypothesis: whereas Schmidhuber considers process evolutions to be computable, Tegmark requires instead that their descriptions be computable. In our case, while we know that each trajectory computes its amplitude, we cannot say that the amplitude itself is necessarily \Quote{computable} in the Turing sense, because we cannot as yet identify the extent to which the two forms of computation are related. As a \emph{process}, each trajectory is computational, but the \emph{values} it manipulates need not be.
\subsection{Open questions}
\paragraph{(a)} Clearly, we need to determine the relationship between trajectory computations and Turing computations. There must certainly be some such relationship, because the admissible \Xm model underpinning trajectory computation is closely related to the Finite State Machine, which in turn underpins the basic structure of the Turing machine. Are values (like the processes that generate them) constrained to be computable in any standard sense?
\paragraph{(b)} Although we have exchanged continuous motion for motion based on discrete hops, we have not as yet done away with continuous spaces in their entirety, because many of the expressions given in this paper make use of integration. As we argued above, continuity is not directly observable, so we would prefer a purely discrete model. We should therefore investigate the extent to which the formulation presented here can be re-expressed in purely formal terms, for example using the $\pi$-calculus (a standard theoretical vehicle for modelling mobile distributed process-based systems) \citep{Mil99,SW01}. More straightforwardly, can we adapt the models presented here---for example, by replacing integrals with sums---to generate a truly \emph{discrete} model of physics?
\paragraph{(c)} Suppose we impose the condition that whenever a particle hops inside some arbitrary region (which we can think of as the interior of an event horizon), it cannot hop back out again. This will have a global influence upon trajectory amplitudes in the bidirectional model, because every journey would otherwise have had the option to include hops that pass through the excluded region. In particular, the observed positions of geodesics (assuming these can be modelled in terms of finite trajectories) can be expected to change, whence the presence of the excluded region will generate a perceived \Quote{warping} of spacetime geometry. Does this warping agree with the warping predicted by, \eg general relativity? Can the bidirectional model be extended to give a model of quantum gravity?
\paragraph{(d)} Feynman's original path-integral methods appear to make various assumptions which we have rejected, including such mainstays of real-world observation as the \emph{arrow of time} and the \emph{continuity of motion}. The status of these assumptions in Feynman's formulation needs, therefore, to be considered in more depth than has been possible here. It may be that they are spurious elements of his construction which play no actual r\^ole, and which are therefore logically independent of his formulae. But if they do indeed play a relevant part in his formulation, they must necessarily become \emph{provable theorems} within both the unidirectional and bidirectional models presented here, because our models agree with Feynman's \emph{by construction}. That is, any property that is (a) expressible in terms of `what is seen by observers', and (b) `built-into' Feynman's assumptions, must necessarily reappear from our own equations, since these give identical results when used to calculate amplitudes.
\theendnotes
\begin{acknowledgements}
This research was supported in part by the EPSRC HyperNet project (Hypercomputation Research Network, grant number EP/E064183/1).
\end{acknowledgements}
\section{A Finitary Formulation}
\label{sec:finitary-formulation}
In section \ref{sec:standard-formulation} we showed how the amplitude $\phi(q_F,q_I)$, that the particle $P$ travels from $q_I$ to $q_F$ along some path lying entirely within the non-empty open spacetime region $R = X \times T$, is given by $\phi = \lim_{\df \to \infty} \PHI{\df}$. If we now write
\begin{equation}
\DPhi{n} = \PHI{n} - \PHI{n-1} \enspace ,
\label{eq:dphi}
\end{equation}
it follows from the identity $\PHI{\df} = (\PHI{\df} - \PHI{\df-1}) + \dots + (\PHI{1} - \PHI{0}) + \PHI{0}$ that
\[
\lim_{\df \to \infty} \PHI{\df}
=
\lim_{\df \to \infty} \left( \PHI{0} + \sum_{n=1}^{\df}{ \DPhi{n} } \right)
=
\PHI{0} + \sum_{n=1}^{\infty}{ \DPhi{n} } \enspace .
\]
This replacement of a limit with a sum is a key feature of our model, since it allows us to describe a system in terms of a set of mutually distinct finite sets of observations. We can think of this sum in terms of \emph{correction factors}. For, suppose you were asked to estimate the amplitude $\phi(q_F, q_I)$ that some object or particle $P$ will be observed at $q_F$, given that it had already been observed at $q_I$ and was constrained to move within the region $R$. With no other information to hand, your best bet would be to assume that $P$ follows some action-minimising classical path, and so the estimate you give is the associated amplitude $\Braket{q_F|q_I}$. Some time later, you realise that one or more observations may have been made on the particle while it was moving from $q_I$ to $q_F$, and that this would have perturbed the amplitude. To take account of these possibilities, you add a series of correction factors to your original estimate; first you add $\DPhi{1}$ in case 1 observation had taken place, instead of the 0 observations you had originally assumed. Then you add $\DPhi{2}$ in case there were actually 2 observations, and so on. Each $\DPhi{n}$ takes into account the extra information acquired by performing $n$ observations instead of $n-1$, and since the overall estimate needs to take all of the corrections into account, we have $\phi = \PHI{0} + \sum{\DPhi{n}}$.
The simple truth, however, is that \emph{continuous motion cannot be observed}, because making an observation takes time. The best we can ever do is to make a series of distinct measurements showing us where an object was at finitely many closely-spaced instants $t_1, t_2, \dots, t_\df$ during the relocation from $q_I$ to $q_F$. The classical spirit within us then tells us to extrapolate these discrete points into a continuous curve (namely, that path which \Quote{best} joins the points). It is as if we draw the individual locations on celluloid, and then play a mental film projector to give ourselves the comfortable impression of continuous movement. But this mental film projector---represented in the standard formulation by the construction of $\lim \phi_\df$---is no part of physical observation; it represents instead an \emph{assumption} about the way the world \Quote{ought to be}. All we can truthfully say is that the object was at such and such a location $x_n$ when we observed it at time $t_n$, and was subsequently at location $x_{n+1}$ at time $t_{n+1}$. Regardless of underlying reality (about which we can say virtually nothing), the \emph{observed} universe is inherently discrete. We can ask ourselves how the motion appears if no observations are made; the composite answer, taking into account all potential observers, is given by some amplitude $\psi_0$. If we ask how it appears if precisely $\df$ observations are made during the relocation from $q_I$ to $q_F$, we get another amplitude $\psi_\df$. Since these possibilities are all mutually exclusive, and account for every possible finitely observed relocation from $q_I$ to $q_F$, the overall amplitude that the relocation happens is the sum of these amplitudes, namely some function $\psi = \sum{\PSI{\df}}$.
Although they both involve infinite sums, these two descriptions are very different, because $\PSI{n}$ tells us the amplitude for a path with a specific number of hops, while $\DPhi{n}$ describes what happens when we \emph{change} the number of hops. Nonetheless, prompted by the formal structural similarity of the equations $\phi = \PHI{0} + \sum_1^\infty \DPhi{n}$ and $\psi = \sum_0^\infty \PSI{n}$, we shall equate the two sets of terms, and attempt to find solutions. By requiring $\PSI{0} = \PHI{0}$ and $\PSI{n} = \DPhi{n}$ for positive $n$, this will ensure that the description we generate---no matter how unnatural it might appear at first sight---satisfies $\phi = \psi$, whence it describes exactly the same version of physics as the standard formulation.
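The identification is numerically harmless because the finitary series telescopes back onto the standard terms. The following sketch (our own illustration, with toy values $\PHI{n} = e^{inS/\hbar}$ that are not taken from the paper) checks that the partial sums of the two series coincide:

```python
import cmath

# Toy check (illustrative values only): with Psi_0 = Phi_0 and
# Psi_n = Phi_n - Phi_{n-1}, the finitary partial sums telescope,
# so the two series converge to the same overall amplitude.
hbar, S = 1.0, 0.7          # hypothetical per-subdivision action

def Phi(n):
    return cmath.exp(1j * n * S / hbar)

N = 50
Psi = [Phi(0)] + [Phi(n) - Phi(n - 1) for n in range(1, N + 1)]

# the finitary partial sum equals the standard N-subdivision amplitude
assert abs(sum(Psi) - Phi(N)) < 1e-12
```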
The surprising feature in what follows is that the description we generate is \emph{not} unnatural. Quite the opposite. To see why, we need to remember that amplitudes are normally given in the form $\PHI{n} = \exp{\left\{i(S_1 + \dots + S_n)/\hbar\right\}}$. In very rough terms, we can think of the various $S$ values as being essentially equal, so that $\PHI{n} \approx \exp{\left\{inS/\hbar\right\}}$. When we compute $\DPhi{n}$, we are asking how $\PHI{n}$ changes when $n$ changes; in other words, we can think of $\DPhi{n}$ in fairly loose terms as a measure of $\nicefrac{d\PHI{n}}{dn}$. Again arguing loosely, we can calculate $\nicefrac{d\PHI{n}}{dn} \approx \nicefrac{iS\PHI{n}}{\hbar}$, and now it becomes clear why equating the two sets of terms works, for in essence, $\DPhi{n}$ is approximately proportional to $\PHI{n}$. Since $\PSI{n}$ is structurally similar to $\PHI{n}$, in the sense that both measure the amplitude associated with a sequence of jumps, it is not surprising to find a similar relationship holding between $\DPhi{n}$ and $\PSI{n}$. Since the equations we form will eventually include integrals with normalisation factors, these factors will effectively absorb any remaining constants of proportionality.
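The loose proportionality argument can also be checked numerically. In this sketch (all values are illustrative assumptions) the correction factor $\DPhi{n}$ is compared with the estimate $iS\PHI{n}/\hbar$, using a deliberately small toy action so that the linearisation error is of order $S^2$:

```python
import cmath

# Toy check of the loose estimate dPhi_n/dn ~ i*S*Phi_n/hbar.
# S is deliberately small, so the linearisation error is O(S^2).
hbar, S = 1.0, 0.01

def Phi(n):
    return cmath.exp(1j * n * S / hbar)

n = 7
correction = Phi(n) - Phi(n - 1)          # the correction factor DPhi_n
estimate = (1j * S / hbar) * Phi(n)       # the loose derivative estimate

assert abs(correction - estimate) < S ** 2
```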
\subsection{Paths, Actions and Amplitudes}
\label{sec:paths}
The standard formulation assumes that each trajectory $x(t)$ is a consistently future-pointing\endnote
{
As explained in his 1965 Nobel Prize address, Feynman \citeyear{Fey65} subsequently described
anti-particles as particles moving `backwards in time'. In effect, our own approach adopts this
temporal bi-directionality, and places it centre-stage.
}
spacetime path; this is implicit in the continuity of the representation $x \equiv x(t)$, which assigns one location to each $t$ in the interval $[t_I, t_F]$. Since our formulation rejects this assumption, we need to provide a different definition for \emph{paths}.
We shall assume the abstract existence of a clock, represented by the integer variable $\tau$, used to indicate the order in which observations occur. Each time the clock ticks, \ie for each $\tau = 0, 1, 2, \dots$, the particle is observed to exist at some space-time location $q_\tau = \Tuple{x_\tau, t_\tau}$. We call each transition $q_\tau \to q_{\tau+1}$ a \emph{hop}. A finite sequence of consecutive hops $q_0 \to \dots \to q_{\df+1}$ constitutes a \emph{path}. As before, we take $q_0 = \Tuple{x_I,t_I}$ and $q_{\df+1} = \Tuple{x_F,t_F}$, and consider the properties of an arbitrary path from $q_I$ to $q_F$ via $\df$ intermediate points, all of which are required to lie in the prescribed space-time region $R = X \times T$.
We again write $\Path{q_1, \dots, q_\df}$ for the path $q_I \to q_1 \to \dots \to q_\df \to q_F$. However, whereas the intervals $t_{n+1} - t_n$ were formerly fixed to have identical duration $\nicefrac{\tPath}{(\df+1)}$, there is no constraint on the temporal separation $t_{\tau+1} - t_{\tau}$ in the finitary formulation; the path $q_0 \to \dots \to q_{\df+1}$ therefore has $2\df$ degrees of freedom, or \emph{twice} the number in the standard formulation. Notice that we now write $q_n$ rather than $\Fixed{q}_n$, to show that the value $t_n$ is no longer fixed.
What is not clear at this stage is whether hops need necessarily always be future-pointing. The standard formulation forces this on us through its assumption that some continuous motion $t \mapsto x(t)$ is being observed, but this assumption is no longer relevant. We shall therefore describe two finitary formulations, one in which hops are unidirectional in time, and one in which space and time are treated symmetrically, in that hops can move both forwards and backwards in time as well as space. Both models are related to computation theory, but the second is by far the more interesting, both from a computational, and a physical, point of view. The mathematical distinction between the two models is minor. If time is unidirectional into the future, then $t_{\tau+1}$ must lie in the range $t_\tau < t_{\tau+1} \leq \tMax$. Otherwise, it can take any value in $T$.
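The path structure just described can be sketched as a small data type (the names here are our own, purely illustrative). Each interior event carries both a free $x$ and a free $t$, giving the $2\df$ degrees of freedom noted above, and a bidirectional path is simply one whose hops need not advance in $t$:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """A spacetime location q = (x, t)."""
    x: float
    t: float

def path(q_I, intermediates, q_F):
    """A path is the finite hop sequence q_I -> q_1 -> ... -> q_df -> q_F."""
    return [q_I, *intermediates, q_F]

def degrees_of_freedom(p):
    """Endpoints are fixed; each interior event contributes an x and a t."""
    return 2 * (len(p) - 2)

def is_unidirectional(p):
    """True when every hop moves strictly forwards in time."""
    return all(a.t < b.t for a, b in zip(p, p[1:]))

p = path(Event(0.0, 0.0), [Event(1.0, 0.6), Event(0.5, 0.2)], Event(2.0, 1.0))
assert degrees_of_freedom(p) == 4      # twice the standard formulation's 2
assert not is_unidirectional(p)        # the second hop jumps back in time
```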
In the standard formulation, any unobserved motion from one observation to the next is assumed to be classical, and its amplitude is determined by minimising the classical action $S$. Since we no longer assume that any such motion exists, we shall simply assume that each hop $q \to q'$ has a \emph{hop amplitude}, denoted $\HA{q'|q}$, and that this amplitude (when it is non-zero) is associated with an abstract \emph{hop action}, denoted $s_h(q', q)$, by the formula $\HA{q'|q} = e^{i s_h(q', q) / \hbar}$. One of our tasks will be to identify the function $s_h$.
The amplitude associated with the path $\Path{q_1, \dots, q_\df}$ is defined, as usual, to be the product $\HA{q_F|q_\df} \times \dots \times \HA{q_1|q_I}$. The amplitude computed by summing over all such paths with $\df$ intermediate points will be denoted $\PSI{\df}$, so that the overall \emph{finitary amplitude} that the particle moves from $q_I$ to $q_F$ along a sequence of hops lying entirely within $R$ is just $\psi(q_F, q_I) = \sum_{n=0}^{\infty}{ \PSI{n} }$.
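Because each hop amplitude is a pure phase $e^{is_h/\hbar}$, multiplying hop amplitudes along a path is the same as exponentiating the summed hop actions. A minimal sketch (toy actions, hypothetical helper names):

```python
import cmath

hbar = 1.0

def hop_amplitude(s_h):
    """h(q'|q) = exp(i * s_h(q', q) / hbar)."""
    return cmath.exp(1j * s_h / hbar)

# toy hop actions along a 3-hop path q_I -> q_1 -> q_2 -> q_F
hop_actions = [0.4, -0.2, 0.9]

path_amp = 1.0 + 0j
for s in hop_actions:
    path_amp *= hop_amplitude(s)

# the product of phases is the phase of the total action
assert abs(path_amp - cmath.exp(1j * sum(hop_actions) / hbar)) < 1e-12
assert abs(abs(path_amp) - 1.0) < 1e-12   # a pure phase
```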
\subsection{The Finitary Equations}
Consider again the formulae giving the amplitude that a particle $P$ follows a path from $q_I$ to $q_F$ that lies entirely within the region $R$, \emph{subject to the assumption} that $q_F$ occurs later than $q_I$---the standard formulation is undefined otherwise. We can write these in the form
\begin{align}
\phi &= \PHI{0} + \sum_{n=1}^{\infty}{ \DPhi{n} } \label{eq:phi-rec} \\
\psi &= \PSI{0} + \sum_{n=1}^{\infty}{ \PSI{n} } \label{eq:psi-rec}
\end{align}
whence it is clear that one particular solution can be obtained by solving the infinite family of equations
\begin{align}
\PSI{0} &= \PHI{0} \label{eq:base} \\
\PSI{n} &= \PHI{n} - \PHI{n-1} \quad \text{ (\ie $\PSI{n} = \DPhi{n}$) \quad for $n > 0$ } \label{eq:step}
\end{align}
to find the hop-action $s_h$. Since the terms $\PHI{n}$ and $A_n$ are those of the standard formulation, we shall henceforth assume that $S$, $\PHI{n}$, $\DPhi{n}$ and $A_n$ are all \emph{known functions}.
\subsection{Solving the Equations}
As usual, we shall assume that $q_F$ occurs later than $q_I$ (so that $\PHI{n} = \PHI{n}(q_F,q_I)$ is defined for each $n$). We shall be careful to distinguish locations $\Fixed{q} = \Tuple{x, \Fixed{t}}$ for which the time of observation is fixed in the standard formulation, from those of the form $q = \Tuple{x,t}$ used in the finitary version, for which the value of $t$ is variable. Note first that (\ref{eq:phi-n}) can be rewritten to give us a recursive definition of $\PHI{\df}$, viz.
\begin{equation}
\begin{aligned}
\PHI{\df}&(q_F,q_I)
=
\frac{1}{A_\df}
\int{
\Braket{q_F | \Fixed{q}_\df} dx_\df
\Braket{\Fixed{q}_\df | \Fixed{q}_{\df-1}} dx_{\df-1}
\dots
\Braket{\Fixed{q}_2 | \Fixed{q}_1} dx_1
\Braket{\Fixed{q}_1 | q_I }
} \\
&=
\frac{A_{\df-1}}{A_\df}
\int{
\Braket{q_F | \Fixed{q}_\df} dx_\df
\frac{1}{A_{\df-1}}
\int{
\Braket{\Fixed{q}_\df | \Fixed{q}_{\df-1}} dx_{\df-1}
\dots
\Braket{\Fixed{q}_2 | \Fixed{q}_1} dx_1
\Braket{\Fixed{q}_1 | q_I }
}
} \\
&=
\frac{A_{\df-1}}{A_\df}
\int{
\Braket{q_F | \Fixed{q}_\df}
\PHI{\df-1}(\Fixed{q}_\df, q_I) \; dx_\df
} \\
\end{aligned}
\label{eq:phi-int}
\end{equation}
and an identical derivation gives $\PSI{\df}$ in the form
\begin{equation}
\PSI{\df}(q_F,q_I)
=
\frac{B_{\df-1}}{B_\df} \int_X{ \int_{T'}{ \HA{q_F|q_\df} \PSI{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }}
\label{eq:psi-n}
\end{equation}
where the $B_n$ are normalisation factors, and the integration range $T'$ depends on whether we allow hops to jump backwards in time, or insist instead that they move only forwards (we consider the two cases separately, below).
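The recursion (\ref{eq:phi-int}) is easy to verify in a discretised toy model, where the spatial integral becomes a grid sum. The kernel and grid below are illustrative assumptions of our own, and normalisation factors are omitted:

```python
import cmath

hbar = 1.0
X = [0.0, 0.5, 1.0]                     # toy spatial grid

def K(x_next, x_prev):
    """Toy one-step kernel standing in for <q'|q>."""
    return cmath.exp(1j * (x_next - x_prev) ** 2 / hbar)

def Phi(n, x_F, x_I):
    """n intermediate points: recurse by one extra grid sum."""
    if n == 0:
        return K(x_F, x_I)
    return sum(K(x_F, x) * Phi(n - 1, x, x_I) for x in X)

# the recursion agrees with the direct double sum for n = 2
direct = sum(K(1.0, x2) * K(x2, x1) * K(x1, 0.0) for x2 in X for x1 in X)
assert abs(Phi(2, 1.0, 0.0) - direct) < 1e-9
```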
Using (\ref{eq:phi-int}) to substitute for $\PHI{\df}$ in the definition (\ref{eq:dphi}) of $\DPhi{n}$ gives
\[
\begin{aligned}
\DPhi{\df}(q_F,q_I)
&= \PHI{\df}(q_F,q_I) - \PHI{\df-1}(q_F,q_I) \\
&= \left[
\frac{A_{\df-1}}{A_\df}
\int{
\Braket{q_F | \Fixed{q}_\df}
\PHI{\df-1}(\Fixed{q}_\df, q_I) \; dx_\df
}
\right]
- \PHI{\df-1}(q_F,q_I)
\enspace .
\end{aligned}
\]
The case $\df = 0$ is worth noting in detail. The amplitudes $\PHI{0}(q_F,q_I)$ and $\PSI{0}(q_F,q_I)$ describe the situation in which $P$ moves from $q_I$ to $q_F$ without being observed. In the standard formulation, it is assumed in such circumstances that $P$ follows some classical path for which the action $S$ is minimal, while in the finitary formulation we assume that the particle \emph{hops} directly from $q_I$ to $q_F$. The amplitudes for these behaviours are $\Braket{q_F|q_I}$ and $\HA{q_F|q_I}$, respectively. However, we need to remember that $\PHI{0}$ and $\PSI{0}$ are defined in terms of their contribution to the \emph{overall} amplitudes $\phi$ and $\psi$; it is important, therefore, to include the relevant normalisation factors. We therefore define, in accordance with (\ref{eq:phi-n}), (\ref{eq:phi-rec}), (\ref{eq:psi-rec}) and (\ref{eq:psi-n}),
\[
\PHI{0}(q_F,q_I) = \frac{1}{A_0} \Braket{q_F|q_I}
\qquad \text{ and } \qquad
\PSI{0}(q_F,q_I) = \frac{1}{B_0} \HA{q_F|q_I} \enspace ,
\]
so that, whenever $q_F$ occurs later than $q_I$,
\begin{equation}
\HA{q_F|q_I} = \sigma \Braket{q_F|q_I}
\label{eq:base-HA}
\end{equation}
where
\[
\sigma = B_0 / A_0 \enspace .
\]
Taking principal logarithms on both sides of (\ref{eq:base-HA}) now gives
\[
s_h(q_F, q_I) = S(q_F, q_I) - i \hbar \log \sigma
\]
and if we assume that $s_h$ should be real-valued (the classical action $S$ is always real-valued), then $\log \sigma$ must be a real multiple of $i$, say $\sigma = e^{i \rho}$ where $\rho \in \Rset$, whence $\SqMod{\sigma} = 1$. Consequently, $\SqMod{\HA{q_F|q_I}} = \SqMod{\Braket{q_F|q_I}}$, and the two formulations assign the same standard and finitary probabilities to the relocation $q_I \to q_F$, whenever this is unobserved and future-directed. Moreover, since
\[
s_h(q_F, q_I) = S(q_F, q_I) + \rho \hbar
\]
we see that our earlier intuition is essentially confirmed: the hop-action $s_h$ (which determines the best estimate of the path amplitude, given that no observations will be made) is just the classical action $S$, though possibly re-scaled by the addition of a constant action of size $\rho\hbar$ (which we can think of as a kind of `zero-point' action). For the purposes of this paper, the values of $\rho$ and $\sigma = e^{i\rho}$ are essentially arbitrary; we shall leave $\rho$ (and hence $\sigma$) an undetermined parameter of the model, in terms of which
\begin{equation}
B_0 = \sigma A_0 \label{eq:B-0}
\end{equation}
and
\begin{equation}
s_h(q_F, q_I) = S(q_F, q_I) + \rho \hbar \quad \text{ if $q_F$ occurs after $q_I$ . } \label{eq:sh-forward}
\end{equation}
The physical significance of $\rho$ is discussed briefly in Section \ref{sec:bidirectional-model}, in relation to \emph{null-hops}.
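Since $\sigma$ is a pure phase, it cannot affect any observable probability, whatever value $\rho$ takes. A one-line numerical check (toy amplitude and an arbitrary $\rho$, both our own illustrative values):

```python
import cmath

rho = 1.3                        # the undetermined zero-point phase (toy value)
sigma = cmath.exp(1j * rho)
braket = 0.3 - 0.4j              # toy classical amplitude <qF|qI>
hop = sigma * braket             # finitary hop amplitude h(qF|qI)

assert abs(abs(sigma) - 1.0) < 1e-12                  # sigma is a pure phase
assert abs(abs(hop) ** 2 - abs(braket) ** 2) < 1e-12  # same probability
```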
\subsection{The Unidirectional Model}
\label{ref:unidirectional-model}
If we wish to allow only future-pointing hops---we shall call this the \emph{unidirectional} model---there is little left to do. We know from (\ref{eq:base}) and (\ref{eq:step}) that each function $\PSI{n}$ is defined in terms of the known functions $\PHI{0}$ and $\DPhi{n}$. It only remains to identify the hop action $s_h$ and the normalisation factors $B_n$. As explained above, our solutions will be given in terms of the undetermined phase parameter $\sigma$.
Since the side-condition on (\ref{eq:sh-forward}) is satisfied, the hop amplitude is given in terms of the classical action by the formula $\HA{q'|q} = \sigma \Braket{q'|q} = \sigma \exp\{ i S(q',q) / \hbar \}$, whenever $q'$ follows $q$.
To find the normalisation factors, we note first that (\ref{eq:B-0}) gives us the value $B_0 = \sigma A_0$ directly. Next, when $\df > 0$, we observe that, since $t_{\df}$ must come after $t_{\df-1}$, the range $T'$ in (\ref{eq:psi-n}) is the interval $(t_{\df-1},t_F)$. Consequently,
\begin{equation}
\begin{aligned}
\PSI{\df}(q_F,q_I)
&= \frac{B_{\df-1}}{B_\df} \int_X{ \int_{t_{\df-1}}^{t_F}{ \HA{q_F|q_\df} \PSI{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }} \\
&= \frac{\sigma B_{\df-1}}{B_\df} \int_X{ \int_{t_{\df-1}}^{t_F}{ \Braket{q_F|q_\df} \PSI{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }}
\enspace .
\end{aligned}
\label{eq:psi-step-soln}
\end{equation}
When $\df = 1$, (\ref{eq:psi-step-soln}) can be rewritten
\[
\begin{aligned}
\PSI{1}(q_F,q_I)
&= \frac{\sigma B_0}{B_1} \int_X{ \int_{t_I}^{t_F}{ \Braket{q_F|q_1} \PSI{0}(q_1,q_I) \; dt_1 \; dx_1 }} \\
&= \frac{\sigma B_0}{B_1} \int_X{ \int_{t_I}^{t_F}{ \Braket{q_F|q_1} \frac{1}{B_0}\HA{q_1|q_I} \; dt_1 \; dx_1 }} \\
&= \frac{\sigma^2 }{B_1} \int_X{ \int_{t_I}^{t_F}{ \Braket{q_F|q_1} \Braket{q_1|q_I} \; dt_1 \; dx_1 }}
\end{aligned}
\]
and, since $\PSI{1} = \DPhi{1}$, this gives us
\[
B_1 = \left(\frac{ \int_X{ \int_{t_I}^{t_F}{ \Braket{q_F|q_1} \Braket{q_1|q_I} \; dt_1 \; dx_1 }} }{ \DPhi{1}(q_F,q_I) }\right) \sigma^2 \enspace .
\]
Finally, for $\df > 1$, (\ref{eq:psi-step-soln}) becomes
\[
\begin{aligned}
\DPhi{\df}(q_F,q_I)
&=\PSI{\df}(q_F,q_I) \\
&= \frac{\sigma B_{\df-1}}{B_\df} \int_X{ \int_{t_{\df-1}}^{t_F}{ \Braket{q_F|q_\df} \PSI{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }} \\
&= \frac{\sigma B_{\df-1}}{B_\df} \int_X{ \int_{t_{\df-1}}^{t_F}{ \Braket{q_F|q_\df} \DPhi{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }} \\
\end{aligned}
\]
and hence $B_\df$ can be defined recursively, as
\[
B_\df = \frac{\sigma B_{\df-1}}{\DPhi{\df}(q_F,q_I)} \int_X{ \int_{t_{\df-1}}^{t_F}{ \Braket{q_F|q_\df} \DPhi{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }}
\enspace .
\]
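The defining property of each $B_\df$ is simply that it rescales the hop-sum so as to reproduce the known correction term $\DPhi{\df}$. Schematically (all numbers below are illustrative stand-ins of our own, not computed from real amplitudes):

```python
# Stand-in values: in a real calculation these would come from the nested
# integrals and from the known correction terms of the standard model.
raw_integral = 2.0 + 1.0j        # the bracketed double integral
dphi_n = 0.5 - 0.25j             # DPhi_n(qF, qI), a known function
sigma_B_prev = 0.8 + 0.6j        # sigma * B_{n-1} from the previous step

B_n = sigma_B_prev * raw_integral / dphi_n
psi_n = (sigma_B_prev / B_n) * raw_integral   # the normalised hop-sum

assert abs(psi_n - dphi_n) < 1e-12            # Psi_n = DPhi_n, as required
```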
\subsection{The Bidirectional Model}
\label{sec:bidirectional-model}
Far more interesting is the case where hops are allowed to jump backwards as well as forwards in time. It is important to note that the derivation of $B_{\df}$ given above for the unidirectional model no longer works, because it relies on using (\ref{eq:base-HA}) to replace $\HA{q_F|q_\df}$ with $\sigma \Braket{q_F|q_\df}$, and on (\ref{eq:step}) to replace $\PSI{n+1}(q_\df, q_I)$ with $\DPhi{n+1}(q_\df,q_I)$. But our use of (\ref{eq:base-HA}) assumes that $q_F$ occurs after $q_\df$, and that of (\ref{eq:step}) that $q_\df$ comes after $q_I$, and neither assumption is generally valid in the bidirectional model. Consequently, before we can make progress, we need to decide how $\HA{q'|q}$ should be defined when the hop $q \to q'$ moves \emph{backwards} in time.
To address this problem, we recall the standard interpretation of \emph{anti-matter} as \Quote{matter moving backwards in time}. For example, the Feynman diagram in Figure \ref{fig:feyn} shows how the annihilation of \eg an electron and a positron (its antiparticle) to form two photons can be interpreted instead as showing an electron that moves forward in time, interacts with the photons, and then returns into the past.
\begin{figure}[!htb]
\centering
\parbox{.8\linewidth}{
\centering
\begin{fmffile}{epscatter}
\begin{fmfgraph*}(100,50)
\fmfleft{i1,i2} \fmfright{o1,o2}
\fmf{photon}{i2,v1}
\fmf{photon}{v2,o2}
\fmfdot{v1,v2}
\fmf{electron}{i1,v1,v2,o1}
\end{fmfgraph*}
\end{fmffile}
\caption{Anti-matter can be thought of as matter moving backwards in time.
A particle arrives at bottom left, and the corresponding antiparticle (shown
as usual with the arrow reversed) at bottom right; they annihilate to produce
two gamma rays, emitted top left and top right. Time advances up the page.}
\label{fig:feyn}
}
\end{figure}
Accordingly, whenever we are presented with a backwards hop by the particle $P$, we re-interpret it as a \emph{forwards} hop by the appropriate anti-particle, $\Anti{P}$. Writing $\Anti{S}$ for the classical action associated with the antiparticle $\Anti{P}$, we therefore define
\begin{align}
s_h(q_F,q_I) &=
\begin{cases}
\rho \hbar + S(q_F, q_I) & \text{ if $q_I$ is earlier than $q_F$, and } \\
\rho \hbar + \Anti{S}(q_I, q_F) & \text{ if $q_I$ is later than $q_F$. }
\end{cases}
\label{eq:backwards-hops}
\end{align}
It is tempting to assume that $\Anti{S}$ is just the negative of $S$, but this need not be the case. For example, since photons are their own anti-particles, they would require $\Anti{S} = S$. Or consider an electron moving in both an electric and a gravitational field. If we replaced it with a positron, the electric forces would reverse, but the gravitational forces would remain unchanged, and the overall change in action would reflect both effects.
\paragraph{Spatial hops - the physical meaning of $\sigma$.}
What about purely spatial hops that move the particle $P$ sideways in space, without changing its temporal coordinate? There are two cases to consider. If $q_F = q_I$, the particle has not actually moved, and the classical solution $S(q,q) = 0$ remains valid. Consequently, we can simply extend our existing solution by defining $s_h(q,q) = \rho \hbar$, or $\HA{q|q} = \sigma$. This, then, explains the physical significance of $\sigma$---it is the amplitude associated with the \emph{null hop}, \ie that hop which leaves the particle in its original location from one observation to the next.
If $q_F$ and $q_I$ differ in their $x$ (but not their $t$) values, we shall simply take $\HA{q_F|q_I} = 0$; \ie we ban all such hops (this definition is, of course, purely arbitrary, and other definitions may be more appropriate in regard to other investigations\endnote
{
For example, suppose we know (from wave-equation methods, say) that $P$ has amplitude $\eta(x)$ to be
at location $\Fixed{x} = \Tuple{x,\Fixed{t}}$, for each $x \in X$. A more intuitive solution might then be to take
$\HA{\Fixed{x}|\Fixed{y}} = {\eta(\Fixed{x})}/{\eta(\Fixed{y})}$. This gives $\HA{\Fixed{x}|\Fixed{x}} = 1$ in
agreement with the \Quote{classical amplitude}, but also provides information about the relative amplitudes of all
other spatial locations at time $\Fixed{t}$.
};
but for our current purposes the specific choice of purely spatial hop action makes little difference, because the paths in question contribute nothing to the integrals we shall be constructing). This doesn't mean, of course, that a path cannot be found from $q_I$ to a simultaneous location $q_F$---it can, via any past or future location---but that more than one hop is required to complete the journey. Indeed, the possibility of purely spatial relocations is highly significant, since one could interpret them as explaining quantum uncertainty: one cannot say definitely where a particle is at any given time $t$, precisely \emph{because} it is able to relocate from one location to another, with no overall change in $t$.
\paragraph{Solving the Equations.}
As before, we know from (\ref{eq:base}) and (\ref{eq:step}) that each function $\PSI{n}$ is defined in terms of the known functions $\PHI{0}$ and $\DPhi{n}$, and it remains to identify the hop action $s_h$ and the normalisation factors $B_n$. Once again, our solutions will be given in terms of the undetermined phase parameter $\sigma$. As always, we assume that $t_I < t_F$, although we allow individual hops to move backwards through time.
To define the hop amplitude, we appeal to (\ref{eq:backwards-hops}), and the relationship $\HA{q'|q} =$ $e^{ i s_h(q', q) / \hbar }$. Taken together with our discussion of spatial hops, these allow us to define $s_h$ fully:
\[
\HA{q_F|q_I} =
\begin{cases}
\sigma \; \Anti{\Braket{q_I|q_F}} & \text{ if $q_F$ is earlier than $q_I$, } \\
\sigma \; \Braket{q_F|q_I} & \text{ if $q_F$ is later than $q_I$ } \\
\sigma & \text{ if $q_F = q_I$, and } \\
0 & \text{ otherwise. }
\end{cases}
\]
where $\Anti{\Braket{q_I|q_F}} = \exp{\{ i \Anti{S}(q_I, q_F) / \hbar \}}$ is the \Quote{classical amplitude} associated with the antiparticle. This idea extends throughout the functions defined in this section; for example, when $q'$ is earlier than $q$, we write $\Anti{\PSI{n}}(q,q')$ for the amplitude that the antiparticle follows some path $q' \to q$ lying entirely within $R$. We will see below that the amplitude functions $\PSI{n}(q',q)$ and $\Anti{\PSI{n}}(q',q)$ are, as one would expect, related to one another in a mutually recursive way.
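The case analysis transcribes directly into code. In this sketch the particle and antiparticle actions are toy functions of our own, and the helper names are invented for illustration:

```python
import cmath

hbar = 1.0
sigma = cmath.exp(1j * 0.5)          # toy null-hop amplitude, |sigma| = 1

def S(q_to, q_from):                 # toy particle action (assumption)
    return q_to[0] - q_from[0]

def S_bar(q_to, q_from):             # toy antiparticle action (assumption)
    return -(q_to[0] - q_from[0])

def hop(qF, qI):
    """The four-case bidirectional hop amplitude h(qF|qI)."""
    (xF, tF), (xI, tI) = qF, qI
    if tF < tI:                      # backwards hop: antiparticle amplitude
        return sigma * cmath.exp(1j * S_bar(qI, qF) / hbar)
    if tF > tI:                      # forwards hop: particle amplitude
        return sigma * cmath.exp(1j * S(qF, qI) / hbar)
    if qF == qI:                     # null hop
        return sigma
    return 0.0                       # purely spatial hops are banned

assert hop((1.0, 2.0), (1.0, 2.0)) == sigma        # null hop
assert hop((3.0, 2.0), (1.0, 2.0)) == 0.0          # purely spatial hop
assert abs(abs(hop((1.0, 2.0), (0.0, 1.0))) - 1.0) < 1e-12  # pure phase
```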
Now we consider the normalisation constants $B_n$. We already know that $B_0 = \sigma A_0$, so we consider the case when $n > 0$. Because hops are allowed to move in both directions through time, the integration range $T'$ in (\ref{eq:psi-n}) is the whole of $T$. Consequently, (\ref{eq:psi-n}) becomes
\[
\PSI{\df}(q_F,q_I)
= \frac{B_{\df-1}}{B_\df} \int_X{ \int_T{ \HA{q_F|q_\df} \PSI{\df-1}(q_\df,q_I) \; dt_\df \; dx_\df }} \enspace .
\]
The integral over $T$ splits into three parts, depending on the value of $t_\df$ relative to $t_I$ and $t_F$. We have
\begin{equation}
\begin{aligned}
\PSI{\df}(q_F,q_I)
&= \frac{B_{\df-1}}{B_\df}
\int_X{ \int_T{ \HA{q_F|q_\df} \PSI{\df-1}(q_\df,q_I) \; dt_\df }\; dx_\df } \\
&= \frac{B_{\df-1}}{B_\df}
\int_X{ \left[ I_L(x_\df) + I_M(x_\df) + I_R(x_\df) \right] dx_\df }
\end{aligned}
\label{eq:psi-bi}
\end{equation}
where $I_L(x_\df)$ is the integral over $[\tMin, t_I]$, $I_M(x_\df)$ over $[t_I, t_F]$ and $I_R(x_\df)$ over $[t_F, \tMax]$.
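In a discretised toy model the three-way split is just a partition of the grid sum over $T$; the grid and integrand below are arbitrary choices of our own:

```python
# Toy check: splitting the "integration" over T into the three sub-ranges
# [tMin, tI), [tI, tF) and [tF, tMax) recovers the sum over all of T.
t_min, t_I, t_F, t_max = 0, 3, 7, 10

def f(t):                         # arbitrary toy integrand
    return t * t + 1

total = sum(f(t) for t in range(t_min, t_max))
I_L = sum(f(t) for t in range(t_min, t_I))
I_M = sum(f(t) for t in range(t_I, t_F))
I_R = sum(f(t) for t in range(t_F, t_max))

assert I_L + I_M + I_R == total
```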
When $\df=1$, (\ref{eq:psi-bi}) becomes
\[
\PSI{1}(q_F,q_I) = \frac{B_0}{B_1} \int_X{ \left[ I_L(x_1) + I_M(x_1) + I_R(x_1) \right] dx_1 }
\]
and the integrals $I_L$, $I_M$ and $I_R$ are defined by
\[
\begin{aligned}
I_L(x_1) &= \int_{\tMin}^{t_I}{ \HA{q_F|q_1} \PSI{0}(q_1,q_I) \; dt_1 }
=&& \frac{\sigma^2}{B_0} \int_{\tMin}^{t_I}{ \Braket{q_F|q_1} \Anti{\Braket{q_I|q_1}} \; dt_1 } \\
I_M(x_1) &= \int_{t_I}^{t_F} { \HA{q_F|q_1} \PSI{0}(q_1,q_I) \; dt_1 }
=&& \frac{\sigma^2}{B_0} \int_{t_I}^{t_F} { \Braket{q_F|q_1} \Braket{q_1|q_I} \; dt_1 } \\
I_R(x_1) &= \int_{t_F}^{\tMax}{ \HA{q_F|q_1} \PSI{0}(q_1,q_I) \; dt_1 }
=&& \frac{\sigma^2}{B_0} \int_{t_F}^{\tMax}{ \Anti{\Braket{q_1|q_F}} \Braket{q_1|q_I} \; dt_1 } \enspace .
\end{aligned}
\]
Thus $I_L(x_1) + I_M(x_1) + I_R(x_1) =$
\[
\frac{\sigma^2}{B_0} \left[
\int_{\tMin}^{t_I}{ \Braket{q_F|q_1} \Anti{\Braket{q_I|q_1}} }
+ \int_{t_I}^{t_F} { \Braket{q_F|q_1} \Braket{q_1|q_I} }
+ \int_{t_F}^{\tMax}{ \Anti{\Braket{q_1|q_F}} \Braket{q_1|q_I} }
\right]
\]
and $\PSI{1}(q_F,q_I)$ equals
\[
\frac{\sigma^2}{B_1} \left[
\int_{\tMin}^{t_I}{ \Braket{q_F|q_1} \Anti{\Braket{q_I|q_1} } }
+ \int_{t_I}^{t_F} { \Braket{q_F|q_1} \Braket{q_1|q_I} }
+ \int_{t_F}^{\tMax}{ \Anti{\Braket{q_1|q_F}} \Braket{q_1|q_I} }
\right] \enspace .
\]
On the other hand, (\ref{eq:step}) tells us that $\PSI{1} = \DPhi{1}$, and so $B_1$ equals
\[
\begin{aligned}
\frac{\sigma^2}{ \DPhi{1}(q_F,q_I) } &\times \\
\left[
\int_{\tMin}^{t_I} \right. & \left. { \Braket{q_F|q_1} \Anti{\Braket{q_I|q_1}} }
+ \int_{t_I}^{t_F} { \Braket{q_F|q_1} \Braket{q_1|q_I} }
+ \int_{t_F}^{\tMax}{ \Anti{\Braket{q_1|q_F}} \Braket{q_1|q_I} }
\right]
\enspace .
\end{aligned}
\]
Finally, when $\df > 1$, the integrals $I_L$, $I_M$ and $I_R$ are given by
\begin{itemize}
\item
$ I_L(x_\df) = \sigma \int_{\tMin}^{t_I}{ \Braket{q_F|q_\df} \Anti{\DPhi{\df-1}(q_I,q_\df)} \; dt_\df }$;
\item
$ I_M(x_\df) = \sigma \int_{t_I}^{t_F} { \Braket{q_F|q_\df} \DPhi{\df-1}(q_\df,q_I) \; dt_\df }$;
\item
$ I_R(x_\df) = \sigma \int_{t_F}^{\tMax}{ \Anti{\Braket{q_\df|q_F}} \DPhi{\df-1}(q_\df,q_I) \; dt_\df }$,
\end{itemize}
and (\ref{eq:psi-bi}) gives us $B_\df$ recursively,
\[
\begin{aligned}
B_\df
= \frac{\sigma B_{\df-1}}{\DPhi{\df}(q_F,q_I)}
&\int_X \left\{
\int_{\tMin}^{t_I}{ \Braket{q_F|q_\df} \Anti{\DPhi{\df-1}(q_I,q_\df)} \; dt_\df } \right. \\
&\qquad \qquad
+ \int_{t_I}^{t_F} { \Braket{q_F|q_\df} \DPhi{\df-1}(q_\df,q_I) \; dt_\df } \\
&\qquad \qquad \qquad \qquad
+ \left. \int_{t_F}^{\tMax}{ \Anti{\Braket{q_\df|q_F}} \DPhi{\df-1}(q_\df,q_I) \; dt_\df }
\right\} \enspace .
\end{aligned}
\]
\section{Introduction}
\label{sec:introduction}
According to the Church-Turing Thesis (CTT), all effective computational behaviours can be simulated by Turing machines \citep{Kle52}. Although CTT was proposed in the context of formal mathematical systems, it is widely accepted that it can be applied more generally; in particular, given that physical devices are routinely used for computational purposes, it is now widely assumed that all (finitely-resourced, finitely-specified) physical machine behaviours can be simulated by Turing machines. However, this extended claim\endnote
{
Andr\'eka \textit{et al.} (\citeyear{ANN08}) argue that the physical variant of CTT was first
considered as far back as the 1930s.
}
(known in the philosophy and computer science literature as \emph{Thesis M} \citep{Gan80,Cop02}, and in physics literature as the \emph{physical Church-Turing Thesis}; see \eg \citep{Deu85,Pen90} and references therein) is not by any means a logical consequence of CTT, since it is not clear that every physical machine can meaningfully be said to `compute something' in the same sense as Turing machines. Proponents of \emph{digital physics} \citep{Wol02,Llo06,Teg08} stretch CTT still further, interpreting it to mean that \emph{all} physical behaviours (whether machine-generated or not) are Turing-simulable.
The main aim of this paper is to investigate Thesis M and its extensions in more detail. Is it actually true that all physical behaviours are necessarily computable, or are there behaviours which go beyond the Turing limit? We will show that quantum theory can be reformulated in a way that partially resolves this question, by explaining why physical behaviours can indeed \emph{always} be regarded as `computing something' in the strict state-machine sense. While our approach does not rule out the possibility of hypercomputation completely, it limits the form such hypercomputation must take.
As we recall in section \ref{sec:motivation}, this question has been debated indirectly over many decades \citep{Sta06}; but it has become prominent recently with the rise of quantum computation and digital physics. As is well known, Shor's (\citeyear{Sho94}) algorithm can factorise integers faster than any Turing program, and this already suggests that quantum theory has super-Turing potential. However, we need to distinguish carefully what we mean by `hypercomputation' in this context. Where a computational model---for example, Deutsch's (\citeyear{Deu85}) Universal Quantum Computer (UQC)---computes the same class of functions as the Turing machine, albeit potentially faster, we call it a \emph{super-Turing} model. If it is capable of computing functions which \emph{no} Turing machine can compute, we call it \emph{hypercomputational}. In particular, then, while the UQC is an apparently super-Turing model, it is well known that it is not hypercomputational, whence its implementation would not resolve the question whether hypercomputation is physically feasible.
\subsection{Layout of the paper}
\label{sec:layout-of-the-paper}
We begin in section \ref{sec:motivation} by considering briefly what is already known concerning the relationship between physics and (hyper)computation. After summarising the information-theoretic approach familiar from \ItBit, we review three known hypercomputational systems: non-collision singularities in the Newtonian $n$-body problem; the Swansea \textit{Scatter Machine Experiment} (also Newtonian); and Hogarth's cosmologically inspired family of $SAD$ computers. We then focus on quantum theory, where it is unclear whether any hypercomputational model has yet been established. The question then arises whether a new approach might be able to resolve the issue. We will show that this is indeed the case, though only to a limited extent, by deriving a first-principles reformulation of Feynman's path-integral model; we review the standard formulation briefly in section \ref{sec:standard-formulation}, and present our new formulation in section \ref{sec:finitary-formulation}.
In our version of Feynman's model, there is no such thing as a continuous trajectory. Instead, whenever a particle moves from one spacetime event to another, it does so by performing a finite sequence of `hops', where each hop takes the particle directly from one location to another, with no intervening motion. Although this seems somewhat iconoclastic, we argue that `finitary' motion of this kind is the only form of motion actually supported by observational evidence.
In section \ref{sec:computational-significance} we consider the computational significance of the model, insofar as it addresses the question whether hypercomputation is physically feasible. From a mathematical point of view it makes little difference whether we allow `hops' to move a particle backwards as well as forwards in time, and we consider both models. In each case, the motion of a particle from one location to another generates a finite state machine (technically, an extended form of FSM called an \Xm \citep{Eil74}), where the machine's states are spacetime locations, and its transition labels reflect the (classical) action associated with each hop. In unidirectional time, the regular language generated by such a machine comprises just a single word, but if we allow time to be bidirectional, the availability of loops ensures that infinite regular languages can be generated. In both cases, when the motion is interpreted as an \Xm, the function computed by the motion can be interpreted as an amplitude, and if we sum the amplitudes of all machines with a given initial and final state, we obtain the standard quantum mechanical amplitude for the particle to move from the initial to the final location.
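A toy transcription of this idea (state names and actions are invented for illustration): spacetime locations become machine states, hop actions label the transitions, and the function computed by a run of the machine is the amplitude obtained by multiplying the hop phases along the accepted word:

```python
import cmath

hbar = 1.0

# transition-labelled hops: (state, next_state) -> hop action
transitions = {
    ("qI", "q1"): 0.4,
    ("q1", "q2"): -0.1,
    ("q2", "qF"): 0.7,
}

def run_amplitude(states):
    """The 'function computed' by one run: the product of hop phases."""
    amp = 1.0 + 0j
    for a, b in zip(states, states[1:]):
        amp *= cmath.exp(1j * transitions[(a, b)] / hbar)
    return amp

amp = run_amplitude(["qI", "q1", "q2", "qF"])
assert abs(amp - cmath.exp(1j * 1.0 / hbar)) < 1e-12   # total action 1.0
```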
Section \ref{sec:conclusions} concludes our argument, and includes suggestions for further research. We note in particular that certain assumptions inherent in Feynman's original model must be regarded as \emph{provable theorems} of the model presented here; this includes both the \emph{continuity of observed motion} and the \emph{arrow of subjective time}.
\section{Motivation}
\label{sec:motivation}
In this section we review various arguments both for and against the physical feasibility of hypercomputation, and its converse, digital physics; for a more complete discussion of hypercomputational models, readers are invited to consult our earlier surveys of the field \citep{Sta03,Sta06}. The question, whether hypercomputational behaviours are physically feasible, obviously depends on one's conception of physics itself. Hypercomputational systems have been identified with respect to both relativistic and Newtonian physics. Where quantum theory is concerned, however, the situation is less clear-cut.
\subsection{Digital physics}
\label{sec:models:digital-physics}
Proponents of digital physics argue that the Universe \emph{as a whole} is essentially computational, in the sense that its entire history can be viewed as the output of a digital computation \citep{Sch97}. The underlying idea appears first to have been proposed by Zuse, who suggested as early as 1967 that the Universe might be computed by a deterministic cellular automaton inhabited by \Quote{digital particles} \citep{Zus67,Zus69}.
Wheeler's subsequent (\citeyear{Whe90}) \ItBit conception reflected his conviction that information is just as physical as mass and energy, and indeed the relationship between information and gravitation has remained central to theories of quantum gravity ever since \cite{Bek72,Bek73} realised that black holes must possess intrinsic entropy. Likewise, Hawking's observation that black holes can evaporate \citep{Haw74} forces us to ask what happens to the quantum correlations that previously existed between particles on either side of the event horizon. Quantum theory appears to be inconsistent with causality in such a situation \citep{SL05}.\endnote
{
There is as yet no empirical evidence that Hawking radiation, the mechanism by which evaporation takes place,
exists in Nature. However, the final stages of a primordial micro black hole's evaporation should theoretically
result in a burst of gamma-rays; one of the goals of the GLAST satellite, launched by NASA on 11th June 2008,
is to search for such flashes.
}
The \ItBit doctrine focusses on the relationship between observation and information. Just as observations provide information, so information can affect observations, as was graphically illustrated (at first theoretically and eventually experimentally) by Wheeler's famous \Quote{delayed-choice experiment}, a modified version of the dual-slit experiment. As is well known, if one slit in a barrier is covered over, photons passing through the apparatus behave like particles, but when both slits are opened the \Quote{particles} demonstrate interference effects. Wheeler asked what would happen if the decision to cover or uncover a slit were made \emph{after} the photon had passed through the barrier, but before the outcome were detected. In practice, the photon's behaviour reflects the decision the experimenter will eventually make, even though this decision occurs after the encounter with the barrier has taken place. This suggests that the outcome of an experiment involves an interaction between the apparatus and the observer; the results you get are in some sense changed by the questions you decide to ask; or as Wheeler put it, \QuoteDbl{Every \Quote{it} -- every particle, every field of force, even the spacetime continuum itself -- derives its function, its meaning, its very existence entirely -- even if in some contexts indirectly -- from the apparatus-elicited answers to yes-or-no questions, binary choices, bits} \citep{Hor91}.
\cite{Sch97,Sch00} has investigated a model of physics in which all possible realities are the outcomes of computations. By considering algorithmic complexity, we can examine the probability that a randomly selected universe would conform to any given set of behaviours; specific physical situations can be examined and predictions made, some of which might, in principle, be subject to experimental verification. It is important to note, however, that the type of physics this model generates is \emph{not} generally consistent with conventional wisdom. For example, because digital physics assumes that universes are inherently deterministic, Schmidhuber's model rejects the notion that beta decay is truly random. Similarly, his model suggests that experiments carried out on widely-separated, but initially entangled, particles, should display non-local algorithmic regularities, a prediction which, he notes, \Quote{runs against current mainstream trends in physics}.
A related concept is Tegmark's \emph{Mathematical Universe Hypothesis}. \cite{Teg08} notes that, if a complete Theory of Everything (TOE) exists, then the Universe must necessarily be a mathematical structure. In essence, this is because a \emph{complete} TOE should make sense to any observer, human or otherwise, whence it ought to be a formal theory devoid of \Quote{human baggage}; consequently the TOE (and hence the Universe it specifies) is a purely mathematical structure. While this argument can obviously be challenged---it is entirely possible that pure mathematics is itself a form of human baggage and that the concept \Quote{mathematical structure} has no meaning to creatures whose brains have evolved differently to our own---Tegmark shows that it entails a surprisingly wide range of consequences, but interestingly, these do \emph{not} include computability. Rather, Tegmark introduces an additional \emph{Computable Universe Hypothesis}, according to which the relations describing the Universal structure can be implemented as halting computations. This is similar to Schmidhuber's model, except that it is the relationships between objects that are deemed computable, rather than their evolution through time.
\subsection{Examples of physical hypercomputation}
\label{sec:why-reformulate}
A key feature of the digital physics models described above---as well as, e.g. Zizzi's (\citeyear{Ziz04}) loop quantum gravity model---is that the models take the assumption of an information- or computation-based universe as their \emph{starting point}, and then ask what consequences follow. This is inevitable, since the authors are ultimately interested in identifying experiments which might provide evidence in support of (or which falsify) their models. Clearly, however, if experiments are to distinguish between digital physics and \Quote{conventional wisdom}, it must first be necessary that digital physics and the standard model are not equivalent. It follows, therefore, that digital physics cannot tell us about the feasibility or otherwise of hypercomputation in \Quote{standard} quantum theory.
Unfortunately, this is precisely the question we wish to answer. Rather than invent a \emph{new} model of physics that is computational by fiat, we wish to determine whether the \emph{standard} model is computational. Our approach, which we outline in some detail in sections \ref{sec:standard-formulation} and \ref{sec:finitary-formulation}, is to reformulate (a small part of) the existing model in such a way that its computational nature becomes intuitively obvious. Before doing so, however, we should explain why this task is worth undertaking---as \cite{Zus69} put it, \QuoteDbl{Is Nature digital, analog or hybrid? And is there essentially any justification for asking such a question?}
\subsubsection{Newtonian models (and a challenge to digital physics)}
\label{sec:models:newtonian}
It is not often appreciated that standard Newtonian physics supports both super-Turing and hypercomputational behaviours. As \cite{Xia92} has shown, the Newtonian $n$-body problem exhibits \Quote{non-collision singularities}, solutions in which massive objects can be propelled to infinity in finite time. This is particularly problematic for those models of digital physics which claim the Universe is generated by essentially local interactions, like those connecting processes in a cellular automaton, because the laws of physics are typically considered to be time-reversible. Consequently, if a particle can be propelled \emph{to} infinity in finite time, it should also be possible for a particle to arrive \emph{from} infinity in finite time. Clearly, however, there is no earliest time at which such an emerging particle first arrives in the Universe (the set of times at which the emerging particle exists does not contain its greatest lower bound). Consequently, if all objects in the Universe have finite extent and finite history, the particle's \Quote{emergence at infinity} must involve some non-local form of interaction between infinitely many of these objects. On the other hand, Xia's model depends implicitly on an idealised version of Newtonian physics, in which gravitationally bound point-masses can approach arbitrarily closely (some such idealisation is unavoidable, as the system needs to supply unbounded kinetic energy to the escaping object as it accelerates away to infinity). While this means that Xia's result doesn't actually undermine the case for digital physics in `real-world' terms, it reminds us that the situation is considerably more complicated than it might at first appear.
A recent series of investigations, reported in Beggs \textit{et al.} (\citeyear{BCLT08}), concerns a collision-based computational system called the \emph{Scatter Machine Experiment} (SME), in which a projectile is fired from a cannon at an inelastic wedge in such a way that it bounces into a detector either to one side (\emph{up}) of the apparatus or the other (\emph{down}); if the projectile hits the vertex, various scenarios can be posited. The wedge is fixed in position with its vertex at some height $x$ whose binary expansion we wish to compute. The cannon can also be moved up and down, but whereas $x$ can take any real value, we only allow the cannon to be placed at heights $u$ which can be expressed in the form $u = m/2^n$ for suitable $m$ and $n$. By repeatedly firing and then re-aligning the cannon, we can attempt to compute the binary expansion of $x$, one digit at a time. The class of sets which are decidable in polynomial time, when a certain protocol is used to run the SME, is exactly $P/poly$ (the complexity class of languages recognized by a polynomial-time Turing machine with a polynomial-bounded advice function). Since $P/poly$ is known to contain recursively undecidable languages \citep{Gol08}, it follows that the scatter machine experiment---despite its evident simplicity---is behaving in a hypercomputational way.
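To make the digit-by-digit protocol concrete, here is a classical caricature (our own sketch, with invented names; the function \texttt{scatter\_shot} stands in for the physical experiment and idealises away the vertex-hit case). The bisection itself is perfectly computable; the claimed hypercomputational power comes entirely from the wedge height $x$ being an arbitrary real.

```python
def scatter_shot(u, x):
    """Idealised oracle for one firing: does the projectile bounce up or down?

    In the physical experiment the answer is determined by whether the cannon
    (at dyadic height u) sits above or below the wedge vertex (at height x);
    the measure-zero case u == x is idealised away here.
    """
    return "up" if u > x else "down"

def estimate_binary_digits(x, n_digits, shot=scatter_shot):
    """Recover the first n_digits binary digits of x in (0, 1) by bisection.

    After k firings the cannon height is a dyadic rational m / 2**k, as the
    protocol requires.
    """
    digits, lo, hi = [], 0.0, 1.0
    for _ in range(n_digits):
        u = (lo + hi) / 2                  # dyadic height m / 2**k
        if shot(u, x) == "up":             # vertex is below the cannon
            hi, digit = u, 0
        else:                              # vertex is at or above the cannon
            lo, digit = u, 1
        digits.append(digit)
    return digits
```

For example, `estimate_binary_digits(0.625, 3)` returns `[1, 0, 1]`, the expansion $0.101_2$.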
\subsubsection{Relativistic models}
\label{sec:models:relativistic}
The $SAD_n$ hierarchy is a family of computational models which exploit the properties of certain singularities in \emph{Malament-Hogarth} spacetimes \citep{Hog92, EN02}. These are singularities with computationally useful properties; in particular, if a test particle falls into the singularity, it experiences infinite proper time during its journey; but an outside observer sees the entire descent occurring in finite time. By exploiting such a singularity, we can easily solve the Halting Problem. For suppose we want to know whether some program $P$ halts. We set it running on a computer, and then send that computer into the singularity. From our vantage point, the entire process lasts just a finite length of time, say $T$ seconds. From the computer's point of view the descent takes forever, so if $P$ is going to halt, it will have enough time to do so. We therefore program the computer's operating system so that, if $P$ halts, a rocket is launched---this is possible for this kind of singularity---so as to arrive at some previously determined place and time, somewhat more than $T$ seconds (from our point of view) after the computer is launched. We then travel to the rendezvous point. If a rocket arrives at the scheduled time, we know that $P$ must have halted. If no rocket arrives, we know that the operating system never had cause to launch it, and we conclude that $P$ ran forever.
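The observer-side logic of this protocol can be caricatured classically (our sketch, not Hogarth's). The crucial difference is that the Malament-Hogarth singularity grants the infalling computer \emph{unbounded} proper time, which a classical simulation can only mimic with a finite step budget; this is precisely why the classical version fails to be a decider.

```python
def sad1_sketch(program, budget):
    """Classical caricature of the observer's side of the SAD_1 protocol.

    A real Malament-Hogarth singularity grants the infalling computer
    unbounded proper time; classically we can only mimic that with a finite
    step budget, so (unlike a true SAD_1 machine) this sketch can wrongly
    report non-halting.  `program` is modelled as a zero-argument callable
    returning an iterator whose exhaustion represents halting.
    """
    steps = 0
    for _ in program():
        steps += 1
        if steps > budget:
            return "no rocket"   # a genuine SAD_1 observer is never fooled here
    return "rocket"              # the operating system launched the rocket
```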
Hogarth refers to this hypercomputational system as an $SAD_1$ computer; it uses a standard Turing machine to run the underlying program $P$, but gains hypercomputational power from the geometrical properties of the spacetime in which that Turing machine finds itself. If we now adapt the construction to use a sequence of $SAD_1$ computers in an attempt to decide some question, the resulting ($SAD_1$ + singularity) system is called an $SAD_2$ machine, and so on. Finally, by dovetailing a sequence of machines, one from each level of the hierarchy, and sending the whole lot into an appropriate singularity, we obtain an $AD$ machine. The $SAD_n$ machines decide precisely those first-order sentences which occupy the $n^\mathrm{th}$ level of the Arithmetic Hierarchy, while the $AD$ machine can decide the whole of arithmetic \citep{Hog04}.
The physicality of Malament-Hogarth spacetime is, however, debatable, since it clearly violates the Cosmic Censorship Hypothesis \citep{Pen98}; in addition, there are various technical problems associated with the transmission of the successful-completion signal from computer to observer \citep{EN93, EN96}. By contrast, the related approach of N{\'e}meti \textit{et al.} \citep{NA06, ND06} exploits instead the properties of slow-Kerr (\ie massive slowly-rotating uncharged) black holes, whence Cosmic Censorship is no longer an issue; they have, moreover, addressed the technical problems concerning signal transmission \citep{ANN08} (see also the related paper, elsewhere in this volume).
\subsubsection{Quantum theoretical models}
Quantum mechanics is, perhaps, mankind's most impressive scientific achievement to date; it enables us to predict various physical outcomes with remarkable accuracy across a wide range of both everyday and exotic situations. In addition, as \ItBit demonstrates, there are clear parallels between quantum theory and information theory; since computation is largely seen as the study of information processing, it is not surprising that the field has proven fertile ground for researchers in both digital physics and hypercomputation theory.
One possible hypercomputational model in quantum theory is Kieu's adiabatic quantum algorithm for deciding Hilbert's Tenth Problem, concerning the solution of Diophantine equations. Since this problem is known to be recursively undecidable \citep{Mat93}, Kieu's algorithm---essentially a method for searching infinite sets in finite time---must be hypercomputational. Although Kieu's claims are controversial and his algorithm has been disputed by various authors, he has sought to address these criticisms in a forthcoming paper \citep{Kie08}. For the time being, therefore, the jury is out.
\section{The Standard Path-Integral Formulation}
\label{sec:standard-formulation}
As we explained in section \ref{sec:why-reformulate}, we aim to reformulate the standard version of quantum theory from first principles in such a way that its computational aspects become essentially self-evident. We begin by recapitulating the (non-relativistic) path-integral formulation originally presented in \citep[\S\S3--4]{Fey48}; see also \citep{Fey65}. Given initial and final locations $q_I = \Tuple{x_I,t_I}$ and $q_F = \Tuple{x_F,t_F}$ (where $t_F > t_I$), the goal of the standard formulation is to determine the amplitude $\phi(q_F,q_I)$ that a particle $P$ follows a trajectory $q_I \to q_F$ lying entirely within some prescribed non-empty open space-time region $R$. As Feynman shows, this amplitude can then be used to generate a \Schrodinger wave-equation description of the system, whence this formulation is equivalent to other standard (non-relativistic) models of quantum theory. In Section \ref{sec:finitary-formulation}, we will develop a generalised finitary formulation of the same amplitude, and show that it is equivalent to the standard path-integral formulation presented below.
For the sake of illustration, we shall assume that space is 1-dimensional, so that spatial locations can be specified by a single coordinate $x$---the extension to higher dimensions is straightforward. Furthermore, we shall assume in this paper that the region $R$ is a simple rectangle of the form $R = X \times T$, where $X$ and $T = (\tMin, \tMax)$ are non-empty open intervals in $\Rset$; this does not limit our results, because open rectangles form a base for the standard topology on $\Rset^2$, and all of our formulae are derived via integration.\endnote
{
Integrating over a union of disjoint rectangles is the same as summing the component integrals:
given any integrable function $f(x,t)$ defined on a disjoint union $R = \bigcup_{\alpha}{ R_\alpha }$, we
have $\int_{R}{ f } = \sum_{\alpha}{ \int_{R_\alpha}{ f } }$.%
}
Suppose, then, that a particle $P$ is located initially at $q_I = \Tuple{x_I, t_I}$, and subsequently at $q_F = \Tuple{x_F, t_F}$, and that its trajectory from $q_I$ to $q_F$ is some continuous path lying entirely within the region $R = X \times T$. Choose some positive integer $\df$, and split the duration $\tPath = t_F - t_I$ into $\df+1$ equal segments: for $n=0, \dots, \df+1$, we define $t_n = t_I + \nicefrac{n \tPath}{(\df+1)}$, so that $t_0 = t_I$ and $t_{\df+1} = t_F$. We write $x_0, \dots, x_{\df+1}$ for the corresponding spatial locations, and define $q_n = \Tuple{x_n, t_n}$. While each of the values $x_n$ can vary from path to path, the values $t_n$ are fixed. To distinguish this situation from the situation below (where $t_n$ is allowed to vary), we shall typically write $\Fixed{q} = \Tuple{x, \Fixed{t}}$ for those locations $q_n$ whose associated $t_n$-value is fixed. We will also sometimes write $\Path{\Fixed{q}}$ or $\Path{\Fixed{q}_1, \dots, \Fixed{q}_{\df}}$ for the arbitrary path $q_I = \Fixed{q}_0 \to \Fixed{q}_1 \to \dots \to \Fixed{q}_\df \to \Fixed{q}_{\df+1} = q_F$. Apart from the fixed values $x_0 \equiv x_I$ and $x_{\df+1} \equiv x_F$, each of the $x_n$ is constrained only by the requirement that $x_n \in X$, whence the path $\Path{\Fixed{q}}$ has $\df$ degrees of freedom.
In classical physics, the \emph{action} associated with a path $p$ is given by $S = \int_p{ L\ dt }$, where the function $L = L(x(t),\dot{x}(t))$, the \emph{Lagrangian}, is a function of position $x$ and velocity $\dot{x}$, only. However, to form this integral we need to specify the motion of the particle in each subinterval $(\Fixed{t}_n, \Fixed{t}_{n+1})$, so we assume that $P$ follows some path $\Fixed{q}_n \to \Fixed{q}_{n+1}$ that is classically permissible. Each segment $\Fixed{q}_n \to \Fixed{q}_{n+1}$ of the path has associated classical action $S(\Fixed{q}_{n+1},\Fixed{q}_n)$, and probability amplitude $\Braket{\Fixed{q}_{n+1}|\Fixed{q}_n}$ defined for all $q$ and (subsequent) $q'$ by $\Braket{q'|q} = \exp{\left\{ i S(q',q) / \hbar \right\}}$. The action $S$ is determined by the classical \emph{Principle of Least Action}. This says that the classical path is one which minimises this action, so that $S(q',q) = \min \int_{t}^{t'}{ L\, dt }$. The total action associated with the path is $S\Path{\Fixed{q}_1, \dots, \Fixed{q}_{\df}} = \sum_n { S(\Fixed{q}_{n+1},\Fixed{q}_n) }$ and the associated amplitude is the product
$ \Braket{q_F | \Fixed{q}_\df}
\Braket{\Fixed{q}_\df | \Fixed{q}_{\df-1}}
\dots
\Braket{\Fixed{q}_2 | \Fixed{q}_1}
\Braket{\Fixed{q}_1 | q_I }
$.
Summing over all such paths now yields the composite amplitude
\begin{equation}
\PHI{\df}(q_F,q_I) =
\frac{1}{A_\df}
\int{ \Braket{q_F | \Fixed{q}_\df} dx_\df
\Braket{\Fixed{q}_\df | \Fixed{q}_{\df-1}} dx_{\df-1}
\dots
\Braket{\Fixed{q}_2 | \Fixed{q}_1} dx_1
\Braket{\Fixed{q}_1 | q_I } }
\label{eq:phi-n}
\end{equation}
where $A_\df$ is a normalisation factor. All that remains is to take the limit as $\df \to \infty$, subject to the assumption that the resulting path $x = x(t)$ is continuous. This gives us the required amplitude $\phi(q_F,q_I)$ that the particle travels from $q_I$ to $q_F$ by a trajectory that lies entirely\endnote
{
Strictly, only the internal points of the trajectory are required to lie in $R$.
Either (or both) of the endpoints $q_I$ and $q_F$ can lie
outside $R$, provided they are on its boundary.
}
within $R$:
\[
\phi(q_F,q_I)
=
\lim_{ \df \to \infty }
\frac{1}{A_\df}
\int{ \Braket{q_F | \Fixed{q}_\df} dx_\df
\Braket{\Fixed{q}_\df | \Fixed{q}_{\df-1}} dx_{\df-1}
\dots
\Braket{\Fixed{q}_2 | \Fixed{q}_1} dx_1
\Braket{\Fixed{q}_1 | q_I } }
\enspace .
\]
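As a concreteness check (our numerical sketch, not part of Feynman's presentation), the discretised amplitude $\PHI{\df}$ can be evaluated directly for a free particle: each factor $\Braket{\Fixed{q}_{n+1}|\Fixed{q}_n}$ is then $\exp\{i m (x_{n+1}-x_n)^2 / 2\hbar\epsilon\}$ with $\epsilon = \nicefrac{\tPath}{(\df+1)}$, and each intermediate integral over $x_n$ becomes a matrix product on a truncated grid. All numerical choices below (units, grid, slice count, and the per-slice free-particle normalisation $A = (2\pi i\hbar\epsilon/m)^{1/2}$) are illustrative assumptions.

```python
import numpy as np

# Free-particle sketch of eq. (phi-n): hbar = m = 1, region X truncated to (-5, 5).
hbar, m = 1.0, 1.0
t_I, t_F, df = 0.0, 1.0, 50              # df intermediate time slices
eps = (t_F - t_I) / (df + 1)             # duration of each slice
x = np.linspace(-5.0, 5.0, 400)          # truncated, discretised region X
dx = x[1] - x[0]

# Short-time amplitude <q'|q> = exp(i S(q',q) / hbar) with the free-particle
# least action S(q',q) = m (x' - x)^2 / (2 eps); the factor dx/A performs
# one normalised integration step.
A = np.sqrt(2j * np.pi * hbar * eps / m)
K = np.exp(1j * m * (x[:, None] - x[None, :]) ** 2 / (2.0 * eps * hbar)) * dx / A

# Each matrix product carries the amplitude across one time slice, i.e. it
# performs one of the df intermediate integrals over x_n.
psi = np.zeros(x.size, dtype=complex)
psi[x.size // 2] = 1.0 / dx              # particle starts sharply near x_I = 0
for _ in range(df + 1):
    psi = K @ psi                        # psi now approximates phi(<x, t_F>, q_I)
```

The truncation of $X$ and the finite grid are, of course, exactly the kind of finitary restriction discussed in section \ref{sec:finitary-formulation}.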
\section{Introduction}
\label{introduction}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{figures/LOB_illustration3.eps}
\caption{(\textbf{A}) A snapshot of the limit order book. (\textbf{B}) Workflow of a price forecasting task using LOB data with machine learning models.}
\label{fig:lob}
\end{figure}
Limit order books (LOBs) are used by financial exchanges to match buyers and sellers of a particular instrument and act as an indicator of the supply and demand at a given point in time. A LOB can be described as a self-evolving process with complex spatial and temporal structures revealing the price dynamics at the microstructural level. Market making, optimal execution and statistical arbitrage strategies all require a good understanding of the LOB and its dynamics. Figure \ref{fig:lob} (A) shows a snapshot of the LOB with both the bid (buy) and ask (sell) order volumes accumulated at each price level. The mid-price is the average of the best (lowest) ask price and the best (highest) bid price, and the difference between them is referred to as the bid-ask spread. The LOB is updated continuously with order placements, cancellations and executions.
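In code, the two quantities just defined are computed directly from the top of the book (a trivial but useful sketch; the function name is our own):

```python
def mid_price_and_spread(best_bid, best_ask):
    """Mid-price (midpoint of the best quotes) and bid-ask spread."""
    return (best_bid + best_ask) / 2.0, best_ask - best_bid
```

For instance, with a best bid of 9.98 and a best ask of 10.02, the mid-price is 10.00 and the spread is 0.04.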
The use of algorithmic trading strategies and the digitisation of exchange activities have made available a tremendous amount of LOB data for practitioners and researchers to study the market dynamics using data-driven approaches. This has led to a surge of interest in big data applications in the financial markets, and machine learning (including deep learning) models have become a trend in the quantitative finance domain \cite{buehler2019deep}, \cite{wiese2020quant}. One of the classical and popular tasks using limit order books is the short-term price forecasting task, which is studied extensively in the academic literature and is valuable from the commercial perspective for markets including equities, currencies and commodities.
Limit order book data come in different degrees of granularity, including \emph{Level-1} data providing the best bid/ask prices and volumes, \emph{Level-2} data providing the same data across a certain number of price levels and \emph{Level-3} data containing the non-aggregated orders placed by market participants. This information is captured and represented in vectors or matrices before being fed into machine learning models. In our work, we focus on LOB data representations to be used as input signals to machine learning models. As we mentioned, a vector/matrix organisation of the raw limit order book information is necessary for the subsequent machine learning stages. This information organisation scheme leads to the initial representation, which is the foundation of model performance.
Sometimes, this transformation from raw data to feature vectors is referred to as \emph{feature engineering}, if feature extraction techniques are applied manually to the raw data. This requires good and comprehensive domain knowledge to make sure the extracted features match the learning task. By contrast, \emph{representation learning}, also called \emph{feature learning}, is an automated approach to discovering an optimal representation for the data. The major difference between feature engineering and representation learning is whether the representation is formed in a purely data-driven way. Also, it is common for a machine learning system to involve both feature engineering and representation learning, with multiple levels of representation appearing at different stages of processing (see figure \ref{fig:lob} (B)).
The performance of machine learning models is heavily influenced by the data representation scheme \cite{bengio2013representation}. For neural networks, the representation learning and the predictor are combined within the network structure and are trained together towards the same target function. In this case, the original representation of the LOB, \textit{i.e.} the input representation to neural networks, becomes the foundation of the entire model. Presently, the price level-based data representation scheme is used in almost all recent studies \cite{tsantekidis2017using,tsantekidis2017forecasting,tran2018temporal,zhang2019deeplob,mahfouz2019importance,sirignano2019deep,tsantekidis2020using,wallbridge2020transformers} applying machine learning (including deep learning) models to LOB data. However, this representation scheme is rarely discussed or investigated with respect to its compatibility with machine learning (especially deep learning) models or its robustness under unexpected situations. This lack of investigation leads to potential risks under adversarial perturbations. We claim that the robustness of the LOB representation is the foundation of all LOB-related machine learning tasks, and in this paper we propose an insight that challenges the commonly-used level-based LOB representation for machine learning models.
\paragraph{Our contributions} This paper is, to our knowledge, the first work bringing adversarial robustness to LOB representations and pointing out the critical flaws of the commonly-used LOB representation with machine learning models. It proposes a perturbation paradigm to the LOB to examine the robustness of data representation in price forecasting tasks. The experimental results confirm our concerns about the current level-based LOB representation as well as machine learning models designed based on this representation scheme. Based on these concerns, this paper presents desiderata of LOB representations and proposes new schemes to represent LOBs, which lead to more effective and robust machine learning models.
\section{Representing Limit Order Books}
\begin{figure*}[!tb]
\centering
\includegraphics[width=\textwidth]{figures/perturbation.eps}
\caption{(\textbf{A}) Original LOB data with 10 levels on ask and bid side without perturbation. (\textbf{B}) LOB data with 10 levels after data perturbation. Red blocks represent intentionally placed perturbation orders with order volume = 1. Compared with the original one, the new 10-level data representation has a much narrower vision on the market.}
\label{fig:perturbation}
\end{figure*}
\subsection{Drawbacks of Current Representation}
The current LOB representation is a time-series of multiple levels of orders in LOBs, which is commonly used as the input to most machine learning models for LOB prediction and in financial benchmark datasets. Each input data point is in the format of $\vec x \in \mathbb{R}^{T\times 4L}$. Temporally, a history of $T$ LOB snapshots is stacked to reflect the evolution of the market based on events such as order placement, cancellation and execution. Spatially, a LOB snapshot in this level-based representation is a vector $x_{t} = \left\{p_{a}^{i}(t), v_{a}^{i}(t), p_{b}^{i}(t), v_{b}^{i}(t)\right\}_{i=1}^{L}$ containing $L$ price levels on each side of the LOB. $p_{a}^{i}(t)$, $p_{b}^{i}(t)$ are the ask and bid prices for price level $i$ and $v_{a}^{i}(t)$, $v_{b}^{i}(t)$ are the ask and bid volumes respectively. Reflected in the data format, $4L$ is the length of the vector representing each LOB snapshot at a given time point.
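A minimal sketch of how this level-based input is typically assembled (our own illustrative code; the field ordering follows the vector $\left\{p_{a}^{i}, v_{a}^{i}, p_{b}^{i}, v_{b}^{i}\right\}_{i=1}^{L}$ above, but the dict-based snapshot format is an assumption):

```python
import numpy as np

def level_based_representation(snapshots, L=10):
    """Stack T snapshots into the level-based input x with shape (T, 4L).

    Each snapshot holds price/volume pairs sorted by level: `asks` ascending
    from the best ask, `bids` descending from the best bid.
    """
    rows = []
    for snap in snapshots:
        row = []
        for i in range(L):
            p_a, v_a = snap["asks"][i]
            p_b, v_b = snap["bids"][i]
            row += [p_a, v_a, p_b, v_b]
        rows.append(row)
    return np.asarray(rows)
```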
From the data perspective, this representation has some particular characteristics. The most intuitive one is that the price and volume for each LOB level are tied together - any disentanglement or distortion to this would result in invalid representations. In addition, the spatial structure across different levels is not homogeneous since there is no assumption for adjacent price levels to have fixed intervals. We also realise the instability and non-smoothness of the representation due to occasional shifts in price levels - the previous best bid/ask data can suddenly shift to second best bid/ask channel if a new order is placed with a better price.
These characteristics may not be a big problem for human understanding, but they explain the drawbacks of this LOB representation when treated as input to machine learning models. Firstly, it is fundamentally assumed in machine learning that signals from the same channel (input dimension) are from the same source. In this case, `level' is a manually defined concept based on the price ranks of current orders in the LOB, which can hardly be treated as the same source, especially when the information of a level shifts to the channel of another level due to certain actions. Secondly, a homogeneous relationship is a basic assumption for convolutional neural networks (CNNs) due to the parameter-sharing mechanism. Thus, the heterogeneous features of the LOB data representation may reduce model robustness when learning with CNN models. Furthermore, the way the information is organised across multiple levels makes it vulnerable to perturbations - a small perturbation can lead to a shift of price levels, and the representation is then affected dramatically by this shift.
\subsection{Risks under Adversarial Perturbations}
Adversarial perturbations are common and inevitable in most real-world systems and can happen in any component of the data processing chain. Perturbations can be very subtle but still significantly harmful to delicate systems. If such systems are built without considering the risks of possible perturbations during the model design and development stage, they will suffer from instability and vulnerability after deployment, when real-world conditions are encountered.
In the financial domain, it is always important that potential risks are identified and controlled so as to maintain systemic reliability. One way to examine this is by designing hypothetical conditions which might not have been seen before but could possibly happen in the future. In particular, when we have a relatively good understanding of the drawbacks of a certain system based on its design, we can develop scenarios with \emph{adversarial perturbations} to better identify risks. Unlike adversarial samples, which target individual samples specifically, adversarial perturbations are a general condition that can be applied to all inputs in the same manner.
We present the perturbations by assuming that the data is perturbed by small-size orders at empty price levels beyond the best ask/bid prices. This perturbation leaves the mid-price unchanged, so the prediction labels are not affected. In some LOB data for equities, the price difference between adjacent price levels is sometimes larger than the tick size (the minimum price increment change allowed). This is especially prevalent in small-tick stocks and can result in the entire LOB shifting even if a small order of the minimum allowable size is placed at a price in between the existing price levels.
We illustrate this data perturbation with a synthetic LOB example as shown in Fig. \ref{fig:perturbation}. Fig. \ref{fig:perturbation} (A) shows the synthetic LOB snapshot with 10 price levels on both the ask and bid sides of the LOB (marked as L1-L10) before any perturbation. We assume the tick size is 0.01 and the minimum order size present in our data is 1. In this LOB snapshot, the mid-price is 10.00 with a bid-ask spread equal to 0.04. We can observe some price levels where no orders are placed, such as 10.03 and 10.06 on the ask side and 9.96 and 9.94 on the bid side. To perturb this LOB data, one can place orders of the minimum allowed order size to fill these empty price levels. These minimum-size orders may seem inconsequential since 1) they do not affect the mid-price and 2) their volumes are tiny. However, the LOB representation changes dramatically after this perturbation (see Fig. \ref{fig:perturbation} (B)). Approximately half of the original price level information is no longer visible after perturbation (e.g. ask-side L5 to L10 information is not included in the representation after perturbation) and, while the rest is preserved, it is shifted to different levels in the LOB representation (e.g., the ask-side L2 appears at ask-side L3 after perturbation).
Intuitively, this perturbation has two impacts from the machine learning point of view. Firstly, it shifts the 40-dimensional input space dramatically. For example, the Euclidean distance between these two 40-dimensional vectors before and after perturbation is 344.623 whereas actually the total volume of orders applied is only 10. This means that the level-based representation scheme does not bring local smoothness. Furthermore, it narrows the scope of vision of machine learning models to `observe' the market. As shown in the LOB data visualisation plot in Fig. \ref{fig:perturbation}, the gray areas are masked out for the model input after perturbation.
Ideally, such a perturbation with a tiny amount of orders should have only a limited impact on the future price movement trend. However, the risks posed by perturbations are amplified when limit order book data are presented in the current level-based way. In the following sections, we demonstrate this with experimental results.
\subsection{Desiderata of Robust Representation}
As we mentioned, the performance of machine learning models relies hugely on representations, either learned via representation learning or designed via feature engineering. We would like to propose some desiderata for improving the robustness of LOB-related data representations and of machine learning models designed on top of those representations. These desiderata come from two perspectives. The first is the information-transformation perspective, focusing on whether the representation is a meaningful, comprehensive and efficient way to reflect the original information. The second is the machine learning perspective, concerning whether the representation is compatible with and appropriate for the machine learning model to be used in real tasks. Note that data can be represented differently in storage, transmission or analysis, and our desiderata apply only to the representation directly fed to machine learning models as input.
\begin{itemize}
\item \textbf{Region of interest}: The entire limit order book may contain hundreds of price levels spanning a large price range. A complete representation including all price levels is not always necessary for every task. Thus, an appropriate region of interest should be placed on the limit order book to balance complexity and performance.
\item \textbf{Efficiency}: A LOB representation should organise data in an efficient manner to mitigate the \emph{curse of dimensionality}. For example, a limit order book snapshot can be represented as an extremely sparse vector covering all price ticks that have appeared in the history. Such a representation is complete and easy to understand, but very inefficient both in storage and in computation.
\item \textbf{Validity}: A LOB data representation should include a clear and simple definition of validity.
\item \textbf{Smoothness}: A LOB data representation should not change dramatically under subtle perturbations in the market.
\item \textbf{Compatibility}: The basic assumptions of the data representation and of the learning model need to match. If they do not, models may carry unknown risks due to invalid fundamental settings. For example, convolutional neural networks (CNNs) assume a homogeneous spatial (or temporal, depending on the convolution direction) relationship due to their parameter-sharing mechanism. Thus, if the input representation does not satisfy this homogeneity assumption, the learned shared features risk being invalid or non-meaningful.
\end{itemize}
\section{Spatial-temporal Representation in mid-price-centred Moving Windows}
We propose to represent limit order books with fixed-size moving windows centred at the mid-price of the current time point, which we refer to as the moving window representation (MW). For predicting short-term price movements, limit orders near the mid-price play a more important role than orders placed far away from it. Thus, we are mainly concerned with limit order information near the mid-price. We set a 2-dimensional window as the region of interest, containing $N$ LOB histories and $2W+1$ contiguous price levels stepped by the tick size $\Delta p$. This window provides a view of limit orders within the price range $[p(t) - W\Delta p, p(t) + W\Delta p]$ and within a short history.
\begin{figure*}[!tb]
\centering
\includegraphics[width=0.9\textwidth]{figures/new_rep.eps}
\caption{Spatial-temporal Representation in mid-price-centred Moving Windows. Red/blue represents ask/bid volumes. (\textbf{A}) Moving window representation. (\textbf{B}) Accumulated moving window representation. (\textbf{C}) Smoothed moving window representation.}
\label{fig:new_rep}
\end{figure*}
In the two-dimensional matrix $x \in \mathbb{R}^{N \times (2W+1)}$, each element $x_{n, i}$, $n=1,...,N$, $i = 0,...,2W$, of the moving window representation indicates the volume of limit orders at price $p(t) - W\Delta p + i\Delta p$ and at LOB snapshot $t-N+n$. To distinguish ask from bid orders, we mark $x_{n, i} > 0$ for ask-side limit orders and $x_{n,i} < 0$ for bid-side limit orders, with the volume size given by $|x_{n,i}|$. Fig. \ref{fig:new_rep} (A) visualises this moving window representation for an example from the real FI-2010 LOB dataset ($N = 40$ and $W = 20$). This representation is a re-organisation of limit order book information, like the level-based representation scheme, but avoids some of its drawbacks. First, all numerical values in this representation are volumes instead of volume-price couples, which avoids the risk of invalidity if disentanglement happens in future black-box models. The representation is also spatially homogeneous, since the distances between spatially adjacent elements all equal the tick size. Thus, it does not suffer from level shifts when empty ticks are filled. Because it includes all the empty ticks within its scope, this representation may look sparse, \textit{i.e.} it contains a considerable number of zero elements.
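As an illustration of this mapping, the following sketch builds one row of the MW matrix from a single snapshot. Prices are keyed by integer tick counts to avoid floating-point key mismatches; the function and variable names are ours for illustration, not part of the FI-2010 tooling.

```python
import numpy as np

def mw_row(mid_tick, asks, bids, W):
    """One LOB snapshot -> one (2W+1,) row of the MW matrix.

    asks/bids map price (in integer tick counts) -> volume.
    Ask volumes are stored positive, bid volumes negative;
    empty ticks stay zero.
    """
    row = np.zeros(2 * W + 1)
    for i in range(2 * W + 1):
        p = mid_tick - W + i  # price of column i, in ticks
        row[i] = asks.get(p, 0.0) - bids.get(p, 0.0)
    return row
```

Because each column corresponds to a fixed price offset from the mid-price, filling an empty tick changes only that column, rather than shifting every subsequent level.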
We can analyse the moving window representation against the desiderata proposed above. We choose a region of interest around the mid-price instead of considering the wide range of price levels that have appeared in the history. This is a compromise between representational comprehensiveness and efficiency. With this compromise, our representation has a space requirement similar to that of the commonly-used representation with 10 price levels on each side. The choice of the region of interest can depend on the tick size, stock price, accuracy requirements, computational resources, etc. of specific tasks. By encoding price information implicitly in element location and volume information explicitly in element value, we obtain homogeneity in this moving window representation, which should make it compatible with the majority of machine learning models, including CNNs. Furthermore, the advantage of this disentanglement is obvious: each change to the limit order book is reflected reasonably in the representation without level shifts, and similar LOBs have similar representations.
Based on this moving window representation, we introduce two variations: the accumulated moving window representation (accumulated MW) and the smoothed moving window representation (smoothed MW). The accumulated moving window representation is an equivalent variation of the moving window representation; the two can easily be transformed into each other. Each element $x_{n, i}$, $n=1,...,N$, $i = 0,...,2W$, of the accumulated moving window representation is the sum of the volumes up to the corresponding price level on each side in the $n$-th snapshot. In finance, this accumulation of limit orders is referred to as the \emph{market depth}, which considers the entire limit order book. The market depth reflects the market's capability to absorb market orders without the price being affected dramatically by large-scale orders. The accumulated moving window representation is thus a time series version of market depth information, aligned with respect to the current mid-price (see Fig. \ref{fig:new_rep} (B)). The smoothed moving window representation is a processed version of the moving window representation using Gaussian kernels. Via Gaussian kernels, we approximate a potential distribution of limit orders across different prices and make the matrix less sparse. Fig. \ref{fig:new_rep} (C) illustrates an example of the smoothed representation, derived from Fig. \ref{fig:new_rep} (A).
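Both variations can be sketched on top of a single MW row as follows. This is an illustrative implementation; the kernel radius and $\sigma$ below are arbitrary choices, not values from our experiments.

```python
import numpy as np

def accumulate(row):
    """Accumulated MW: cumulative depth moving away from the mid-price.

    row: (2W+1,) MW vector with ask volumes > 0, bid volumes < 0,
    and the centre element at the mid tick.
    """
    W = len(row) // 2
    acc = row.copy()
    # bids accumulate away from the mid (towards index 0)
    acc[:W] = np.cumsum(row[:W][::-1])[::-1]
    # asks accumulate away from the mid (towards the last index)
    acc[W + 1:] = np.cumsum(row[W + 1:])
    return acc

def smooth(row, sigma=1.0, radius=3):
    """Smoothed MW: convolve with a normalised Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    return np.convolve(row, k, mode="same")
```

Since `accumulate` is a per-side cumulative sum, the original MW row can be recovered by differencing, which is what makes the two representations equivalent.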
\section{Related Works}
Early works using high-frequency limit order book data usually combine handcrafted LOB features with simple learning models such as support vector machines (SVMs) to indicate future price movement \cite{kercheval2015modelling}. As deep learning methods started to show their power in areas such as computer vision, researchers also sought deeper architectures to solve large-scale problems in finance: multi-layer perceptrons \cite{mahfouz2019importance}, recurrent neural networks \cite{sirignano2019universal}, long short-term memory \cite{tsantekidis2017using}, convolutional neural networks \cite{zhang2019deeplob,zhang2018bdlob}, self-attention transformer networks \cite{wallbridge2020transformers} and, recently, Seq2Seq networks \cite{zhang2021multi}.
Although deep learning is gaining attention in finance, representation schemes for financial data are rarely discussed in the literature. The vast majority of (if not all) machine learning models, including those mentioned above, and benchmark datasets arrange the LOB in the level-based representation (e.g. \cite{ntakaris2018benchmark,huang2011lobster}). As we emphasised in previous sections, a LOB representation that is efficient and convenient from the perspective of human understanding and of the matching engine is not necessarily an appropriate representation scheme for machine learning models to learn features from. The importance of robust data representations, the criteria for evaluating their quality, and the variety of methods for learning them are studied extensively in the machine learning literature, with \cite{bengio2013representation} providing a survey of these methods.
In our work, we focus on the representation of financial market microstructure data. \cite{bouchaud2018trades} and \cite{abergel2016limit} study the structure and empirical properties of limit order books and provide a set of statistical properties (referred to as \textit{stylized facts}) using NASDAQ exchange data, while \cite{lehalle2018market} discusses the practical aspects and issues of market structure, design, price formation, discovery and the behaviour of different actors in limit order book markets. A significant amount of research in recent years has focused on applying deep learning models to limit order book data for price forecasting or price movement classification.
\section{Experimental Design}
\subsection{Benchmark Dataset and Model Inputs}
We use the FI-2010 dataset \cite{ntakaris2018benchmark} as the benchmark dataset; it consists of LOB data from 5 stocks on the Helsinki Stock Exchange during normal trading hours (no auctions) for 10 trading days. This dataset takes into account 10 price levels on the bid/ask sides of the limit order book, which are updated according to events such as order placement, execution and cancellation. In our experiments, we take the history of LOB snapshots into account for future price movement prediction. Thus, each input data point is a short time series of dimension $T \times 40$, where $T$ is the number of historical snapshots. The prediction target is the micro-movement $l_t = \frac{m_+(t)-p_t}{p_t}$, where $m_+(t) = \frac{1}{k} \sum^k_{i=1}p_{t+i}$ is the smoothed mid-price with prediction horizon $k$. The movement is further categorised into three classes: 1: up ($l_t>0.002$), 2: stationary ($-0.002 \le l_t \le 0.002$), 3: down ($l_t<-0.002$). We choose the FI-2010 prediction labels with prediction horizon $k=50$ as the targets for model training and testing. Note that the FI-2010 dataset provides inputs and outputs already pre-processed with the approaches mentioned above.
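A sketch of this labelling scheme, assuming a plain array of mid-prices (the FI-2010 dataset ships these labels pre-computed, so this is for illustration only):

```python
import numpy as np

def movement_labels(mid, k, alpha=0.002):
    """FI-2010-style labels: compare the mean of the next k
    mid-prices to the current one.  1 = up, 2 = stationary, 3 = down."""
    mid = np.asarray(mid, dtype=float)
    T = len(mid) - k  # last k points have no full horizon
    labels = np.empty(T, dtype=int)
    for t in range(T):
        m_plus = mid[t + 1:t + k + 1].mean()
        l = (m_plus - mid[t]) / mid[t]
        labels[t] = 1 if l > alpha else (3 if l < -alpha else 2)
    return labels
```

Averaging over the next $k$ mid-prices, rather than comparing two single points, smooths out bid-ask bounce before thresholding.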
We consider 4 different representations: the level-based representation commonly used in the literature (Level-based), the moving window representation (MW), the accumulated moving window representation (Accumulated MW) and the smoothed moving window representation (Smoothed MW). All these representation schemes encode exactly the same information: spatially, the price and volume of 10 price levels on each side; temporally, the 10 most recent LOB snapshots as market history. For the level-based representation, the input data dimension is $10\times 40$, and for the moving window representations the input data dimension is $10 \times 41$.
We additionally generate perturbed LOB testing datasets by adding orders to the limit order books. All 4 input representations then need to represent each perturbed LOB (original LOB + order perturbation) in their own manner. We design 4 perturbation paradigms: the LOB data is not perturbed (`None'), or it is perturbed by placing minimum-size orders to fill the empty ticks on the ask side only (`Ask side'), on the bid side only (`Bid side'), or on both sides (`Both sides'). The idea of adversarial perturbations is to examine a model's robustness under unexpected subtle perturbations. Robust models should generalise well against subtle perturbations, and their predictions should not be influenced dramatically even if they have never seen examples exactly like the perturbed ones before. Thus, all methods are trained on the same unperturbed FI-2010 training set, as such models are usually trained in production, but are tested under perturbation paradigms that mimic unexpected adversarial perturbations or even attacks.
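Our reading of this perturbation can be sketched on tick-keyed books as follows. The filled price range, here between the best and worst quoted ticks on each side, is an assumption of this sketch rather than a detail stated above.

```python
def perturb(asks, bids, side="both", min_size=1.0):
    """Fill empty ticks between the best and worst quoted prices
    with minimum-size orders.

    asks/bids: dicts mapping price (integer ticks) -> volume.
    Returns perturbed copies; the originals are left untouched.
    """
    def fill(book):
        book = dict(book)
        for p in range(min(book), max(book) + 1):
            book.setdefault(p, min_size)  # only empty ticks are filled
        return book
    new_asks = fill(asks) if side in ("ask", "both") else dict(asks)
    new_bids = fill(bids) if side in ("bid", "both") else dict(bids)
    return new_asks, new_bids
```

Note that existing quotes are never modified; the perturbation only inserts minimum-size orders into previously empty ticks.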
\subsection{Price Forecasting Models}
In this paper, we apply 5 machine learning models to the price forecasting task: 3 basic benchmark models, \textit{i.e.} a linear model (logistic regression), a multi-layer perceptron (MLP) and a long short-term memory network (LSTM), and 2 deep learning models, DeepLOB \cite{zhang2019deeplob} and temporal convolutional networks (TCNs) \cite{bai2018empirical}. The baseline linear model is a multi-class logistic regression: a linear layer followed by a softmax activation evaluating the probability of each category. Multi-layer perceptrons are generic solutions to machine learning problems that require no prior knowledge about the spatial-temporal structure of the data; the MLP we apply has two hidden layers with 100 and 50 neurons respectively, with ReLU activations. LSTMs are recurrent neural networks focusing on capturing the temporal dynamics of the data; we apply a shallow LSTM model with a single LSTM layer of 20 units. The DeepLOB model is a deep learning solution for price forecasting using LOBs. It combines convolutional neural networks with an LSTM and is designed specifically for level-based inputs. Its hidden layers consist of 3 convolutional layers, 1 inception module and 1 LSTM layer in sequence, aiming to capture both the temporal dynamics and the spatial structure of LOB data. Since DeepLOB is not compatible with other LOB representations, we apply temporal convolutional networks to our moving window representations to demonstrate their performance in a deep learning setting. TCNs, which capture spatial-temporal information, are CNN alternatives to conventional RNNs for time series problems. The most important structures within TCNs are causal convolutional layers. In our implementation, we stack 3 causal layers (32 channels each) to create a relatively deep but not over-complicated model.
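The causal convolution at the heart of a TCN can be illustrated as follows. This is a single-channel sketch with explicit left-padding, for exposition only, not our actual network code.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: the output at time t depends only on
    inputs at times <= t, achieved by left-padding with zeros
    instead of symmetric padding."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

Stacking such layers with growing dilation enlarges the receptive field exponentially while keeping the prediction at time $t$ free of any future information.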
\section{Results}
\begin{table*}[!tb]
\small
\begin{tabular}{c | c c | c c | c c | c c}
\toprule
\multirow{2}{*}{Perturbation} &
\multicolumn{2}{c}{Level-based (\%)} & \multicolumn{2}{c}{MW (\%)} & \multicolumn{2}{c}{ Accumulated MW (\%)} & \multicolumn{2}{c}{ Smoothed MW (\%)} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
& Accuracy & F-score & Accuracy & F-score & Accuracy & F-score & Accuracy & F-score \\
\midrule
& \multicolumn{8}{c}{Linear} \\
\midrule
None & 52.98$\pm$0.14 & 38.50$\pm$0.38 & \textbf{59.57$\pm$0.00} & \textbf{53.66$\pm$0.01} & 54.24$\pm$0.00 & 41.12$\pm$0.01 & 58.85$\pm$0.00 & 52.66$\pm$0.00\\
Ask side& 49.81$\pm$0.09 & 42.47$\pm$0.25 & \textbf{59.57$\pm$0.00} &\textbf{53.66$\pm$0.00} & 54.23$\pm$0.00 & 41.09$\pm$0.00 & 58.85$\pm$0.00 & 52.65$\pm$0.00 \\
Bid side& 51.65$\pm$0.10 & 35.83$\pm$0.16 &\textbf{59.57$\pm$0.00} & \textbf{53.66$\pm$0.01} & 54.24$\pm$0.01 & 41.12$\pm$0.01 & 58.85$\pm$0.00 & 52.66$\pm$0.00\\
Both sides & 49.40$\pm$0.14 & 41.54$\pm$0.04 & \textbf{59.57$\pm$0.00} & \textbf{53.66$\pm$0.00} & 54.24$\pm$0.00 & 41.10$\pm$0.00 & 58.85$\pm$0.00 & 52.66$\pm$0.00\\
\midrule
\midrule
& \multicolumn{8}{c}{MLP} \\
\midrule
None & 60.14$\pm$4.73 & 53.96$\pm$8.41 & 69.52$\pm$0.68 & 67.88$\pm$0.65 & \textbf{71.27$\pm$0.36} & \textbf{69.59$\pm$0.35} & 65.59$\pm$0.30 & 63.69$\pm$0.32\\
Ask side & 55.75$\pm$2.30 & 47.08$\pm$4.71 & 69.52$\pm$0.68 & 67.88$\pm$0.65 & \textbf{71.27$\pm$0.36} & \textbf{69.59$\pm$0.35} & 65.59$\pm$0.30 & 63.69$\pm$0.32\\
Bid side & 55.05$\pm$4.08 & 45.74$\pm$8.01 & 69.54$\pm$0.68 & 67.90$\pm$0.65 & \textbf{71.28$\pm$0.36} & \textbf{69.59$\pm$0.35} & 65.60$\pm$0.30 & 63.70$\pm$0.32\\
Both sides & 50.26$\pm$0.89 & 38.88$\pm$5.11 & 69.53$\pm$0.68 & 67.89$\pm$0.65 & \textbf{71.28$\pm$0.36} & \textbf{69.60$\pm$0.35} & 65.59$\pm$0.30 & 63.69$\pm$0.32\\
\midrule
\midrule
& \multicolumn{8}{c}{LSTM} \\
\midrule
None & 70.74$\pm$0.22 & 68.45$\pm$0.33 & 75.15$\pm$0.40 & 73.89$\pm$0.38 & \textbf{77.46$\pm$0.17} & \textbf{76.18$\pm$0.19} & 65.90$\pm$0.41 & 64.44$\pm$0.38\\
Ask side & 65.85$\pm$1.43 & 63.09$\pm$1.16 & 75.14$\pm$0.40 & 73.89$\pm$0.38 & \textbf{77.46$\pm$0.17} & \textbf{76.18$\pm$0.19} & 65.90$\pm$0.41 & 64.44$\pm$0.38\\
Bid side & 63.33$\pm$1.58 & 60.39$\pm$1.78 & 75.16$\pm$0.40& 73.88$\pm$0.38& \textbf{77.47$\pm$0.17} & \textbf{76.19$\pm$0.19} & 65.90$\pm$0.41 & 64.44$\pm$0.38\\
Both sides & 57.79$\pm$3.14 & 54.35$\pm$2.74 & 75.15$\pm$0.40 & 73.89$\pm$0.38 & \textbf{77.47$\pm$0.17} & \textbf{76.19$\pm$0.19} & 65.90$\pm$0.41 & 64.44$\pm$0.38\\
\midrule
\midrule
& \multicolumn{8}{c}{DeepLOB \cite{zhang2019deeplob}}
\\
\midrule
None & 77.30$\pm$0.08 & 77.23$\pm$0.10 & / & / & / & / & / & / \\
Ask side & 69.02$\pm$0.33 & 67.42$\pm$0.25 & / & / & / & / & / & / \\
Bid side & 63.94$\pm$0.93 & 60.70$\pm$1.33 & / & / & / & / & / & / \\
Both sides & 51.35$\pm$1.74 & 39.59$\pm$3.73 & / & / & / & / & / & / \\
\midrule
\midrule
& \multicolumn{8}{c}{TCN}\\
\midrule
None & / & / & 77.83$\pm$0.33 & 76.47$\pm$0.34 & \textbf{78.81$\pm$0.38} & \textbf{77.29$\pm$0.47} & 65.73$\pm$0.34 & 63.63$\pm$0.46\\
Ask side & / & / & 77.82$\pm$0.32 & 76.21$\pm$0.34 & \textbf{78.80$\pm$0.38} & \textbf{77.27$\pm$0.47} & 65.72$\pm$0.34 & 63.63$\pm$0.46 \\
Bid side & / & / & 77.83$\pm$0.33 & 76.22$\pm$0.35 & \textbf{78.82$\pm$0.38} & \textbf{77.30$\pm$0.46} & 65.72$\pm$0.34 & 63.63$\pm$0.46 \\
Both sides & / & / & 77.83$\pm$0.33 & 76.21$\pm$0.34 & \textbf{78.80$\pm$0.38} & \textbf{77.28$\pm$0.46} & 65.72$\pm$0.34 & 63.63$\pm$0.46 \\
\bottomrule
\end{tabular}
\caption{Price forecasting model performance under data perturbation. Each model is trained with a non-perturbed training set and when testing the model, we apply various data perturbation. None: no perturbation. Ask-side: perturbation only applied to the ask-side of data. Bid-side: perturbation only applied to the bid-side of the data. Both sides: perturbation applied to both ask and bid sides.}
\label{table:performance}
\end{table*}
\subsection{Price Forecasting Model Performance}
Table \ref{table:performance} reports the test performance of the machine learning models on the price movement forecasting task. Since the test set is unbalanced, we use 4 different metrics (scores) to evaluate and compare performance: Accuracy (\%), Precision (\%), Recall (\%) and F-score (\%). Among these, Accuracy (\%) is measured as the percentage of predictions on the test samples that exactly match the ground truth, i.e. the unbalanced accuracy score, whereas the remaining metrics are all averaged across classes in an unweighted manner to eliminate the influence of class imbalance. All experiments are repeated 5 times with different random seeds to obtain averaged performance with standard deviations.
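The unweighted (macro) averaging can be sketched for the F-score as follows; this illustrative implementation is not the evaluation code used for the tables.

```python
import numpy as np

def macro_f1(y_true, y_pred, classes=(1, 2, 3)):
    """Unweighted (macro) F-score: per-class F1 averaged equally,
    so the dominant `stationary' class does not swamp the score."""
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))
```

Macro averaging weights the rare `Up' and `Down' classes equally with `Stationary', which is why it is preferred over accuracy on an unbalanced test set.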
It can be observed from Table \ref{table:performance} that the ranking of model performance is consistent regardless of the input representation or testing paradigm: deep models (DeepLOB, TCN) $>$ LSTM $>$ MLP $>$ Linear. Take the performance with the level-based representation as an example. The linear model (accuracy = 52.98\%, F-score = 38.50\%) is not capable of learning complex features, either spatially or temporally, due to its simplicity. The MLP model, with multiple hidden layers, can in theory learn arbitrary features. However, feature extraction in the MLP is not as effective under limited parameter capacity, due to the lack of an explicitly defined data structure. Thus, the MLP's performance (accuracy = 60.14\%, F-score = 53.96\%) is much lower than that of the LSTM (accuracy = 70.74\%, F-score = 68.45\%), which has a clear definition of the temporal structure during learning. The DeepLOB model builds an additional convolutional architecture on top of an LSTM to enable both spatial and temporal feature extraction, and significantly outperforms the simpler models (Linear, MLP, LSTM) with accuracy = 77.30\% and F-score = 77.23\%. Similarly, the TCN model also shows leading performance over the other methods.
In Table \ref{table:performance}, we can also compare performance horizontally across input representations. By just replacing the level-based representation with moving windows, the forecasting performance of the same model is boosted by 7\% for the linear model (Level-based vs. MW), 11\% for the MLP (Level-based vs. Accumulated MW) and 7\% for the LSTM (Level-based vs. Accumulated MW). In particular, the LSTM with the accumulated MW representation already reaches approximately the same level of performance as the much more complex DeepLOB model. This means that both MW and accumulated MW are more effective representations than the commonly-used level-based one for machine learning models to learn meaningful features for forecasting future price movements. In general, the accumulated MW demonstrates the best performance among all the representation schemes, which supports, from a data-driven perspective, that market depth is important information for understanding the market.
\subsection{Robustness under Perturbations}
For the level-based representation, we observe a performance decay under unexpected perturbations for all the machine learning models, from the simplest linear model to the most sophisticated DeepLOB model. Moreover, the performance decline varies across perturbation types. When the perturbation is applied to both sides, the decrease becomes most severe: about 10\% in accuracy for the MLP, 13\% for the LSTM and over 25\% for DeepLOB. Similar trends are seen for the other evaluation metrics. By contrast, our moving window representations show no obvious performance decay under perturbations in any experiment, for any model.
From these performance decay results, we find that DeepLOB, the best-performing model under normal conditions as well as the most complicated one, is also the most vulnerable under perturbation (the largest performance decay). Its predictive accuracy decreases to 51.35\% and its F-score is only 39.59\%, which in terms of F-score even underperforms logistic regression. The reason behind this phenomenon may be a combination of factors. On one hand, model complexity is related to overfitting, which may reduce generalisation ability and make the model unstable under perturbation. On the other hand, as mentioned in earlier sections, CNNs assume a homogeneous spatial relationship, but the level-based LOB representation is clearly heterogeneous, leading to a mismatch between the data representation and the network's characteristics. Once the spatial relationship is further broken by perturbation, the CNN descriptors may no longer extract meaningful features, causing the entire predictor to malfunction.
\begin{figure*}[!tb]
\centering
\includegraphics[width=0.6\textwidth]{figures/confusion.eps}
\caption{Confusion matrices for corresponding experimental results of the level-based representation in Table. \ref{table:performance}}
\label{fig:cm}
\end{figure*}
Fig. \ref{fig:cm} illustrates, in the form of confusion matrices, further details behind the numerical performance metrics for the level-based representation. The logistic regression model classifies the majority of samples as `Stationary', whether or not a perturbation is applied. Similarly, for the MLP, about half of the `Up' and `Down' samples are misclassified as `Stationary'. Both the LSTM and DeepLOB show confusion matrices with a clear diagonal structure without perturbation: more than half of the samples from each class are classified according to their true labels. When the perturbation is applied, the LSTM's performance decreases, but still nearly half of the samples are correctly classified. DeepLOB, however, fails under the perturbation condition, misclassifying almost all the data into the `Stationary' class (see DeepLOB + Both in Fig. \ref{fig:cm}).
\section{Conclusion \& Future Work}
In this paper, we discussed the importance of data representations for machine learning models applied to LOB-related tasks and highlighted the drawbacks and risks of non-robust representations. We further proposed new representation schemes that avoid these drawbacks. To demonstrate this, we implemented price forecasting tasks with multiple benchmark models and data representations. We also designed data perturbation scenarios to test not only the performance but also the robustness of these machine learning models under various representation schemes, including the commonly-used level-based representation and our moving window representations.
We show that how the information is organised and represented as input has a large impact on model performance. In our case, replacing the level-based representation with our moving window representations significantly increases the performance of the same model, revealing that the moving window representations are more effective and suitable for machine learning models. In addition, the level-based representation makes models vulnerable even to subtle perturbations, leading to significant performance decay, especially for more sophisticated models. Our moving window representations, on the contrary, are almost immune to these perturbations and are thus more stable and reliable.
Like previous literature, we show that machine learning models, and especially deep learning models, can be a promising solution to financial problems. In particular, we can adopt existing machine learning solutions (e.g. TCNs) designed to solve similar problems in other areas. The TCN structure used in this paper was originally designed for video-based segmentation, but it matches the price forecasting problem nicely and delivers satisfying results. Our future work will focus on extending representation-related robustness to more tasks, e.g. dimensionality reduction, reinforcement learning, etc.
\section{Introduction}
Search engines are the single most widely used Internet service \cite{purcell2012}. For this reason, significant efforts have been dedicated to improve the interaction between search engines and their users, as evident from thousands of studies analyzing these interactions. Moreover, because of the popularity of search engines, their data has been used to study human behavior in areas ranging from politics \cite{diaz2016} to health \cite{yomtov2016}.
One aspect which has received relatively little attention is the influence of user demographics, especially age and gender, on search engine use. This is surprising, since gender \cite{newman2008} and age \cite{pennebaker2003} are known to influence the use of language. Because of this, it might be assumed that language variation could be useful for providing the best information to users, and could also cause bias in the way results are provided to them.
Nevertheless, two seminal studies examined the correlation between demographics of search engine users and their topics of interest \cite{weber2010,weber2011}, finding that the topics people queried about varied by age and gender. Later, Bi \cite{bi2013} showed that search engine queries could be used to predict the age and gender of users. In a laboratory-based eye tracking study, Lorigo et al. \cite{lorigo2006} found gender differences in the way search engine results pages are analyzed. More recently Mehrotra et al. \cite{mehrotra2017} examined age and gender differences in the perception of search engine results, finding only minor differences therein.
We note that in contrast to the dearth of literature on the effects of age and gender on search engine queries, a large body of work exists on detecting demographics of writers from their writings, e.g., for use in forensic analysis. A review of such methods is provided in Koppel et al. \cite{koppel2009computational}. Other work has focused on age and gender identification from social media \cite{schwartz2013,rangel2013}, as well as prediction of social class \cite{reoctiuc2015}. However, in contrast with both these lines of work, which rely on long text with complete sentences, search engine queries are (as shown below) short and often incomplete.
The richness of search engine queries, reflecting a broad range of human behaviors, has led researchers to analyze these data to learn about aspects of health which are difficult to study in other ways \cite{yomtov2016}. These studies began with an examination of broad aspects of public health, e.g., the prevalence of influenza in a population \cite{polgreen2008} and the effect of dietary deficiencies on certain chronic pains \cite{giat2018}. More intriguingly, recent work has demonstrated that insights pertaining to individuals can be found in their queries. These include, for example, the ability to identify precursors (including risk factors) of disease \cite{yomtov2015automatic}, and to screen for several forms of cancer, including pancreatic \cite{paparrizos2016}, ovarian and cervical \cite{soldaini2017}.
In contrast to population-level analysis, where models are trained to predict area-level measures of health, studies of individual health require the identification of a group of people who share the medical condition under study. Identifying this group, also known as a cohort, is a challenging problem considering that search engine data is usually anonymous and rarely linked to medical information such as medical records. Thus, researchers have identified a group of users, called Self-Identified Users (SIUs) \cite{yomtov2015automatic}, who issue experiential queries \cite{paparrizos2016} such as ``I have ovarian cancer". SIUs were used either to identify the cohort \cite{paparrizos2016} or as a seed-set for algorithms which use these data in conjunction with other information to form the cohort. However, Soldaini and Yom-Tov \cite{soldaini2017} found that queries by SIUs differ substantially from those of other people they identified as sharing the same medical condition. This observation, if true, should have a dramatic influence on the representativeness of the cohort, especially if it is selected to include only SIUs.
Thus, in this work we seek to examine how users' demographics influence their choice of queries. The differences we find could affect the quality of the results that search engines return, thus having an important effect on the fairness of the results served by search engines, and are consequential for studies of cohorts based on SIUs.
\section{Methods}
Three main datasets from two sources were used in this study. First, a sample of approximately 5.5 million queries submitted to the Bing search engine from users in the USA during one day, 21st March 2018. For each query we obtained the text of the query, the age group and gender (male or female) of the user. The latter two were as reported by users during registration to Bing. We refer to these as dataset 1.
To validate our findings from the analysis of the first dataset, we performed identical analyses on 5 weeks of query data collected from an opt-in consumer panel recruited by the Internet analytics company comScore. Millions of panelists provide comScore with explicit permission to passively measure all of their online activities using monitoring software installed on their computers. In addition to logged search behavior, the comScore data also include panelists' gender and age group. This dataset contains queries made to search engines other than Bing as well. We refer to these as dataset 2.
Finally, we collected all experiential queries submitted to Bing during March 2018. These queries consisted of all queries containing the phrase ``been diagnosed with $<$condition$>$" or ``I have $<$condition$>$", excluding queries that only indicate a possibility (``Do I have $<$condition$>$") or a negation (``I have not been diagnosed with $<$condition$>$"). Conditions were one of 5521 conditions and their 25,584 synonyms, as used in \cite{yomtov2015automatic}. Here again, for each query we obtained the text of the query, and the age group and gender (male or female) of the user.
To evaluate spelling mistakes in the text of queries we used Python's Language Check package. In order to evaluate queries for whether they formed complete sentence(s) we randomly sampled 50 queries from each age group and gender in dataset 1. These queries were labeled by 5 crowdsourced workers on the Crowdflower platform as to whether they were complete sentences. We analyzed all those queries which had an agreement of 4 or more workers.
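The agreement criterion above (keeping only queries for which at least 4 of the 5 workers gave the same label) amounts to a simple majority-vote filter. A minimal sketch in Python; the function name and the worker labels below are hypothetical, not taken from the actual analysis code:

```python
from collections import Counter

def filter_by_agreement(labels_per_query, min_agree=4):
    """Keep queries where at least `min_agree` workers gave the same
    label; return (query, majority_label) pairs."""
    kept = []
    for query, labels in labels_per_query.items():
        label, count = Counter(labels).most_common(1)[0]
        if count >= min_agree:
            kept.append((query, label))
    return kept

# Hypothetical worker labels (True = query is a complete sentence)
labels = {
    "do i have the flu":      [True, True, True, True, False],
    "flu":                    [False, False, False, False, False],
    "flu symptoms how long":  [True, False, True, False, True],
}
print(filter_by_agreement(labels))
```

With the default threshold of 4, the third query is dropped because only 3 of the 5 workers agree on a label.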
\section{Results}
\subsection{General queries}
The average number of words per query was 3.2 in dataset 1 and 3.0 in dataset 2. This is in agreement with previously reported average lengths, e.g., \cite{song2013}. The distribution of queries as a function of query length is shown in Figure \ref{fig:length}. As the figure shows, the distribution is highly skewed to shorter queries, and is similar in both datasets.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{Figs/Query_length.jpg}
\caption{Fraction of queries of each length. The vertical axis is log-scaled.}
\label{fig:length}
\end{figure}
We computed the fraction of queries at each length, stratified by age group and gender. Figure \ref{fig:gender} shows the ratio between the fraction of queries of length $N$ made by males and the fraction made by females, for both datasets. As the figure shows, the datasets are extremely similar to each other. Interestingly, queries with 2-4 words are approximately 5\% more common among males, but longer queries are much more common among females, with a clear correlation between query length and preference by females.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{Figs/ratio_by_gender.jpg}
\caption{Ratio of the fraction of queries at each length made by men to the fraction made by women.}
\label{fig:gender}
\end{figure}
Figure \ref{fig:age} shows the average query length for each age group. Each age group is represented by the middle of the age group (i.e., the age group of 18-20 year olds is represented by a point at 19 years). As the figure shows, younger people use longer queries, though the effect is small (from 3.6 to 2.9 words).
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{Figs/len_by_age.jpg}
\caption{Average number of words by age. Lines are linear regression lines, the top for dataset 2 and the bottom for dataset 1. }
\label{fig:age}
\end{figure}
Income level (provided in 7 groups for dataset 2) is uncorrelated with query length.
The average number of spelling mistakes per word is negatively correlated with age (adjusted $R^2$ of $0.88$, $p=0.005$), with older users making fewer spelling mistakes (0.296 per word for the youngest age group and 0.230 for the oldest one). The difference between males (0.266) and females (0.260) is negligible.
Finally, we evaluated whether the longer queries (by females and younger users) were due to these users making queries that were complete sentences. As noted above, crowdsourced workers labeled a sample of the queries for whether they formed complete sentences (e.g., ``Do I have the flu?'' vs. ``flu''). A multi-way ANOVA with interactions found that gender was not statistically significantly associated with the use of complete sentences, while age ($P=0.007$) and the interaction of age and gender ($P=0.010$) were. Younger users were more likely to query using complete sentences.
\subsection{Experiential queries}
In the previous section it was shown that females use longer queries than males. Experiential queries of the type previously used to identify cohorts for study, or as a seed-set to find cohorts using auxiliary information, are naturally at least 3 words in length (e.g., ``I have cancer") and usually far longer (e.g., ``I was diagnosed with stage 2 lung cancer"). Therefore, here we examine how the demographic bias in query length is reflected in experiential queries.
As described in the Methods, we extracted experiential queries from one month of Bing data.
The 10 most common conditions mentioned in these queries, in descending order of popularity, were: cancer, allergy, hernia, yeast infection, cyst, pimple, bleeding, pregnancy, black eye, blister.
The fraction of experiential queries made by males was found to be 36.2\% lower than expected, according to their fraction in the population. Conversely, the fraction of experiential queries made by females was 50.0\% greater than expected. This is to be expected, since the average length of experiential queries was 13.2 words, and (as shown above) females make longer queries than males.
Figure \ref{fig:sius} shows the fraction of experiential queries by age group, compared to the baseline of all queries submitted to Bing. The figure shows a correlation between age and the excess in experiential queries, perhaps partly because the incidence of diseases is higher among older people.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{Figs/SIUs_vs_baseline.pdf}
\caption{Relative propensity of experiential queries, compared to the baseline (all queries), by age group.}
\label{fig:sius}
\end{figure}
Finally, in the 18 cancers for which at least 20 people made experiential queries, we compared the ratio of males to females who made these queries to the known gender ratio\footnote{As provided by either Cancer Research UK (\url{www.cancerresearchuk.org}), American Cancer Society (\url{cancer.org}) or published in the scientific literature.}. To model the relationship between the known gender ratio and the observed gender ratio (on Bing), we used a linear model where the independent variable was the known gender ratio and the dependent variable the observed ratio. The model reached an $R^2$ of 0.35, with a slope of 0.5. This means that the gender ratio in experiential queries is strongly biased towards females, as expected by the above results.
\section{Discussion}
Our analysis of search engine logs from two sources reveals that search queries differ by the demographics of those querying. Specifically, age and gender (but not income) have a statistically significant effect on query length, with females and younger people making longer queries. The findings are in agreement with studies of longer texts \cite{koppel2009computational}, which found similar differences among ages and genders.
The similarity among datasets shows that the difference is not because of the demographics of a specific search engine (e.g., Bing). This is in agreement with past studies which did not find a difference in the demographics of search engine users \cite{yomtov2018}.
Our findings have two areas of impact. First, they contribute to the knowledge on the differences in search engine use among people, and the need to take these into account when measuring issues such as fairness and bias in search engine results.
Second, our results have a clear implication for studies which use anonymous query data to study aspects of human health. In such studies a cohort needs first to be identified. Several methods for this have been suggested, but all rely on experiential queries, either as the sole source of data or as a supporting source. Since our work has shown that experiential queries are highly skewed by gender and age, by implication, so is the cohort, especially when based only on experiential queries. Future work is required to assess if any of the previously suggested methods for cohort analysis \cite{yomtov2015automatic,ofran2012,soldaini2017} also suffer from this imbalance, or if their use of additional data mitigates this issue.
\bibliographystyle{plain}
The current picture of three-flavour neutrino oscillations has been completed by the measurement of a non-zero reactor mixing angle $\theta_{13}$~\cite{An:2012eh}, yielding a self-consistent picture, see Refs.~\cite{GonzalezGarcia:2012sz,Tortola:2012te,Capozzi:2013csa} for global fits. More recently, perhaps even some hint for a CP-violating phase $\delta_{\mathrm{CP}}$ has already been seen in the combination of different experiments~\cite{Capozzi:2013csa}. On the other hand, several anomalies at short baselines indicate that the picture may in fact not be complete, and it thus may have to be extended by one or more sterile neutrinos at the eV-scale (and maybe at other scales, too). In greater detail, evidence for $\bar \nu_\mu \rightarrow \bar \nu_e$ appearance has been found in the LSND experiment~\cite{Aguilar:2001ty}, which has been confirmed by the MiniBooNE experiment in both the neutrino~\cite{AguilarArevalo:2007it} and antineutrino~\cite{Aguilar-Arevalo:2013pmq} modes. These results are compatible with one or more extra sterile neutrinos at the eV-scale. On the other hand, recent re-calculations of the reactor $\bar \nu_e$ fluxes~\cite{Mueller:2011nm,Huber:2011wv} are in tension with the corresponding short-baseline disappearance measurements, indicating that a fraction of the electron antineutrinos may have already disappeared into sterile species by oscillations. Finally, somewhat lower event rates than predicted were measured in solar gallium neutrino experiments, yielding a $3\sigma$ indication that electron neutrinos from the Sun are missing, too, which again suggests that these may have partially disappeared into a sterile species~\cite{Giunti:2010zu}. While each of these observations may be interpreted by adding (at least) one extra sterile neutrino, there is a well-known tension between appearance and disappearance data in the global fits, see Refs.~\cite{Kopp:2011qd,Kopp:2013vaa} for recent works.
Several new experiments have been proposed~\cite{Agarwalla:2010zu,deGouvea:2011zz,Rubbia:2013ywa,2013arXiv1304.7127K,Porta:2010zz} to solve these issues and to draw a self-consistent picture, see Ref.~\cite{Abazajian:2012ys} for an extensive review on sterile neutrino phenomenology and experimental prospects.
Due to the increasing amount of experimental indications for eV-scale sterile neutrinos, and also due to slightly heavier (keV-scale) sterile neutrinos being viable candidates for Dark Matter if a suitable production mechanism is used~\cite{Dodelson:1993je,Canetti:2012kh,Shi:1998km,Bezrukov:2009th,Nemevsek:2012cd,King:2012wg,Shaposhnikov:2006xi,Bezrukov:2009yw,Kusenko:2006rh,Petraki:2007gq,Merle:2013wta}, the problem of explaining very light sterile neutrinos has attracted the attention of model builders, see Ref.~\cite{Merle:2013gea} for a recent review.
The basic problem is two-fold:
\begin{enumerate}
\item One has to come up with an explanation for the mass of at least one sterile neutrino being very small (and being protected against radiative corrections), compared to the ``natural'' mass scale for right-handed neutrinos which is thought to be very high (around the scale of grand unification).
\item In addition, one needs to explain the active-sterile mixing $\theta_{i4}$. Depending on the case, this mixing would either need to be sizable, of $\theta_{i4} \sim \mathcal{O}(0.1)$, for eV-sterile neutrinos~\cite{Kopp:2011qd,Kopp:2013vaa} or it should be really tiny, at most of $\theta_{i4} \sim \mathcal{O}(10^{-4})$, for keV-sterile neutrinos~\cite{Watson:2006qb,Abazajian:2001vt,Abazajian:2006jc,Boyarsky:2005us,Dolgov:2000ew,Boyarsky:2006fg,RiemerSorensen:2006fh,Abazajian:2006yn,Boyarsky:2006ag,Boyarsky:2007ge,Loewenstein:2008yi,Watson:2011dw,Loewenstein:2012px}.
\end{enumerate}
Both these requirements are not easy to achieve. Nevertheless, many models have been proposed to solve these problems. A rough classification among the known models distinguishes whether a model attempts to find a unified explanation for both problems, or whether the mechanism to generate a light sterile neutrino mass and the generation of the mixing pattern are separate ingredients. Naturally the former ansatz tends to be much more constrained but, on the other hand, its benefit is that it is more predictive. Most of the mechanisms to explain light sterile neutrino masses either rely on the principle of suppressing one (or more) sterile neutrino mass eigenvalues or on forcing the natural mass of one sterile neutrino to be zero which is then lifted to a finite but small value by some correction (e.g., by sub-leading terms arising from symmetry breaking).
Models which attempt a simultaneous solution of the light mass problem and of the active-sterile mixing are typically based on flavour symmetries. Known examples include a non-standard $L_e - L_\mu - L_\tau$ lepton number~\cite{Mohapatra:2001ns,Shaposhnikov:2006nn,Lindner:2010wr} or a $Q_6$ symmetry~\cite{Araki:2011zg}, which both force the lightest sterile neutrino to be exactly massless in the symmetry limit but generate a small non-zero mass once the symmetry is broken. Alternatively, a certain mechanism could be used to suppress masses and mixings at the same time, and proposals include the use of the Froggatt-Nielsen mechanism~\cite{Froggatt:1978nt} to explain light sterile neutrinos~\cite{Merle:2011yv} as well as the use of exponential suppressions arising from extra spatial dimensions~\cite{Kusenko:2010ik,Takahashi:2013eva}. Both these proposals have the nice feature that the low energy seesaw mechanism is guaranteed to work; however, they also have the disadvantage that no exact mixing angles can be predicted. Another approach is the use of intermediate scales, which can arise in several extensions of the seesaw mechanism~\cite{Barry:2011wb,Zhang:2011vh,Dev:2012bd}. In general, the most flexible scenarios combine a mass suppression mechanism with a flavour symmetry motivating the mixing, as done for example in the models which use an $A_4$ symmetry in settings where the sterile neutrino mass is suppressed by the Froggatt-Nielsen~\cite{Barry:2011fp,Barry:2011wb}, split seesaw~\cite{Adulpravitchai:2011rq}, or extended seesaw~\cite{Zhang:2011vh} mechanisms. Most of the known models fall into one of the above categories~\cite{Babu:2004mj,Sayre:2005yh,Dias:2005yh,Dinh:2006ia,Cogollo:2009yi,Ma:2009gu,Dias:2010vt,Merle:2012ya,Mavromatos:2012cc,Allison:2012qn,Heeck:2012bz}, although a notable exception exists in which light Dirac-type sterile neutrinos are motivated as composite states~\cite{Grossman:2010iq,Robinson:2012wu}.
In general, it is interesting to ask the question if there are other ways to connect the active and sterile neutrino sectors. We will assume in this paper that some mechanism is at work to explain one very light sterile neutrino -- however, we would like to stress that it is not of great relevance which of the known or yet to be found mechanisms does this job. We then show how the mixings in both active and sterile sectors can be tightly connected in a very simple framework. In particular, the situation considered will allow for predictions in neutrino oscillation experiments which are testable in the very near future.
If the evidence for sterile neutrinos at the eV-scale is to be taken seriously, global solutions typically point towards
\begin{equation}
U_{e3}\simeq U_{e4} \sim \lambda_C \, ,
\end{equation}
where $U$ is the $4\times 4$ unitary matrix that diagonalises the $4\times 4$ neutrino mass matrix and $\lambda_C\approx 0.2$. This means that the active-sterile and reactor neutrino mixing angles will be of the same order of magnitude. It is therefore suggestive to investigate scenarios where the active $3 \times 3$ sub-sector of the neutrino mass matrix enforces $\theta_{13}=0$ by a symmetry structure, such as tri-bimaximal (TBM) mixing~\cite{Harrison:2002er} or the $\mu-\tau$ symmetric case~\cite{Fukuyama:1997ky,Mohapatra:1998ka,Ma:2001mr,Lam:2001fb}. For these scenarios well-known flavour symmetry models exist, such as Refs.~\cite{Babu:2002dz,Grimus:2003kq} for the $\mu-\tau$ exchange symmetry case and Refs.~\cite{Ma:2004zv,Altarelli:2005yp,Altarelli:2005yx,Babu:2005se,deMedeirosVarzielas:2006fc} for TBM. By the addition of a sterile species, the mass matrix will be modified and both active-sterile and reactor mixings may be generated. In flavour symmetry models, however, this option turns out to be not that straightforward: the vacuum alignment of the flavon vacuum expectation values (VEVs) prohibits the direct generation of a non-zero $\theta_{13}$, see Refs.~\cite{Barry:2011wb,Barry:2011fp}. We therefore split the problem into two pieces: we first study the requirements for the vacuum alignment in a generic way to produce both active-sterile and reactor mixings of similar magnitudes. Then we discuss the model requirements and how these restrict our generic findings.
We notice that TBM is a special case of the $\mu-\tau$ symmetric case where the solar angle is not free but \emph{trimaximal}, i.e., $\sin^2\theta_{\rm sol}=1/3$. Since we are interested in studying a possible new origin for the reactor angle independently of the particular value of the solar angle, we consider the general class of $\mu-\tau$ symmetric neutrino mass matrices in this paper. In principle our results could be applied to the subclass of TBM models as well.
The paper is organised as follows: in Sec.~\ref{sec:method}, we describe our general method and set the stage for the remainder of the paper. Then, in Sec.~\ref{sec:pheno}, we discuss at length our results and their phenomenological consequences. We indicate in Sec.~\ref{sec:theory} how the results can be obtained and sharpened in concrete models, but the discussion of the mathematical details of the models is postponed to Sec.~\ref{sec:models}. We finally conclude in Sec.~\ref{sec:conc}.
\section{\label{sec:method}Method}
Let us consider the $3\times 3$ generic $\mu-\tau$ invariant Majorana neutrino mass matrix given in~\cite{Lam:2001fb},
\begin{equation}\label{eq2}
M_{\mu-\tau} =
\left(
\begin{array}{ccc}
A&B&B\\
B&C &D\\
B&D&C
\end{array}
\right),
\end{equation}
where $A ,B, C,D$ are free parameters. Such a matrix is diagonalised by the orthogonal matrix
\begin{equation}\label{eq3}
O=
\left(
\begin{array}{ccc}
-c_{12}&s_{12}&0\\
\frac{s_{12}}{\sqrt{2}}& \frac{c_{12}}{\sqrt{2}}&-\frac{1}{\sqrt{2}} \\
\frac{s_{12}}{\sqrt{2}}& \frac{c_{12}}{\sqrt{2}}&\frac{1}{\sqrt{2}}
\end{array}
\right),
\end{equation}
which has the eigenvector $(0,-1/\sqrt{2},1/\sqrt{2})^T$. This leads to a zero reactor angle and a maximal atmospheric angle, while the solar angle is a function of the parameters $A, B, C, D$.
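As a quick numerical cross-check (plain Python, with arbitrary sample values for $A,B,C,D$ chosen by us for illustration), one can verify that $(0,-1/\sqrt{2},1/\sqrt{2})^T$ is an eigenvector of $M_{\mu-\tau}$ with eigenvalue $C-D$ regardless of the parameter values, which is precisely what enforces $\theta_{13}=0$ and a maximal $\theta_{23}$:

```python
import math

# Arbitrary, illustrative values for the free parameters
A, B, C, D = 0.7, 0.3, 0.5, 0.2

M = [[A, B, B],
     [B, C, D],
     [B, D, C]]

# Candidate eigenvector (0, -1/sqrt(2), 1/sqrt(2))^T
v = [0.0, -1.0 / math.sqrt(2), 1.0 / math.sqrt(2)]

# Matrix-vector product M v
Mv = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Check M v = (C - D) v, independently of A and B
lam = C - D
assert all(abs(Mv[i] - lam * v[i]) < 1e-12 for i in range(3))
print("eigenvalue:", lam)
```

Changing $A$ and $B$ leaves this eigenvector untouched; only the other two eigenvectors (and hence the solar angle) depend on them.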
We assume only one sterile neutrino $\nu_s$, and therefore the neutrino mass matrix is given by a $4\times 4$ (symmetric) matrix.\footnote{Introducing several sterile neutrinos does not improve the global fits significantly, at least for a 3+2 instead of a 3+1 model. See e.g. Ref.~\cite{Kopp:2013vaa}.} We furthermore assume the following structure (with the charged leptons being diagonal) in the basis $(\nu_e,\nu_\mu,\nu_\tau,\nu_s)$:
\begin{equation}\label{mnu44}
M_\nu^{4\times 4} =
\left(
\begin{array}{c|c}
M_{\mu-\tau}&A\\ \hline
A^T&m_s
\end{array}
\right),
\end{equation}
where $m_s$ is the mass contribution of the sterile neutrino assumed to be of the order of $1\,$eV and $A=(a,b,c)^T$ is a $3\times 1$ vector. The vector $A$ can induce mixing effects of the active neutrinos, as discussed in Ref.~\cite{Smirnov:2006bu}. In the limit $A\to 0$ (or if $A$ is an eigenvector of $M_{\mu-\tau}$) the reactor angle is zero, but otherwise the reactor angle can deviate from zero and this deviation is {\it proportional} to the active-sterile mixing, as we will show. In this framework the active-sterile matrix elements $M_{\nu,i4}^{4\times 4}$ (with $i = e, \mu, \tau$) are the origin of the reactor angle. It is worth adding that a model predicting an ``extended $\mu$-$\tau$ symmetry'' could also affect the active-sterile mixings, leading to $b=c$. These are not the models we consider in this study, as they would necessarily lead to $\theta_{13}=0$. However, we will demonstrate that $|b|=|c|$ with different phases is compatible with our ansatz, see the discussion in Section~\ref{sec:phases}.
Using the neutrino mass matrix Eq.~\eqref{mnu44}, our purpose is two-fold:
\begin{enumerate}
\item[1)] we want to study if any phenomenological consequences or interplay between the active-sterile mixings emerges from that structure and
\item[2)] we want to investigate the structure of the vector $A$ and which consequences it could have for model builders.
\end{enumerate}
Previous models have studied such interplay in the context of TBM (which is a subclass of our framework~\cite{Barry:2011wb,Barry:2011fp}), but our approach is substantially different because the reactor angle originates from the sterile sector \emph{only}, while in~\cite{Barry:2011fp} deviations from TBM together with a sterile neutrino are necessary (in our case, next-to-leading order contributions would \emph{not} be sufficient to generate an acceptable reactor angle).
In this paper, we will embark on a numerical analysis of our considerations, supplemented by some analytical approximations. Indeed, it turns out that many aspects are much easier to see numerically than analytically, which simply originates from the fact that, after all, the diagonalisation of a $4\times 4$ mass matrix does involve some complicated formulae. Nevertheless, as we will see, some global tendencies can be seen analytically and, indeed, our general expectations will be confirmed by the numerics.
In our calculation, we first of all assume a general $4\times 4$ neutrino mass matrix by rotating from the mass into the flavour basis assuming Majorana neutrinos,
\begin{equation}
M_\nu^{4\times 4} = U_{4\times 4}^* {\rm diag}(m_1, m_2, m_3, m_4) U_{4\times 4}^\dagger
\equiv \begin{pmatrix}
m_{e 1} & m_{e 2} & m_{e 3} & m_{e 4}\\
m_{e 2} & m_{\mu 2} & m_{\mu 3} & m_{\mu 4}\\
m_{e 3} & m_{\mu 3} & m_{\tau 3} & m_{\tau 4}\\
m_{e 4} & m_{\mu 4} & m_{\tau 4} & m_{s 4}
\end{pmatrix},
\label{eq:matrix_expl}
\end{equation}
which is of course symmetric.
In principle there are many different parameterisations of $U_{4\times 4}$, see e.g.\ Ref.~\cite{Schechter:1980gr}, since the order of the sub-rotations is arbitrary. Following Refs.~\cite{Donini:2007yf,Donini:2008wz,Adamson:2010wi,Meloni:2010zr,Rodejohann:2011vc}, we choose the parameterisation
\begin{equation}
\label{equ:3+1param1}
U_{4\times 4} =
R_{34}(\theta_{34} ,\, \gamma) \;
R_{24}(\theta_{24} ,\, \beta) \;
R_{14}(\theta_{14} ,\, \alpha) \;
R_{23}(\theta_{23} ,\, \delta_3) \;
R_{13}(\theta_{13} ,\, \delta_2) \;
R_{12}(\theta_{12} ,\, \delta_1) \,.
\end{equation}
In Eq.~\eqref{equ:3+1param1}, $R_{ij}(\theta_{ij},\ \varphi)$ are the complex rotation matrices in the $ij$-plane, defined as:
\begin{equation}
[R_{ij}(\theta_{ij},\ \varphi)]_{pq} = \left\{
\begin{array}{ll} \cos \theta_{ij} & p=q=i,j\ , \\
1 & p=q \not= i,j\ , \\
\sin \theta_{ij} \ e^{-i\varphi} & p=i;q=j\ , \\
-\sin \theta_{ij} \ e^{i\varphi} & p=j;q=i\ , \\
0 & \mathrm{otherwise\ .}
\end{array} \right.
\label{eq:rot}
\end{equation}
This means that $\delta_2$ becomes $\delta_{\mathrm{CP}}$ in the three flavour limit. This parameterisation has the advantage that the standard leptonic mixing matrix has to be recovered in the case of vanishing new mixing angles. Note that the order of the 34-24-14-rotations is arbitrary. We chose the 34-angle as the left-most one, which makes it hardest to observe (it affects only $\nu_\tau$-$\nu_s$-mixing). Changing the order here does not change the fact that one of the rotations is difficult to extract. Even though the Majorana phases $(\alpha, \beta, \gamma)$ are absent in the oscillation parameters, they do play an important role for the structure of the mass matrix itself, and in particular for the correlations between different observables. However, there are nevertheless cases in which they have trivial values as predicted by a certain model under consideration. To cover the general tendencies, we will present most of our results first for $(\alpha, \beta, \gamma) = (0, 0, 0)$, in which case even detailed analytical predictions are possible, and we then generalise to arbitrary $(\alpha, \beta, \gamma)$. As will be visible in our plots, the former case will always be a subset of the latter, as to be expected, which confirms the consistency of our numerical procedure.
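A minimal numerical sketch of this parameterisation (plain Python with 0-based indices; the angle and phase values are purely illustrative, not fit results) builds $U_{4\times 4}$ from the six complex rotations of Eq.~\eqref{eq:rot} and verifies its unitarity:

```python
import cmath

def R(i, j, theta, phi, n=4):
    """Complex rotation in the ij-plane (0-based indices)."""
    M = [[1.0 if p == q else 0.0 for q in range(n)] for p in range(n)]
    M[i][i] = M[j][j] = cmath.cos(theta)
    M[i][j] = cmath.sin(theta) * cmath.exp(-1j * phi)
    M[j][i] = -cmath.sin(theta) * cmath.exp(1j * phi)
    return M

def matmul(X, Y):
    n = len(X)
    return [[sum(X[p][k] * Y[k][q] for k in range(n)) for q in range(n)]
            for p in range(n)]

def dagger(X):
    n = len(X)
    return [[X[q][p].conjugate() for q in range(n)] for p in range(n)]

# Illustrative angles/phases: U = R_34 R_24 R_14 R_23 R_13 R_12
U = R(2, 3, 0.02, 0.0)                       # R_34(theta_34, gamma)
for args in [(1, 3, 0.05, 0.0),              # R_24(theta_24, beta)
             (0, 3, 0.10, 0.0),              # R_14(theta_14, alpha)
             (1, 2, 0.785, 0.0),             # R_23(theta_23, delta_3)
             (0, 2, 0.15, 1.2),              # R_13(theta_13, delta_2)
             (0, 1, 0.59, 0.0)]:             # R_12(theta_12, delta_1)
    U = matmul(U, R(*args))

# Unitarity: U U^dagger = 1
UUd = matmul(U, dagger(U))
assert all(abs(UUd[p][q] - (1 if p == q else 0)) < 1e-12
           for p in range(4) for q in range(4))
```

Setting $\theta_{14}=\theta_{24}=\theta_{34}=0$ reduces the upper-left $3\times 3$ block to the standard PMNS matrix, with the fourth state decoupled, as the text requires.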
We therefore have a total of $4 + 6 + 3 = 13$ real parameters, plus potentially 3 Majorana phases:
\begin{itemize}
\item \emph{Four} masses $(m_1, m_2, m_3, m_4)$, where we assume for simplicity normal ordering ($m_1 < m_2 < m_3$) and the fourth (mainly sterile) mass eigenstate to be the heaviest (``$3+1$ scheme'': $m_{1,2,3} < m_4$). A generalisation to other scenarios is straightforward.
\item \emph{Six} mixing angles, three of them ($\theta_{12,13,23}$) describing the ordinary mixing between active neutrinos and three further angles ($\theta_{14,24,34}$) describing the mixing between active and sterile neutrinos.
\item \emph{Three} Dirac phases $\delta_{1,2,3}$, which describe all the CP violation that is potentially measurable in neutrino oscillation experiments.
\item \emph{If applicable: }\emph{Three} Majorana phases $(\alpha, \beta, \gamma)$, which could only be measured in neutrinoless double beta decay~\cite{Rodejohann:2011mu}.\footnote{We leave a detailed study of the predictions for neutrinoless double beta decay for future work.}
\end{itemize}
These parameters can be easily related to short- and long-baseline neutrino oscillation probabilities, see Ref.~\cite{Meloni:2010zr} for details. To leading order in the small mixing angles, the most relevant short baseline probabilities can be written as:
\begin{align}
\mathcal{P}_{ee} \simeq & 1-\sin^2\left(2 \theta _{14}\right) \, \sin ^2 \Delta_{41} , \label{equ:pee2}\\
\mathcal{P}_{\mu\mu} \simeq & 1- \sin^2\left(2 \theta_{24}\right) \, \sin^2 \Delta_{41} \label{equ:pmm2} , \\
\mathcal{P}_{e\mu} = \mathcal{P}_{\mu e} \simeq & \frac{1}{4} \sin^2 \left( 2 \theta_{14} \right) \, \sin^2 \left( 2 \theta_{24} \right) \, \sin^2 \Delta_{41} \label{equ:pem2} , \\
\mathcal{P}_{e\tau} \simeq & \frac{1}{4} \sin^2 \left( 2 \theta_{14} \right) \, \sin^2 \left( 2 \theta_{34} \right) \, \sin^2 \Delta_{41} , \\
\mathcal{P}_{\mu \tau} \simeq & \frac{1}{4} \sin^2 \left( 2 \theta_{24} \right) \, \sin^2 \left( 2 \theta_{34} \right) \, \sin^2 \Delta_{41} , \label{equ:pmt2}
\end{align}
where $\Delta_{ij} \equiv \Delta m_{ij}^2 L/(4 E)$. Note that the CP violating phases and also the light neutrino mass square differences would show up as corrections to Eqs.~\eqref{equ:pee2} to~\eqref{equ:pmt2} at longer distances. One can easily see in these formulae that, if LSND and MiniBooNE measured the transition in Eq.~\eqref{equ:pem2} which is quadratic in both $\theta_{14}$ and $\theta_{24}$, both electron neutrino disappearance in Eq.~\eqref{equ:pee2} and muon neutrino disappearance in Eq.~\eqref{equ:pmm2} would follow as a consequence. The electron ($\propto \theta_{14}^2$) and muon ($\propto \theta_{24}^2$) neutrino disappearance searches have, so far, not found anything directly, which leads to the well-known tension between appearance and disappearance data. The third mixing angle in our parameterisation, $\theta_{34}$, only enters in $\nu_\tau$ appearance searches, which are much harder to perform because of the high $\tau$ production threshold.
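These leading-order probabilities are straightforward to evaluate numerically. The sketch below (illustrative parameter values only; the factor 1.267 is the usual unit conversion for $\Delta m^2$ in eV$^2$ and $L/E$ in km/GeV) also checks the relation implied by Eqs.~\eqref{equ:pee2}--\eqref{equ:pem2}, namely that the appearance amplitude is fixed by the product of the two disappearance amplitudes:

```python
import math

def sbl_probs(th14, th24, th34, dm41_sq, L_over_E):
    """Leading-order short-baseline oscillation probabilities.
    dm41_sq in eV^2, L_over_E in km/GeV (hence the 1.267 factor)."""
    D = math.sin(1.267 * dm41_sq * L_over_E) ** 2      # sin^2(Delta_41)
    s14, s24, s34 = (math.sin(2 * t) ** 2 for t in (th14, th24, th34))
    return {"ee":    1 - s14 * D,
            "mumu":  1 - s24 * D,
            "emu":   0.25 * s14 * s24 * D,
            "etau":  0.25 * s14 * s34 * D,
            "mutau": 0.25 * s24 * s34 * D}

# Illustrative point: Delta m_41^2 = 1 eV^2, L/E = 1 km/GeV,
# active-sterile angles of order 0.1
P = sbl_probs(0.15, 0.12, 0.05, 1.0, 1.0)
D = math.sin(1.267) ** 2
# nu_e and nu_mu disappearance fix the appearance amplitude:
assert abs(4 * P["emu"] * D - (1 - P["ee"]) * (1 - P["mumu"])) < 1e-12
```

This makes the appearance-disappearance tension explicit: any $\nu_\mu \to \nu_e$ signal at this order forces non-zero $\nu_e$ and $\nu_\mu$ disappearance of the corresponding size.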
In our numerical analysis, we have fixed the lightest neutrino mass to be zero, $m_1 = 0$, and the heaviest one to be $m_4 = 1$~eV as an example (with one exception, where we illustrate the effect of $m_1 \neq 0$). Of course we could vary these masses, which would not spoil our principal results but only blur them. We have fixed the other two neutrino masses by imposing the best-fit values~\cite{GonzalezGarcia:2012sz} for the two mass-square differences, $\Delta m_\odot^2$ and $\Delta m_A^2$. Furthermore, we have set $\theta_{12}$ to its best-fit value and we have also set $\theta_{23} = \pi/4$ in order to ensure that the breaking of the $\mu$--$\tau$ symmetry indeed arises from the three parameters $(a, b, c) = (m_{e 4}, m_{\mu 4}, m_{\tau 4})$.\footnote{Note that, alternatively, we could have left $\theta_{23}$ to be a free parameter. Indeed, the contributions from the sterile neutrinos can pull that angle away from its maximal value. However, since the active-sterile mixing angles considered are small after all, it turns out that the resulting interval for $\theta_{23}$ would nevertheless be centered around the maximal value. In particular, the corrections from the sterile sector are \emph{not} large enough to pull this angle to one of its two best-fit values~\cite{GonzalezGarcia:2012sz} in the first or second octant, thus comprising an indirect \emph{signature} of our setting. The correlations shown in our plots would, had we left $\theta_{23}$ free, of course loosen but they would not be wiped out. Thus, for the clarity of the plots and to illustrate the global tendencies of the setting under consideration rather than the influence of experimental uncertainties, we have decided to stick to the choice of $\theta_{23} = \pi/4$. 
An example of the effect of letting $\theta_{23}$ vary will nevertheless be shown in Fig.~\ref{fig:13-generation1}, upper right panel.} We then generated random values for the parameters $\theta_{13}$ (linear distribution within the $3\sigma$ range of $\sin \theta_{13}$), $\theta_{24}$ (log-scale distribution within $[10^{-5}, 10^{-0.75}]$), and $\delta_2$ (linear distribution within $[0, 2\pi]$). In cases where the Majorana phases $(\alpha, \beta, \gamma)$ have been varied, too, we have also generated random values for each of them, following a linear distribution within $[0, 2\pi]$.
The next step is to impose $\mu$--$\tau$ symmetry onto the upper left $3 \times 3$ block of the full mass matrix $M_\nu^{4\times 4}$ by requiring the two complex equations $m_{e 2} = - m_{e 3}$ and $m_{\mu 2} = m_{\tau 3}$ to hold and solving them for the remaining parameters $\theta_{14,34}$ and $\delta_{1,2}$. By this procedure, we have obtained a set of $100,000$ points\footnote{Note that, in the actual plots presented, we show for each region only subsets of the data with a few thousand points each. We have checked that the plots would look practically identical when including all the data so that, had we included all of them, only the file size of the plots would be increased without any significant gain.} which all fulfill the criteria of leading to mass matrices with the desired form of the upper left $3 \times 3$ block and which are phenomenologically valid except for, maybe, their value for $\theta_{13}$, which is exactly what we would like to investigate.
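The procedure just described can be sketched as follows (self-contained plain Python; the parameter values are illustrative and the helper names are ours, not from the actual scan code): the mass matrix is built as $M_\nu^{4\times 4} = U^* \,{\rm diag}(m_i)\, U^\dagger$ and the two complex $\mu$--$\tau$ conditions are monitored as residuals, which the numerical solver drives to zero by adjusting $\theta_{14,34}$ and $\delta_{1,2}$:

```python
import cmath, math

def R(i, j, th, phi):
    M = [[1.0 if p == q else 0.0 for q in range(4)] for p in range(4)]
    M[i][i] = M[j][j] = cmath.cos(th)
    M[i][j] = cmath.sin(th) * cmath.exp(-1j * phi)
    M[j][i] = -cmath.sin(th) * cmath.exp(1j * phi)
    return M

def matmul(X, Y):
    return [[sum(X[p][k] * Y[k][q] for k in range(4)) for q in range(4)]
            for p in range(4)]

def mass_matrix(angles, phases, masses):
    th12, th13, th23, th14, th24, th34 = angles
    d1, d2, d3, al, be, ga = phases
    U = R(2, 3, th34, ga)
    for args in [(1, 3, th24, be), (0, 3, th14, al),
                 (1, 2, th23, d3), (0, 2, th13, d2), (0, 1, th12, d1)]:
        U = matmul(U, R(*args))
    Ustar = [[u.conjugate() for u in row] for row in U]
    Udag = [[U[q][p].conjugate() for q in range(4)] for p in range(4)]
    D = [[masses[p] if p == q else 0.0 for q in range(4)] for p in range(4)]
    return matmul(matmul(Ustar, D), Udag)

def mutau_residuals(M):
    # |m_e2 + m_e3| and |m_mu2 - m_tau3|: both must vanish
    return abs(M[0][1] + M[0][2]), abs(M[1][1] - M[2][2])

masses = (0.0, 0.0087, 0.0495, 1.0)        # eV; illustrative 3+1 spectrum
# theta_13 = theta_i4 = 0 (i.e. A = 0): mu-tau symmetry is exact
r0 = mutau_residuals(mass_matrix((0.59, 0.0, math.pi / 4, 0.0, 0.0, 0.0),
                                 (0.0,) * 6, masses))
assert max(r0) < 1e-12
# Generic theta_13 and active-sterile angles break it; the scan then
# solves for theta_14, theta_34, delta_1, delta_2 to restore it
r1 = mutau_residuals(mass_matrix((0.59, 0.15, math.pi / 4, 0.10, 0.05, 0.02),
                                 (0.0, 1.2, 0.0, 0.0, 0.0, 0.0), masses))
assert max(r1) > 1e-6
```

In the actual analysis the residual equations are solved exactly for each random draw of the remaining parameters, rather than merely checked as done here.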
We furthermore have done a similar procedure for two concrete alignments, both of which we will motivate in Sec.~\ref{sec:models} in concrete models. For now, we only observe that for concrete models e.g.\ the family symmetries $A_4$ and $D_4$ can be used. The vector $(a,b,c)$ so far considered can transform as a triplet under $A_4$, or as a singlet plus a doublet in the $D_4$ case. Thus, in concrete models, the vector $(a,b,c)$ is not arbitrary but it is given by the minimisation of the scalar potential invariant under the flavour symmetry of the particular setting considered. Typically, in $A_4$ the following sets of triplet VEV alignments have been studied:
\begin{equation}
\langle (a,b,c) \rangle \sim (a,0,0)\,, \quad a(1,1,1)\,,\quad (a,b,b^*)\,,\quad a(1,4,2)\,,
\label{eq:align_1}
\end{equation}
where the first two alignments are the ones used for TBM, see for instance~\cite{Altarelli:2005yp}, the third alignment is motivated by certain models based on discrete symmetries~\cite{Lavoura:2007dw}, and the fourth one is phenomenologically motivated in~\cite{King:2013iva,King:2013xba}. In the same way in models based on a $D_4$ family symmetry we can have different possibilities for the VEV alignments of the doublet, namely
\begin{equation}
\langle (b,c) \rangle \sim (b,0)\,, \quad b(1,1)\,,\quad (0,c)\,.
\label{eq:align_2}
\end{equation}
We consider two example alignments: the first one, $(a, b, c) = (a, b, b^*)$, is motivated by an $A_4$ flavour symmetry, while the second one, $(a, b, c) = (a, 0, c)$, can be obtained in models based on $D_4$. From now on, the only important point for us in what concerns phenomenology is that each of these alignments imposes one more complex equation ($c=b^*$ and $b=0$, respectively), which we can use to eliminate two real parameters. Thus, in the generation of numerical mass matrices which fulfill one of the two alignments, we have only generated random values for $\theta_{13}$ and for the Majorana phases if applicable (as in the general case), but numerically solved for $\theta_{24}$ and $\delta_2$.
In the plots, we have also indicated certain bounds and/or experimentally favoured regions for light sterile neutrinos. However, we want to stress that -- at the moment -- not all the data sets stemming from different experiments seem to fit together, see Ref.~\cite{Palazzo:2013me} for a concise discussion. Thus, the best we can do is to show some example bounds and let future experiments decide which of them, if any, are correct. We have therefore extracted three different bounds from Ref.~\cite{Kopp:2013vaa}, where we have in each case used the active-sterile mixing angle regions obtained for $\Delta m_{41}^2 = 1~{\rm eV}^2$.\footnote{For the one case we show where $m_1 = 0.05$~eV instead of zero, this would strictly speaking require a largest mass of $m_4 = 1.00125$~eV, which however is so close to $1$~eV that we have neglected this tiny difference.}
The chosen regions are:
\begin{itemize}
\item \emph{all $\nu_e$ disappearance reactor and solar data} (light green region in our plots; see Fig.~2 in~\cite{Kopp:2013vaa}): $8.24\cdot 10^{-3} \leq |U_{e4}|^2 \leq 1.94\cdot 10^{-2}$, where $U_{e4} = \sin \theta_{14}$,
\item \emph{null results combined from atmospheric and short/long baseline accelerator experiments} (region below the thick orange line in our plots; see Fig.~4a in~\cite{Kopp:2013vaa}): $|U_{\mu 4}|^2 \leq 2.74\cdot 10^{-2}$, where $U_{\mu 4} = \cos \theta_{14} \sin \theta_{24}$,
\item \emph{combined results from $\nu_\mu \to \nu_e$ and $\bar{\nu}_\mu \to \bar{\nu}_e$ appearance data} (light purple region in our plots; see Fig.~7 in~\cite{Kopp:2013vaa}): $2.40\cdot 10^{-3} \leq \sin^2 (2 \theta_{e \mu}) \leq 4.20\cdot 10^{-3}$, where $\sin^2 (2 \theta_{e \mu}) = 4 |U_{e4}|^2 |U_{\mu 4}|^2$.
\end{itemize}
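As a rough numerical illustration, the three example regions above can be encoded as simple cuts on the active-sterile mixing angles. The following sketch is our own (the function name and interface are illustrative and not part of any official fit code); it only translates the quoted $\Delta m_{41}^2 = 1~{\rm eV}^2$ intervals into Python:

```python
import math

def sterile_bounds(theta14, theta24):
    """Check a (theta14, theta24) point (radians) against the three
    example regions quoted above for Delta m^2_41 = 1 eV^2."""
    Ue4_sq = math.sin(theta14) ** 2
    Um4_sq = (math.cos(theta14) * math.sin(theta24)) ** 2
    sin2_2tem = 4.0 * Ue4_sq * Um4_sq  # effective appearance amplitude
    return {
        "nu_e_disappearance": 8.24e-3 <= Ue4_sq <= 1.94e-2,
        "null_results": Um4_sq <= 2.74e-2,
        "combined_appearance": 2.40e-3 <= sin2_2tem <= 4.20e-3,
    }

# The benchmark point sin(theta14) ~ 0.10, sin(theta24) ~ 0.05
# discussed in the text:
print(sterile_bounds(math.asin(0.10), math.asin(0.05)))
```

For this benchmark point, the $\nu_e$ disappearance and null-result cuts are satisfied, while the combined appearance region is not, matching the discussion of the gray points below.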
As we have already pointed out, the different data sets available do not seem to fit together at the moment. Accordingly, the setting discussed here cannot be consistent with all of them simultaneously, and one should keep in mind that the bounds and favoured regions presented are example data sets and not fully representative. However, from the current perspective it seems plausible that one of them will survive future experimental tests, that the tension will be resolved by the discovery of an unknown systematic error in one type of experiment, or even that more than one type of light sterile neutrino will be discovered. No matter which data set the reader favours, our general finding remains correct: it is possible to generate a sizable reactor angle from sterile neutrino contributions to the light neutrino mass matrix.
\section{\label{sec:pheno}Experimental consequences: \\ what phenomenologists are interested in}
We will now discuss our numerical results for the mixing angles. For the moment, let us focus on the general case of a $4\times 4$ light neutrino Majorana mass matrix with a $\mu - \tau$ symmetric upper left $3\times 3$ block, which corresponds to the \emph{light gray} points in all plots. The specific alignments (\emph{red} and \emph{blue} points) will be discussed later on.
\begin{figure}[tp]
\centering
\begin{tabular}{lr}
\includegraphics[width=7cm]{t14_vs_t24_with_t13_LOG_OLD.png} &
\includegraphics[width=7cm]{t14_vs_t24_with_t13_LOG_23varied_OLD.png}\\
\includegraphics[width=7cm]{t14_vs_t24_with_t13_LOG.png} &
\includegraphics[width=7cm]{t14_vs_t24_with_t13_LOG_m005.png}
\end{tabular}
\caption{\label{fig:13-generation1}
Allowed region (gray points) in $\theta_{14}$ and $\theta_{24}$, for $\theta_{13}$ being within its $3\sigma$ range~\cite{GonzalezGarcia:2012sz}. The different panels correspond to zero Majorana phases, $m_1=0$, and $\theta_{23} = \pi/4$ (upper left), zero Majorana phases, $m_1=0$, and $\theta_{23} \in 3\sigma$ (upper right), complex Majorana phases, $m_1=0$, and $\theta_{23} = \pi/4$ (lower left), and zero Majorana phases, $m_1=0.05 \, \mathrm{eV}$, and $\theta_{23} = \pi/4$ (lower right).
In addition, some example experimental constraints are displayed~\cite{Kopp:2013vaa} (``$\nu_e$ disapp.'' for the region compatible with $\nu_e$ disappearance data, ``Null res.\ (upper)'' for the upper limit from $\nu_\mu$ disappearance, ``Comb.'' for the region compatible with combined short-baseline appearance data; see text for details).
The plots imply that a large $\theta_{14}$ generates a large $\theta_{13}$ of the same order, while $\theta_{24}$ can take essentially any value. As visible from the two examples shown, choosing a certain alignment allows one to select narrow regions within the general correlated parameter space. For example, one can fix $\theta_{24}$ to be relatively large [$A_4$-like alignment $(a, b, c) = (a, b, b^*)$: red points] or relatively small [$D_4$-like alignment $(a, b, c) = (a, 0, c)$: blue points]. Thus a concrete model can give very clear predictions for the observables.
}
\end{figure}
\subsection{General case for alignments}
Let us now discuss the correlations which appear -- first for $m_1 = 0$, $(\alpha, \beta, \gamma) = (0,0,0)$, and fixed $\theta_{23}$. In the upper left panel of Fig.~\ref{fig:13-generation1}, we present the correlation between $\sin \theta_{14}$ and $\sin \theta_{24}$, where we have selected the gray points from the lists of numerical mass matrices generated by requiring that $\sin \theta_{13}$ lies within its experimental $3\sigma$ interval~\cite{GonzalezGarcia:2012sz}. The result is a clear correlation between the two active-sterile mixing angles. Indeed, one can see that a sizable (within the $3\sigma$ range) reactor angle $\theta_{13}$ also implies a large mixing angle $\theta_{14}$, i.e., $\sin \theta_{14}\approx 0.02$ to $0.4$, while $\sin \theta_{24}$ can essentially assume all values between $0.2$ and zero. This is a clear tendency we have seen in our data: indeed, had we also included smaller (unphysical) values of $\theta_{13}$ in the plot, we would have seen that $\theta_{14}$ is always of the same order as $\theta_{13}$, while $\theta_{24}$ is in general not strongly constrained. Looking closer, we can see that there exist in fact two branches of the correlation between $\sin \theta_{14}$ and $\sin \theta_{24}$. Notably, the ``upper'' branch also strongly constrains $\sin \theta_{24}$ to be $\gtrsim 0.03$, so that in a large number of cases that angle is also sizable.
Let us now allow for some more freedom, starting with general Majorana phases $(\alpha, \beta, \gamma)$ while we still keep $m_1 = 0$ and $\theta_{23} = \pi/4$. Throughout the paper, this case will be presented \emph{below} the corresponding plot with $(\alpha, \beta, \gamma) = (0,0,0)$, so that we should now look at the lower left panel of Fig.~\ref{fig:13-generation1}. As can be seen from the gray points, the two distinct branches of the correlation are now completely indistinguishable.\footnote{Note that, in order not to unnecessarily produce too many unphysical points, we have limited our numerical scan to $|\sin \theta_{i4}| < 0.5$, as can be seen in the plot. This does not present any physical restriction.} What remains is nevertheless the tendency of not having a too small $\theta_{14}$, unless $\theta_{24}$ is very large.
For completeness, we have (only for this correlation) also illustrated the effects of relaxing one of the other two assumptions, i.e., either varying $\theta_{23}$ within its $3\sigma$ interval (upper right panel) or taking $m_1 \neq 0$ (lower right panel). For these two cases we have again chosen $(\alpha, \beta, \gamma) = (0,0,0)$, in order not to lose sight of which relaxation has which effect. Starting with the case where $\theta_{23}$ is allowed to be non-maximal, the principal tendencies are not really changed, but the allowed spread of points is increased. This blurs the correlation to some extent (as is to be expected). Interestingly, it also leads to at least a few general (light gray) points which are consistent with the region favoured by the combined appearance data (marked by the purple strip, as we will explain below), contrary to the points allowed for $\theta_{23}$ taken to be exactly maximal. Thus, allowing $\theta_{23}$ to vary seems to have, at least at first sight, a similar effect as varying the Majorana phases. A less dramatic effect occurs if we instead increase $m_1$ to $0.05$~eV, see the lower right panel. Even though $m_1$ is now considerably different from zero, and in fact $m_1 \sim \sqrt{\Delta m^2_A}$, the qualitative features of the correlation are not destroyed. The two branches are still visible, although not as clearly as for the $m_1 = 0$ case, owing to a slight change in the shape. The only qualitative change is the few gray points on the upper right of the plot, which did not exist for $m_1 = 0$. More dramatic changes will be present for the alignments, as we will see later.
As already mentioned, we have also displayed the favoured regions from all $\nu_e$ disappearance data (green region in the plots labeled by ``$\nu_e$ disapp.'') and from the combined $e$ to $\mu$ appearance data (purple region in the plots labeled by ``Comb.''), as well as the upper bound from all null results combined (orange thick line in the plots labeled by ``Null res.\ (upper)''). As can be seen, our general region is for fixed Majorana phases incompatible with the combined appearance data if $\theta_{23}$ is maximal (which means in particular that it is incompatible with the LSND results, because the bounds from MiniBooNE are not as stringent for $\Delta m^2 = 1~{\rm eV}^2$). However, the points are easily compatible with all null results (only a very marginal region at the top of the region of interest is cut away by that bound) and also the $\nu_e$ disappearance data can be fitted if $\sin \theta_{14} \sim 0.10$ and $\sin \theta_{24} \sim 0.05$. If $\theta_{23}$ is varied (upper right panel) or if the Majorana phases are varied (lower left panel), however, there exist at least a few points consistent with the combined appearance region.
Going to Fig.~\ref{fig:2434-1314p}, the correlation between $\sin \theta_{14}$ ($\sin \theta_{24}$) and $\sin \theta_{34}$ is displayed on the left (right) panels. Starting with the left panel and $(\alpha, \beta, \gamma) = (0,0,0)$, it is visible that $\theta_{34}$ is not constrained by the current data, as this angle would correspond to $\nu_\tau \to \nu_s$ transitions which are hardly accessible experimentally. This is why the only favoured region displayed is the green band stemming from the $\nu_e$ disappearance data. The null results do not constrain $\sin \theta_{14}$, and the combined appearance data would not exclude any of the gray points in the $\sin \theta_{14}$--$\sin \theta_{34}$ plane, which is why we have decided not to plot it here. Similarly to the previous case, a clear correlation between $\sin \theta_{14}$ and $\sin \theta_{34}$ is found, again consisting of two distinct branches. However, the difference compared to $\sin \theta_{24}$ is that $\sin \theta_{34}$ (and thus $\theta_{34}$) cannot be arbitrarily small in any branch but is bound to lie between roughly $0.2$ and $0.03$ (upper branch) or $0.003$ (lower branch). If the $\nu_e$ disappearance data is to be reproduced, we are forced to have $\sin \theta_{34} \sim 0.05$ (and again $\sin \theta_{14} \sim 0.10$). In the upper right panel, the remaining combination of angles (the correlation between $\sin \theta_{24}$ and $\sin \theta_{34}$) is displayed, which is perfectly consistent with the previous two correlations (one can even make out the correspondences between the different branches). This panel is less convenient in what concerns the experimental bounds, since the upper bound from the null results only appears as a straight line, due to the missing dependence on $\theta_{14}$ in this plot. However, in the region of interest, this does not make a significant difference.
Varying the Majorana phases (lower two panels of Fig.~\ref{fig:2434-1314p}), the correlations are considerably broadened. In particular, it is not possible anymore to distinguish the different branches. Furthermore, also very small values for $\sin \theta_{34}$ are possible in this case. However, it remains true that $\sin \theta_{34}$ and $\sin \theta_{14}$ (or $\sin \theta_{34}$ and $\sin \theta_{24}$) cannot simultaneously be small. This fact can be understood analytically, as we will see later on.
\begin{figure}[tp]
\centering
\begin{tabular}{lr}
\includegraphics[width=7cm]{t14_vs_t34_with_t13_LOG_OLD.png} &
\includegraphics[width=7cm]{t24_vs_t34_with_t13_LOG_OLD.png}\\
\includegraphics[width=7cm]{t14_vs_t34_with_t13_LOG.png} &
\includegraphics[width=7cm]{t24_vs_t34_with_t13_LOG.png}
\end{tabular}
\caption{\label{fig:2434-1314p}
Correlations between $\sin \theta_{14}$ \& $\sin \theta_{34}$ (left panels) and $\sin \theta_{24}$ \& $\sin \theta_{34}$ (right panels). The Majorana phases are chosen to be zero in the upper row and are varied in the lower row. The allowed regions to describe certain data are shown as well, see caption of Fig.~\ref{fig:13-generation1}. As can be seen from both panels, $\theta_{34}$ does have a certain minimal value, while $\theta_{24}$ could be essentially zero (at least in the upper branch of the correlation shown on the right), consistent with Fig.~\ref{fig:13-generation1}.
}
\end{figure}
\subsection{Specific alignments}
We will now investigate what changes if we choose a certain vacuum alignment, i.e., a particular form of the vector $A = (a, b, c)$. Such relations are not arbitrary assumptions but can be derived within concrete models, as we will illustrate later. However, we choose to present our results first for clarity, so that the effect of the alignments is easy to see; the reader interested in the theoretical details behind the alignments is advised to consult the dedicated Sec.~\ref{sec:models}.
Looking again at the two leftmost panels of Fig.~\ref{fig:13-generation1}, we have displayed the resulting regions for two different alignments, one of which can be motivated by models based on an $A_4$ symmetry [$(a, b, c) = (a, b, b^*)$, cf.\ Eq.~\eqref{eq:align_1}: red points in the plots] and one of which can be derived from $D_4$ models [$(a, b, c) = (a, 0, c)$, cf.\ Eq.~\eqref{eq:align_2}: blue points in the plots]. The effect of the alignments is immediate: they single out very small patches of the general (light gray) region, which in turn leads to a high predictivity of the corresponding models. In the case of vanishing Majorana phases, Figs.~\ref{fig:13-generation1} and~\ref{fig:2434-1314p} together tell us that the first alignment (the one with $c = b^*$) predicts $(\sin \theta_{14}, \sin \theta_{24}, \sin \theta_{34}) \sim (0.03, 0.2, 0.2)$, while the second one (where $b=0$) leads to $(\sin \theta_{14}, \sin \theta_{24}, \sin \theta_{34}) \sim (0.3, 2\cdot 10^{-4}, 0.04)$. Indeed, both alignments are highly predictive, so much so that the $A_4$-like (red) alignment (if $\theta_{23} = \pi/4$) is not only incompatible with both the $\nu_e$ disappearance and the combined appearance results (the latter is not too much of a surprise, given that already the general gray region had been incompatible with this data set), but is even only barely compatible with the not very stringent combined null results. Thus, this alignment could in fact be excluded very soon. The $D_4$-like (blue) alignment is also only compatible with the null results, but here the predicted value of $\sin \theta_{24}$ is so small that a near-future exclusion of that setting seems highly unlikely.
It is worth noting that varying $\theta_{23}$ does not only spread out the generally allowed set of points, but also the regions allowed for a certain alignment, as can be seen from the upper right panel of Fig.~\ref{fig:13-generation1}. While this effect seems very tiny for the $D_4$-like (blue) alignment, the allowed region for the $A_4$-like (red) alignment is considerably increased. In particular, it is now possible to find red points which match the region allowed by the $\nu_e$ disappearance data, even without varying the Majorana phases. This is very good news: it means that the red alignment will in fact remain a valid possibility if the green region persists, because we cannot expect $\theta_{23}$ to be exactly maximal (the global fits even indicate that a non-maximal value is more likely~\cite{GonzalezGarcia:2012sz}; our setting is, however, unable to reach either of the $\theta_{23}$ best-fit points, as both are too far away from $\pi/4$).
Going back to the case where $\theta_{23}$ is taken to be maximal and comparing the upper left panels of Figs.~\ref{fig:13-generation1} and~\ref{fig:2434-1314p}, it is intriguing that $\sin \theta_{24} \simeq \sin \theta_{34}$ holds for the $A_4$-like (red) alignment. The $D_4$-like (blue) alignment, in turn, leads to a very small angle $\theta_{24}$, whereas $\theta_{34}$ is bound to be on the upper branch of the correlation and thus $\sin \theta_{34} \sim 0.03$.
What changes if we allow the Majorana phases to vary? As is to be expected, the regions allowed by the alignments are blown up as well, cf.\ lower left panel of Fig.~\ref{fig:13-generation1} and lower panels of Fig.~\ref{fig:2434-1314p}. The former plot in particular reveals that it is now not only possible to meet the region favoured by the $\nu_e$ disappearance data for both alignments, but the red alignment can even be consistent with the purple combined $e$ to $\mu$ appearance data. Nevertheless, the alignments still trace out clearly distinct patterns within the set of gray points. Furthermore, the alignment regions for $(\alpha, \beta, \gamma) = (0,0,0)$ are clearly contained in the more general alignment regions where the Majorana phases are varied, which again confirms the consistency of our numerics. Not too surprisingly, the alignment regions are also blown up for the other correlations, cf.\ lower panels of Fig.~\ref{fig:2434-1314p}. What is very remarkable, however, is that the red alignment clearly predicts $\sin \theta_{24} \simeq \sin \theta_{34}$, even if the Majorana phases are varied. This strongly indicates a clear prediction of the red alignment, which can indeed be derived analytically, as we will see in the next section.
Finally, the most dramatic change for the alignments happens if we choose $m_1 \neq 0$. While the red alignment is only shifted to slightly larger values of $\sin \theta_{14}$, the blue alignment seems to enforce $\sin \theta_{14} \equiv 1$ according to our numerics, and thus violates our condition $\sin \theta_{14} < 0.5$, which is why it does not appear in the plot. This is clearly unphysical, since a maximal active-sterile mixing angle would have been detected already. This is a good example of the predictivity of alignments: while they do allow for some freedom, forcing the mixing angles to be within their physically tolerable ranges might restrict the neutrino masses such that only a certain mass scale between $0$ and $1$~eV is allowed. Turning it around, if the mass scale is known, an alignment can make concrete predictions for at least the active-sterile mixings.\\
The principal tendency we wanted to reveal is that \emph{non-trivial sterile mixing can generate a non-zero reactor angle $\theta_{13}$}. This can indeed be seen from the plots in Figs.~\ref{fig:13-generation1} and~\ref{fig:2434-1314p}, which for $(\alpha, \beta, \gamma) = (0,0,0)$ clearly demonstrate that both $\theta_{14}$ and $\theta_{34}$ must be large to generate a sizable reactor angle $\theta_{13}$. On the other hand, $\theta_{24}$ could be small or large, depending on the branch of the correlation. If we allow the Majorana phases to vary, then each of the three active-sterile mixing angles can in principle be small, but not all at the same time: at least one active-sterile mixing angle \emph{must} be large in order for a sizable reactor angle $\theta_{13}$ to be generated.
\subsection{Analytical understanding}
Let us try to get some analytical understanding of the behaviour shown in the plots. Using Eqs.~\eqref{mnu44}, \eqref{eq:matrix_expl}, \eqref{equ:3+1param1}, and~\eqref{eq:rot}, together with the approximations $m_{1,2} \simeq 0$, $m_3 \simeq \sqrt{\Delta m_A^2}$, an expansion to first order in $s_{14,24,34}$, and neglecting terms like $\sqrt{\Delta m_A^2} s_{13} s_{i4}$ when compared with terms containing $m_4$ and a smaller number of suppressions yields the following approximations for some of the entries in the neutrino mass matrix:
\begin{eqnarray}
m_{e 2} &\simeq& m_4 e^{-i (\alpha + \beta)} s_{14} s_{24} + \sqrt{\Delta m_A^2} e^{-i(\delta_2 + \delta_3)} s_{13} c_{13} s_{23},\nonumber\\
m_{e 3} &\simeq& m_4 e^{-i (\alpha + \gamma)} s_{14} s_{34} + \sqrt{\Delta m_A^2} e^{-i \delta_2} s_{13} c_{13} c_{23},\nonumber\\
m_{\mu 2} &\simeq& \sqrt{\Delta m_A^2} e^{-2 i \delta_3} c_{13}^2 s_{23}^2,\nonumber\\
m_{\tau 3} &\simeq& \sqrt{\Delta m_A^2} c_{13}^2 c_{23}^2.
\label{eq:12132233}
\end{eqnarray}
As already mentioned in Sec.~\ref{sec:method}, the conditions for a $\mu - \tau$ symmetric upper left $3\times 3$ block are:
\begin{equation}
m_{e 2} = - m_{e 3}\ \ \ {\rm and}\ \ \ m_{\mu 2} = m_{\tau 3}.
\label{eq:mu-tau}
\end{equation}
Applying the latter condition to Eq.~\eqref{eq:12132233}, one obtains $e^{-2 i \delta_3} s_{23}^2 \simeq c_{23}^2$, which immediately implies $\delta_3 \simeq 0$ and
\begin{equation}
\sin \theta_{23} \simeq \cos \theta_{23} \simeq \frac{1}{\sqrt{2}}\ \ \ \Rightarrow\ \ \ \theta_{23} \simeq \frac{\pi}{4}.
\label{eq:23max}
\end{equation}
This confirms that $\theta_{23}$ should be very close to maximal, as we had already mentioned. Then, using the first condition from Eq.~\eqref{eq:mu-tau} and inserting $\delta_3 \simeq 0$ and $s_{23} \simeq c_{23} \simeq 1/\sqrt{2}$, one obtains
\begin{equation}
s_{14} (s_{24} e^{-i \beta} + s_{34} e^{-i \gamma}) \simeq - \frac{\sqrt{2 \Delta m_A^2}}{m_4} e^{-i (\alpha - \delta_2)} s_{13} c_{13} \approx 0.01,
\label{eq:branches}
\end{equation}
where we have in the final step inserted the best-fit values of the remaining oscillation parameters as well as $m_4 = 1$~eV and $\delta_2 \simeq \pi$, the latter being implied for vanishing Majorana phases, $(\alpha, \beta, \gamma) = (0,0,0)$. It is this equation which teaches us quite a bit about the plots presented in Figs.~\ref{fig:13-generation1} and~\ref{fig:2434-1314p}. First of all, as we had anticipated in Sec.~\ref{sec:method}, the equation proves our central point: up to terms of $\mathcal{O}(s_{13}^3)$ arising from the cosine of $\theta_{13}$, it is indeed true that \emph{the reactor mixing is proportional to the active-sterile mixing}.
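The quoted magnitude of about $0.01$ is easy to reproduce; the following one-line check uses representative best-fit inputs of our own choosing ($\Delta m_A^2 \approx 2.4\cdot 10^{-3}~{\rm eV}^2$, $\sin \theta_{13} \approx 0.155$):

```python
import math

# Representative input values (assumed, not taken from one specific fit)
dm2_atm = 2.4e-3   # eV^2, atmospheric mass-squared difference
s13 = 0.155        # sin(theta_13)
m4 = 1.0           # eV, largest mass eigenvalue

c13 = math.sqrt(1.0 - s13 ** 2)
# |RHS| of Eq. (branches): sqrt(2 * Dm_A^2) / m4 * s13 * c13
rhs = math.sqrt(2.0 * dm2_atm) / m4 * s13 * c13
print(f"|RHS| ~ {rhs:.3f}")
```

The result is of order $0.01$, as stated in the text.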
Let us again start with the case of vanishing Majorana phases, $(\alpha, \beta, \gamma) = (0,0,0)$. Then, in particular, the reactor mixing angle $\theta_{13}$ would necessarily be switched off if either $\theta_{14}$ or both $\theta_{24}$ and $\theta_{34}$ were zero. Second, in order for the numerical version of Eq.~\eqref{eq:branches} to hold, $s_{14}$ must be of $\mathcal{O}(0.01)$ or even larger, which is consistent with the limit $\sin \theta_{14}\gtrsim 0.02$ obtained from the plots. Third, only one of $s_{24,34}$ can be small. This fact explains the two branches in Figs.~\ref{fig:13-generation1} and~\ref{fig:2434-1314p}: in the upper left panel of Fig.~\ref{fig:13-generation1} (Fig.~\ref{fig:2434-1314p}), the upper branch is obtained for sizable $s_{24}$ ($s_{34}$), while the lower branch allows for very small values of $s_{24}$ (significantly smaller values of $s_{34}$). The overlap regions of each pair of branches indicate that both angles $s_{24,34}$ are sizable. The differences between $s_{24}$ and $s_{34}$ can be attributed to the sub-leading terms neglected in Eq.~\eqref{eq:12132233}. These considerations basically remain true for general phases $(\alpha, \beta, \gamma)$. The absolute value of the right-hand side of Eq.~\eqref{eq:branches} will be sizable for a non-zero reactor angle $\theta_{13}$ and, while the terms in parentheses on the left-hand side could in principle cancel even for large $\theta_{24} = \theta_{34}$ if $\beta = \gamma + \pi$, they cannot sum up to a large number if all angles are small. Thus, even in the general case, a relatively large $\theta_{13}$ enforces a large $\theta_{14}$ and either $\theta_{24}$ or $\theta_{34}$ to be sizable, too.
We can also get some analytical understanding of the effect of the alignments: again using Eqs.~\eqref{mnu44}, \eqref{eq:matrix_expl}, \eqref{equ:3+1param1}, and~\eqref{eq:rot}, it is easy to see that in the limit $m_{1,2,3} \ll m_4$, one obtains
\begin{eqnarray}
a &\simeq& m_4 e^{-i \alpha} \sin \theta_{14}\cdot \cos \theta_{14} \cos \theta_{24} \cos \theta_{34},\nonumber\\
b &\simeq& m_4 e^{-i \beta} \sin \theta_{24}\cdot \cos^2 \theta_{14} \cos \theta_{24} \cos \theta_{34},\nonumber\\
c &\simeq& m_4 e^{-i \gamma} \sin \theta_{34}\cdot \cos^2 \theta_{14} \cos^2 \theta_{24} \cos \theta_{34}.
\label{eq:bc}
\end{eqnarray}
The $D_4$-like (blue) alignment requires $b=0$ and we thus know that $\cos^2 \theta_{14} \sin (2 \theta_{24}) \cos \theta_{34}\simeq 0$. Furthermore, $\cos \theta_{14}$ and $\cos \theta_{34}$ cannot be zero since $\theta_{14, 34}$ must be somewhat small. This immediately leads to $\sin (2 \theta_{24}) \simeq 0$ and thus requires a very small angle $\theta_{24}$, which is perfectly consistent with our numerical results, cf.\ Fig.~\ref{fig:13-generation1} and right panel of Fig.~\ref{fig:2434-1314p}, even in the general case of arbitrary Majorana phases. Using a similar approximation as in Eq.~\eqref{eq:12132233}, we could alternatively have derived
\begin{equation}
b \simeq m_4 e^{-i \beta} s_{24} - \sqrt{\frac{\Delta m_A^2}{2}} \left[e^{-i (\alpha - \delta_2)} s_{13} c_{13} s_{14} + c_{13}^2 \frac{s_{24} e^{i\beta} + s_{34} e^{i\gamma}}{\sqrt{2}} \right] \stackrel{!}{=} 0,
\label{eq:b-alternative}
\end{equation}
where we have already inserted $s_{23} \simeq c_{23} \simeq 1/\sqrt{2}$. For vanishing $(\alpha, \beta, \gamma)$, which also implies $\delta_2 \simeq \pi$, this equation cannot be solved for $s_{34} \simeq 0$, since the left-hand side would then necessarily be positive. However, in the ``opposite'' limit, $s_{24} \simeq 0$, one can easily find a solution $s_{34} \approx \sqrt{2} \tan \theta_{13} s_{14}$ which, inserting the best-fit value for $\theta_{13}$, implies that $s_{14} \approx 5 s_{34}$. Looking at the upper left panel of Fig.~\ref{fig:2434-1314p}, this relation indeed seems to be approximately fulfilled for the blue alignment. Glancing at the figures with arbitrary Majorana phases, it is visible that the general tendency of avoiding $s_{34} \simeq 0$ again remains true for the blue alignment, although the allowed regions of course open up a little.
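The rough factor of $5$ between $s_{14}$ and $s_{34}$ follows directly from $s_{34} \approx \sqrt{2} \tan \theta_{13}\, s_{14}$; a quick numerical check (with $\sin \theta_{13} \approx 0.155$ as an assumed best-fit input) gives:

```python
import math

s13 = 0.155                           # assumed sin(theta_13) best-fit value
t13 = s13 / math.sqrt(1.0 - s13**2)   # tan(theta_13)
ratio = 1.0 / (math.sqrt(2.0) * t13)  # s14 / s34 from s34 = sqrt(2) tan(t13) s14
print(f"s14 / s34 ~ {ratio:.1f}")     # roughly 4.5, i.e. s14 is about 5 s34
```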
For the $A_4$-like (red) alignment, in turn, $c = b^*$ is enforced, where
\begin{equation}
c \simeq m_4 e^{-i \gamma} s_{34} - \sqrt{\frac{\Delta m_A^2}{2}} \left[e^{-i (\alpha - \delta_2)} s_{13} c_{13} s_{14} + c_{13}^2 \frac{s_{24} e^{i\beta} + s_{34} e^{i\gamma}}{\sqrt{2}} \right] \stackrel{!}{=} 0,
\label{eq:c-alternative}
\end{equation}
which immediately implies that $b - m_4 e^{-i \beta} s_{24} \simeq c - m_4 e^{-i \gamma} s_{34}$. For $(\alpha, \beta, \gamma) = (0,0,0)$, combining Eqs.~\eqref{eq:b-alternative} and~\eqref{eq:c-alternative} results in $\sin \theta_{34} \simeq \tan \theta_{24}$, which is approximately equal to $\sin \theta_{24}$ due to the angle $\theta_{24}$ being small. Thus, this alignment leads to $\sin \theta_{34} \simeq \sin \theta_{24}$, and it is exactly that part of the general region which is numerically predicted by the red alignment, cf.\ upper right panel of Fig.~\ref{fig:2434-1314p}. Furthermore, when using Eq.~\eqref{eq:branches} in addition, one can also see that $s_{14} s_{24,34} \sim 0.005$. In the upper left panel of Fig.~\ref{fig:13-generation1} (Fig.~\ref{fig:2434-1314p}), one can read off $s_{14} \sim 0.03$ and $s_{24} \sim 0.2$ ($s_{34} \sim 0.2$) for the red alignment, which is in good agreement with our analytical estimate. Remarkably, the prediction $\sin \theta_{34} \simeq \sin \theta_{24}$ for the red alignment remains perfectly valid even in the case of non-vanishing $(\alpha, \beta, \gamma)$, cf.\ lower right panel of Fig.~\ref{fig:2434-1314p}. This can be seen most easily by approximating $\sqrt{\Delta m_A^2} \approx 0$ in Eqs.~\eqref{eq:b-alternative} and~\eqref{eq:c-alternative}, which is justified because this quantity is always multiplied by the sines of angles which are not too large. Then, $c = b^*$ and thus $|b| = |c|$ immediately implies $\sin \theta_{34} \simeq \sin \theta_{24}$, which confirms our numerical results.
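For completeness, the relation $\sin \theta_{34} \simeq \tan \theta_{24}$ can also be made explicit at leading order from Eq.~\eqref{eq:bc}: imposing $|b| = |c|$ there gives
\begin{equation}
\sin \theta_{24}\, \cos^2 \theta_{14} \cos \theta_{24} \cos \theta_{34}
= \sin \theta_{34}\, \cos^2 \theta_{14} \cos^2 \theta_{24} \cos \theta_{34}
\quad \Rightarrow \quad \tan \theta_{24} = \sin \theta_{34},
\end{equation}
after cancelling the common non-vanishing cosine factors.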
\section{\label{sec:theory}Results for the mass matrix:\\ what model builders want to know}
The next question to ask is about the concrete connection between the mass matrix entries $a = m_{e4}$, $b = m_{\mu 4}$, and $c = m_{\tau 4}$ and the active-sterile mixing angles. These are the results which are interesting for model builders, because they reveal which alignments, i.e., which ``directions'' of the complex vector $(a,b,c)$, are compatible with the allowed regions in our plots. Again, we will first present our general results, i.e., the elements $(a,b,c)$ are arbitrary as long as the resulting points are experimentally valid, and afterwards we will discuss more specifically how certain alignments, i.e., special choices of $(a,b,c)$ derived within the framework of flavour models, can dramatically sharpen the predictions.\\
\begin{figure}[tp]
\centering
\begin{tabular}{lr}
\includegraphics[width=7cm]{t14_vs_ABSba_with_t13_LOG_OLD.png} &
\includegraphics[width=7cm]{t14_vs_ABSbc_with_t13_LOG_OLD.png}\\
\includegraphics[width=7cm]{t14_vs_ABSba_with_t13_LOG.png} &
\includegraphics[width=7cm]{t14_vs_ABSbc_with_t13_LOG.png}
\end{tabular}
\caption{\label{fig:elements_14}
Correlations between $\sin \theta_{14}$ and the ratios $|b|/|a|$ and $|b|/|c|$ in the left and right columns, respectively. The Majorana phases are chosen to be zero in the upper row, and are varied in the lower row. Again, the connection between the different quantities is very clear for vanishing Majorana phases: a certain value of $|b|/|a|$ corresponds to a very definite value of $\theta_{14}$ and the same is true for $|b|/|c|$, although there exist two different possibilities for that quantity for a given $\theta_{14}$. When varying the phases, some correlation persists for $|b|/|a|$, while it is wiped out completely for $|b|/|c|$.
}
\end{figure}
\subsection{Correlations between observables and absolute values of the alignments}
As examples, we display in Fig.~\ref{fig:elements_14} the correlations between $\sin \theta_{14}$ and $|b|/|a|$ (left panels) and $|b|/|c|$ (right panels).\footnote{Note that we could have chosen $\sin \theta_{24,34}$ instead, but these plots would not add anything significant.} Starting with the correlation of $|b|/|a|$, and assuming vanishing $(\alpha, \beta, \gamma)$ at first, cf.\ upper left panel, it is clearly visible that there is practically a one-to-one correspondence between the value of $|b|/|a|$ and that of $\sin \theta_{14}$. Naturally, $|b|/|a|$ is bound to be non-negative, but it can be nearly zero for large values of $\sin \theta_{14} \gtrsim 0.2$. Lower values of $\sin \theta_{14}$ quickly increase the ratio $|b|/|a|$ to roughly $5$ for the smallest possible value of $\sin \theta_{14} \sim 0.03$. This can again be understood analytically: using Eq.~\eqref{eq:bc} for $\alpha = \beta = 0$, it is easy to see that $|b|/|a| \simeq \sin \theta_{24}/\tan \theta_{14}$. This is clearly reflected in the curve depicted in the upper left panel of Fig.~\ref{fig:elements_14}. Imposing the restriction from the example data sets (which is only the $\nu_e$ disappearance data in this case), one can see that $|b| \sim 0.1 |a|$ or, more generally, $|b| \ll |a|$ is enforced. Allowing for varying Majorana phases (cf.\ lower left panel), the correlation between $|b|/|a|$ and $\sin \theta_{14}$ gets broader, but it is not wiped out. In particular for large values of $|b|$, a rough one-to-one correspondence is still left. However, for very small values of $|b|$, such as enforced by the blue alignment, the allowed range for $\sin \theta_{14}$ becomes quite large.
Looking at the alignments for $(\alpha, \beta, \gamma) = (0,0,0)$, the $A_4$-like (red) alignment corresponds to the upper range of $|b|/|a| \sim 5$, due to $\sin \theta_{14}$ being very close to its lowest predicted value in that case. Note that, in this alignment, a relation like $|b| \gg |a|$ has never been imposed, but it is instead a consequence of the tightness of the parameter space and thus a reflection of the predictivity of the concrete alignment. The $D_4$-like (blue) alignment in turn enforces $b=0$ (and hence trivially $|b|/|a|=0$), which is confirmed by the resulting points and thus comprises a sanity check of our numerical calculations. As before, $\sin \theta_{14}$ is slightly smaller than 0.3 for this alignment. Varying the Majorana phases allows the red alignment to go much further down to lower values of $|b|/|a|$, however, a clear one-to-one correspondence between $|b|/|a|$ and $\sin \theta_{14}$ remains present to some extent. As anticipated for the blue alignment, having $|b| = 0$ opens up many possibilities for $\sin \theta_{14}$, which can now be as small as about $0.04$.
On the right panels, the correlation between $\theta_{14}$ and $|b|/|c|$ is displayed. For vanishing $(\alpha, \beta, \gamma)$, cf.\ upper right panel, it consists of two branches. For very small values of $\theta_{14}$ (close to the lowest value possible, $\sin \theta_{14}\sim 0.03$), both branches meet and enforce $|b| \simeq |c|$. For larger values of $\theta_{14}$, however, the two branches split and enforce $|b|\neq |c|$. For the upper branch, a rough bound of $|c| < |b| \lesssim 2.5 |c|$ is visible, although there are a few points above that boundary. For the lower branch, in turn, there is no limit except for the trivial one, $|b|/|c|\geq 0$. Note that this curve cannot be understood as easily on analytical grounds: Eq.~\eqref{eq:bc} only implies that $|b|/|c| \simeq \tan \theta_{24}/\sin \theta_{34}$, and thus the dependence on $\theta_{14}$ must arise from sub-leading terms. Imposing the $\nu_e$ disappearance data enforces either $|b| \sim 1.25 |c|$ or $|b| \sim 0.8 |c|$. While these tendencies are nicely visible, the lower right panel of Fig.~\ref{fig:elements_14} reveals that the correlation between $\theta_{14}$ and $|b|/|c|$ is practically wiped out for general phases $(\alpha, \beta, \gamma)$. Thus, in this case, getting useful information is only possible for models which predict fixed values of the Majorana phases.
The picture looks similar for the alignments. As already mentioned, in the case of vanishing Majorana phases the $A_4$-like (red) alignment yields quite a small $\theta_{14}$. Here we can see why: $(a, b, c) = (a, b, b^*)$ trivially imposes $|b| = |c|$, and this is only possible for small $\theta_{14}$, as can be seen from the gray points. The $D_4$-like (blue) alignment in turn leads to a pretty large $\theta_{14}$. Also this is clear from this figure: $(a, b, c) = (a, 0, c)$ requires $|b|/|c|=0$, which can only be fulfilled if $\theta_{14}$ is large enough. Both these tendencies get practically wiped out if the phases are allowed to have arbitrary values, in which case the alignments do not give more of a prediction than the trivial ones, i.e., $|b| / |c| = 1$ (red) and $|b| / |c| = 0$ (blue).
\subsection{Correlations between observables and phases of the alignments}
\label{sec:phases}
A further interesting relation could potentially arise between the absolute values and the phases of the matrix elements $(a, b, c)$, which is displayed for $b$ and $c$ as examples in Fig.~\ref{fig:ab_bc1}. Let us again have a look at the upper panels first, for which the Majorana phases are all taken to be zero. Note that, in this figure, the same data set is displayed in two different ways in order to reveal certain features. Let us first look at the upper left panel. Here we plot the quantity ${\rm arg}(b) + {\rm arg}(c)$ versus the ratio $|b|/|c|$. As can be seen, the ball-park of the valid points requires that either ${\rm arg}(b) = - {\rm arg}(c)$ (i.e.\ if plotted in the complex plane and normalised to unit length, the two vectors $b$ and $c$ would transform into each other by a reflection on the real axis) or that $|b|=0$ (in which case the phase of $b$ is not well defined and thus ${\rm arg}(b) = - {\rm arg}(c)$ can be trivially fulfilled). This means in particular that these points \emph{cannot} be obtained by alignments such as $(a, b, c) = (1,4,2) a$~\cite{King:2013iva,King:2013xba}. There are also a few outlier points visible at phases $\pm \pi$. These values are in principle accessible (even though not ``likely'' from the parameter scan), as exemplified by the $D_4$-like (blue) alignment. Note that this alignment again does not enforce ${\rm arg}(c) = \pi$ by itself, but it does so when combined with $\mu - \tau$ symmetry. The $A_4$-like (red) alignment trivially imposes $|b|/|c|=1$, in which case ${\rm arg}(b) + {\rm arg}(c) = 0$ is enforced.
\begin{figure}[tp]
\centering
\begin{tabular}{lr}
\includegraphics[width=7cm]{ARGbplusc_vs_ABSbc_with_t13_LOG_OLD.png} & \includegraphics[width=7cm]{ARGbc_vs_ABSbc_with_t13_LOG_OLD.png}\\
\includegraphics[width=7cm]{ARGbplusc_vs_ABSbc_with_t13_LOG.png} & \includegraphics[width=7cm]{ARGbc_vs_ABSbc_with_t13_LOG.png}
\end{tabular}
\caption{\label{fig:ab_bc1}
Correlation between the complex parameters $b$ and $c$. In order to clearly reveal the different features, the same data are plotted in two different ways in the left and right columns, respectively; see main text for details. The Majorana phases are chosen to be zero in the upper row, and are varied in the lower row.
}
\end{figure}
Now we turn to the upper right panel of Fig.~\ref{fig:ab_bc1}. Here, the same data set is displayed, however, this time as a function of ${\rm arg}(b)/{\rm arg}(c)$ instead of ${\rm arg}(b) + {\rm arg}(c)$. The reason is the following: as we had already mentioned, we need $b \neq c$ in order not to obtain a $4\times 4$ matrix with an extended $\mu - \tau$ symmetry, which would enforce $\theta_{13} \equiv 0$. Thus, if our numerical calculation is sensible, there should be no gray points found at $b = c$ or, equivalently, around the point $(|b|/|c|, {\rm arg}(b)/{\rm arg}(c)) = (1, 1)$. In the left panel, this point could not be displayed properly, since ${\rm arg}(b)={\rm arg}(c)$ would still allow for any value of ${\rm arg}(b) + {\rm arg}(c)$, but in the right panel it is marked in dark yellow. Indeed, although the same two branches of the correlations appear in the figure, no gray points are visible around $(1, 1)$, which is correct since none of them could possibly yield $\sin \theta_{13} = 0$. Note that, while this feature is clearly visible in that plot, the two alignments could not be displayed properly in the right panel: since the $A_4$-like (red) alignment together with the $\mu - \tau$ symmetry would force $b$ to be real, the parameter ${\rm arg}(b)/{\rm arg}(b^*)$ would for those points essentially be a division of two (numerical) zeros in the case of varying Majorana phases. Similarly, for the $D_4$-like (blue) alignment, our numerical calculation would essentially find all kinds of values for ${\rm arg}(b)$, which would be meaningless since $|b|=0$, however, they would mess up the plot on the right panel. Indeed, for the information contained, there seems to be no optimum way to capture all the features in one single plot.
Unfortunately, nearly all these tendencies are again wiped out completely if the Majorana phases are taken to have general values, cf.\ lower panels of Fig.~\ref{fig:ab_bc1}. While some white patches may or may not be visible, there is certainly no correlation left for the gray points. For the red alignment, one can see that, in addition to $|b| = |c|$, trivially ${\rm arg}(b) + {\rm arg}(c) = 0$ (lower left panel) or, equivalently, ${\rm arg}(b)/{\rm arg}(c) = -1$ (lower right panel) holds, as we could have anticipated from $c = b^*$. However, the lower right panel reveals that, for the blue alignment, in addition to the trivial case ${\rm arg}(b) =0$, it could also be that ${\rm arg}(b)/{\rm arg}(c) = -1$. Unfortunately, this does not have any effect as long as $|b| = 0$. The only really solid prediction is that, even for the general case, the point $(1, 1)$ is still avoided by the gray dots. This is not easy to see by eye in the large version of the plot in the lower right panel, but the enlarged region in the inset shows that it is nevertheless correct.
\subsection{Alignments required to reproduce $\nu_e$ disappearance results}
Finally, we would like to ask which alignment $(a,b,c)$ has to be chosen in order to successfully reproduce a certain part of the data. The null results only yield an upper bound, and our general (gray) region is incompatible with the combined appearance data as long as $\theta_{23}$ is taken to be maximal and the Majorana phases are taken to be zero, but it can easily fit the $\nu_e$ disappearance results; it is therefore interesting to see how $(a,b,c)$ have to be chosen for that data to be matched. This is shown in Fig.~\ref{fig:abc_success}, where we plot the absolute real and imaginary parts of $(a, b, c)$ on the left, and some ratios between moduli and arguments on the right.
Starting with the upper left panel, it can be seen that for all points the real parts dominate while the imaginary parts are comparatively small. Furthermore, while ${\rm Re}(a)$ can be positive or negative, ${\rm Re}(b,c)$ are practically always positive. Moreover, there is a clear tendency for $|{\rm Re}(a)|$ to be considerably larger than ${\rm Re}(b,c)$. The latter two are practically of the same size, although a very slight tendency for ${\rm Re}(b) < {\rm Re}(c)$ is visible. Note that, since we display absolute elements of the neutrino mass matrix, all points given carry the unit eV.
Allowing for the Majorana phases to vary reveals the actual correlation, cf.\ lower left panel of Fig.~\ref{fig:abc_success}. While the allowed regions for $b$ and $c$ form crosses that lie on top of each other (in fact, the corresponding points in the upper left panel also lie practically on top of each other, which makes it a bit difficult to distinguish them visually), the points for a suitable entry $a$ lie on a circle around the origin with a radius of roughly $|a| \sim 0.1$~eV. Thus, while $b$ and/or $c$ can in principle be zero, the $e4$ element $a$ of the $4\times 4$ neutrino mass matrix must be non-zero with a well determined absolute value. Glancing at the upper left panel again, it is visible that setting the Majorana phases to zero essentially picks those regions of the circle and of the crosses which intersect (or, rather, are close to) the line with zero imaginary part, as to be expected from Eqs.~\eqref{eq:bc}. In addition, however, points with $b,c > 0$ are much more likely in that case, which is a feature that is non-trivial to understand.
On the upper right panel, in turn, we instead show certain ratios of quantities, namely $|b|/|c|$ vs.\ $|a|/|b|$ (dark yellow points) and ${\rm arg}(b)/{\rm arg}(c)$ vs.\ ${\rm arg}(a)/{\rm arg}(b)$ (purple points), again for vanishing $(\alpha, \beta, \gamma)$. Also here, clear correlations are visible. In particular there is a tendency for the phases of $b$ and $c$ to have different signs, while the phases of $a$ and $b$ always have the same sign. Furthermore, the inset shows a region where ${\rm arg}(a)/{\rm arg}(b) \ll 1$ while ${\rm arg}(b)/{\rm arg}(c) \simeq -0.6$, which simply means that for these points both ${\rm arg}(b,c)$ are very small (i.e., $b$ and $c$ are nearly real), but there is a fixed ratio between the two arguments.
Allowing the Majorana phases to vary, cf.\ lower right panel, again increases the allowed regions considerably. However, at least some general tendencies are visible, namely that $|b|$ should be somewhat small, unless $|a|$ is small, and quite generally most of the points cluster in regions where $(a, b, c)$ have non-identical values which are, however, of the same order of magnitude.\\
Further such tendencies could be read off this plot and, hopefully, they will give an indication to model builders where in the parameter space to look for a prediction that yields an active-sterile mixing in the correct region.
\begin{figure}[tp]
\centering
\begin{tabular}{lr}
\includegraphics[width=7.5cm]{Align_ABSOLUTE_OLD.png} & \includegraphics[width=7cm]{Align_RELATIVE_OLD.png}\\
\includegraphics[width=7.5cm]{Align_ABSOLUTE.png} & \includegraphics[width=7cm]{Align_RELATIVE.png}
\end{tabular}
\caption{\label{fig:abc_success}
Alignment points which can successfully reproduce the $\nu_e$-disappearance data. In the left column, we show the absolute sizes of the real and imaginary parts, where the different colour codings correspond to $a$, $b$, and $c$, respectively. In the right column, we show the ratios among different quantities, where the different colour codings correspond to the absolute values and phases, respectively. The Majorana phases are chosen to be zero in the upper row, and they are varied in the lower row.
}
\end{figure}
\section{\label{sec:models}Ideas for model building}
In the literature there are many interesting examples of models giving a $\mu-\tau$ symmetric neutrino mass matrix in the basis where the charged leptons are diagonal. Here we consider two examples to show how we can apply our results. The first one is based on an $A_4$~\cite{Ma:2001dn,Babu:2002dz} and the second one on a $D_4$~\cite{Grimus:2003kq} flavour symmetry. We briefly describe the main features of both models and we show how to extend them by a sterile neutrino in order to generate the reactor angle.
\subsection{$A_4$ model}
The model from Ref.~\cite{Babu:2002dz} is supersymmetric and it is based on the flavour symmetry $A_4$. This is the finite group of even permutations of four objects. It has three singlet and one triplet irreducible representations, and it is the smallest non-Abelian discrete group featuring triplets.
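The group-theoretic statements above are easy to verify directly. The following minimal Python sketch enumerates the even permutations of four objects, checks closure, and confirms that the squared dimensions of the irreducible representations $\mathbf{1}, \mathbf{1'}, \mathbf{1''}, \mathbf{3}$ add up to the group order:

```python
from itertools import permutations

def parity(p):
    """Parity (+1 even, -1 odd) of a permutation given as a tuple of 0..n-1."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1) ** inv

# A4 = even permutations of four objects.
A4 = [p for p in permutations(range(4)) if parity(p) == +1]
print(len(A4))  # 12 elements

# Closure under composition (finite-group sanity check).
compose = lambda p, q: tuple(p[q[i]] for i in range(4))
assert all(compose(p, q) in A4 for p in A4 for q in A4)

# Irreps 1, 1', 1'', 3: sum of squared dimensions equals the group order.
assert 1**2 + 1**2 + 1**2 + 3**2 == len(A4)
```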
\begin{table}[t!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
&$\hat Q$ & $\hat L$ & $\hat u^c_1, ~\hat d^c_1, ~\hat e^c_1$ &$ \hat u^c_2, ~\hat d^c_2, ~\hat e^c_2 $&$ \hat u^c_3, ~\hat d^c_3, ~\hat e^c_3$ & $\hat \phi_{1,2}$ &$\hat U$&$\hat U^c$ &$\hat D$ &$\hat D^c$&$\hat E$&$\hat E^c$ &$\hat \chi$\\
\hline
$A_4$ & $\mathbf{3}$ & $\mathbf{3}$ & $\mathbf{1}$ &$\mathbf{1'}$ & $\mathbf{1''}$& $\mathbf{1}$ & $\mathbf{3}$ & $\mathbf{3}$ & $\mathbf{3}$ &$\mathbf{3}$ & $\mathbf{3}$& $\mathbf{3}$ & $\mathbf{3}$\\
\hline
$Z_3$ & $1$ & $1$ & $\omega$ &$\omega$ & $\omega$& $1$ & $1$ & $1$ & $1$& $1$&$1$& $1$& $\omega^2$\\
\hline
\end{tabular}
\caption{\label{tabbmv}Matter assignment of the model of Ref.\,\cite{Babu:2002dz}. Note that $\omega^3 = 1$ and $1 + \omega + \omega^2 = 0$.}
\end{center}
\end{table}
The usual quark, lepton, and Higgs superfields transform under $A_4$ as detailed in Tab.~\ref{tabbmv}, where extra heavy $SU(2)$ singlet quark, lepton, and Higgs superfields are also added. The superpotential is given by
\begin{eqnarray}
\hat W &=& M_U \hat U_i \hat U^c_i + f_u \hat Q_i \hat U^c_i \hat \phi_2 + h^u_{ijk} \hat U_i \hat u^c_j \hat \chi_k + M_D \hat D_i \hat D^c_i + f_d \hat Q_i \hat D^c_i \hat \phi_1 + h^d_{ijk} \hat D_i \hat d^c_j \hat \chi_k \nonumber \\
&& + M_E \hat E_i \hat E^c_i + f_e \hat L_i \hat E^c_i \hat \phi_1 + h^e_{ijk} \hat E_i \hat e^c_j \hat \chi_k + \mu \hat \phi_1 \hat \phi_2 \nonumber \\
&& + \frac{1}{2} M_\chi \hat \chi_i \hat \chi_i + h_\chi \hat \chi_1 \hat \chi_2 \hat \chi_3.
\end{eqnarray}
The $Z_3$ auxiliary symmetry is explicitly broken softly by $M_\chi \neq 0$. The scalar potential for the fields $\chi_i$ is given by
\begin{equation}
V = |M_\chi \chi_1 + h_\chi \chi_2 \chi_3|^2 + |M_\chi \chi_2 + h_\chi \chi_3
\chi_1|^2 + |M_\chi \chi_3 + h_\chi \chi_1 \chi_2|^2,
\end{equation}
and from its minimisation we get:
\begin{equation}
\langle \chi_1 \rangle = \langle \chi_2 \rangle = \langle \chi_3 \rangle =
u = -M_\chi/h_\chi.
\end{equation}
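As a quick numerical cross-check, with arbitrary illustrative values for $M_\chi$ and $h_\chi$ (not model parameters), one can confirm that this configuration makes the non-negative potential vanish and is therefore a minimum:

```python
# Illustrative inputs only; any nonzero M_chi, h_chi would do.
M_chi, h_chi = 2.0, 0.5

def V(c1, c2, c3):
    """Scalar potential for the chi fields as quoted in the text."""
    return (abs(M_chi * c1 + h_chi * c2 * c3) ** 2
            + abs(M_chi * c2 + h_chi * c3 * c1) ** 2
            + abs(M_chi * c3 + h_chi * c1 * c2) ** 2)

u = -M_chi / h_chi
# Each term vanishes separately: M_chi*u + h_chi*u^2 = 0.
print(V(u, u, u))
```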
Consider now the $6 \times 6$ Dirac mass matrix linking $(e_i,E_i)$ to
$(e_j^c,E_j^c)$,
\begin{equation}
{\cal M}_{eE} = \left( \begin{array} {c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@
{\quad}c} 0 & 0 & 0 & f_e v_1 & 0 & 0 \\ 0 & 0 & 0 & 0 & f_e v_1 & 0 \\
0 & 0 & 0 & 0 & 0 & f_e v_1 \\ h_1^e u & h_2^e u & h_3^e u & M_E & 0 & 0 \\
h_1^e u & h_2^e \omega u & h_3^e \omega^2 u & 0 & M_E & 0 \\
h_1^e u & h_2^e \omega^2 u & h_3^e \omega u & 0 & 0 & M_E \end{array} \right),
\end{equation}
where $v_1 = \langle \phi_1^0 \rangle$. The quark mass matrices look similar. The reduced $3 \times 3$ charged lepton mass matrix is
\begin{equation}
{\cal M}_e = U_L \left( \begin{array} {c@{\quad}c@{\quad}c} {h_1^e}' & 0 & 0
\\ 0 & {h_2^e}' & 0 \\ 0 & 0 & {h_3^e}' \end{array} \right) \frac{\sqrt{3} f_e v_1 u}{M_E},
\end{equation}
where ${h_i^e}' \equiv h_i^e [1+(h_i^e u)^2/M_E^2]^{-1/2}$ and
\begin{equation}
U_L = \frac{1}{\sqrt{3}} \left( \begin{array} {c@{\quad}c@{\quad}c} 1 & 1 & 1
\\ 1 & \omega & \omega^2 \\ 1 & \omega^2 & \omega \end{array} \right).
\end{equation}
Clearly, the \emph{up} and \emph{down} quark mass matrices are obtained in the same way and are both diagonalised by $U_L$, so that the charged-current mixing Cabibbo-Kobayashi-Maskawa (CKM) matrix $V_{\rm CKM}$ is the identity matrix. The small measured CKM angles may be generated from corrections associated with the structure of the soft supersymmetry breaking sector to make the model viable.
In this model the neutrino masses arise from the dimension-5 Weinberg operator,
\begin{equation}
\frac{f_\nu}{\Lambda} \hat L_i \hat \phi_2 \hat L_i \hat \phi_2.
\end{equation}
The effective Majorana neutrino mass matrix in the basis where the charged lepton mass matrix is diagonal is given by
\begin{equation}\label{mnuA4p}
{\cal M}_\nu = \frac{f_\nu v_2^2}{\Lambda} U_L^T U_L = \frac{f_\nu v_2^2}{\Lambda}
\left( \begin{array} {c@{\quad}c@{\quad}c} 1 & 0 & 0 \\ 0 & 0 & 1 \\
0 & 1 & 0 \end{array} \right)
\equiv \frac{f_\nu v_2^2}{\Lambda} \lambda^0,
\end{equation}
giving (at this stage) a maximal atmospheric mixing angle but degenerate light neutrino masses.
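Eq.~\eqref{mnuA4p} can be verified numerically. The following minimal NumPy sketch checks that $U_L$ is unitary and that $U_L^T U_L$ indeed reproduces the pattern $\lambda^0$:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # omega, a primitive cube root of unity
U_L = np.array([[1, 1, 1],
                [1, w, w**2],
                [1, w**2, w]]) / np.sqrt(3)

# U_L is unitary...
assert np.allclose(U_L.conj().T @ U_L, np.eye(3))

# ...and U_L^T U_L gives the mu-tau symmetric pattern lambda^0.
lam0 = np.array([[1, 0, 0],
                 [0, 0, 1],
                 [0, 1, 0]])
print(np.allclose(U_L.T @ U_L, lam0))  # True
```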
Going down to the electroweak scale, $\lambda^0$ in Eq.~\eqref{mnuA4p} is corrected by the wave-function renormalisations of $\nu_e$, $\nu_\mu$, and $\nu_\tau$, as well as by the corresponding vertex renormalisations, i.e.\ $\lambda^0\to \lambda$, which breaks the neutrino mass degeneracy. The radiative corrections associated with a general slepton mass matrix in softly broken supersymmetry (related to $\nu_i \to \nu_j$ transitions) are given by the matrix
\begin{equation}
R = \left( \begin{array} {c@{\quad}c@{\quad}c}
\delta_{ee} & \delta_{e \mu} & \delta_{e \tau} \\
\delta_{e \mu} & \delta_{\mu \mu} & \delta_{\mu \tau} \\
\delta_{e \tau} & \delta_{\mu \tau} & \delta_{\tau\tau} \end{array} \right),
\end{equation}
so that at the low scale $\lambda$ is:
\begin{equation}
\lambda = \lambda^0+ R \lambda^0+\lambda^0 R^T=
\left( \begin{array} {c@{\quad}c@{\quad}c} 1 + 2 \delta_{ee} &
\delta_{e \mu} + \delta_{e \tau} & \delta_{e \mu} + \delta_{e \tau} \\
\delta_{e \mu} + \delta_{e \tau} & 2 \delta_{\mu \tau} & 1 + \delta_{\mu \mu}
+ \delta_{\tau \tau} \\ \delta_{e \mu} + \delta_{e \tau} & 1 + \delta_{\mu \mu}
+ \delta_{\tau \tau} & 2 \delta_{\mu \tau} \end{array} \right),
\end{equation}
where we have assumed all parameters to be real for simplicity. The above mass matrix, ${\cal M}_\nu$, is clearly $\mu-\tau$ invariant and yields a zero reactor angle as well as maximal atmospheric mixing.
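The $\mu-\tau$ invariance of the corrected matrix can likewise be checked for a random real symmetric correction matrix $R$; the size of the $\delta$ entries below is arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random small, real, symmetric radiative-correction matrix R.
R = rng.normal(scale=0.01, size=(3, 3))
R = (R + R.T) / 2

lam0 = np.array([[1., 0., 0.],
                 [0., 0., 1.],
                 [0., 1., 0.]])
lam = lam0 + R @ lam0 + lam0 @ R.T

# mu-tau invariance: exchanging the 2nd and 3rd rows and columns
# (conjugation by the permutation P) leaves lambda unchanged.
P = np.array([[1., 0., 0.],
              [0., 0., 1.],
              [0., 1., 0.]])
print(np.allclose(P @ lam @ P, lam))  # True
```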
In order to generate a non-zero reactor angle in this model, we can use the method described in this paper that makes use of one sterile neutrino $\hat \nu_s$ which transforms as a singlet under $A_4$. We assume that the sterile neutrino is charged under an extra auxiliary symmetry $Z_2$. We also add to the particle content of the model a scalar electroweak singlet flavon $\xi$ that is charged under $Z_2$ (this parity ensures that the flavon $\xi$ couples only to the sterile neutrino) and that transforms as a triplet under $A_4$: $\xi = (\xi_1, \xi_2, \xi_3)$. Thus, the superpotential contains the following extra term that mixes the active and sterile neutrinos and also adds a sterile neutrino mass term,
\begin{equation}
\hat W \supset \frac{f_s}{\Lambda} \hat L_i \hat \phi_2 \hat \nu_s \hat \xi_i+ \frac{m_s}{2} \hat \nu_s\hat \nu_s,
\end{equation}
where $\Lambda$ is an effective scale.
The fourth column of the neutrino mass matrix is then proportional to the VEVs of the flavons $\xi_i$, giving
\begin{equation}
a=\frac{f_s}{\Lambda} v_2 \langle \xi_1 \rangle\,,\quad b=\frac{f_s}{\Lambda} v_2 \langle \xi_2 \rangle\,,\quad
c=\frac{f_s}{\Lambda} v_2 \langle \xi_3 \rangle\,.
\end{equation}
From the model-independent numerical analysis in Secs.~\ref{sec:method} to~\ref{sec:theory} it is clear that the two scalar $A_4$ triplets $\xi$ and $\chi$ must take VEVs in different directions of $A_4$,
\begin{equation}
\langle \chi \rangle\sim (1,1,1)\ne (a,b,c) \sim \langle \xi \rangle .
\end{equation}
It is well-known that, given two different $A_4$ scalar triplets $\xi$ and $\chi$, the minimisation of their scalar potential $V(\xi,\chi)$ yields as a natural solution $\langle \chi \rangle \sim \langle \xi \rangle$, i.e., the VEVs of the two fields are aligned. This is in contrast to the requirement obtained from our numerical results, because we had found that the two triplets must take VEVs in different $A_4$-directions. Typically, in order to solve such a problem, one has to break the flavour symmetry explicitly in the scalar potential or to make use of extra dimensions. It is not the purpose of this paper to give a complete model, but just to suggest possible strategies that could be followed. Using explicit $A_4$ breaking terms it is quite straightforward to obtain the required VEV misalignment, and we do not pursue this enterprise in more detail. We also want to comment on the possibility to use extra dimensions. In this case we could assume that, following the general idea of~\cite{Altarelli:2005yp}, $\nu_s$ and $\xi$ live on the $y=L$ ultraviolet (UV) brane while all the other fields stay on the Standard Model (SM) $y=0$ brane. Since in this framework $\chi$ and $\xi$ are located on different branes, their potentials are separated and can easily have independent minima. However, sets of scalar fields that live on different branes can interact at higher order, inducing deviations from the vacuum alignments. Such a deviation is typically of order $1/(\Lambda\,L)^4$ (see \cite{Altarelli:2005yp} for a detailed discussion), where $\Lambda$ is the effective scale of the model. It is clear that, for sufficiently large $L$, the vacuum alignment corrections are negligible. A detailed study of these corrections is beyond the scope of the present paper, because here we have only sketched some possible ideas, while a complete study would require fixing a particular model.
\subsection{$D_4$ model}
This model is based on the dihedral group $D_4$~\cite{Grimus:2003kq}\footnote{The dihedral group $D_4$ is the symmetry group of a square and has eight elements.} which has five irreducible representations, four singlets $\mathbf{1}_{++}$, $\mathbf{1}_{+-}$, $\mathbf{1}_{-+}$, $\mathbf{1}_{--}$, and one doublet $\mathbf{2}$. The product of two doublets is $\mathbf{2} \otimes \mathbf{2} = \mathbf{1}_{++} \oplus \mathbf{1}_{+-} \oplus \mathbf{1}_{-+} \oplus \mathbf{1}_{--}$, and the products of the singlets are trivial (for example, $\mathbf{1}_{+-} \otimes \mathbf{1}_{-+} = \mathbf{1}_{--}$). Unlike the previous one, this model is not supersymmetric. The SM is only extended by adding three right-handed neutrinos $\nu^c_{1,2,3}$, three Higgs doublets $H_{1,2,3}$, and two neutral scalar singlets $\chi_{1,2}$, as detailed in Tab.~\ref{tab:D4}.
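As a sanity check of these group-theory statements, one can generate $D_4$ explicitly from a $90^\circ$ rotation and a reflection of the plane, and confirm that the squared dimensions of the five irreducible representations add up to the group order:

```python
import numpy as np

# Generators of D4 (symmetries of a square) as integer 2x2 matrices.
r = np.array([[0, -1], [1, 0]])   # rotation by pi/2
s = np.array([[1, 0], [0, -1]])   # reflection

# Breadth-first closure under right-multiplication by the generators.
elems = {tuple(np.eye(2, dtype=int).flatten())}
frontier = [np.eye(2, dtype=int)]
while frontier:
    g = frontier.pop()
    for h in (g @ r, g @ s):
        key = tuple(h.flatten())
        if key not in elems:
            elems.add(key)
            frontier.append(h)

print(len(elems))  # 8
# Four singlets and one doublet: 4*1^2 + 2^2 = 8 = |D4|.
assert 4 * 1**2 + 2**2 == len(elems)
```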
\begin{table}[t!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& $L_e$ & $e^c$ & $L_{\mu,\tau} $ & $\mu^c,\tau^c$ & $\nu^c_1$ & $\nu^c_{2,3}$ & $H_1$ & $H_{2}$ & $H_{3}$ & $\chi_{1,2}$ \\
\hline
$D_4$ & $\mathbf{1}_{++}$ & $\mathbf{1}_{++}$ & $\mathbf{2}$ & $\mathbf{2}$ & $\mathbf{1}_{++}$ & $\mathbf{2}$ & $\mathbf{1}_{++}$ & $\mathbf{1}_{++}$ & $\mathbf{1}_{+-}$ & $\mathbf{2}$\\
$Z_2^{\rm aux}$ & $+$ & $-$ & $+$ & $+$ & $-$ & $-$ & $-$ & $+$ & $-$ & $+$ \\
\hline
\end{tabular}
\caption{\label{tab:D4}Matter content of the model from Ref.~\cite{Grimus:2003kq}.}
\end{center}
\end{table}
The Lagrangian is given by
\begin{eqnarray}
\mathcal{L}&=&[y_1 \overline{L_e} \nu^c_1 +y_2(\overline{L_\mu} \nu^c_2+ \overline{L_\tau} \nu^c_3)] \tilde{H}_1+\nonumber \\
&& +y_3 \overline{L_e} e^c_1 H_1 +y_4 (\overline{L_\mu} \mu^c+ \overline{L_\tau} \tau^c) H_2 +y_5( \overline{L_\mu} \mu^c- \overline{L_\tau} \tau^c) H_3+\nonumber \\
&& + y_\chi{\nu_1^c}^T (\nu_2^c \chi_1+\nu_3^c \chi_2)+ M {\nu_1^c}^T \nu_1^c +
M' ({\nu_2^c}^T \nu_2^c +{\nu_3^c}^T \nu_3^c ) + H.c.
\end{eqnarray}
After flavour symmetry breaking, the $\chi$-fields take VEVs $\langle\chi_1\rangle =\langle\chi_2\rangle$, giving a $\mu - \tau$ invariant neutrino mass matrix (with maximal atmospheric and zero reactor mixings), while the charged lepton mass matrix is diagonal. We do not give further details here and refer interested readers to the original paper.
Like in the previous case based on the $A_4$ group, we can generate a deviation of the reactor angle from zero by the use of a sterile neutrino $\nu_s$. It is also required to introduce three extra scalar fields, $\xi_{1}\sim \mathbf{1}_{++}$ and $(\xi_2,\xi_3)\sim \mathbf{2}$ under $D_4$, which are gauge singlets. Then, the following new terms are allowed in the Lagrangian:
\begin{equation}
\mathcal{L} \supset
\frac{f_{s1}}{\Lambda} \overline{L_e} \tilde{H}_1 \nu_s \xi_1+ \frac{f_{s2}}{\Lambda} [ \overline{L_\mu} \xi_2 + \overline{L_\tau} \xi_3] \tilde{H}_1 \nu_s +
\frac{m_s}{2} \overline{\nu_s^c} \nu_s + H.c.
\end{equation}
As in the $A_4$ case, we assume that the fields $\nu_s$ and $\xi_{1,2,3}$ transform non-trivially under an extra $Z_2$ symmetry. In order to generate the reactor angle, the VEV of the $D_4$-doublet $(\xi_2,\xi_3)$ must break the $\mu-\tau$ symmetry, i.e., $\langle \xi_2\rangle \ne \langle \xi_3\rangle$. This may conflict with the alignment $\langle\chi_1\rangle =\langle\chi_2\rangle$. Such misalignment problems can again be solved by using extra dimensions, just like in the $A_4$ case. A detailed study of this possibility goes beyond the scope of the present paper.
\section{\label{sec:conc}Summary and conclusions}
In this paper we have considered the possibility that the recently measured reactor angle and the active-sterile mixings, needed to describe the short-baseline anomalies, have a common origin. This is suggested from the fact that the active-sterile mixings obtained in fits of the short-baseline data in $3+N$ models are of the same order as the reactor angle. We have assumed the simplest framework possible, with only one sterile neutrino (giving a $4\times 4$ neutrino mass matrix). We have postulated that the reactor neutrino mixing vanishes in the active-active mass matrix part, which is why we have considered the $3\times 3$ active neutrino mass matrix to be $\mu-\tau$ invariant. This assumption implies that the atmospheric mixing angle is almost maximal, in compatibility with data. As a consequence, both a non-zero value of $\theta_{13}$ and the active-sterile mixings originate from the active-sterile mass matrix elements and are potentially of the same order of magnitude.
Several important questions have guided our analysis: 1) Which correlations among or constraints on the observables are implied in this framework? 2) Can the short-baseline anomalies be reproduced? 3) What are the requirements for the vacuum alignments of the VEVs? 4) What does all of this imply for flavour models?
We have demonstrated that $\theta_{14}$, which in our parameterisation leads to electron neutrino disappearance, must be non-zero in this framework. On the other hand, either $\theta_{24}$, leading to muon neutrino disappearance, or $\theta_{34}$ can vanish (but not both at the same time -- they are anti-correlated). Therefore, this framework is perfectly consistent with the reactor and gallium anomalies, and with the non-observation of muon neutrino disappearance. It is more difficult to reconcile this approach with the LSND results, as this is possible only for specific choices of the Majorana phases.
We have also shown how the active-sterile mixing and the non-zero value of $\theta_{13}$ emerge from the misalignment of the active-sterile VEVs, i.e., the explicit breaking of the $\mu - \tau$ symmetry. We have noted that ``misalignment'' could refer to the absolute values and/or phases of the VEVs, which can both be the origin of the breaking of the $\mu - \tau$ symmetry. We have also demonstrated that specific assumptions for the alignments, which can be found in the literature based on $A_4$ and $D_4$ models, are in fact very predictive. These choices may also impact the predictions for neutrinoless double beta decay. A detailed study is beyond the scope of this work, as the phenomenology of neutrinoless double beta decay can change considerably in the presence of light sterile neutrinos, see e.g.\ Refs.~\cite{Barry:2011wb,Girardi:2013zra,Merle:2013ibc}.
As far as the implications for flavour models are concerned, we have sketched the requirements in terms of two well-known example models based on $A_4$ and $D_4$, respectively. For instance, in the $A_4$ model, scalar electroweak singlet flavons are needed which must be triplets under $A_4$ to generate neutrino masses. It is however well-known that it is difficult for these triplets to take VEVs in different directions of $A_4$. We have proposed either an explicit breaking of $A_4$ or the use of extra spatial dimensions. In the latter case, the sterile neutrino and one of the flavon triplets would live on the UV brane, whereas the other flavon and SM fields reside on the infrared/SM brane.
We conclude that, if sterile neutrinos exist, it is possible for active-sterile mixings and the non-zero value of $\theta_{13}$ to have a common origin in terms of flavour models. While we have studied the simplest setting possible, models with more than one sterile neutrino may have much wider possibilities. Our starting point has been the $\mu - \tau$ symmetric case, but other possibilities are viable as well -- as for example tri-bimaximal mixing. In such alternative approaches, it may also be possible to describe a non-zero $\theta_{13}$ and strong deviations from maximal atmospheric mixing at the same time, whereas our framework has implied $\theta_{23}$ being close to maximal.
\section*{Acknowledgments}
A.M.\ acknowledges support by a Marie Curie Intra-European Fellowship within the 7th European Community Framework Programme FP7-PEOPLE-2011-IEF, contract PIEF-GA-2011-297557, and partial support from the European Union FP7 ITN-INVISIBLES (Marie Curie Actions, PITN-GA-2011-289442). S.M.\ and W.W.\ thank the DFG grants WI 2639/3-1 and WI 2639/4-1 for financial support.
\bibliographystyle{apsrev}
Phase transitions can be caused by varying one of the two (or more) independent control variables. The need to study the behavior in two-parameter space is clear for first order transitions if the Clausius-Clapeyron relation needs to be established. Superconducting magnets enable easy traversal of large regions in field and temperature. This talk on half-doped manganites shows results on supercooled and superheated states, and glass-like arrested states, which may provide new insights on metastabilities across first order transitions.
The Ehrenfest classification of phase transitions is based on observing a discontinuity, in some derivative of the free energy taken with respect to a control variable. For a first order transition, this discontinuity has to bear a specific relation with the slope of the phase transition line in the two parameter space. A lot of experimental effort goes into `establishing' a first order transition; the case of vortex solid-to-liquid transition is a widely researched recent problem where the two-parameter space explored was magnetic field (H) and temperature (T) \cite{zeldov, schilling}. This talk focuses on studies using facilities at our Consortium where liquid helium is imperative for varying both these parameters.
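For a magnetic first order transition, the Clausius-Clapeyron relation takes the form $dH_c/dT = -\Delta S/\Delta M$, relating the slope of the phase boundary in (H,T) space to the discontinuities in entropy and magnetization. The sketch below uses invented numbers, not data from the experiments discussed here, simply to show the arithmetic:

```python
# Magnetic analogue of the Clausius-Clapeyron relation at a first order
# transition: dHc/dT = -DeltaS/DeltaM. Signs depend on which phase the
# differences are taken between; the values below are purely illustrative.
delta_S = -2.0   # J/(K kg): entropy change on entering the ordered phase
delta_M = 10.0   # A m^2/kg: magnetization jump across the transition

dHc_dT = -delta_S / delta_M   # slope of the phase boundary, in T/K
print(f"dHc/dT ~ {dHc_dT:.2f} T/K")
```

With these illustrative discontinuities the phase boundary shifts by about 0.2 T per kelvin, which is why spanning a 14 Tesla field range moves T$_C$ over an experimentally large window.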
Most of the materials of current interest, including the CMR manganites discussed here, are multicomponent systems whose properties become more interesting with substitutions \cite{pc1}. Frozen-in disorder is thus intrinsic to these materials and they would not exhibit sharp first order transitions; discontinuities would not be observed. The `rounded transitions', if first order, would show hysteresis related to supercooling and superheating. These metastable states would also be observed when the control parameter is other than temperature \cite{pc2}. It is interesting to establish that the transition is first order because discontinuities must then be occurring over the length scale of the correlation length, and this has obvious implications as one explores these compounds in nano-material form.
\section{Glass-like Arrest of First Order Transition}
A second metastability that could be observed if the rounded transition is first order corresponds to glass-like arrest of kinetics. This has the potential to \textbf{\emph{play havoc}} with theoretical efforts to understand a material because the state observed at temperatures below the closure of hysteresis may be a kinetically arrested state and not an equilibrium state \cite{ab1, ab2, ab3, pc3}. Resolving whether the observed state is an equilibrium state becomes possible only by traversing the two-parameter space (of H and T, or of P and T) and we shall amplify on this with our data on CMR manganites. Liquid helium is crucial to our ability to vary H over the range of -14 Tesla to 14 Tesla. It is also crucial to our ability to vary T from 2K to ambient, and the centenary of liquefaction of helium is an appropriate occasion to talk of this work. We can span the space of two thermodynamic parameters H and T, and study the stability of different phases. Since H, unlike P, is transmitted without a medium, this is experimentally easier than spanning P and T. Transition temperature T$_C$ would vary with the second thermodynamic parameter for a first order transition; one finds a much larger variation of T$_C$ with the experimentally available range of H (in some magnetic/superconducting systems) than one finds with the experimentally available range of P. In this sense, we can today span a much larger region of (H,T) space, than of (P,T) space.
\emph{Manganites provide an excellent platform for such studies; no other known family of materials provides the versatility detailed below.} With a slight change of composition, the ground state can be changed from ferromagnetic-metal (FM-M) to antiferromagnetic insulator (AF-I). Because of this, the slope of the first order transition line (in H-T space) changes sign within the same family of materials \cite{ab1, ab2}. Again, glass-like kinetic arrest occurs at a temperature T$_K$ which is a function of H. This dependence of T$_K$ on H also changes sign within the manganite family as one goes from FM-M ground state to AF-I ground state \cite{ab1}. Finally, since the conductivity changes drastically along with magnetic order, this family of CMR manganites has an inherent advantage over studies on other metamagnetic materials (with no accompanying metal to insulator transition). A decrease in global magnetization of the sample can be interpreted either as a reduction of the moment in the FM-M phase, or as a part-transformation of FM-M to AF-I. A simultaneous measurement of conductivity provides a clear choice between the two alternatives because of the orders of magnitude resistivity changes associated with the metal (M) to insulator (I) transition in the latter case. \emph{This is a special feature of half-doped manganites, and obviates the necessity of mesoscopic measurements to establish phase coexistence.}
We show in fig 1 magnetization (M vs T in H=1Tesla) and resistivity (R vs T in H=0Tesla) measurements for the half-doped manganite La$_{0.5}$Ca$_{0.5}$MnO$_3$ (LCMO) \cite{ab3, pc3}.
\begin{figure}[h]
\centering
\includegraphics{Fig1.eps}
\caption{First order ferromagnetic-metallic to antiferromagnetic-insulating transition in La$_{0.5}$Ca$_{0.5}$MnO$_3$. (a) Magnetization while heating and cooling in 1T field. (b) Resistivity while heating and cooling in zero field.}
\label{fig:Fig1}
\end{figure}
These bring out that, with decreasing T, the sample undergoes a ferromagnetic-metallic to antiferromagnetic-insulating transition. We note a large thermal hysteresis corresponding to the metastable supercooled and superheated states.
We show in fig 2 magnetization (M vs T in various H from 3Tesla to 6Tesla) and resistivity (R vs T in various H from 3Tesla to 4Tesla) measurements for the half-doped manganite Pr${_{0.5}}$Ca$_{0.5}$MnO${_3}$ (PCMO), with 2.5\% Al substitution on Mn site (PCMAO) \cite{ab2}. These bring out that, with decreasing T, this sample undergoes an antiferromagnetic-insulating to ferromagnetic-metallic transition. We again note a large thermal hysteresis corresponding to the metastable supercooled and superheated states. We also note that the reversible values of M and R, obtained below the closure of hysteresis, show that the AF-I to FM-M transition is not completed even at the lowest temperature. This glass-like arrest of the high-temperature phase corresponds to a metastability that is very different from supercooling \cite{ab3, pc3}.
\begin{figure}[h]
\centering
\includegraphics{Fig2.eps}
\caption{Thermal hysteresis in magnetization and resistivity in the same measurement field for the Pr$_{0.5}$Ca$_{0.5}$Mn$_{0.975}$Al$_{0.025}$O$_3$ sample showing the first order AF-I to FM-M transition. The measurements are repeated for different measurement fields. (a) shows magnetization while cooling and then heating in the same field. The measurement fields range between 1 and 6 T. The steep rise in magnetization around 100 K indicates the AF to FM transition. (b) shows resistivity while cooling and then heating in the same field. The measurement fields range between 3 and 4 T. The sharp fall in resistivity around 100 K indicates the insulator to metal transition.}
\label{fig:Fig2}
\end{figure}
\section{Glass formation: Rapid vs. Slow Cooling}
A glass is viewed as a liquid with time held still \cite{Braw}, a liquid in which the molecules have suddenly stopped moving at some instant of time. The motion is frozen by reaching a low temperature ($<$T$_g$), and the sites are frozen by reaching this T on a time scale that is very short compared to the time required for a molecule to adjust its position. This is the philosophy underlying the splat-cooling technique used to form metallic glasses. It is known, however, that the cooling rate required for glass formation depends on the ratio of T$_g$ to the thermodynamic freezing point \cite{greer}.
We have, however, been comparing T$_g$ with the spinodal limit T* for supercooling, and have argued that slow cooling can result in glass formation if T$_g$ $>$ T*, whereas rapid cooldown is essential if T$_g$ $<$ T* \cite{pc1, pc3}. We have also generalized the term glass to include any high-T phase (its higher entropy implies higher disorder, even if it is not of the structural kind) that exists at low-T; its decay rate decreasing with lowering T \cite{ab3, pc3}. In this category of generic glass, half-doped manganites provide first order transitions where T$_C$ (and T*) is tuned heavily by varying H. This opens up the possibility that T$_g$ $>$ T* at some H, and T$_g$ $<$ T* at some other H, in the same material \cite{pc1}. Slow cooling at the former field value will yield a glass, which will devitrify on slow warming in the latter field value. We have accordingly exploited the new measurement protocol CHUF (Cooling and Heating in Unequal Fields) to access, in a controlled way, both glass formation and glass devitrification. We now provide data as we explore this new possibility, and shall discuss some physics results with general applicability in the last section. In all the measurements reported here, temperature was varied at 1.5K/min, which is our version of slow cooling/warming.
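The decision logic described above can be summarized compactly. The short Python sketch below is our own illustration (the function names and the toy temperature values are ours, not from the measurements): it encodes the stated rules that slow cooling yields a glass when T$_g >$ T*, and that the arrested phase devitrifies on warming in a field where T$_g$ has dropped below T*.

```python
def slow_cooling_forms_glass(t_g, t_star):
    """Slow cooling arrests the high-temperature phase when the kinetic
    arrest temperature T_g lies above the supercooling limit T*."""
    return t_g > t_star

def chuf_shows_devitrification(t_g_cool, t_star_cool, t_g_warm, t_star_warm):
    """CHUF protocol: cool in a field where the glass forms (T_g > T*),
    then warm in a field where T_g < T*, so the arrested phase devitrifies."""
    return slow_cooling_forms_glass(t_g_cool, t_star_cool) and t_g_warm < t_star_warm

# Toy numbers (in kelvin) chosen only to illustrate the two field values
print(chuf_shows_devitrification(40.0, 30.0, 20.0, 35.0))  # -> True
```

This is only a restatement of the qualitative rule; in practice both T$_g$ and T* are distributions over the disordered sample rather than single numbers.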
\section {Results on Half-doped Manganites}
We show in fig.3 magnetization measurements on Al-doped PCMO (PCMAO).
\begin{figure}[h]
\centering
\includegraphics{Fig3.eps}
\caption{Depicts devitrification of the arrested AF-I phase through magnetization measurements. (a) shows magnetization while cooling in 3 T. Then at 5 K the field is isothermally increased to 4 T and magnetization is measured while warming in 4 T. The sharp increase in magnetization above 15 K shows the devitrification of the remaining AF-I phase. (b) shows similar devitrification while warming in 3.5 T after cooling in 2.5 T.}
\label{fig:Fig3}
\end{figure} This material is close to half-doping but the small amount (2.5\%) of Al on the Mn site changes the low temperature ground state to FM-M \cite{ab2}. On cooling in H=3T (fig.3a) or in H=2.5T (fig.3b), we find an anti-ferromagnetic to ferromagnetic transition initiated at about 100K and continuing till about 30K. If the sample is then warmed in a higher field (the field is changed isothermally at 5K), we find a further anti-ferromagnetic to ferromagnetic transition between 20K and 35K. We explain the increase in magnetization on lowering T as due to the first order AF-I to FM-M transition that undergoes a glass-like arrest at about 35K, before it can be completed. In a higher field applied during warming, the glass-like arrest temperature has become lower while T* has become higher, and devitrification of the arrested AF-I state is observed \cite{ab4}.
In fig.4 the same physics is shown through resistivity measurements.
\begin{figure}[h]
\centering
\includegraphics{Fig4.eps}
\caption{Resistivity while cooling in zero field increases monotonically with decrease in temperature and exceeds the measurement range of the instrument below 75 K. After cooling to 5 K, the field is isothermally changed to 4 T and resistivity is measured while warming. The rapid fall with increase in temperature indicates devitrification of the glassy AF-I phase.}
\label{fig:Fig4}
\end{figure}
The transformation to the FM-M state is not seen on cooling in H=0, and the insulating (AF-I) state is arrested. Since T$_g$ falls in a higher field, we observe devitrification to the metallic state (FM-M) on warming in the higher field of 4 Tesla \cite{ab2}. The same behavior of devitrification from AF-I to FM-M is shown in fig.5 for La-Pr-Ca-Mn-O,\begin{figure}[h]
\centering
\includegraphics{Fig5.eps}
\caption{Magnetization of La-Pr-Ca-Mn-O in 1 T while warming after cooling to 5 K in different field. The rapid increase indicates devitrification of the arrested AF-I phase to equilibrium FM-M phase. }
\label{fig:Fig5}
\end{figure} a manganite that is far from half-doping \cite{kranti}.
Here the sample is warmed in a field (H=1T) higher than the various cooling fields used, and devitrification is seen below 30K.
We now consider the half-doped manganite LCMO whose ground state is AF-I. Here glass-like arrest of the high temperature FM-M phase is observed on cooling in a high field, and devitrification to the AF-I phase is seen on warming in a lower field. This is brought out through magnetization (Figs.6a \& 6b) and resistivity (Figs. 6c \& 6d) measurements \cite{ab3, pc3}.
\begin{figure}[h]
\centering
\includegraphics{Fig6.eps}
\caption{Devitrification of arrested FM-M state in La$_{0.5}$Ca$_{0.5}$MnO$_3$ shown through magnetization and resistivity measurements while warming in higher fields than the cooling fields. (a) Magnetization in 1 T after cooling in 6 T and isothermally changing the field to measurement field of 1 T at 5 K. (b) similar to (a), measured in 3 T after cooling in 4.5 T. (c) Resistivity while warming in zero field after cooling in 1 T and isothermally changing the field to measurement field of 0 T at 5 K. (d) similar to (c), measured in zero field after cooling in 6 T. }
\label{fig:Fig6}
\end{figure}
Devitrification can also be observed on isothermal variation of H. \begin{figure}[h]
\centering
\includegraphics{Fig7.eps}
\caption{Depicts devitrification of arrested AF-I phase during isothermal field variation. (a) Pr${_{0.5}}$Ca$_{0.5}$Mn$_{0.975}$Al$_{0.025}$O${_3}$ sample is cooled from room temperature to 5 K in different fields. Then magnetization is measured at 5 K while increasing the field from the value of the respective cooling fields. (b) devitrification of arrested AF-I phase of La-Pr-Ca-Mn-O system during isothermal field variation. The sample is cooled from room temperature to 5 K in different fields. Then magnetization is measured at 5 K while increasing the field from the value of the respective cooling fields. It may be noted that for both the cases the sharp change in magnetization shifts to higher field values for higher cooling fields.}
\label{fig:Fig7}
\end{figure}
When the arrested state is AF-I then devitrification is observed on raising H. We show results through magnetization measurements for PCMAO (Fig. 7a) and for La-Pr-Ca-Mn-O (Fig. 7b) \cite{ab2, kranti}. If the arrested state is FM-M as in pure half-doped manganites, then devitrification will be observed on lowering H. \begin{figure}[h]
\centering
\includegraphics{Fig8.eps}
\caption{Evidence of kinetically arrested glassy FM-M phase fractions in La$_{0.5}$Ca$_{0.5}$MnO$_3$ and their devitrification during reduction of field. Each time the sample is cooled from 320 K to 25 K in a different field H. Then the magnetization is measured while isothermally cycling the field at 25 K from H to -H. Cooling in different H renders different amounts of FM-M phase at 25 K. During the field reduction from H to 0, the curves show distinct magnetization values because of the different amounts of frozen FM-M phase. However, as the field is reduced to zero a part of these arrested FM-M fractions devitrifies to the equilibrium AF-I phase, as is evident in the lower magnetization values in the negative field cycling. The devitrification is larger for the higher cooling fields, which is related to the anticorrelation between the supercooling and kinetic arrest (glass transition) temperatures of different regions.}
\label{fig:Fig8}
\end{figure} In this case devitrification is seen on lowering H at 25K (Fig. 8) \cite{ab3}.
In other studies on manganites we have also focused on the broad first order transition which corresponds to different regions of the sample (of length scale corresponding to the correlation length) having different values of T$_C$, and also of T$_g$. Through macroscopic measurements following non-conventional paths in (H,T) space, we have shown in various manganite samples that regions which have a lower T$_C$ (or T*) have a higher T$_g$ \cite{ab3, pc3, kranti, rawat1}. This anti-correlation between the effect of disorder on T$_g$ and T* has also been observed in some non-manganite samples showing first order magnetic transitions \cite{roy, rawat2}. Does this correspond to the confusion principle enunciated for metallic glasses in the context of T$_g$ \cite{greer}? Similar studies with pressure (instead of H) being the second control parameter would be needed to check whether the anti-correlation being observed here is a general principle extending to all kinds of glasses.
\section {Conclusions of general applicability}
In our recent studies on LCMO and on PCMAO, we have found that if we choose a cooling field so as to have a larger fraction of glass at low temperature, then subsequent devitrification and recrystallization results in a larger fraction of equilibrium phase \cite{ab3, ab4}. The question of whether this strong initial state dependence is applicable to all structural glasses is an interesting one.
\section{Acknowledgement}
DST Government of India is acknowledged for funding the 14 Tesla PPMS-VSM.
\section*{Acknowledgments}
\label{section:Acknowledgments}
This work is funded by FEDER funds through the COMPETE 2020 Programme and National Funds through FCT (Portuguese Foundation for Science and Technology) under the projects UID-B/05256/2020, UID-P/05256/2020 and MIT-EXPL/TDI/0038/2019 - APROVA - Deep learning for particle-laden viscoelastic flow modelling (POCI-01-0145-FEDER-016665) under MIT Portugal program. The authors would like to acknowledge the University of Minho cluster under the project NORTE-07-0162-FEDER-000086 (URL: http://search6.di.uminho.pt), the Minho Advanced Computing Center (MACC) (URL: https://macc.fccn.pt) under the project CPCA\_A2\_6052\_2020, the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (URL: http://www.tacc.utexas.edu), the Gompute HPC Cloud Platform (URL: https://www.gompute.com), and PRACE - Partnership for Advanced Computing in Europe under the project icei-prace-2020-0009, for providing HPC resources that have contributed to the research results reported within this paper.
\section*{Appendix A}
\label{section:Appendix}
In the DNS study presented in section~\ref{sec:DNS}, we considered two different domain configurations, one with spheres having centroids in a wall region of thickness $a$ around all four lateral edges of the flow domain and another in which the sphere centroids are excluded from this wall region. We refer to these cases as the no-excluded volume and excluded volume configurations, respectively. As shown in Fig.~\ref{fig:draftEV}(a), when spheres are allowed to be located in the wall region (blue color), i.e., when their centroid is located less than one radius from the wall, then the boundary acts as a perfectly periodic wall. In the opposite case the boundary walls exclude the spheres and act like rigid stress free periodic walls. In fact, based on Fig.~\ref{fig:draftEV}(b), we can calculate the probability of a single sphere being located in the wall region. Assuming a square cross-section with a width of $8a$, the total cross-sectional area is $64a^2$. Regarding the blue annular area, i.e., the region which excludes the spheres near the wall, the area is $64a^2-36a^2=28a^2$. Hence, the probability of a randomly placed sphere being located in the excluded region is equal to $28a^2/64a^2=0.4375$, and the overall area fraction of spheres (of volume fraction $\phi$) in this region can be as large as $0.4375\phi$. Therefore, as $\phi$ increases it is important that when particles are randomly distributed in the domain they should be allowed to be placed with centroids near the walls.
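The area bookkeeping above can be reproduced in a few lines of code. The Python sketch below is an illustration added here (not part of the original study); it computes the probability that a uniformly placed sphere centroid falls within one radius of a wall, for a square cross-section of width $8a$:

```python
def wall_region_probability(width, radius):
    """Probability that a uniformly placed sphere centroid lies within
    one radius of any of the four lateral walls of a square cross-section."""
    total_area = width ** 2
    # Inner square of centroids farther than one radius from every wall
    inner_side = width - 2.0 * radius
    inner_area = inner_side ** 2
    return (total_area - inner_area) / total_area

# Cross-section of width 8a, spheres of radius a (set a = 1 for convenience)
p = wall_region_probability(8.0, 1.0)
print(p)  # 28 a^2 / 64 a^2 = 0.4375
```

The result is independent of the choice of $a$, since both areas scale as $a^2$.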
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.9\columnwidth]{Figures/EV1V2.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{0.4\columnwidth}
\centering
\caption{{}}
\hspace{-1.5cm} \includegraphics[width=0.7\columnwidth]{Figures/EV2.pdf}%
\label{}
\end{subfigure}\\
\end{tabular}}
\caption[]
{Schematic representation of (a) excluded and no-excluded volume regions and (b) area fraction in the excluded volume region.}
\label{fig:draftEV}
\end{figure}
In Fig.~\ref{fig:EVvsNEV}, we show contours of the velocity magnitude in the transverse $y-z$ plane for configurations with an excluded volume and no-excluded volume region. From the distribution of velocity magnitude contours, it can be seen that in the excluded volume configuration (i.e., Fig.~\ref{fig:EVvsNEV}(a)), the larger local concentration of rigid impenetrable spheres in the middle of the square channel pushes the strongest fluid flow outwards towards the walls, causing a stagnant region near the channel center. On the other hand, in the no-excluded volume configuration (i.e., Fig.~\ref{fig:EVvsNEV}(b)), the fluid flow is more evenly distributed across the entire channel. This affects the average drag force exerted on the spheres as shown in Table~\ref{tab:newtonian} and Fig.~\ref{fig:streamDe0}.
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\caption{{}}
\hspace{-2cm} \includegraphics[width=0.8\columnwidth]{Figures/de0phi20YZplaneUxSphereNotInWallV3.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{0.5\columnwidth}
\centering
\caption{{}}
\hspace{-2cm} \includegraphics[width=0.8\columnwidth]{Figures/de0phi20YZplaneUxSphereInWallV3.pdf}%
\label{}
\end{subfigure}\\
\end{tabular}}
\caption[]
{Steady flow field around one representative random array of particles in a channel filled with Newtonian fluid. Contours of the velocity magnitude field $\|\textbf{u}\|$ (with the inflow direction pointed out of the plane of the page) are represented for particle volume fraction $\phi=0.2$, in the $y-z$ plane with (a) excluded volume near the walls and (b) a no-excluded volume configuration. In the latter case the velocity field is more evenly distributed across the entire cross-section of the domain.}
\label{fig:EVvsNEV}
\end{figure}
\section*{References}
\label{section:References}
\def\color{black}{\color{black}}
\newcommand*{\doi}[1]{\href{\detokenize{#1}} {\raggedright\color{black}{DOI: \detokenize{#1}}}}
\bibliographystyle{unsrtnat}
\section{Introduction}
Understanding the force balance that governs the migration of rigid particles suspended in a viscoelastic fluid is fundamental to a wide range of engineering and technology applications. Examples include polymer processing of highly-filled viscoelastic melts and elastomers \cite{liff2007high}, the processing of semi-solid conductive flow battery slurries \cite{olsen2016coupled}, the flow-induced migration of circulating cancer cells in biopolymeric media such as blood \cite{lim2014inertio}, magma eruption dynamics \cite{parmigiani2016bubble}, and hydraulic fracturing operations using solids-filled muds, slurries and foams \cite{barbati2016complex,faroughi2018rheological}.
In many ways, developing robust and accurate tools to simulate such behaviors may be viewed as an unsolved grand challenge in the dynamics of complex fluids, involving the effects of nonlinear material rheology, fluid inertia, elasticity, flow-unsteadiness plus many-body interactions. In a fluid with Newtonian or non-Newtonian rheology, the presence of a cloud of particles dramatically changes the transmission of stress between both phases (fluid and particles), specifically in terms of the rate at which the constituents of the mixture exchange momentum, known as the hindrance effect \cite{faroughi2015unifying,faroughi2016theoretical}. When particles are suspended in a viscoelastic fluid (e.g. a polymer solution or polymer melt), the problem becomes even more challenging, because the fluid may shear-thin or shear-thicken as well as exhibit viscoelasticity and yield stress attributes \cite{Shaqfeh2019,Tanner2019}. Therefore, understanding and predicting both the bulk/macro-scale and particle-level response of these complex multiphase suspensions remains an open problem. As a first step, quantifying the momentum exchange between the constituent phases (i.e. a viscoelastic fluid matrix and a suspended phase of rigid spherical particles) remains a challenging and important problem to be solved in non-Newtonian fluid dynamics.
Because of the linear response between stress and deformation rate, the hydrodynamic behavior of rigid spheres in a Newtonian fluid has received considerable attention since the pioneering work of \citet{Stokes1851} (see for example the monographs by \citet{Happel1983,Kim2005} and \citet{Guazzelli2011}). In the limit of infinite dilution, and when inertial effects can be neglected, the drag force, $\textbf{F}_d$, exerted by the fluid on the solid object takes the Stokes-Einstein form $\textbf{F}_d=6\pi a \eta_0 \overline{u}$, where $a$ is the radius of the spherical particle, $\eta_0$ is the fluid shear viscosity and $\overline{u}$ is the superficial fluid velocity, defined as the fluid velocity averaged over the total volume of the system \cite{Hoef2005}. Higher order corrections to the Stokes-Einstein drag acting on a single particle arising from the presence of neighboring particles have been evaluated in terms of an expansion in the particle volume fraction $\phi$. For small packing fractions ($\phi < 0.10$), the first few terms can be worked out analytically \cite{Hasimoto1959}, but for larger packing fractions, the drag force has to be estimated from approximate theoretical methods \cite{Brinkman1947, Kim1985}, or from empirical data via experimental measurements \cite{Carman1937}. Additionally, numerical simulations \cite{Hoef2005, Hill2001, Koch1999} can provide data to derive drag force expressions for the creeping flow of random arrays of spheres surrounded by a Newtonian fluid.
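As a baseline for the volume-fraction corrections discussed above, the Stokes-Einstein drag on an isolated sphere can be evaluated directly. The short Python sketch below is an illustration we add here (the parameter values are arbitrary), implementing $F_d = 6\pi a \eta_0 \overline{u}$:

```python
import math

def stokes_drag(radius, viscosity, superficial_velocity):
    """Stokes-Einstein drag force magnitude on a single sphere in creeping
    flow, F_d = 6 * pi * a * eta_0 * u_bar (SI units)."""
    return 6.0 * math.pi * radius * viscosity * superficial_velocity

# Illustrative values: a 10-micron sphere in a water-like fluid at 1 mm/s
F = stokes_drag(10e-6, 1.0e-3, 1.0e-3)
print(F)  # ~1.9e-10 N
```

Any measured or simulated drag at finite $\phi$ or finite $Wi$ is then naturally reported relative to this value.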
For the creeping flow of a sphere through an unbounded viscoelastic fluid, measurements of the changes to the force acting on the sphere are typically represented in terms of a dimensionless drag correction factor $X(Wi)$, which is the ratio of the measured drag coefficient compared to the well-known Stokes drag, $X(Wi)\equiv C_D(Wi)/C_D(Wi=0)=C_D(Wi)/(24/Re)$ where $Re$ and $Wi$ are the dimensionless Reynolds and Weissenberg numbers, respectively. For the inertialess flow of a viscoelastic fluid, perturbation solutions predict the departure of the drag from the Stokes result to be quadratic in the Weissenberg number for spheres \cite{leslie1961slow}, and linear in the case of long rod-like particles \cite{Leal1975}. Recently, two reviews comparing experimental data with computations for non-Brownian suspension rheology with non-Newtonian matrices have been published \cite{Shaqfeh2019, Tanner2019}. \citet{Tanner2019} compared and contrasted inelastic fluids with rate-dependent viscosity, materials with a yield stress, as well as viscoelastic fluids - highlighting the need for rheological modeling improvement, possibly with multiple relaxation times. Additionally, he concludes that several aspects of suspension rheology, such as roughness, ionic forces, particle shape, and polydispersity all need to be addressed. Finally, \citet{Tanner2019} also reported experimental results for steady viscometric flows, unsteady shear flows and uniaxial elongational flows. However, good agreement between computation and experiment is scarce, because there are, as yet, few computational studies which allow careful comparison with experimental data, further emphasizing that progress in rheological modelling and improved computational methods are needed. 
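The drag correction factor is thus simply the measured drag coefficient normalized by its inertialess Newtonian (Stokes) value. A minimal sketch (our illustration, with made-up sample numbers) reads:

```python
def drag_correction_factor(cd_measured, reynolds):
    """Viscoelastic drag correction X(Wi) = C_D(Wi) / (24 / Re),
    i.e. the measured drag coefficient normalized by the Stokes value."""
    cd_stokes = 24.0 / reynolds
    return cd_measured / cd_stokes

# A Newtonian creeping-flow measurement recovers X = 1 by construction
print(drag_correction_factor(24.0 / 0.05, 0.05))  # -> 1.0
```

Values of $X > 1$ then indicate drag enhancement relative to the Newtonian fluid, and $X < 1$ drag reduction.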
In a recent perspective, \citet{Shaqfeh2019} notes that the foundations for the development of suspension mechanics in viscoelastic fluids, as well as the development of computational methods to accurately simulate with particle-level non-Brownian suspensions, have been established. Nevertheless, numerous unanswered questions remain, including the rheological behavior of these suspensions for different matrix fluid rheologies, particle shapes, deformability, flow histories, etc. All these questions can be addressed, in principle, by employing theoretical/computational frameworks to systematically explore the coupling between the kinematics and momentum distribution of the fluid phase and the resulting evolution of the dispersed particulate phase. \citet{Shaqfeh2021} performed 3D transient simulations of the bulk shear rheology of particle suspensions in Boger fluids for a range of $Wi \leq 6$ and finite strains and calculated the per-particle extra viscosity of the suspension. They categorize the per-particle viscosity calculations as contributions from either the particle-induced fluid stress (PIFS) or stresslet contributions. It was concluded that in the dilute limit, the PIFS increases monotonically with shear strain; however, the stresslet contribution shows a non-monotonic evolution to steady state at large $Wi$. The total combined per-particle viscosity contribution, however, shows a monotonic evolution to steady state. Additionally, \citet{Shaqfeh2021} performed multiple-particle simulations using the IB method to examine the effect of particle-particle hydrodynamic interactions on the per-particle viscosity calculation. It was concluded from transient immersed boundary simulations that the steady values of per-particle viscosity increase with $\phi$, but the per-particle contribution to the primary normal coefficient was independent of $\phi$ (up to 10\% particle volume fraction) at the two values of Weissenberg number investigated ($Wi$ = 3 and 6).
The nonlinear interactions of fluid inertia with viscosity and elasticity cause unexpected phenomena (e.g. negative wakes, shear-induced migration/chaining) in the dynamics of particles suspended in a viscoelastic matrix \cite{Avino2012,Loon2014,Jaensson2016}. These interactions may be expected to change the evolution in the viscoelastic drag correction factor with Weissenberg number. Extensive research efforts over the past 20 years have been directed at the elucidation of the role of fluid rheology and wall effects on the drag of a sphere and the wake developed behind it. Excellent reviews are available in the literature \cite{Caswell2004, walters1992, mckinley2002steady}. There have been a number of computational studies investigating the effect of fluid rheology on the motion of the sphere; however, a common limitation is that results are not convergent for Weissenberg numbers beyond a certain critical limiting value (i.e. typically $Wi_c\approx$ 2 or 3) \cite{Caswell2004}. In this work we extend viscoelastic suspension flow calculations up to $Wi=4$ by employing the log-conformation approach \cite{Fattal2004,Fattal2005,Habla2014,Francisco2017}.
Our previous work \cite{Salah2019} proposed, for the first time, a closure model for the viscoelastic drag coefficient of a single sphere translating in a quasi-linear viscoelastic fluid, which can be well described by the Oldroyd-B constitutive equation. In the present work, we extend the proposed model in order to be able to describe moderate volume fraction viscoelastic suspensions ($\phi\leq 0.2$), which are commonly encountered in a wide range of industrial operations. We focus on non-colloidal suspensions with Newtonian and viscoelastic fluid matrices, and the net effect of other particles in the flow is studied by computing an effective average drag force acting on a particle \cite{TannerHousiadas2014}.
For Newtonian matrices, \citet{Brinkman1947} presented a modification of Darcy's equation in porous media, in which the viscous force exerted on a dense suspension of rigid particles by the Newtonian fluid is calculated. The main idea is that, from the point of view of a single particle, the other distributed particles effectively act as a porous or Darcy medium. \citet{Durlofsky1987} performed numerical simulations to compare against the predictions of Brinkman's model; however, they only obtained good agreement with the analytical solution of Brinkman for very small volume fractions up to $\phi=5\%$, emphasizing the importance of including the configurational effects of more distant particles. The reason that the Brinkman approach starts to break down at what appears to be a rather low volume fraction is related to the fact that at $\phi=5\%$ a characteristic inter-particle spacing is only slightly larger than four particle radii; so that neighboring particles are in fact quite close together and hydrodynamically interact. For dense random arrays of spheres ($\phi \ge 40\%$) the empirical Carman-Kozeny (CK) relation \cite{Carman1937} is found to describe well the drag force exerted by the fluid flow in the dispersed phase. The idea behind the CK relation is that the suspended medium can be considered as a system of tortuous channels, from which the pressure drop across the porous medium is calculated using the Darcy equation \cite{Darcy1937}. In the present work, we report results for volume fractions of $\phi=~$ 4\%, 8\%, 12\%, 16\% and 20\%, representative of semi-dilute non-colloidal suspension behaviour \cite{Tanner2013}.
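For reference, the Carman-Kozeny relation mentioned above can be written as a pressure-gradient law. The sketch below is our own illustration, assuming the commonly quoted Kozeny constant of 180 for a packing of monodisperse spheres of diameter $d$ at porosity $\varepsilon = 1-\phi$, i.e. $\Delta p / L = 180\,\eta\,(1-\varepsilon)^2 u / (\varepsilon^3 d^2)$:

```python
def carman_kozeny_pressure_gradient(viscosity, velocity, diameter, phi):
    """Carman-Kozeny pressure gradient (Pa/m) for creeping flow through a
    random packing of spheres of diameter d at solids fraction phi.
    Assumes the commonly quoted Kozeny constant of 180."""
    eps = 1.0 - phi  # porosity
    return 180.0 * viscosity * velocity * (1.0 - eps) ** 2 / (eps ** 3 * diameter ** 2)

# Illustrative: water-like fluid, 1 mm spheres, phi = 0.4, u = 1 mm/s
print(carman_kozeny_pressure_gradient(1.0e-3, 1.0e-3, 1.0e-3, 0.4))  # ~133 Pa/m
```

The strong $(1-\varepsilon)^2/\varepsilon^3$ dependence illustrates why such relations are reliable only for dense packings, and why separate closures are needed in the semi-dilute regime studied here.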
Additionally, we note that fully-resolved particle-laden viscoelastic solvers \cite{Fernandes2019,Shaqfeh2020} are presently only able to directly resolve $O(10^3)$ particles \cite{Hager2014}, which limits their application to large industrial case studies \cite{Steven2020}. To overcome this limitation we implement an Eulerian-Lagrangian viscoelastic solver ($DPMviscoelastic$), which employs the closure drag model that we develop here for moderately dense suspensions with a viscoelastic matrix fluid to quantify the momentum exchange between the two constituent phases (a moderate volume fraction of rigid spherical particles and a non-shear thinning viscoelastic matrix fluid).
The present paper describes the simulation method employed to measure the drag force on randomly-dispersed particle arrays immersed in viscoelastic fluids that can be described by the quasi-linear Oldroyd-B constitutive equation, which predicts constant values of the shear viscosity and first normal stress coefficient. The extension of this work to more complex viscoelastic matrix-based fluids, for example models predicting shear-thinning fluid behavior (such as the Giesekus model), is also currently being studied. In this case the magnitude of the stresses is typically smaller and easier to resolve computationally, but the dimensionality of the problem is higher due to the additional nonlinear model parameter(s) required to characterize the shear-thinning. To accomplish the required developments, an open-source library, OpenFOAM \cite{openfoam2019}, was modified to be able to calculate the average drag force acting on random particle arrays. The latter information is then used to formulate a new drag force correlation for the creeping flow of an Oldroyd-B fluid through randomly distributed arrays of spherical particles, with solid volume fractions $0<\phi\leq 0.2$, over a range of Weissenberg numbers $(Wi \leq 4)$. To ensure stability at high $Wi$, the polymer stress contribution is computed using the log-conformation formulation \cite{Fattal2004, Fattal2005, Habla2014, Francisco2017}. Finally, to the best of the authors' knowledge, the current work presents for the first time an Eulerian-Lagrangian solver, in which the fluid continuum phase has viscoelastic rheology and the dynamics of the particulate phase are computed following a discrete particle method (DPM). The momentum transfer between both phases is calculated using the drag force law proposed in the present work (see Section~\ref{sec:viscoelasticDrag}). 
The newly-developed $DPMviscoelastic$ solver is employed to predict particle settling effects in rectangular channels (which mimics a vertical fissure such as those encountered in gas/oil extractions during hydraulic fracturing operation) and in an annular pipe (a model of pumped transport of a suspension along a drill string during horizontal drilling operations).
The paper is organized in the following manner: in Section 2 we present the governing equations and numerical method used to compute the solution of the appropriate balance equations for the viscoelastic fluid flows considered in this work. Section 3 provides the details of the simulation methodology used to compute the drag force values exerted by the fluid on the particulate phase. In Section 4 these results are verified for Stokes flow of a particle suspension dispersed in a viscous Newtonian fluid. The results are then used to derive a new closure law for the average fluid-particle drag force acting on random arrays of spheres immersed in an Oldroyd-B fluid. Section 5 is dedicated to description of the development of the $DPMviscoelastic$ solver for up-scaled three-dimensional simulations of particle-laden viscoelastic flows, in which the dispersed phase is modeled by the discrete particle method. Additionally, we illustrate the capability of the newly-developed code to solve challenging physical problems, specifically two canonical proppant transport problems, which are commonly encountered in hydraulic fracturing operations. Finally, in Section 6, we summarize the main conclusions of this work.
\section{Governing equations and numerical method}
\subsection{Governing equations}
\label{subsec:mathematicalmodel}
Following the work of \citet{Salah2019}, as a first step, we consider the problem of moderately dense ($0<\phi\leq 0.2$) suspensions constituted from a continuum viscoelastic matrix fluid and a monodisperse static random array of rigid spheres. For the continuum fluid phase the familiar Oldroyd-B constitutive model was chosen, representing an elastic fluid with a constant shear viscosity, which has been shown by \citet{Dai2020} to fairly well describe the response of highly elastic Boger fluid suspensions in steady shear and uniaxial elongation. By adopting the Oldroyd-B model, we confine the dimensionality of the viscoelastic fluid calculations to only two degrees of freedom (i.e. the relaxation and retardation times or, equivalently, the relaxation time and the retardation ratio $\beta$). However, the consideration of moderately dense suspensions increases the dimensionality of the problem to four degrees of freedom, due to the addition of two more variables; the particle volume fraction present in the suspension and the number of random particle configurations studied (to obtain statistical significance in the DNS results). Thus, for the current study, we have also fixed the retardation ratio of the fluid at $\beta=\eta_S/(\eta_S+\eta_P)=\eta_S/\eta_0=1/2$, where $\eta_0$ is the total matrix fluid viscosity, with $\eta_S$ and $\eta_P$ being the solvent and polymeric viscosities, respectively. Within these constraints, the drag correction expression developed in this study will form a foundation for higher-dimensional parameterizations, which should also consider a range of solvent viscosities as well as the effect of more complex fluid rheology (e.g. shear-thinning) on random arrays of particles in viscoelastic fluids, by using machine learning algorithms such as convolutional neural networks to capture the non-linear effects of all constitutive parameters on the resulting drag coefficient expressions acting on the particle arrays.
The dimensionless conservation equations governing transient, incompressible and isothermal laminar flow of an Oldroyd-B fluid are given by
\begin{eqnarray}
\nabla \cdot \textbf{\~u} = 0,
\label{eqn:continuitydimensionless}
\end{eqnarray}
\begin{eqnarray}
Re_a\left(\frac{\partial\textbf{\~u}}{\partial \textit{\~t}} + \textbf{\~u}\cdot\nabla\textbf{\~u}\right)-\nabla^2\textbf{\~u} = -\nabla \textit{\~p} -\nabla\cdot\left[(1-\beta) \nabla\textbf{\~u}\right] + \nabla \cdot \tilde{\boldsymbol\tau_P},
\label{eqn:momentumdimensionless}
\end{eqnarray}
\begin{eqnarray}
\tilde{\boldsymbol\tau_P} + Wi \left(\frac{\partial\tilde{\boldsymbol\tau_P}}{\partial \textit{\~t}} + \textbf{\~u} \cdot \nabla \tilde{\boldsymbol\tau_P} - \tilde{\boldsymbol\tau_P} \cdot \nabla \textbf{\~u} - \nabla \textbf{\~u}^T \cdot \tilde{\boldsymbol\tau_P} \right) = (1-\beta) \left(\nabla \textbf{\~u} + \nabla \textbf{\~u}^T\right),
\label{eqn:oldroydBeqdimensionless}
\end{eqnarray}
where the following dimensionless quantities are used
\begin{eqnarray}
\textbf{\~x}=\frac{\textbf{x}}{L},~\textbf{\~u}=\frac{\textbf{u}}{U},
~\textit{\~t}=\frac{U}{L}t,~\textit{\~p}=\frac{L}{\eta_0 U}p,
~\tilde{\boldsymbol\tau}_P=\frac{L}{\eta_0 U}\boldsymbol\tau_P,
\end{eqnarray}
with $L$ and $U$ being the characteristic length and velocity values, respectively, $\textbf{x}$ the position vector, $\textbf{u}$ the velocity vector, $t$ the time, $p$ the pressure and $\boldsymbol \tau_P$ the polymeric contribution to the extra-stress tensor. As mentioned above, the retardation ratio is fixed with the value of $\beta=0.5$ for all the calculations performed in this work.
For the present problem with $L=a$, where $a$ is the radius of a single suspended particle, and $U$ the average fluid velocity at the inlet of the channel, we define the Reynolds and Weissenberg numbers as follows,
\begin{subequations}
\begin{align}
\label{eqn:reynolds}
Re_D = 2Re_a = \frac{2 a\rho U}{\eta_0},\\
\label{eqn:weissenberg}
Wi = \frac{\lambda U}{a},
\end{align}
\end{subequations}
where $\rho$ is the fluid density and $\lambda$ is the relaxation time. Notice that for the case of a Newtonian fluid flow $\lambda=0$ and $\eta_0 = \eta_S$.
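For concreteness, the two dimensionless groups above can be evaluated as in the following short sketch; the property values used here are purely illustrative and are not data from our simulations:

```python
def reynolds_D(rho, U, a, eta0):
    """Particle-diameter Reynolds number, Re_D = 2*a*rho*U/eta0."""
    return 2.0 * a * rho * U / eta0

def weissenberg(lam, U, a):
    """Weissenberg number, Wi = lambda*U/a (lam is the relaxation time)."""
    return lam * U / a

# Hypothetical values: water-like density, 0.5 mm particle radius,
# Boger-fluid-like total viscosity and relaxation time.
rho, U, a, eta0, lam = 1000.0, 1e-3, 5e-4, 1.0, 2.0
Re_D = reynolds_D(rho, U, a, eta0)   # creeping-flow regime, Re_D << 1
Wi = weissenberg(lam, U, a)          # elastic effects significant for Wi = O(1)
```

Note that in this construction the creeping-flow condition ($Re_D \ll 1$) and an order-one Weissenberg number can be satisfied simultaneously, which is the parameter regime explored throughout this work.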
To ensure computational stability over a wide range of fluid elasticities, including suspension flows at high Weissenberg number, we incorporate the log-conformation approach for calculating the polymeric extra-stress tensor. In the present work, we follow the implementation of the log-conformation approach in the OpenFOAM computational library \cite{openfoam2019}, presented in \citet{Habla2014} and \citet{Francisco2017}. Details on the mathematical formulation behind the log-conformation approach can be found in the original works of \citet{Fattal2004,Fattal2005}.
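The transform at the heart of the log-conformation approach can be illustrated with a minimal sketch (this is not the solver's implementation, only the forward/backward change of variables): the symmetric positive-definite conformation tensor $\mathbf{c}$ is diagonalized and its logarithm $\boldsymbol\Psi = \log \mathbf{c}$ is evolved instead, which preserves positive-definiteness of $\mathbf{c} = \exp \boldsymbol\Psi$ by construction. For the Oldroyd-B model the polymeric stress follows from $\boldsymbol\tau_P = (\eta_P/\lambda)(\mathbf{c}-\mathbf{I})$.

```python
import numpy as np

def log_of_spd(c):
    """Matrix logarithm of a symmetric positive-definite tensor via
    eigendecomposition: c = R diag(w) R^T  ->  psi = R diag(log w) R^T."""
    w, R = np.linalg.eigh(c)
    return R @ np.diag(np.log(w)) @ R.T

def exp_of_sym(psi):
    """Inverse transform: c = exp(psi), guaranteed SPD."""
    w, R = np.linalg.eigh(psi)
    return R @ np.diag(np.exp(w)) @ R.T

def polymer_stress(c, eta_P, lam):
    """Oldroyd-B closure: tau_P = (eta_P/lam) * (c - I)."""
    return eta_P / lam * (c - np.eye(c.shape[0]))

# Example: an SPD conformation tensor and its round trip through psi-space.
c = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.0],
              [0.0, 0.0, 1.0]])
psi = log_of_spd(c)
c_back = exp_of_sym(psi)
```

In the actual finite-volume solver the constitutive equation itself is rewritten and advanced in terms of $\boldsymbol\Psi$, which removes the exponential stress profiles that destabilize the standard formulation at high $Wi$.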
\subsection{Numerical method}
The equations presented in Section~\ref{subsec:mathematicalmodel} are discretized using the finite-volume method (FVM) implemented in the OpenFOAM framework \cite{openfoam2019}.
Pressure-velocity coupling was accomplished using segregated methods, in which the continuity equation, Eq.~(\ref{eqn:continuitydimensionless}), is used to formulate an equation for the pressure from a semi-discretized form of the momentum balance, Eq.~(\ref{eqn:momentumdimensionless}) \cite{Ferziger1995}. The resulting equation set is solved by a segregated approach, using the SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations-Consistent) algorithm \cite{Doomaal1984}, which does not require under-relaxation of pressure and velocity (except for non-orthogonal grids, where the pressure needs to be under-relaxed \cite{Francisco2017}). Additionally, the computational cost per iteration of this algorithm is lower than in the PISO (Pressure-Implicit Split Operator) algorithm \cite{Issa1986}, because the pressure equation is only solved once per cycle. The coupling between stress and velocity fields is established using a special second-order derivative of the velocity field in the explicit diffusive term added by the iBSD (improved both-sides diffusion) technique \cite{Fernandes2017}. The velocity gradient is calculated using a second-order accurate least-squares approach, and the diffusive term in the momentum balance is discretized using second-order accurate linear interpolation. For non-orthogonal meshes the minimum correction approach is used, as explained in \citet{Jasak1996}, in order to retain second-order accuracy. The advective terms in the momentum and constitutive equations are discretized using the high-resolution scheme CUBISTA \cite{Alves2003} following a component-wise and deferred correction approach, enhancing the numerical stability. The time derivatives are discretized with the bounded second-order implicit Crank-Nicolson scheme \cite{Crank1947}.
Here, a Poisson-type equation for the pressure field is solved with a conjugate gradient method with a Cholesky preconditioner, and the linear systems of equations for the velocity and stress are solved using BiCGstab with Incomplete Lower-Upper (ILU) preconditioning \cite{Lee2003, Jacobs1980, Ajiz1984}. The absolute tolerance for pressure, velocity and stress fields was set as $10^{-10}$. The simulations are performed including transient terms, but the time marching is used only for relaxation purposes as we will just be looking for the steady-state solution, i.e., when the drag coefficient ceased to vary in the third decimal place.
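The steady-state stopping rule described above (time marching continues until the monitored drag coefficient ceases to vary in the third decimal place) can be sketched as follows; the function name and sample history are illustrative, not taken from the solver:

```python
def reached_steady_state(history, decimals=3):
    """True once the last two monitored samples agree when rounded to
    `decimals` decimal places (here: the drag coefficient)."""
    if len(history) < 2:
        return False
    return round(history[-1], decimals) == round(history[-2], decimals)

# Hypothetical monitor history of the drag coefficient during relaxation:
cd_history = [4.71, 4.492, 4.4103, 4.4021, 4.4019]
done = reached_steady_state(cd_history)   # last two samples round to 4.402
```

Because the transient terms are retained only for relaxation toward steady state, the accuracy of the time integration does not affect the converged drag values reported below.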
\section{Simulation methodology}
\label{sec:simmeth}
In order to develop our computational methodology, we address only non-colloidal suspensions with viscoelastic matrices, and focus on both dilute and moderately dense suspensions. Following \citet{TannerHousiadas2014}, we consider the effect of other particles in the flow by assuming that from the point of view of a single particle at any instant, the remaining particles act effectively like a porous medium \cite{Brinkman1947}. Fig.~\ref{fig:DNSChannel} illustrates schematically the computational domain, which is used here to simulate the steady-state flow of unbounded viscoelastic fluids around random arrays of spheres. The porous media considered in the present work have volume fractions $0<\phi\leq 0.2$, with particles that are randomly distributed in a box of square cross-section with dimensions $L\times H \times H$, by means of a constrained approach to prevent particle-particle overlap. The computational domain has a total length of $TL = 200a$ and a square cross-section with height $H=8a$. For Newtonian fluids, the applied boundary conditions are fixed velocity, $\mathbf{u}_{in}=(U,0,0)$, and zero pressure gradient at the inlet. Periodic boundary conditions are applied for the front, back, top and bottom boundaries. At the outlet we enforce zero velocity gradient and zero reference pressure. Finally, on each of the sphere surfaces a no-slip velocity condition is applied. For the viscoelastic calculations the applied boundary conditions for velocity and pressure are the same as those listed for the Newtonian fluid, and for the polymeric extra-stress components, we also apply zero gradient boundary condition at the outlet, periodic boundary conditions at the front, back, top and bottom boundaries, zero stress at the inlet, and linear extrapolation, from the adjacent fluid cell centroid \cite{Francisco2017}, at the sphere surface.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{Figures/DNSChannelV5.pdf}
\caption{Schematic of the channel cross-section used for DNS of random arrays of spheres immersed in Newtonian and quasi-linear Oldroyd-B viscoelastic fluids. The particle volume fractions considered include $\phi = 0.04,~0.08,~0.12,~0.16$ and $0.20$.}
\label{fig:DNSChannel}
\end{figure}
The number of spherical particles is chosen such that the solid volume fraction in the simulation $\phi = \frac{n_s (4/3) \pi a^3}{LH^2}$ is as close as possible to the desired packing fraction, where $n_s$ is the number of spheres enclosed in the square duct (whose volume is $LH^2$) and $a$ is the sphere radius. For the purpose of simulating the proppant transport phenomena that occur during hydraulic fracturing operations, we consider in this work moderately dense suspensions where the particle volume fraction range is $0<\phi\leq 0.2$ \cite{barbati2016complex,Steven2020,Chris2018}. Following \citet{Hill2001} we note that the system to be studied must include a sufficient number of spheres, $n_s$, to minimize artifacts and statistical oscillations coming from the finite size of the computational domain. In practice, $n_s$ is chosen to be large enough to avoid periodic artifacts (typically $24\leq n_s \leq 122$, see Table~\ref{tab:newtonian} in Section~\ref{sec:newtonianStokes}), and statistical uncertainty is reduced by ensemble averaging the results from $n_c$ random sphere configurations (in this work $n_c=5$ was found to be sufficiently large to obtain a standard error for the average below $5\%$ of its actual value, as shown in Section~\ref{sec:DNS}, Tables~\ref{tab:newtonian} and \ref{tab:oldroydB}).
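A minimal sketch of this sphere-count selection and of a constrained (rejection-sampling) placement is given below. The helper names are hypothetical, and a full implementation would also apply the minimum-image convention across the periodic lateral boundaries when testing for overlap; here only direct pairwise distances are checked.

```python
import math
import random

def sphere_count(phi, L, H, a):
    """Number of spheres n_s giving a volume fraction closest to phi in a
    duct of volume L*H^2:  phi = n_s * (4/3) * pi * a^3 / (L * H^2)."""
    return round(phi * L * H * H / (4.0 / 3.0 * math.pi * a ** 3))

def place_spheres(n_s, L, H, a, max_tries=100000, seed=0):
    """Rejection sampling: draw candidate centers until n_s non-overlapping
    spheres are placed (centers at least 2a apart)."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n_s and tries < max_tries:
        tries += 1
        p = (rng.uniform(a, L - a), rng.uniform(0.0, H), rng.uniform(0.0, H))
        if all(sum((p[i] - q[i]) ** 2 for i in range(3)) >= (2.0 * a) ** 2
               for q in centers):
            centers.append(p)
    return centers
```

With an assumed confinement length $L=40a$ and $H=8a$, this count reproduces the $n_s$ values listed in Table~\ref{tab:newtonian} (e.g. $n_s=24$ at $\phi=0.04$ and $n_s=122$ at $\phi=0.20$). For moderately dense packings, plain rejection sampling becomes slow, and more sophisticated algorithms (e.g. event-driven growth) are preferable.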
The numerical model employed in this work was comprehensively tested against a similar computational challenge (see \citet{Salah2019} for the bounded and unbounded flow past a single sphere in a fluid described by the Oldroyd-B constitutive model). The meshes employed in this work have the same level of mesh refinement as the most refined mesh (M1) used by \citet{Salah2019}, which resulted from a grid refinement study.
In all cases, the magnitude of the fluid velocity imposed at the inflow was such that the Reynolds number based on the particle diameter $D$, Eq.~(\ref{eqn:reynolds}), was equal to $Re_D=0.05$, representative of creeping flow conditions. At this point we should note that there exists some ambiguity in the literature \cite{Hoef2005} on the proper definition of the drag force, specifically, if the pressure gradient term should contribute to the drag force or not. It is known that the two definitions differ by a factor $(1-\phi)$, i.e., the relation between the total average force that the fluid exerts on each particle, $\textbf{F}_t$, and the drag force, $\textbf{F}_d$, which results from the friction between the particle and the fluid at the surface of the particle, is $\textbf{F}_t=\textbf{F}_d/(1-\phi)$. Note that in some literature (\citet{Hill2001}), the total force on the particle is defined as the drag force. In this work, the results will be presented in terms of an average dimensionless drag force, $\langle F \rangle$, which is defined as
\begin{eqnarray}
\langle F \rangle = \frac{\langle\textbf{F}_d\rangle\cdot\mathbf{e_\textit{x}}}{6\pi \eta_0 a \overline{u}}=\frac{(1-\phi)\langle\textbf{F}_{t,i}\rangle\cdot\mathbf{e_\textit{x}}}{6\pi \eta_0 a \overline{u}},
\label{eq:dragforce}
\end{eqnarray}
where $\overline{u}=(1-\phi)U$ is the superficial fluid velocity, $\langle\textbf{F}_{t,i}\rangle$ is the average drag force on the random array of spheres computed as $\langle\textbf{F}_{t,i}\rangle=\frac{1}{n_s}\displaystyle\sum_{i=1}^{n_s}\textbf{F}_{t,i}$, where $\textbf{F}_{t,i}$ is the drag force on sphere $i$ from an ensemble of $n_s$ spheres, and $\mathbf{e_\textit{x}}$ is the unit vector in the $x$-direction. The denominator on the right-hand side of Eq.~(\ref{eq:dragforce}) is the Stokes drag force, obtained in the limit of infinite dilution and when inertial effects can be neglected \cite{Stokes1851}. In this work, the uncertainty in the computed average force, referred to as the standard error, is calculated from
\begin{eqnarray}
\Delta \langle F\rangle = \sqrt{\frac{\displaystyle\frac{1}{n_c}\displaystyle\sum_{i=1}^{n_c}\left(\langle F\rangle_i-\overline{\langle F\rangle_i}\right)^2}{n_c-1}}.
\label{eq:standarderror}
\end{eqnarray}
Note that the factor $n_c-1$ in the denominator on the right-hand side of Eq.~(\ref{eq:standarderror}) corrects for the fact that there are $n_c-1$ degrees of freedom, since the average is used to calculate the variance in the numerator \cite{Bevington1992}. This is important since the number of random configurations, $n_c$, used to calculate the average of $\langle F \rangle$ is small.
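The ensemble averaging and the standard-error estimate of Eq.~(\ref{eq:standarderror}) amount to the following short sketch (function name is illustrative; the input values are hypothetical per-configuration averages, not simulation data):

```python
import math

def ensemble_mean_and_error(F):
    """Mean over n_c configuration averages and its standard error:
    Delta<F> = sqrt( sum_i (F_i - mean)^2 / (n_c * (n_c - 1)) )."""
    n_c = len(F)
    mean = sum(F) / n_c
    var = sum((f - mean) ** 2 for f in F) / (n_c - 1)  # unbiased sample variance
    return mean, math.sqrt(var / n_c)                  # standard error of the mean

# Hypothetical <F> values from n_c = 5 random configurations:
F_configs = [1.5, 1.6, 1.55, 1.65, 1.7]
mean_F, err_F = ensemble_mean_and_error(F_configs)
```

Dividing the sample variance (itself computed with the $n_c-1$ correction) by $n_c$ yields the standard error of the mean, which is the quantity tabulated as $\Delta\langle F\rangle/\langle F\rangle$ below.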
\section{Results: simulation of flow in random particle arrays}
\label{sec:DNS}
\subsection{Verification: Stokes flow of suspensions with Newtonian fluid matrices}
\label{sec:newtonianStokes}
In this section the creeping flow of random arrays of spheres surrounded by a Newtonian fluid is studied. This way the simulation methodology presented in Section~\ref{sec:simmeth} can be verified against results found in the literature.
One of the earliest drag force models for describing Stokes flow through an array of spherical particles is the \citet{Carman1937} relation,
\begin{eqnarray}
\langle F \rangle = \frac{10\phi}{(1-\phi)^2}.
\label{eq:carman}
\end{eqnarray}
This relation is only valid for dense arrays $((1-\phi)\ll 1)$, which can be seen by the fact that it does not have the correct limit $\langle F \rangle\to 1$ for $\phi\to 0$. For the limit of dilute systems, \citet{Kim1985} derived a closed-form expression for $\langle F\rangle$:
\begin{eqnarray}
\langle F \rangle = (1-\phi)\left(1 + \frac{3}{\sqrt{2}}\phi^{1/2} + 16.456\phi + \frac{135}{64}\phi~\text{ln}~ \phi + O(\phi^{3/2})\right).
\label{eq:kim}
\end{eqnarray}
The computational results obtained by \citet{Hill2001}, using Lattice-Boltzmann simulations, were found to be in very good agreement with Kim and Russel's~\cite{Kim1985} drag force expression (Eq.~(\ref{eq:kim})) for dilute arrays of particles $(\phi \le 0.1)$.
Subsequently, several expressions have been developed to find an accurate drag force model that is valid over the full solid fraction range. Using a modification of the Darcy equation \cite{Darcy1937}, \citet{Brinkman1947} derived the well-known drag force model,
\begin{eqnarray}
\langle F \rangle = (1-\phi)\left(1 + \frac{3}{4}\phi\left(1-\sqrt{\frac{8}{\phi}-3}\right)\right)^{-1}.
\label{eq:brinkman}
\end{eqnarray}
\citet{Koch1999} proposed the following expression for the drag force:
\begin{eqnarray}
\langle F\rangle = \begin{cases}
\displaystyle\frac{(1-\phi)\left(1+\frac{3}{\sqrt{2}}\phi^{1/2} + 16.456\phi + \frac{135}{64}\phi~\text{ln}~\phi\right)}{1 + 0.681\phi - 8.48\phi^2 + 8.16\phi^3} & \text{for } \phi\leq 0.4\\
\displaystyle\frac{10\phi}{(1-\phi)^2} & \text{for } \phi\geq 0.4,
\end{cases}
\label{eq:koch}
\end{eqnarray}
which for low solid volume fraction is equal to Eq.~(\ref{eq:kim}) to $O(\phi~\text{ln}~\phi)$, whereas for large solid volume fractions it reduces to the Carman expression, Eq.~(\ref{eq:carman}). Finally, \citet{Hoef2005} presented a best fit to simulation data obtained using a Lattice-Boltzmann method (for $\phi \le 0.6$), which takes the following simple form:
\begin{eqnarray}
\langle F \rangle = \frac{10\phi}{(1-\phi)^2}+(1-\phi)^2(1+1.5\sqrt{\phi}),
\label{eq:hoef}
\end{eqnarray}
which is the Carman expression with a correction term for the limiting case of $\phi\to 0$. Recently, \citet{faroughi2015unifying} predicted the correction to the drag coefficient for monosized spherical particles as:
\begin{eqnarray}
\langle F \rangle = \left(\frac{1-\displaystyle\frac{\phi}{\phi_m}}{1-\phi}\right)\times\displaystyle\left(\frac{1-\displaystyle\frac{\phi}{\phi_m}}{1-\phi}\right)^{\displaystyle\frac{-2.5\phi_m}{1-\phi_m}}\times\left[1-\frac{3}{2}\beta\left(\frac{\phi}{\phi_m}\right)^{1/3}+\frac{\beta^3}{2}\frac{\phi}{\phi_m}\right]^{-1},
\label{eq:salah}
\end{eqnarray}
where $\phi_m$ denotes the maximum random close packing fraction, which, for monosized spherical particles, we take to be $\phi_m\approx 0.637$ \cite{Boyer2011}, and $\beta$ is a geometrical proportionality constant related to the shape of the streamtube in the real flow field. The value of $\beta = 0.65$ provides the best fit to the numerical simulations shown in Fig.~\ref{fig:newtonian}. The first term in Eq.~(\ref{eq:salah}) accounts for a vertical drag correction: a reduction in drag, expected even at low Reynolds number, because particles aligned with gravity experience significant acceleration due to viscous interactions (the Smoluchowski effect). The second term in Eq.~(\ref{eq:salah}) accounts for the change in the effective dynamic viscosity of the suspension due to the existence of a cloud of particles inside the medium. Finally, the last term in Eq.~(\ref{eq:salah}) accounts for the horizontal drag correction resulting from the hindrance associated with the return flow of ambient fluid in a bounded system.
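For reference, several of the Newtonian correlations quoted above are transcribed in the sketch below, which is convenient for cross-checking the DNS values in the dilute-to-moderate range (the function names are ours; each body is a direct transcription of the corresponding equation):

```python
import math

def F_carman(phi):
    """Carman (Eq. carman): valid for dense arrays."""
    return 10.0 * phi / (1.0 - phi) ** 2

def F_kim_russel(phi):
    """Kim & Russel dilute expansion (Eq. kim), truncated at O(phi^{3/2})."""
    return (1.0 - phi) * (1.0 + 3.0 / math.sqrt(2.0) * math.sqrt(phi)
                          + 16.456 * phi
                          + 135.0 / 64.0 * phi * math.log(phi))

def F_brinkman(phi):
    """Brinkman (Eq. brinkman)."""
    return (1.0 - phi) / (1.0 + 0.75 * phi * (1.0 - math.sqrt(8.0 / phi - 3.0)))

def F_van_der_hoef(phi):
    """van der Hoef et al. best fit (Eq. hoef): Carman plus a dilute-limit
    correction so that F -> 1 as phi -> 0."""
    return (10.0 * phi / (1.0 - phi) ** 2
            + (1.0 - phi) ** 2 * (1.0 + 1.5 * math.sqrt(phi)))
```

As a quick sanity check, the Brinkman expression at $\phi=0.04$ evaluates to approximately $1.58$, consistent with the DNS value reported in Table~\ref{tab:newtonian}, while the van der Hoef fit correctly approaches unity as $\phi\to 0$.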
Fig.~\ref{fig:newtonian} shows our finite-volume simulation results for the dimensionless drag force $\langle F \rangle$ in a random array of spheres, immersed in a Newtonian fluid, at solid volume fractions up to $\phi=0.2$. Additionally, the results of \citet{Hill2001} obtained with lattice-Boltzmann simulations, the \citet{Carman1937} (Eq.~(\ref{eq:carman})), \citet{Kim1985} (Eq.~(\ref{eq:kim})), \citet{Brinkman1947} (Eq.~(\ref{eq:brinkman})), \citet{Koch1999} (Eq.~(\ref{eq:koch})), \citet{Hoef2005} (Eq.~(\ref{eq:hoef})) and \citet{faroughi2015unifying} (Eq.~(\ref{eq:salah})) expressions are also represented in Fig.~\ref{fig:newtonian} for comparison. For all the simulated particle volume fractions, the dimensionless drag force $\langle F \rangle$ obtained from our numerical algorithm agrees (within an average error of 5\% and a maximum error of 8.9\%) with both the theories for dilute and semi-concentrated suspensions and with the lattice-Boltzmann numerical results of \citet{Hill2001}. Notice that up to moderate volume fractions, the dimensionless average drag force computed for the case in which the locations of all of the spheres are in the interior of the channel region (i.e. where we take into account an excluded volume region provided by rigid bounding walls) is similar (within 2\%) to the results obtained when we allow the surface of the individual spheres to overlap the periodic channel boundaries, i.e. where we neglect the excluded volume provided by rigid bounding walls (see results in Fig.~\ref{fig:newtonian} and Table~\ref{tab:newtonian} for $\phi=0.2$). For a more detailed discussion regarding the excluded volume provided by impenetrable channel side-walls refer to Appendix A.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{Figures/nonDimensionalDragDe0V11.pdf}
\caption{The average value of the dimensionless drag force (multiplied by the porosity squared) for creeping flow of a Newtonian fluid past an array of spheres as a function of the packing fraction $\phi$ for $n_c=5$ different configurations. The symbols represent the simulation data, from this work (squares and diamond) and from \citet{Hill2001} (circles). Also shown are the correlations by \citet{Carman1937} (grey line), \citet{Brinkman1947} (dotted line), \citet{Kim1985} (solid line), \citet{Koch1999} (dashed line), \citet{Hoef2005} (dotted-dashed line) and \citet{faroughi2015unifying} (green line).}
\label{fig:newtonian}
\end{figure}
The sphere volume fractions, the number of spheres used in each simulation, the number of mesh points, the dimensionless average drag force $\langle F \rangle$ on the spheres and the respective standard errors are listed in Table~\ref{tab:newtonian}. In all cases, the standard errors of $\langle F \rangle$, which measure the statistical accuracy achieved by averaging the results with five random configurations, were below $3.5\%$ of the average.
\begin{table}[H]
\centering
\begin{threeparttable}
\scriptsize
\caption{Parameters used for the creeping flow of a Newtonian fluid past random arrays of spheres. $\Delta \langle F \rangle$ is the standard error in the average of $\langle F \rangle$.}
\centering
\begin{tabular}{lrrccc}
\toprule
$\phi$ & $n_s$ & $n_c$ & Number of mesh points & $\langle F \rangle$ & $\Delta \langle F \rangle/\langle F \rangle$ \\
\midrule
0.04 & 24 & 5 & 1341845 & 1.581 & 0.028 \\
0.08 & 49 & 5 & 1293067 & 2.231 & 0.030 \\
0.12 & 73 & 5 & 1249529 & 2.734 & 0.024 \\
0.16 & 98 & 5 & 1203460 & 3.485 & 0.015 \\
0.20 & 122 & 5 & 1158730 & 4.402 & 0.035 \\
0.20$^*$ & 122 & 5 & 1199453 & 4.481 & 0.011 \\
\bottomrule
\end{tabular}
\scriptsize $^*$ includes particles located in the excluded volume region near the periodic channel walls
\label{tab:newtonian}
\end{threeparttable}
\end{table}
In Fig.~\ref{fig:streamDe0} we show normalized axial velocity contours obtained from the numerical simulation of random particle arrays in a channel filled with a Newtonian fluid. As can be seen from the velocity contours, as we increase the particle volume fraction more of the fluid is forced to flow through the tortuous paths in the interstitial spaces between the spheres, rather than as a continuous fluid stream that is mildly perturbed by widely separated spheres. This is also visible in the higher magnitude of the fluid velocities near the channel walls where the fluid is squeezed. Notice that for the highest particle volume fraction employed, $\phi=0.2$, we have also conducted simulations where particles can be located in the excluded volume region provided by the rigid bounding walls ($\phi^*=0.2$), and the dimensionless average drag force $\langle F \rangle$ remains similar (approximately 2\% higher) to the case with impenetrable side walls where the particles are located only in the central portion of the channel beyond an excluded volume region of thickness $a$ that is adjacent to the bounding side walls (see Fig.~\ref{fig:newtonian} and Table~\ref{tab:newtonian}). Placing spheres uniformly throughout the entire domain results in a more uniform velocity profile across the channel cross-section.
\begin{figure}[H]
\centering
\captionsetup[subfloatrow]{format = hang, labelfont = up, textfont = up}
\captionsetup{labelfont = up, textfont = up}
\ffigbox{%
\hspace{-4cm}
\begin{subfloatrow}[1]
\includegraphics[width=0.7\textwidth]{Figures/de0phi4V3.pdf}
{\caption{}}
\end{subfloatrow}\\
\hspace{-4cm}
\begin{subfloatrow}[1]
\includegraphics[width=0.7\textwidth]{Figures/de0phi8V3.pdf}
{\caption{}}
\end{subfloatrow}\\
\hspace{-4cm}
\begin{subfloatrow}[1]
\includegraphics[width=0.7\textwidth]{Figures/de0phi12V3.pdf}
{\caption{}}
\end{subfloatrow}\\
\hspace{-4cm}
\begin{subfloatrow}[1]
\includegraphics[width=0.7\textwidth]{Figures/de0phi16V3.pdf}
{\caption{}}
\end{subfloatrow}\\
\hspace{-4cm}
\begin{subfloatrow}[1]
\includegraphics[width=0.7\textwidth]{Figures/de0phi20V1.pdf}
{\caption{}}
\end{subfloatrow}\\
\hspace{-4cm}
\begin{subfloatrow}[1]
\includegraphics[width=0.7\textwidth]{Figures/de0phi20V2_5.pdf}
{\caption{}}
\end{subfloatrow}\\
}{\caption{Cross-sections of the steady flow field around one representative random particle array in a channel filled with Newtonian fluid. Contours of the dimensionless velocity field are represented for each of the particle volume fractions used on the simulations, $\phi=0.04, 0.08, 0.12, 0.16$ and $0.2$, in the $x-y$ plane at $z=0$. The configuration denoted $\phi^*=0.20$ corresponds to the more realistic case of penetrable side-walls in which the randomly placed particles can also be located in the excluded volume region of thickness $a$ near each bounding side wall.}
\label{fig:streamDe0}}
\end{figure}
\subsection{Computational study: drag force on spheres within an Oldroyd-B fluid}
\label{sec:viscoelasticDrag}
We performed finite volume simulations of viscoelastic creeping flows (with the Oldroyd-B constitutive equation) past fixed random configurations of particles at Weissenberg numbers up to $Wi=4$ and for solid volume fractions in the range $0<\phi\leq 0.2$. The numerical resolutions used in the simulations are comparable with those used by \citet{Salah2019}.
The dimensionless average drag force $\langle F(\phi,Wi) \rangle$ on the spheres (see Eq.~(\ref{eq:dragforce})) and the respective standard errors are listed in Table~\ref{tab:oldroydB} for the different kinematic conditions stated above. In all cases, the standard errors of $\langle F(\phi,Wi) \rangle$, which measure the statistical accuracy achieved by averaging the results with five random configurations, were below $4.7\%$ of the average drag force.
\begin{table}[H]
\centering
\begin{threeparttable}
\scriptsize
\caption{Parameters used for the creeping flow of an Oldroyd-B viscoelastic fluid past random arrays of spheres. $\Delta \langle F \rangle$ is the standard error in the average of $\langle F(\phi,Wi) \rangle$. For each case we consider $n_c=5$ configurations and keep $\eta_S/\eta_0=0.5$ and $Re_D=0.05$.}
\centering
\begin{tabular}{cccccccccccc}
\toprule
$Wi$ & $\phi$ & $\langle F \rangle$ & $\Delta \langle F \rangle/\langle F \rangle$ & $Wi$ & $\phi$ & $\langle F \rangle$ & $\Delta \langle F \rangle/\langle F \rangle$ & $Wi$ & $\phi$ & $\langle F \rangle$ & $\Delta \langle F \rangle/\langle F \rangle$ \\
\midrule
\multirow{5}{*}{0.5} &
0.04 & 1.577 & 0.028 & \multirow{5}{*}{1} &
0.04 & 1.563 & 0.030 & \multirow{5}{*}{2} &
0.04 & 1.701 & 0.035 \\
&0.08 & 2.209 & 0.029 & &0.08 & 2.210 & 0.030 & &0.08 & 2.274 & 0.031 \\
&0.12 & 2.685 & 0.024 & &0.12 & 2.826 & 0.031 & &0.12 & 2.896 & 0.025\\
&0.16 & 3.417 & 0.015 & &0.16 & 3.661 & 0.020 & &0.16 & 3.760 & 0.018 \\
&0.20 & 4.229 & 0.022 & &0.20 & 4.476 & 0.022 & &0.20 & 4.709 & 0.020 \\
\midrule
\multirow{5}{*}{3} &
0.04 & 1.785 & 0.045 & \multirow{5}{*}{4} &
0.04 & 1.849 & 0.047 &&&& \\
&0.08 & 2.415 & 0.033 & &0.08 & 2.619 & 0.034 &&&& \\
&0.12 & 3.045 & 0.029 & &0.12 & 3.343 & 0.028 &&&& \\
&0.16 & 3.979 & 0.017 & &0.16 & 4.331 & 0.015 &&&& \\
&0.20 & 4.956 & 0.023 & &0.20 & 5.409 & 0.022 &&&& \\
\bottomrule
\end{tabular}
\label{tab:oldroydB}
\end{threeparttable}
\end{table}
Additionally, we show in Fig.~\ref{fig:statistics} statistical measures of the total drag force exerted by the Newtonian and viscoelastic fluids on the spheres. In the panels of Fig.~\ref{fig:statistics} we show the distribution of the absolute values of the dimensionless drag force exerted by the fluid on each individual sphere, $F_{t,i}$, extracted from the numerical simulations prior to ensemble averaging. Fig.~\ref{fig:statistics}(a) presents the frequency distribution of the drag force exerted by the Newtonian fluid on each individual sphere, $F_{t,i}$, for one representative configuration at a volume fraction $\phi=0.12$. The distribution obtained is approximately Gaussian, with a mean value of $\langle F_{t,i} \rangle = 112.7$ and standard deviation of $\sigma_{F_{t,i}}=27.7$. Fig.~\ref{fig:statistics}(b) shows the effect of the Weissenberg number, $Wi$, on the mean and standard deviation of the frequency distribution of the drag force on the ensemble of spheres, for $\phi=0.20$. Notice that the error bars represent the standard deviation $\pm \sigma_{F_{t,i}}$. The results obtained show that the spread of the results (as measured by the standard deviation values) is similar for all Weissenberg numbers employed in the calculations, and that the mean value of the drag force first decreases for $Wi<1$ and then increases as elasticity starts to play a progressively more important role. Fig.~\ref{fig:statistics}(c) shows the effect of varying the location of the spheres on the average drag force $\langle F_{t,i} \rangle$, for a fixed solid volume fraction $\phi=0.12$, and $Wi=0$ and $2$, through changes in the variable $L$ (which represents the total length of the square-cross section duct in which the particles are confined, see Fig.~\ref{fig:DNSChannel}). From the results obtained it can be concluded that for $L/a \geq 30$ the average drag force distribution has converged.
As before, the standard deviations of the average drag force distribution, represented by the error bars $\sigma_{F_{t,i}}$, obtained for the different $L/a$ lengths are similar. Notice that the non-monotonic behavior of the $\langle F_{t,i} \rangle$ values for the different values of $L/a$ can be attributed to the fact that only one configuration ($n_c=1$) is used here to estimate the average drag force on each particle in the suspension. Finally, in Fig.~\ref{fig:statistics}(d) we analyze the effect of the number of configurations, $n_c$, used for each kinematic condition, on the standard deviation of the dimensionless average drag force, $\sigma_{\langle F \rangle}$, for $\phi=0.12$, and $Wi=0$ and $1$. We can conclude that for $n_c \geq 3$ the standard deviation $\sigma_{\langle F \rangle}$ has converged to a constant value.
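For illustration, the per-configuration statistics and the convergence check on $n_c$ described above can be sketched as follows (Python; the normally distributed samples are a hypothetical stand-in for the simulation data, and the function names are ours, not the solver's):

```python
import numpy as np

rng = np.random.default_rng(0)

def config_stats(forces):
    """Mean and standard deviation of the per-sphere drag forces
    in one configuration (cf. panel (a) of the statistics figure)."""
    forces = np.asarray(forces, dtype=float)
    return forces.mean(), forces.std(ddof=1)

def ensemble_std_of_mean(config_means):
    """Spread of the configuration-averaged drag force, sigma_<F>,
    as configurations are added (cf. panel (d))."""
    config_means = np.asarray(config_means, dtype=float)
    return [np.std(config_means[:n], ddof=1)
            for n in range(2, len(config_means) + 1)]

# synthetic stand-in: 5 configurations of n_s = 73 spheres each
samples = rng.normal(loc=112.7, scale=27.7, size=(5, 73))
means = samples.mean(axis=1)
print(config_stats(samples[0]))       # one-configuration mean and spread
print(ensemble_std_of_mean(means))    # convergence of sigma_<F> with n_c
```

In practice, convergence is declared once the last few entries of the returned list differ by less than some tolerance, mirroring the $n_c \geq 3$ criterion quoted above.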
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=\columnwidth]{Figures/histWi0phi12V4.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=\columnwidth]{Figures/SDvsWiV4.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=\columnwidth]{Figures/LvsAvgFV3.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=\columnwidth]{Figures/NvsSigmaFV2.pdf}%
\label{}
\end{subfigure}
\end{tabular}}
\caption[]
{(a) Frequency distribution (for one configuration, $n_c=1$) of the dimensionless drag force on individual spheres, $F_{t,i}$, for volume fraction $\phi=0.12$ (corresponding to a total number of spheres equal to $n_s=73$) and Newtonian fluid ($Wi=0$), (b) average drag force, $\langle F_{t,i} \rangle$, and standard deviation, $\sigma_{F_{t,i}}$, on the random array of spheres for $\phi=0.20$ at different $Wi$ numbers, (c) average drag force, $\langle F_{t,i} \rangle$, and standard deviation, $\sigma_{F_{t,i}}$, on one single random array of spheres for $\phi=0.12$ at $Wi=0$ and $Wi=2$, using different confinement lengths $L$, and (d) evolution in the standard deviation of the dimensionless average drag force $\sigma_{\langle F \rangle}$ on the random array of spheres for $\phi=0.12$ at $Wi=0$ and $Wi=1$, using a progressively increasing number of configurations $n_c$.}
\label{fig:statistics}
\end{figure}
With the goal of finding a closure model for the drag force exerted by an Oldroyd-B fluid on random arrays of particles at creeping flow conditions and retardation ratio $\beta=0.5$, in Fig.~\ref{fig:dragViscoelastic}(a) we show the dimensionless average drag force, $\langle F(\phi,Wi)\rangle$, exerted on the particles for the solid volume fractions and Weissenberg numbers presented in Table~\ref{tab:oldroydB}, along with the standard deviation errors. The results obtained allow us to conclude that the variations with $\phi$ are much larger than the variations with $Wi$. Additionally, in Fig.~\ref{fig:dragViscoelastic}(b) we show the values of the normalized drag force, $\langle F(\phi,Wi)\rangle/F^0(Wi)$, exerted on the particles for the solid volume fractions and Weissenberg numbers referred to above. Notice that here we have normalized the dimensionless average drag force $\langle F(\phi,Wi)\rangle$ by $F^0(Wi)$, the drag coefficient of a single sphere translating through an unbounded Oldroyd-B fluid under creeping flow conditions, given by the closure model presented in \citet{Salah2019} (see Eqs.~(19a) and (19b) therein, using $\zeta=1-\beta=0.5$). For the sake of completeness, we write here the expression we employed for $F^0(Wi)$ with $\zeta=1-\beta=0.5$,
\begin{eqnarray}
F^0(Wi) = \begin{cases}
\displaystyle 1-\frac{0.0015955Wi^2+0.0295475Wi^4-0.017345Wi^6}{0.0534+3.2325Wi^2+Wi^4} & \text{if } Wi\leq 1\\
\\
\displaystyle 1+\frac{-0.0123176Wi^4+0.0078197Wi^6+0.000142825Wi^8}{0.2444225Wi^2+Wi^4} & \text{if } Wi > 1.
\end{cases}
\label{eq:F0}
\end{eqnarray}
The numerical results presented in Figure~\ref{fig:dragViscoelastic}(b) show that this rescaled drag force exhibits no statistically significant trend with Weissenberg number over the range of solid volume fractions computed in this work; i.e., normalizing the dimensionless average drag force by $F^0(Wi)$ collapses $\langle F(\phi,Wi)\rangle$ at any given value of $\phi\leq 0.20$.
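The piecewise correction above can be transcribed directly; the short Python function below (an illustration, not part of the solver) evaluates it and recovers the Newtonian limit $F^0(0)=1$:

```python
def F0(Wi):
    """Single-sphere drag correction in an unbounded Oldroyd-B fluid
    (retardation ratio beta = 0.5), piecewise in the Weissenberg number."""
    if Wi <= 1.0:
        num = 0.0015955 * Wi**2 + 0.0295475 * Wi**4 - 0.017345 * Wi**6
        den = 0.0534 + 3.2325 * Wi**2 + Wi**4
        return 1.0 - num / den
    num = -0.0123176 * Wi**4 + 0.0078197 * Wi**6 + 0.000142825 * Wi**8
    den = 0.2444225 * Wi**2 + Wi**4
    return 1.0 + num / den

print(F0(0.0))  # 1.0: the Stokes drag is recovered in the Newtonian limit
```

Evaluating the correlation shows the correction sitting slightly below unity for $Wi \lesssim 1$ (mild drag reduction) and growing above unity at higher $Wi$, consistent with the non-monotonic mean drag discussed above.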
In the last section we obtained good agreement between the numerical results for the average drag force exerted on an ensemble of particles immersed in a Newtonian fluid and the model given by \citet{Hoef2005}; we thus propose fitting an equation of the same form to the computational results obtained in a viscoelastic fluid, i.e.,
\begin{eqnarray}
\langle F(\phi,Wi) \rangle/F^0(Wi) = (1-\phi)^2\left(1 + k_1\phi^{k_2}\right),
\label{eq:fit}
\end{eqnarray}
where $F^0(Wi)$ is the infinitely dilute ($\phi\to 0$) result for the drag force presented in \citet{Salah2019} for $\zeta=0.5$ (cf. Eq.~(\ref{eq:F0})). Notice that we could also have used a similar expression for the correction to the drag coefficient, such as that given by \citet{faroughi2015unifying} in Eq.~(\ref{eq:salah}), to fit our viscoelastic results; however, owing to its simplicity we have chosen the functional form of the \citet{Hoef2005} expression. The correlation constants, obtained by solving a nonlinear least-squares problem with the iterative Levenberg-Marquardt algorithm \cite{Levenberg2005}, are $k_1=63.03$ and $k_2=1.459$. The resulting model accounts for 98.22\% of the variance of the numerical data, with a root mean square error (RMSE) of 0.1797 and an average error between the proposed model and the numerical data of 5.7\%. The inset in Figure~\ref{fig:dragViscoelastic}(b) shows the evolution of the normalized drag force, $\langle F(\phi,Wi) \rangle/F^0$, with Weissenberg number for $\phi=0.2$. There is a residual systematic trend in the rescaled values with $Wi$, indicating that the average drag force is not perfectly factorizable into separate functions of $\phi$ and $Wi$. However, the variation shown is smaller than the RMSE of the fit, and we neglect it henceforth.
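The fitting procedure can be sketched with a standard Levenberg-Marquardt routine; the Python snippet below illustrates it on synthetic $(\phi,\,\langle F\rangle/F^0)$ pairs generated from the reported constants (the data arrays are hypothetical, not the simulation results):

```python
import numpy as np
from scipy.optimize import curve_fit

def closure(phi, k1, k2):
    """Normalized average drag: <F(phi,Wi)>/F0(Wi) = (1-phi)^2 (1 + k1 phi^k2)."""
    return (1.0 - phi) ** 2 * (1.0 + k1 * phi ** k2)

# hypothetical data built from the reported constants, for illustration only
k1_ref, k2_ref = 63.03, 1.459
phi = np.array([0.04, 0.08, 0.12, 0.16, 0.20])
F_over_F0 = closure(phi, k1_ref, k2_ref)

# Levenberg-Marquardt nonlinear least squares, as used in the text
(k1, k2), _ = curve_fit(closure, phi, F_over_F0, p0=[50.0, 1.0], method="lm")
print(k1, k2)
```

On noisy data the same call returns the best-fit constants together with their covariance, from which the quoted variance-explained and RMSE figures can be computed.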
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{0.7\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.7\columnwidth]{Figures/FNotNormalizedV2.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[b]{0.7\columnwidth}
\centering
\caption{{}}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=0.7\textwidth]{Figures/vanderHoef_2parm_V2_5.pdf}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]\node[anchor=south west,inner sep=0] (image) at (0.625,0.28) {\includegraphics[width=0.23\textwidth]{Figures/dragOldroydBV4.pdf}};
\end{scope}
\end{tikzpicture}
\label{}
\end{subfigure}\\
\end{tabular}}
\caption[]
{Variation of (a) dimensionless average drag force $\langle F(\phi,Wi)\rangle$ and (b) normalized drag force $\langle F(\phi,Wi)\rangle/F^0(Wi)$ with Weissenberg number for random arrays of fixed particles with solid volume fractions $0<\phi\leq 0.2$ within an Oldroyd-B viscoelastic matrix-based fluid. Here $F^0(Wi)$ represents the drag force exerted by the Oldroyd-B fluid on a single particle, as described in \citet{Salah2019} and Eq.~(\ref{eq:F0}).}
\label{fig:dragViscoelastic}
\end{figure}
In Fig.~\ref{fig:N1} and Fig.~\ref{fig:zoomStress} we show contours of the dimensionless first normal stress difference, defined as $(\tau_{xx}-\tau_{yy})/(\eta_P U/a)$, obtained from the numerical simulations. The first normal stress difference is mainly generated near the no-slip surfaces and in the wake of each of the spheres. Notice that increasing the fluid elasticity (increasing $Wi$), i.e., moving from the left to the right panels in Fig.~\ref{fig:N1}, promotes a strong elastic wake and an increase in the magnitude of the first normal stress difference. Only by use of the log-conformation approach for computing the polymeric extra-stress tensor components were we able to stabilize the numerical algorithm. Additionally, increasing the particle volume fraction, i.e., moving from top to bottom panels, increases the magnitude of the first normal stress difference generated at the front stagnation point of the particles. Finally, from Fig.~\ref{fig:zoomStress} we see that the magnitude of the first normal stress difference near the rear stagnation point is much larger than that developed upstream near the front stagnation point, and this difference becomes progressively larger for higher $Wi$.
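For orientation, in steady simple shear an Oldroyd-B fluid develops a first normal stress difference $N_1 = 2\eta_P\lambda\dot\gamma^2$ and a zero second normal stress difference, which explains why the $\tau_{xx}-\tau_{yy}$ fields dominate over $\tau_{yy}-\tau_{zz}$. A minimal Python check of this scaling (assuming $Wi=\lambda U/a$ and a shear rate nondimensionalized by $U/a$, as an illustration only):

```python
def N1_oldroyd_b(eta_p, lam, gamma_dot):
    """First normal stress difference of an Oldroyd-B fluid in steady
    simple shear: N1 = 2 * eta_p * lambda * gamma_dot^2 (and N2 = 0)."""
    return 2.0 * eta_p * lam * gamma_dot ** 2

def N1_dimensionless(Wi, gamma_dot_star):
    """N1 scaled by eta_p*U/a, with gamma_dot_star = gamma_dot * a / U:
    N1 / (eta_p U / a) = 2 * Wi * gamma_dot_star^2."""
    return 2.0 * Wi * gamma_dot_star ** 2

print(N1_oldroyd_b(1.0, 2.0, 3.0))   # 36.0
print(N1_dimensionless(4.0, 1.0))    # 8.0: grows linearly with Wi at fixed shear rate
```

The quadratic growth with local shear rate is why the dimensionless $N_1$ contours intensify so strongly near the no-slip surfaces and in the inter-particle gaps as $Wi$ increases.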
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{Figures/N1V5.pdf}
\caption{Dimensionless first normal stress difference distribution around the random particle arrays in a channel filled with an Oldroyd-B viscoelastic matrix-based fluid.}
\label{fig:N1}
\end{figure}
Zoomed-in views at $z/a=0$ and $x/a=-10$ of the normal stress distributions $\tau_{xx}-\tau_{yy}$ and $\tau_{yy}-\tau_{zz}$, as well as of the shear stress distributions $\tau_{xy}$ and $\tau_{yz}$, are shown in Fig.~\ref{fig:zoomStress} for $Wi=4$ and $\phi=0.2$. In the plane $z/a=0$ (i.e. the $xy$-plane, corresponding to a longitudinal section in the flow direction, Fig.~\ref{fig:zoomStress}(a)), the contour plot of the first normal stress distribution reveals the formation of large elastic wakes at the rear stagnation point of the spheres, while the shear stress $\tau_{xy}$ forms quadrupolar structures at $\pm 45\degree$ angles in the quadrants around each of the spheres. The magnitudes of the first normal stress difference $\tau_{xx}-\tau_{yy}$ and of $\tau_{xy}$ are much greater than the magnitudes of the second normal stress difference $\tau_{yy}-\tau_{zz}$ and $\tau_{yz}$ shown in the plane $x/a=-10$ (i.e. the $zy$-plane, corresponding to a section perpendicular to the flow direction, Fig.~\ref{fig:zoomStress}(b)).
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=\columnwidth]{Figures/N1zoom.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.85\columnwidth]{Figures/TauYYMZZzoom.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{Figures/shearZoom.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics[width=0.85\columnwidth]{Figures/shearZoomYZ.pdf}%
\label{}
\end{subfigure}
\end{tabular}}
\caption[]
{Zoomed-in region in a plane at (a) fixed $z/a=0$ showing the dimensionless $\tau_{xx}-\tau_{yy}$ and $\tau_{xy}$ distributions, and at (b) fixed $x/a=-10$ showing the dimensionless $\tau_{yy}-\tau_{zz}$ and $\tau_{yz}$ distributions, for $Wi = 4$ and $\phi=0.2$.}
\label{fig:zoomStress}
\end{figure}
\section{Proppant transport during the hydraulic-fracture process}
In this section we develop a computational framework, based on the Eulerian-Lagrangian formulation \cite{Celio2018}, capable of numerically describing, as a proof-of-concept, proppant transport in a viscoelastic matrix-based fluid that can be characterized by the Oldroyd-B constitutive model. The newly-developed algorithm takes into account the effect of the particle volume fraction (the Lagrangian phase) on the viscoelastic fluid phase (the Eulerian phase). For this purpose, we extend the formulation presented in \citet{Celio2018} (and references therein) to be able to take into account the viscoelastic behavior of the fluid.
Consider the motion of an incompressible viscoelastic fluid phase in the presence of a secondary particulate phase, which is governed by the volume-averaged continuity equation
\begin{equation}
\begin{aligned}
\frac{\partial \epsilon_f}{\partial t}+\nabla \cdot (\epsilon_f \textbf{U}^f) = 0,
\end{aligned}
\label{eq:mass_equation}
\end{equation}
and Cauchy momentum equation
\begin{equation}
\begin{aligned}
\frac{\partial(\epsilon_f \textbf{U}^f)}{\partial t}+\nabla \cdot (\epsilon_f \textbf{U}^f \textbf{U}^f) = - \nabla P - S_p + \nabla \cdot (\epsilon_f \boldsymbol\tau_f) + \epsilon_f\textbf{g},
\end{aligned}
\label{eq:momentum_equation}
\end{equation}
where $\epsilon_f$ is the fluid porosity field satisfying $\epsilon_f = 1-\phi$, $\textbf{U}^f$ is the fluid velocity, $P$ is the modified pressure ($p/\rho_f$, with $p$ being the dynamic pressure and $\rho_f$ the fluid density), $\textbf{g}$ is the gravitational acceleration vector, and the fluid-phase stress tensor $\boldsymbol\tau_f$ is given by:
\begin{equation}
\begin{aligned}
\boldsymbol\tau_f= \frac{1}{\rho_f}\left[\eta_S\left((\nabla \textbf{U}^f)+(\nabla \textbf{U}^f)^T\right)+\boldsymbol\uptau_P\right],
\end{aligned}
\label{eq:viscous_stress_equation}
\end{equation}
where $\boldsymbol\uptau_P$ is the polymeric extra-stress tensor computed using the Oldroyd-B viscoelastic matrix-based constitutive model given by Eq.~(\ref{eqn:oldroydBeqdimensionless}).
The two-way coupling between the fluid phase and the particles is enforced via the source term $S_p$ in the momentum balance equation of the fluid phase, Eq.~(\ref{eq:momentum_equation}). Because the fluid drag force, $\textbf{F}_{d,i}$, acting on each particle $i$ is known (see Section~\ref{sec:viscoelasticDrag}), according to Newton's third law of motion the source term is computed as a volumetric fluid-particle interaction force given by:
\begin{equation}
\begin{aligned}
S_p = \frac{\displaystyle\sum_{i=1}^{N_p}{\textbf{F}_{d,i}}}{\rho_f V_{cell}},
\end{aligned}
\label{eq:source_term_equation}
\end{equation}
where $V_{cell}$ is the volume of a computational cell, and $N_p$ is the number of particles located in that cell.
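The per-cell assembly of this coupling term can be sketched as follows (Python; the particle-to-cell binning is a simplified stand-in for the solver's actual mesh search, and all numerical values are hypothetical):

```python
import numpy as np

def source_term(cell_of_particle, drag_forces, rho_f, V_cell, n_cells):
    """Volumetric fluid-particle interaction force per cell:
    S_p[c] = (sum of drag forces of particles in cell c) / (rho_f * V_cell)."""
    S_p = np.zeros((n_cells, 3))
    for c, F in zip(cell_of_particle, drag_forces):
        S_p[c] += F                    # accumulate reaction forces per cell
    return S_p / (rho_f * V_cell)

# two particles in cell 0, one in cell 1 (hypothetical forces in N)
cells = [0, 0, 1]
forces = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
print(source_term(cells, forces, rho_f=1000.0, V_cell=1e-6, n_cells=2))
```

Because the momentum equation is written for the kinematic pressure $p/\rho_f$, the accumulated force is divided by $\rho_f V_{cell}$, giving $S_p$ the units of an acceleration per unit volume flux, consistent with the remaining terms.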
In this work, we consider two different formulations to describe the contact between particles: the spring-dashpot model \cite{Cundall197947} and a Multi-Phase Particle-In-Cell (MPPIC) model \cite{Rourke2009}. The former explicitly handles each contact between two particles and is, therefore, computationally very intensive. The latter represents particle collisions on average, without resolving particle-particle interactions individually. In the MPPIC method the particle-particle interactions are computed by models that use mean values calculated on the Eulerian mesh \cite{MPPICOpenFOAM}. For that purpose, in the present work we have employed a collision damping model to represent the mean loss in kinetic energy which occurs as particles collide, which helps to produce physically realistic scattering behaviour \cite{MPPICOpenFOAM}. Finally, a collision isotropy model is also employed to spread the particles uniformly across cells \cite{MPPICOpenFOAM}.
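A linear spring-dashpot normal contact force, the simplest member of the Cundall-Strack family (the Hertzian variant used later replaces the linear spring with a $\delta^{3/2}$ law), can be sketched as follows; the stiffness and damping values are arbitrary illustration parameters:

```python
import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n, c_n):
    """Linear spring-dashpot normal force on particle i from particle j.
    Returns a zero vector when the spheres do not overlap."""
    x_i, x_j, v_i, v_j = map(np.asarray, (x_i, x_j, v_i, v_j))
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                                  # unit normal, from j towards i
    v_rel_n = np.dot(v_i - v_j, n)                # normal relative velocity
    return (k_n * overlap - c_n * v_rel_n) * n    # spring (repulsion) + dashpot (damping)

# touching spheres approaching each other (hypothetical parameters)
F = normal_contact_force([0, 0, 0], [1.9, 0, 0], [0.1, 0, 0], [-0.1, 0, 0],
                         r_i=1.0, r_j=1.0, k_n=1e3, c_n=5.0)
print(F)
```

Resolving this force for every contacting pair at every time step is what makes the explicit spring-dashpot approach expensive, and is precisely the cost the MPPIC averaging avoids.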
Two case studies were performed to validate the newly-developed $DPMviscoelastic$ solver. In the first case, we study proppant transport and sedimentation in a long conduit of rectangular cross section, a typical geometry to study flow in hydraulic fracturing \cite{Steven2020}. In the second case, we study the segregation phenomena which occurs in cement casing for horizontal wells. Both case studies were performed using Newtonian and viscoelastic carrier fluids. For the first case, particle collisions are modelled using the MPPIC model in order to handle $O(10^6)$ particles, and for the latter, the Hertzian spring-dashpot model \cite{Tsuji1992239} is employed with a total of $125,000$ particles. The following sections present comparisons between the numerical results obtained for the aforementioned case studies and results found in the scientific literature.
\subsection{Rectangular channel flow}
Despite many advances in hydrocarbon reservoir modeling and technologies \cite{dahi2011numerical,faroughi2013prompt,bordbar2018pseudo,han2016numerical,bakhshi2020numerical} especially for unconventional resource development, the efficiency of hydrocarbon recovery in shale reservoirs is still very low \cite{seales2017recovery}. One of the leading issues causing this inefficiency is the lack of proper proppant placement in the fracture networks. Proppant emplacement within fractures directly impacts productivity because it controls both short- and long-term conductivities of the fractured wells \cite{gomaa2014viscoelastic}. Proppant particles must be carried over large distances to ensure successful placement, which requires a spatially homogeneous distribution of particles \cite{faroughi2018rheological}. However, flows of non-Brownian particles, such as proppant, often result in non-homogeneous patterns, in which particle sedimentation is commonly observed \cite{Steven2020}. The proppant particles that will be modeled in this section are glass microspheres of diameter 73 $\upmu\mathrm{m}$ and of density 2.54 g/cm$^3$, in order to simulate the experiments conducted by \citet{Steven2020}. The carrier fluid is a glycerol/water mixture of 85:15 w/w$\%$ with a viscosity and density of 0.1 Pa$\mathpunct{.}$s and 1.22 g/cm$^3$ at ambient temperature, respectively.
The computational setup used to study the suspension transport is shown schematically in Fig.~\ref{fig:rectangularChannel}. First the pure carrier matrix fluid is homogeneously injected to prefill the channel at the start of the simulation. After that, the proppant suspension is injected at a flow rate $Q$ and a uniform initial volume fraction $0 < \phi_i \leq 0.05$. The channel interior, which is used to mimic a vertical fissure, has a rectangular cross section of height $H=5~$mm and width $W=1~$mm, the latter corresponds to a maximum of 14 particles across the width. The channel is $L=1~$m long. The channel exit is left open to atmospheric pressure.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Figures/rectChannelV6.pdf}
\caption{Schematic of the channel cross-section used for simulating suspensions of particles ($\phi= 0.025,~0.038$ and $0.05$) settling in a confined channel ($Re_W = 0.06832$ and $0 < Wi \le 2.1$), which mimic a hydraulic fracture vertical fissure.}
\label{fig:rectangularChannel}
\end{figure}
To verify the numerical algorithm for 3D calculations of the carrier fluid alone we conduct a set of numerical experiments to evaluate the pressure difference $\Delta P$ as a function of flow rate $Q$. A mesh refinement sensitivity analysis is performed by using three different levels of mesh refinement on $L\times H\times W$: Mesh 1 (M1), $2500\times 13\times 3$ cells, Mesh 2 (M2), $5000\times 26\times 6$ cells and Mesh 3 (M3), $10000\times 52\times 12$ cells. Table~\ref{tab:vortexsize} gives the pressure drop results corresponding to each of the mesh refinement levels employed at the prescribed inlet flow rate $Q$. Additionally, Table~\ref{tab:vortexsize} compares the numerical results obtained and the analytical values of the pressure difference for the Hagen-Poiseuille flow of a Newtonian fluid \cite{Mortensen2005}, which for an aspect ratio of $H/W=5$ is given by $\Delta P/L = 13.7\mu Q/H W^3$. The numerical results obtained in the most refined mesh employed in the calculations (M3) are within 0.39$\%$ of the analytical values for all the flow rates tested.
\begin{table}[H]
\begin{threeparttable}
\caption{Pressure difference $\Delta P$ (mbar) as a function of flow rate $Q$ (cm$^3$/h) and mesh level refinement for the Hagen-Poiseuille flow of a Newtonian fluid with viscosity $\mu=0.1~$Pa$\mathpunct{.}$s and geometry parameters given in the text. The relative error (\%) between the calculated numerical result and the analytical value \cite{Mortensen2005} is $0.39\%$ for the most refined mesh M3.}
\vspace{0.1cm}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l r r r r}
\toprule
$Q$ & \multicolumn{4}{c}{$\Delta P$} \\
\cmidrule{1-5}
& M1 & M2 & M3 & Analytical \\
\midrule
10 & 6.20 & 7.25 & 7.58 & 7.61\\
20 & 12.41 & 14.50 & 15.16 & 15.22\\
50 & 31.01 & 36.25 & 37.89 & 38.06\\
100 & 62.03 & 72.50 & 75.79 & 76.11\\
150 & 93.04 & 108.75 & 113.68 & 114.17\\
200 & 124.06 & 145.00 & 151.57 & 152.22\\
300 & 186.09 & 217.50 & 227.36 & 228.33\\
\% error$^a$ & 18.50 & 4.74 & 0.39 & \\
\bottomrule
\end{tabular}
\label{tab:vortexsize}
\vspace{0.1cm}
\begin{tablenotes}
\scriptsize
\item[$^a$] Calculated between numerical results obtained with M1, M2 and M3, and analytical values.
\end{tablenotes}
\end{threeparttable}
\end{table}%
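The analytical entries in the table follow directly from the quoted relation $\Delta P/L = 13.7\mu Q/H W^3$; the short Python check below (unit conversions only, with the geometry and viscosity stated in the text) reproduces them:

```python
def dp_mbar(Q_cm3_per_h, mu=0.1, H=5e-3, W=1e-3, L=1.0):
    """Pressure drop (mbar) for fully developed flow of a Newtonian fluid
    in a rectangular duct of aspect ratio H/W = 5:
    dP/L = 13.7 * mu * Q / (H * W^3)."""
    Q = Q_cm3_per_h * 1e-6 / 3600.0           # cm^3/h -> m^3/s
    dp_Pa = 13.7 * mu * Q / (H * W ** 3) * L
    return dp_Pa / 100.0                      # Pa -> mbar

for Q in (10, 100, 300):
    print(Q, round(dp_mbar(Q), 2))            # 7.61, 76.11, 228.33 mbar
```

The linear dependence on $Q$ is also apparent: each analytical column entry is simply the flow rate scaled by $0.761$ mbar per cm$^3$/h.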
Fig.~\ref{fig:pressureDifference} shows the comparison between experimental pressure difference measurements, $\Delta P$, as a function of flow rate $Q$, obtained by \citet{Steven2020} using the 0.1 Pa$\mathpunct{.}$s carrier fluid, and the numerical results computed in this work. From the experimental data, it is clear that $\Delta P$ increases linearly with $Q$ up to $\Delta P\approx 0.2$ bar, corresponding to a flow rate $Q=150~$cm$^3$/h and a Reynolds number of $Re_W=\rho(Q/WH)W/\mu = 0.10248$ for the glycerol/water mixture fluid. As noted by \citet{Steven2020}, the onset of nonlinear behavior is most likely caused by deformation of the polydimethylsiloxane (PDMS) elastomer at higher pressures. Therefore, the flow rate of our numerical tests is set at $Q=100~$cm$^3$/h in the following studies with suspensions. This results in an average fluid velocity $U=Q/HW=5.6\times 10^{-3}~$m/s, which corresponds to $Re_W = 0.06832$, confirming that we are in the creeping flow regime. We also note that $U$ is much greater than the average particle sedimentation velocity $U_{Stokes}=3\times 10^{-5}~$m/s, and thus, our simulations are performed under favorable transport conditions with minimal settling at the entrance. In fact, the slope of the trajectory of a sedimenting particle being transported at this average speed, $U_{Stokes}/U\approx 5\times 10^{-3}$, is similar to the ratio of channel height to length $H/L$, meaning that most of the initially suspended particles entering the channel should settle as they approach the channel exit.
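The dimensionless groups quoted above follow from the stated geometry and fluid properties; the short script below reproduces them (Python, for illustration; the rounded mean velocity $U=5.6\times 10^{-3}$ m/s is used for $Re_W$, as in the text):

```python
rho_f, mu = 1220.0, 0.1           # glycerol/water mixture: kg/m^3, Pa.s
H, W = 5e-3, 1e-3                 # channel cross-section, m
Q = 100e-6 / 3600.0               # 100 cm^3/h in m^3/s

U = Q / (H * W)                   # mean velocity ~ 5.6e-3 m/s
Re_W = rho_f * 5.6e-3 * W / mu    # = 0.06832, using the rounded U
print(U, Re_W)
```

The same two lines with $Q=150~$cm$^3$/h (i.e. $1.5\,U$) give the $Re_W = 0.10248$ quoted at the onset of the nonlinear regime.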
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Figures/pressureDropNewtonianFluidV3.pdf}
\caption{Pressure difference, $\Delta P$ (mbar), as a function of the imposed flow rate, $Q$ (cm$^3$/h). Numerical results obtained in this work using mesh M3 are compared against analytical values for the Hagen-Poiseuille flow of a Newtonian fluid (black solid line) and to the experimental data of \citet{Steven2020} (black solid circles).}
\label{fig:pressureDifference}
\end{figure}
Fig.~\ref{fig:expQ100Phi5} shows a visual comparison of experimental and numerical simulation results for steady-state sedimentation heights in three different portions of the channel length ($x=0.03;~0.34;~\textrm{and}~0.67$~m), using an initial suspension volume fraction of $\phi_i=0.05$ and a flow rate $Q=100~$cm$^3$/h. The numerical results for the axial sediment distribution follow a similar trend as those obtained experimentally \cite{Steven2020}, i.e., after the detection of the suspension in each channel section (as indicated by the slight turbidity compared with the pure carrier fluid) the progressive buildup of an opaque particle sediment along the channel floor is observed. This is particularly noticeable in the second and third channel observation sections, in which the sediment height increases markedly. Then, the sediment height ceases to grow further quite abruptly, although the particle suspension continues to flow through the channel. This steady-state behavior persists for the duration of the flow and represents a balance between sedimentation and shear-driven resuspension. Upon cessation of flow we immediately observe a collapse in the dense phase height, i.e. the particle phase settles further to form a more compact final sedimented state. Figure~\ref{fig:contoursEpsilon} shows the contours of the fluid porosity distribution $\epsilon_f(x,z)$ under steady flow conditions, as well as the change in the sediment height after flow has ceased, for a lateral position $x\approx 67$~cm and initial suspension volume fractions $\phi_i=0.025;~0.038~\textrm{and}~0.05$ conducted at flow rate $Q=100$~cm$^3$/h ($Re_W = 0.06832$). A pure fluid phase with no particles corresponds to $\epsilon_f\to 1.0$. For random close packed spheres we expect the viscosity to diverge when $\phi\to\phi_m=0.637$ \cite{faroughi2015generalized}. 
This value changes significantly with the shape of the particles \cite{faroughi2017self} and the size ratio between particles in the pack \cite{faroughi2014crowding,faroughi2016theoretical}. Contours of $\epsilon_f \lesssim 0.4$ thus correspond to effectively solid deposits of particles. Again we can conclude that the steady-state sediment height during suspension flow increases with the initial suspension volume fraction, and that following the cessation of flow the sediment bed compacts and reduces in height.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Figures/expQ100Phi5V2.pdf}
\caption{Comparison between the experimental (left) \cite{Steven2020} and numerical (right) steady-state sedimentation heights in three different portions of the channel (corresponding to $x/H=6,~6.8$ and $13.4$) during flow of a suspension with a Newtonian matrix fluid with initial volume fraction $\phi_i = 0.05$ at a flow rate of $Q=100~$cm$^3$/h ($Re_W = 0.06832$).}
\label{fig:expQ100Phi5}
\end{figure}
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{Suspended sediments under flow}}
\includegraphics[width=\columnwidth]{Figures/epsilonfphi0025hV2.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{Suspended sediments when the flow has ceased}}
\includegraphics[width=\columnwidth]{Figures/epsilonfphi0025h0V2.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[t]{.5\columnwidth}
\centering
\vspace{1.5cm}
\includegraphics[width=\columnwidth]{Figures/epsilonfphi0038hV2.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[t]{.5\columnwidth}
\centering
\vspace{1.5cm}
\includegraphics[width=\columnwidth]{Figures/epsilonfphi0038h0V2.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[t]{.5\columnwidth}
\centering
\vspace{1.5cm}
\includegraphics[width=\columnwidth]{Figures/epsilonfphi005hV2.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[t]{.5\columnwidth}
\centering
\vspace{1.5cm}
\includegraphics[width=\columnwidth]{Figures/epsilonfphi005h0V2.pdf}%
\label{}
\end{subfigure}
\end{tabular}}
\caption[]
{Contours of the fluid porosity field $\epsilon_f$ as a function of the lateral position near $x\approx 67~$cm ($x/H=13.4$) and channel height $z$ for (a) steady-state measurements of suspended sediments under flow; and (b) static sediment heights, when the suspension flow has ceased, for initial suspension volume fraction $\phi_i=0.025~(\textrm{top});~0.038~(\textrm{middle})~\textrm{and}~0.05~(\textrm{bottom})$, conducted at a flow rate $Q=100~$cm$^3$/h, corresponding to $Re_W = 0.06832$ and $El=0$.}
\label{fig:contoursEpsilon}
\end{figure}
To quantitatively define the steady-state sediment height, $h$, under flow and the static sediment height, $h_0$, once the flow of the suspension has ceased, we compute the average fluid porosity $\bar{\epsilon}_f$ over a local section as
\begin{equation}
\begin{aligned}
\bar{\epsilon}_f = \frac{1}{H}\int_{x-H/2}^{x+H/2}\epsilon_f(x,z)dx,
\end{aligned}
\label{eq:fluid_porosity}
\end{equation}
and define appropriate characteristic values for $\bar{\epsilon}_f$ to quantify $h$ and $h_0$. For particular flow conditions of a non-Brownian suspension flowing at $Q=100$~cm$^3$/h, \citet{Steven2020} observed the buildup of a dense but flowing sediment that rapidly reaches a steady-state height $h$. \textit{The existence of this steady-state flowing sediment implies that the proppant flux leaving the channel equals that entering the channel, and thus, an ``efficient'' proppant transport occurs.} Knowing this fact, we define the criterion to compute $h$ as $\bar{\epsilon}_f=1-\phi_i$ (see Fig.~\ref{fig:expvsOFQ100}(a)). Because the flow is at a low Reynolds number ($Re_W = 0.06832$), the relevant mechanism of sediment transport must be viscous resuspension (flow of an ``expanded'' sediment at an equilibrium height, and its subsequent ``collapse'' once the flow ceases \cite{Leighton1986,Acrivos1993}). To quantify $h_0$, we quote the work of \citet{Steven2020} stating that \textit{for quiescent conditions the packing volume fraction when water is the suspending fluid is} $\phi_p\approx 0.58$,\textit{ which is close to} $\phi_m$. \textit{However, when the 85:15 w/w}$\%$ \textit{glycerol/water mixture (viscosity} $\approx 0.1$~Pa.s) \textit{was employed as the suspending fluid, the packing fraction decreased to} $\phi_p\approx 0.5$. Following this, we consider the criterion to compute $h_0$ as $\bar{\epsilon}_f=0.5$ to represent a dense/compact suspension bed (see Fig.~\ref{fig:expvsOFQ100}(a)).
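These height criteria can be extracted from a vertical porosity profile as sketched below (Python; the ramp profile is a synthetic stand-in, and the simple first-crossing search replaces whatever interpolation was used in practice):

```python
import numpy as np

def bed_height(z, eps_bar, eps_threshold):
    """First height (scanning upward) at which the section-averaged
    porosity reaches the threshold; np.nan if it is never crossed."""
    z, eps_bar = np.asarray(z), np.asarray(eps_bar)
    above = np.nonzero(eps_bar >= eps_threshold)[0]
    return z[above[0]] if above.size else np.nan

# synthetic profile: compact bed (eps ~ 0.4) near the floor, ramping to clear fluid
z = np.linspace(0.0, 5e-3, 501)
eps_bar = np.minimum(1.0, 0.4 + 300.0 * z)

phi_i = 0.05
h  = bed_height(z, eps_bar, 1.0 - phi_i)  # flowing-sediment height: eps = 1 - phi_i
h0 = bed_height(z, eps_bar, 0.5)          # compact-bed height:      eps = 0.5
print(h, h0)
```

On this illustrative ramp the compact-bed criterion is met lower in the channel than the flowing-sediment one ($h_0 < h$), mirroring the collapse of the expanded bed once the flow ceases.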
An example of these computations is shown in Fig.~\ref{fig:expvsOFQ100}(b), which depicts the evolution of the average fluid porosity distribution, $\bar{\epsilon}_f$, along the channel height, $z$. This is shown for the last observable channel section $x\approx 67~$cm ($x/H=13.4$) for initial suspension volume fraction $\phi_i=0.05$. Notice that the sharply inflected ``elbow'' shape of the average fluid porosity distribution very close to the bottom wall of the rectangular channel is due to the random packing which occurs in this region, where we may have either void spaces (fluid) or particles close to the contact plane at the wall. In Fig.~\ref{fig:expvsOFQ100}(c), we compare the evolution of $h$ (circular symbols) and $h_0$ (square symbols) against the experimental results obtained by \citet{Steven2020}, for initial suspension volume fractions $\phi_i=0.025;~0.038~\textrm{and}~0.05$ at $x\approx 67$~cm ($x/H=13.4$). Both the local bed height under flow ($h$) and the compact bed at rest ($h_0$) follow the same trend as the experimental data presented in \citet{Steven2020}, with the sediment heights increasing monotonically with the initial suspension volume fraction. Finally, in Fig.~\ref{fig:expvsOFQ100}(d) we compare the evolution of $h$ (circular symbols) and $h_0$ (square symbols) against the experimental results obtained by \citet{Steven2020}, for initial suspension volume fraction $\phi_i=0.05$ along the channel length direction $x$. Again our numerical results follow the experimental data of \citet{Steven2020}, showing an increase of the sedimentation heights along the channel length. These results verify the robustness of our coupled Eulerian-Lagrangian technique to model the sedimentation phenomena which occur in many applications, e.g., during hydraulic fracture processes.
\begin{figure}[H]
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c@{}}
\begin{subfigure}[b]{.6\columnwidth}
\centering
\caption{{}}
\includegraphics[width=1\columnwidth]{Figures/definitionHandH0.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.4\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.9\columnwidth]{Figures/porosityVsHeight_V5.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[b]{.6\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.6\columnwidth]{Figures/experimVsOpenFOAM_V10.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.4\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.9\columnwidth]{Figures/xVsHeights_V6.pdf}%
\label{}
\end{subfigure}\\
\end{tabular}}
\caption[]
{Sediment height measurements: bed height $h$ under steady flow conditions, and static sediment height $h_0$ when the suspension flow has ceased. (a) contours of fluid porosity $\epsilon_f$ and particle distribution for initial suspension volume fraction $\phi_i=0.05$ in the last observable channel section $x\approx 67~$cm ($x/H=13.4$), (b) average fluid porosity distribution $\bar{\epsilon}_f$ along the channel height direction $z$ at $x\approx 67~$cm, (c) sediment heights for initial suspension volume fractions $\phi_i=0.025,~0.038~\textrm{and}~0.05$ in the last observable channel section $x\approx 67~$cm ($x/H=13.4$) and (d) evolution of sediment heights along the channel direction $x$ for $\phi_i=0.05$, conducted at a flow rate $Q=100~$cm$^3$/h, corresponding to $Re_W = 0.06832$ and $El=0$.}
\label{fig:expvsOFQ100}
\end{figure}
Finally, we simulate the sedimentation of particle-laden Oldroyd-B viscoelastic fluids using the newly-developed $DPMviscoelastic$ solver and fluid drag model (see Eq.~(\ref{eq:fit}) in Section~\ref{sec:viscoelasticDrag}). Figure~\ref{fig:phiXElYSF} shows contours of the particle and fluid velocity fields for $El = Wi/Re = 0$ (Newtonian flow) and $El = 30$ at $1/3$ and $2/3$ of the channel length $L$ for initial suspension volume fraction $\phi_i=0.05$, conducted at flow rate $Q=100$~cm$^3$/h ($Re_W = 0.06832$).
\begin{figure*}
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{$El=0$}}
\includegraphics[width=\columnwidth]{Figures/Img1V4.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{$El=30$}}
\includegraphics[width=\columnwidth]{Figures/Img3V3.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[t]{.5\columnwidth}
\centering
\vspace{1.5cm}
\includegraphics[width=\columnwidth]{Figures/Img2V4.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[t]{.5\columnwidth}
\centering
\vspace{1.5cm}
\includegraphics[width=\columnwidth]{Figures/Img4V2.pdf}%
\label{}
\end{subfigure}\\
\end{tabular}}
\caption[]
{Velocities of the settling particles (first and third rows of images) and the matrix fluid (second and fourth rows of images) at $1/3$ (top) and $2/3$ (bottom) of the channel length $L$ for (a) $El = 0$ (left column) and (b) $El = 30$ (right column), with $Re_W=0.06832$ and $\phi_i=0.05$.}
\label{fig:phiXElYSF}
\end{figure*}
For the Newtonian fluid (Fig.~\ref{fig:phiXElYSF}(a)) at $El=0$, when analyzing the particle distribution at $x=L/3$ (top left images), we notice that there is a significant sedimentation layer where the particle velocity is zero. In the middle and top zones of the channel, both the matrix fluid and particles flow smoothly and axially along the channel. The particles continue to slowly sediment and eventually join the deposited layer with $(U_x)_p\to 0$.
For the quasi-linear Oldroyd-B viscoelastic fluid (Fig.~\ref{fig:phiXElYSF}(b)) at $El=30$, when analyzing the particle distribution at $x=L/3$ (top right images), we notice that the distribution of particle velocities is almost uniform along the channel height, and only a thin layer of sedimentation is observed. This is in contrast to the Newtonian fluid behavior. At $x/L=2/3$, the behavior of the particles and fluid constituents is not substantially changed from that at $x/L=1/3$, and, therefore, there is no significant particle settling zone along the channel floor for an elastic fluid with $El=30$.
Figure~\ref{fig:NewtVsVisc} shows the evolution of the sedimentation heights, $h$ and $h_0$, along the channel length for both $El=0$ and $El=30$. As before, we compute $h$ and $h_0$ using the definition of $\bar{\epsilon}_f$ in Eq.~(\ref{eq:fluid_porosity}). The sedimentation heights are nearly constant along the channel length for the viscoelastic case ($El=30$), in contrast to the Newtonian case ($El=0$), where a progressive increase of the sedimentation height $h(x)$ is observed. This indicates that the addition of fluid viscoelasticity hinders the settling of particles through the viscoelastic enhancement of the drag coefficient. This can be extremely helpful for identifying fluid formulations that improve proppant transport in hydraulic fracturing operations in long rectangular channels/cracks. In the future, we will use this numerical framework to further explore other types of fluid rheologies, specifically shear-thinning rheology and non-zero second normal stress differences as captured by the Giesekus fluid model. The resulting numerical framework can also be applied to other migration and settling problems in the industrial and life science fields, e.g., in the circulatory system.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Figures/xVsHeights_viscoelastic_V3.pdf}
\caption{Evolution of the sediment heights along the channel length direction: steady-state bed height $h$ measured under flow, and static sediment height $h_0$ measured after the suspension flow has ceased, compared at $El=0$ and $El=30$, with $Re_W=0.06832$ and $\phi_i=0.05$.}
\label{fig:NewtVsVisc}
\end{figure}
\subsection{Annular pipe flow}
Particle segregation in pumped concrete is one of the major challenges encountered when creating casing for horizontally drilled wells \cite{Faroughi2017,robisson2020}. In this case, particulate solids tend to segregate across the pipe cross-section due to differences in the size, density, shape and other properties of the constituent phases. The corresponding increase in the percentage of cementitious particles in the bottom part of the casing increases the chance of shrinkage and formation of cracks in the upper portion of the cemented casing. These cracks, often large in size, can easily transport hydrocarbons and other toxic chemicals into the formation, which is a serious concern. Tuning the rheology of the conveying fluids systematically by considering the hindrance effect (i.e., the reduction in the relative settling velocity of a particle due to the presence of other particles) can help minimize this issue.
Here we study numerically the particle segregation in a simplified annular pipe geometry. The setup used to study the particle segregation is shown schematically in Fig.~\ref{fig:eccentricflowGeometry}. The channel interior, which is used to mimic a horizontal well, has an annular cross section with inner ($R_i$) and outer ($R_o$) radii of $25~$mm and $50~$mm, respectively, and a depth of $10~$mm. The particles have a diameter of 200 $\upmu\mathrm{m}$ and a density of 5 g/cm$^3$. The carrier fluid is a Newtonian silicone oil with a constant viscosity of 0.01 Pa$\mathpunct{.}$s and a density of 1 g/cm$^3$.
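With these material properties, the characteristic settling scales can be estimated from the single-particle Stokes settling velocity $U_{Stokes}=(\rho_p-\rho_f)gd^2/(18\eta_0)$, together with the characteristic settling time $t_c=R_o/U_{Stokes}$ used below. The short sketch that follows is a back-of-the-envelope orientation only; hindrance and viscoelastic drag corrections are neglected:

```python
# Back-of-the-envelope settling scales for the annular-pipe case study,
# using the single-particle Stokes settling velocity (an estimate only;
# hindrance and viscoelastic drag corrections are neglected here).
g = 9.81          # gravitational acceleration, m/s^2
d = 200e-6        # particle diameter, m
rho_p = 5000.0    # particle density (5 g/cm^3), kg/m^3
rho_f = 1000.0    # fluid density (1 g/cm^3), kg/m^3
eta0 = 0.01       # fluid viscosity, Pa.s
R_o = 50e-3       # outer annulus radius, m

U_stokes = (rho_p - rho_f) * g * d**2 / (18.0 * eta0)  # ~8.7 mm/s
t_c = R_o / U_stokes                                   # ~5.7 s
print(f"U_Stokes = {U_stokes * 1e3:.2f} mm/s, t_c = {t_c:.2f} s")
```

This places $t_c$ at a few seconds, consistent with the dome build-up being observed on time-scales of order $t_c$ in the simulations below.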
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Figures/tubChannelV3.pdf}
\caption{Schematic of the annular pipe channel cross-section used for simulating suspensions of particles in horizontal wells. The axis of the pipe is denoted by the $x-$direction for consistency with Fig.~\ref{fig:DNSChannel} and gravity is aligned in the $-\textbf{e}_z$ direction.}
\label{fig:eccentricflowGeometry}
\end{figure}
The initial setup for this computational study is shown schematically in Fig.~\ref{fig:eccentricflow}(a). A total of 125,000 particles, representing $1\%$ of the annular cavity volume, is used in this case study. The particles are distributed evenly throughout the stagnant fluid at time zero (see Fig.~\ref{fig:eccentricflow}(a)). The particle positions at time $t=0$ are generated using a nearest neighbor algorithm. Gravity is applied vertically across the thin annular geometry ($\textbf{g}=-g\textbf{e}_z$), breaking the azimuthal symmetry and mimicking the onset of concrete particle settlement right after injection is stopped. The goal here is to capture the settling dynamics and test our numerical code to reproduce those dynamics. The code can then be used to analyze different rheological tuning mechanisms to minimize settling over the required time-scale for the concrete to harden. Figures \ref{fig:eccentricflow}(b) and (c) illustrate the numerical result obtained for a Newtonian fluid. It can be readily seen that our code captures local azimuthal avalanches along the rigid walls of the inner pipe, and ultimately static dome build-up effects on longer time-scales of order $t_c$, where the characteristic settling time is defined as $t_c=R_o/U_{Stokes}$ \cite{Faroughi2017,robisson2020} (see also the movie in the supplementary material). This confirms the accuracy of our 4-way coupling model in which the continuous fluid matrix affects particle motion, local densification effects of the particle lead to enhanced gravitational body forces (per unit volume) driving the sedimenting flow, and local compaction as $\phi\to\phi_m$ leads ultimately to flow arrest. This coupled model is used below to study how these effects change with elastic contributions to the stress field provided by polymeric fluid additives.
\begin{figure*}
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{} c c@{}}
\begin{subfigure}[b]{.33\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.7\columnwidth]{Figures/El0time0.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.33\columnwidth}
\centering
\caption{{}}
\includegraphics[width=1.1\columnwidth]{Figures/El0End.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.33\columnwidth}
\centering
\caption{{}}
\includegraphics[width=1.0\columnwidth]{Figures/El0UzFluid.pdf}%
\label{}
\end{subfigure}\\
\end{tabular}}
\caption[]
{Panel (a) shows the computational annular pipe setup ($R_i=25~$mm and $R_o=50~$mm) and the initial homogeneous distribution of the particles at $\phi_i=1\%$ when $t/t_c=0$, where $t_c=R_o/U_{Stokes}$. Panel (b) shows the numerical result for a Newtonian fluid at $t/t_c \approx 1.2$, which reveals the development of strongly inhomogeneous particle distributions as avalanches form along the rigid wall of the annulus and a static deposited dome builds up. Panel (c) shows the fluid velocity distribution. The results are shown in a slice through the midplane of the computational domain. Velocities in red, orange, yellow and green are aligned with gravity (which points in the $-\textbf{e}_z$ direction) and velocities in blue indicate backflow.}
\label{fig:eccentricflow}
\end{figure*}
Figure~\ref{fig:eccentricEl} shows the dimensionless $z$-component of the velocity of the particles, $-\rho(U_z)_p a/\eta_0$, and fluid, $-\rho U_z a/\eta_0$, computed numerically for a viscoelastic matrix fluid described by the Oldroyd-B constitutive model at $El=Wi/Re=0.1$ and $5$. To vary the elasticity number, we kept the Reynolds number and the settling velocity fixed and changed the Weissenberg number accordingly (i.e., effectively changing the relaxation time of the fluid).
For these two elasticity numbers the particle distributions are similar, with a settling zone and avalanche zones at the bottom and lateral walls of the annular pipe domain, respectively. Additionally, in the settling zone, a backflow of particles occurs due to fluid displaced by the sedimentation and net accumulation of particles in this region. At the north pole (point $N$ in Fig.~\ref{fig:eccentricEl}) of the inner cylinder wall the particles have a backflow velocity, which makes them bounce and slide along the inner cylinder wall. Subsequently, the particles approach the most unsteady settling zone of the annular pipe channel, where a mixture of fluid backflow and gravity-induced velocities are present. Regarding the differences between the results obtained for the two elastic fluids, $El=0.1$ and $El=5$, the fluid velocity distribution exhibits a larger region of positive (upwards) velocity near the dome region for the less elastic case, which indicates a migration of the particles to the avalanche zone. From the particle velocity distributions, we see that the stronger migration of the particles to the avalanche zone at $El=0.1$ causes an increase in the suspension bed height compared to the more elastic case with $El=5$: the calculated final packed bed height is 4.5 mm for $El=0$ and $El=0.1$, and 3.5 mm for $El=5$.
\begin{figure*}
\captionsetup[subfigure]{justification=justified,singlelinecheck=false}
\centering
{\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{}c}
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.86\columnwidth]{Figures/eccentricUpEl01V11.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[b]{.5\columnwidth}
\centering
\caption{{}}
\includegraphics[width=0.9\columnwidth]{Figures/eccentricUpEl5V9.pdf}%
\label{}
\end{subfigure}\\
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics[width=0.9\columnwidth]{Figures/eccentricUfEl01V8.pdf}%
\label{}
\end{subfigure}&
\begin{subfigure}[t]{.5\columnwidth}
\centering
\includegraphics[width=0.9\columnwidth]{Figures/eccentricUfEl5V6.pdf}%
\label{}
\end{subfigure}
\end{tabular}}
\caption[]
{Velocities of the settling particles (top) and fluid (bottom) in an Oldroyd-B fluid with $\zeta=0.5$ for (a) $El = 0.1$ and (b) $El = 5$ at $ t/t_c \approx 1.2$. The results are shown on a slice through the midplane of the computational domain. Velocities in red, orange, yellow and green are aligned with gravity (which points in the $-\textbf{e}_z$ direction) and velocities in blue indicate backflow.}
\label{fig:eccentricEl}
\end{figure*}
\section{Conclusions}
\label{sec:conclusions}
Direct numerical simulations (DNS) of random arrays of spherical particles immersed in Newtonian and constant-viscosity viscoelastic fluids were performed using a finite-volume method. The overall procedure solves the equations of motion coupled with the viscoelastic Oldroyd-B constitutive equation using a log-conformation approach, with a SIMPLEC (Semi-Implicit Method for Pressure-Linked Equations-Consistent) method. The drag forces on individual particles were calculated with the aim of providing an approximate closed-form model to describe numerical simulation data obtained for the unbounded flow of Newtonian and Oldroyd-B fluid past random arrays of spheres. This expression can then be integrated into a Eulerian-Lagrangian solver that enables coupled simulations of the fluid flow and particle migration over a wide range of kinematic conditions. For this purpose, the DNS consisted of a total of 150 different configurations, in which the average fluid-particle drag force is obtained for solid volume fractions $\phi$ $(0 < \phi \leq 0.2)$ and Weissenberg number $Wi$ $(0 \leq Wi \leq 4)$.
The proposed DNS methodology was first tested and verified for the creeping flow of random arrays of spheres immersed in a Newtonian fluid. It was found that the numerical results obtained agree with the Lattice-Boltzmann results of \citet{Hill2001} and can be described by the best-fit model of \citet{Hoef2005}. Statistical accuracy was achieved by averaging the DNS results at each value of $\phi$ over five random configurations, resulting in errors below $3.5\%$ of the average drag force. Subsequently, the same DNS methodology was used to perform finite-volume simulations of viscoelastic creeping flows (using the Oldroyd-B constitutive equation with $\zeta=0.5$) past the same fixed random configurations of particles. A simple factorized closure model for the viscoelastic drag coefficient of random arrays of spheres (corresponding to moderately dense suspensions $\phi\leq 0.2$) translating in a quasi-linear Oldroyd-B viscoelastic fluid was proposed, by fitting the DNS results with an equation of the same form as the \citet{Hoef2005} model, combined with the viscoelastic drag force correction on a single sphere proposed by \citet{Salah2019}. The resulting regression model accounts for 98.2\% of the variance of the numerical data, with an average error of 5.7\%.
Finally, a numerical formulation for Eulerian-Lagrangian simulation of solid particles in viscoelastic fluids, $DPMviscoelastic$, was presented and implemented using a combination of the finite-volume and the discrete particle methods. The implementation was carried out by extending the solver $DPMFoam$ from the open-source $OpenFOAM$ library. The algorithm solves the motion of an incompressible viscoelastic fluid phase in the presence of a secondary particulate phase, in which the volume-averaged continuity and Navier-Stokes equations are employed together with a viscoelastic constitutive equation to describe the fluid flow, and a discrete particle method is used to update the particle movements. This approach guarantees the coupling between the dynamics of the continuous fluid and the discrete solid phases, by imposing a two-way coupling between the two phases. The coupling is provided by momentum transfer through the drag force expression proposed here, which is exerted by the fluid on the solid particles. Additionally, we consider two different formulations to describe the contact between particles, the Hertzian spring-dashpot and Multi-Phase Particle In Cell (MPPIC) models.
As a proof-of-concept, the newly-developed algorithm was assessed for accuracy in two case studies. First, we studied the proppant transport and sedimentation during pumping (a phenomenon typical of hydraulic fracturing operations) in a long channel of rectangular cross section. For the case in which the fluid matrix is Newtonian, the resulting axial distribution of particle sedimentation profiles was compared with experimental data available in the literature for suspensions formulated with Newtonian matrix fluids and different initial particle volume fractions, and good agreement was obtained. Subsequently, the $DPMviscoelastic$ solver was tested on the same problem using an Oldroyd-B fluid. Analysis of the particle distribution and fluid velocity profiles at an elasticity number of $El=30$ showed that fluid elasticity inhibits the rate of particle settling and prevents the formation of a dense sedimented layer along the floor of the channel.
Subsequently, the segregation phenomena which occur when pumping a casing material along horizontal wells were also studied in an annular pipe domain. Numerical simulations using a Newtonian fluid were performed, and we were able to capture the avalanche and dome build-up effects observed in experimental observations of the particle distributions \cite{robisson2020}. Additionally, a viscoelastic fluid was also employed at two different elasticity numbers, $El=0.1$ and $5$. The particles were found to sediment with two markedly contrasting zones: a highly disordered and unsteady region, where a mixture of fluid backflow and gravity-induced settling velocities is present, and a sedimented zone, where particles are closely packed together and the fluid velocity is almost zero. It was found that the stronger migration of the particles to the avalanche zone at $El=0.1$ causes an increase in the suspension bed height compared to the more elastic case with $El=5$.
In summary, the DNS computational methodology presented here allows us to construct a closed-form expression for the drag force exerted by an Oldroyd-B viscoelastic fluid on random arrays of particles, which can be incorporated in a newly-developed Eulerian-Lagrangian viscoelastic code, $DPMviscoelastic$, using an open-source framework. The resulting code can predict the flow patterns and particle distributions that develop in moderate volume fraction suspensions with viscoelastic matrix fluids. We hope that in the future this open-source code can be used to help understand other migration and settling phenomena in complex fluids which are commonly encountered in a range of industrial and biological applications.
\section{Introduction}
In this paper we investigate superintegrability of three-dimensional systems that separate in Cartesian coordinates in the presence of a magnetic field. We say that a mechanical system is superintegrable if it is Liouville integrable and possesses additional independent integrals of motion. Depending on their number we distinguish minimal superintegrability when only one additional integral is present, and maximal superintegrability when the number of additional integrals is the maximal possible, i.e., equal to the number of degrees of freedom minus one. (In three spatial dimensions there is no other possibility.)
The study of superintegrability with magnetic fields was initiated in~\cite{DoGraRaWin} and subsequently followed in both two spatial dimensions~\cite{BeWin,CharHuWin,Pucacco,PuRos} and three spatial dimensions~\cite{BS, MS,MS2,MSW,MSW2}; a relativistic version of the problem was recently considered too, cf.~\cite{HeiIld}. Separability of three-dimensional systems with magnetic fields was considered in the papers~\cite{BeChaRas, ShaBaMe}. Particular planar two-body systems, e.g., Coulomb, in perpendicular constant magnetic field were also studied from the point of view of solvability and superintegrability, see, e.g., \cite{TurEsc1,Taut1,Taut2,TurEsc2}.
It turns out that the presence of a magnetic field significantly increases the complexity of both the calculations and the structure of these systems. E.g., contrary to the case without magnetic field, separability in orthogonal coordinates and integrability with integrals at most quadratic in the momenta are no longer equivalent; namely, separability is stronger and implies the existence of at least one integral linear in the momenta. Similarly, the explicit construction of superintegrable systems and their classification become much harder when magnetic fields are present.
In the present paper we attempt to approach the problem from a different viewpoint. We exploit the fact that in certain situations the three-dimensional system can be rewritten as effectively a two-dimensional one without magnetic field, thus generalizing the principal idea of~\cite{MS2}. In other cases we show that the existence of a quadratic integral necessarily implies the existence of an integral in a particular simpler form, which makes our calculations tractable. When the results of the present paper and~\cite{MS, MSW} are viewed together, they provide an exhaustive list of three-dimensional quadratically minimally and maximally superintegrable systems with magnetic fields separable in Cartesian coordinates.
We shall investigate the superintegrability of the system defined on the phase space $\mathbb{R}^6$, with the canonical coordinates $(\vec x,\vec p)$, by
\begin{gather}\label{Hamiltonian}
H(\vec x,\vec p)=\frac12\big(\big(p_1^A\big)^2+ \big(p_2^A\big)^2+\big(p_3^A\big)^2\big)+ W(\vec x),
\end{gather}
where $W(\vec x)$ denotes the so-called electrostatic or effective potential, $p_j^A$ are the covariant expressions for the momenta
\begin{gather}\label{covarP}
p_j^A=p_j+ A_j(\vec x),\qquad j=1,2,3,
\end{gather}
and $A_j(\vec x)$ are the components of the vector potential. The magnetic field $\vec B(\vec x)$ is related to~$\vec A(\vec x)$ through
\begin{gather*}
\vec B(\vec x)=\nabla\times\vec A(\vec x).
\end{gather*}
Newtonian equations of motion and thus also the physical dynamics are gauge invariant, i.e., depend only on $B(\vec x)$ and $\nabla W(\vec x)$. However, in the Hamiltonian formulation gauge transformations can be seen as canonical transformations (cf.\ \cite[Problem~11.25]{KotSer}), namely they alter the Hamiltonian, the corresponding Hamilton's equations of motion and the Hamilton--Jacobi equation in a prescribed way. Separation of variables in the Hamilton--Jacobi equation is related to a~specific choice of the coordinate system and is not preserved under canonical transformations~-- on the contrary, one looks for a suitable canonical transformation such that the system becomes separable after it. Since we are interested in systems that separate in Cartesian coordinates, we find it preferable to work in a suitably chosen fixed gauge adapted to the separation.
Furthermore, we will sometimes use canonical transformations to reduce to cyclic coordinates corresponding to integrals. Also in this perspective, it is helpful to fix an appropriate gauge. However, the final results, in particular the superintegrable systems found shall be given in the gauge covariant form, so to express them in the most general way.
In gauge dependent form the Hamiltonian \eqref{Hamiltonian} reads
\begin{gather}\label{Hamiltonian_gauge}
H(\vec x,\vec p)=\frac12\big(p_1^2+ p_2^2+p_3^2\big)+ A_1(\vec x) p_1 + A_2(\vec x)p_2 + A_3(\vec x) p_3+ V(\vec x),
\end{gather}
where the gauge dependent ``scalar'' potential $V(\vec x)$, i.e., the momentum-free term in~\eqref{Hamiltonian_gauge}, is related to the gauge invariant electrostatic potential $W(\vec x)$ via
\begin{gather*}
V(\vec x)=W(\vec x)+ \frac12 \big|\vec A(\vec x)\big|^2.
\end{gather*}
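This relation is simply the expansion of the squares of the covariant momenta \eqref{covarP} in \eqref{Hamiltonian}:

```latex
\frac12\sum_{j=1}^{3}\big(p_j+A_j(\vec x)\big)^2+W(\vec x)
=\frac12\big(p_1^2+p_2^2+p_3^2\big)+\sum_{j=1}^{3}A_j(\vec x)\,p_j
+W(\vec x)+\frac12\big|\vec A(\vec x)\big|^2,
```

whose momentum-free part is precisely $V(\vec x)$ in \eqref{Hamiltonian_gauge}.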
There are only two cases in which the system~\eqref{Hamiltonian_gauge} separates in Cartesian coordinates \cite{BeChaRas,ShaBaMe}, up to a canonical permutation of the variables. Let us write them in both gauge dependent and gauge covariant form:
\textbf{Case I}
\begin{gather}\label{Sep1}
V(\vec x)=V_1(x_1)+ V_2(x_2),\qquad \vec A(\vec x)=(0,0,u_1 (x_2)-u_2 (x_1)),
\end{gather}
therefore
\begin{gather}\label{Omega_sep1}
\vec B(\vec x)=(u_1' (x_2), u_2' (x_1),0),\qquad W(\vec x)=V_1(x_1)+ V_2(x_2)-\frac12(u_1(x_2)-u_2(x_1))^2.
\end{gather}
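For the reader's convenience, the magnetic field in \eqref{Omega_sep1} follows from a direct evaluation of the curl of the vector potential \eqref{Sep1}:

```latex
\vec B=\nabla\times\vec A
=\left(\frac{\partial A_3}{\partial x_2}-\frac{\partial A_2}{\partial x_3},\;
\frac{\partial A_1}{\partial x_3}-\frac{\partial A_3}{\partial x_1},\;
\frac{\partial A_2}{\partial x_1}-\frac{\partial A_1}{\partial x_2}\right)
=\big(u_1'(x_2),\,u_2'(x_1),\,0\big),
```

while the effective potential is obtained from $W(\vec x)=V(\vec x)-\frac12|\vec A(\vec x)|^2$.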
\textbf{Case II}
\begin{gather}\label{Sep2}
V(\vec x)=V_1(x_1),\qquad \vec A(\vec x)=(0,u_3 (x_1),-u_2(x_1)),
\end{gather}
thus
\begin{gather}\label{Omega_sep2}
\vec B(\vec x)=(0, u_2' (x_1),u_3'(x_1)),\qquad W(\vec x)=V_1(x_1)-\frac12\big(u_3(x_1)^2+u_2(x_1)^2\big).
\end{gather}
In these two cases the system admits two Cartesian-type integrals, related to the separation of variables:
\begin{gather}
X_1=\big(p_1^A\big)^2-2(u_2(x_1)(p_3^A-u_1(x_2)+u_2(x_1))-V_1(x_1))=p_1^2- 2 (u_2(x_1) p_3- V_1(x_1)),\nonumber\\
X_2=\big(p_2^A\big)^2+2(u_1(x_2)(p_3^A-u_1(x_2)+u_2(x_1))+V_2(x_2))=p_2^2+ 2 (u_1(x_2) p_3+V_2(x_2))\!\!\!\!\label{CartInt1}
\end{gather}
for~\eqref{Sep1} and
\begin{gather}
X_1=p_2^A-u_3(x_1)= p_2,\qquad X_2=p_3^A- u_2(x_1)=p_3\label{CartInt2}
\end{gather}
for \eqref{Sep2}.
\begin{rem}
$X_0=p_3^A - u_1(x_2)+ u_2 (x_1)=p_3$ is another integral of~\eqref{Omega_sep1}, though dependent on the Hamiltonian and \eqref{CartInt1}.
\end{rem}
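As an elementary sanity check, the vanishing of the Poisson brackets $\{H,X_1\}$ and $\{H,X_2\}$ for Case~I can be verified numerically with a finite-difference Poisson bracket. In the sketch below the functions $u_1$, $u_2$, $V_1$, $V_2$ and the phase-space point are arbitrary illustrative choices, not a system singled out in the text:

```python
import math

# Arbitrary illustrative choices of the free functions in Case I.
u1 = lambda y: y * y
u2 = lambda x: math.sin(x)
V1 = lambda x: x * x
V2 = lambda y: math.cos(y)

def H(q):
    # Hamiltonian in the gauge of Case I: A = (0, 0, u1(x2) - u2(x1))
    x1, x2, x3, p1, p2, p3 = q
    return 0.5 * (p1**2 + p2**2 + p3**2) + (u1(x2) - u2(x1)) * p3 + V1(x1) + V2(x2)

def X1(q):
    # first separation integral, gauge-dependent form p1^2 - 2(u2 p3 - V1)
    x1, x2, x3, p1, p2, p3 = q
    return p1**2 - 2.0 * (u2(x1) * p3 - V1(x1))

def X2(q):
    # second separation integral, gauge-dependent form p2^2 + 2(u1 p3 + V2)
    x1, x2, x3, p1, p2, p3 = q
    return p2**2 + 2.0 * (u1(x2) * p3 + V2(x2))

def d(f, q, i, h=1e-6):
    # central finite difference in the i-th phase-space coordinate
    qp, qm = list(q), list(q)
    qp[i] += h
    qm[i] -= h
    return (f(qp) - f(qm)) / (2.0 * h)

def bracket(f, g, q):
    # {f, g} = sum_i (df/dx_i dg/dp_i - df/dp_i dg/dx_i), q = (x1,x2,x3,p1,p2,p3)
    return sum(d(f, q, i) * d(g, q, i + 3) - d(f, q, i + 3) * d(g, q, i)
               for i in range(3))

q0 = (1.1, -0.7, 0.4, 0.3, 0.9, -0.5)
print(abs(bracket(H, X1, q0)), abs(bracket(H, X2, q0)))  # both ~0
```

Both brackets vanish identically for any smooth choice of the four functions, which is the content of the separation in Case~I.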
Minimal superintegrability due to the existence of another first-order integral has been studied in \cite{MS, MSW}. Here we investigate the conditions for the existence of an additional integral of order at least two for the systems~\eqref{Omega_sep1},~\eqref{Omega_sep2}. We give an exhaustive list of systems for which an additional second-order integral exists, and are able to answer the question on the existence of higher-order integrals in special cases.
Sections \ref{sec:minimally_ext} and \ref{sec:maximally_higher} present two propositions for finding out whether certain classes of systems are superintegrable by reducing to a two-dimensional (2D) problem without magnetic field. In this way we also construct families of systems with higher-order integrals. Next, in Section~\ref{sec:second_ord_int} we address the problem of second-order superintegrability. The determining equations for second-order integrals are given in gauge covariant form, together with their compatibility conditions. In Section~\ref{sec:necessary} we give a necessary condition for second-order superintegrability, which is used in Sections~\ref{sec:case1} and \ref{sec:case2} to simplify the structure of the integral for the classes \eqref{Omega_sep1}, \eqref{Omega_sep2}, respectively. With these simplifications at hand, the determining equations for the integral can be solved. In Section~\ref{Conclusion1} we list the superintegrable systems so found; their explicit derivation is rather technical and tedious and we review it in Appendices~\ref{appendix:case1},~\ref{appendix:case1-part2} and~\ref{appendix:case2}. The special case in which the magnetic field is constant and the functions $V_j$ in \eqref{Omega_sep1} and \eqref{Omega_sep2} are at most quadratic polynomials is studied in Section~\ref{sec:pol2}. Finally, in Section~\ref{Conclusion2} we discuss the approaches to construction of higher-order integrals.
\section[Minimal superintegrability for Case~I when all the integrals commute with one linear momentum]{Minimal superintegrability for Case~I\\ when all the integrals commute with one linear momentum}\label{sec:minimally_ext}
Let us consider the natural Hamiltonian systems on the phase space $(x_1,x_2,p_1,p_2)$, for $\kappa\in\mathbb{R}$, $\kappa\neq0$
\begin{gather}\label{2dof}
\mathcal H^{\kappa}_0(x_1,x_2,p_1,p_2)=\frac12\big(p_1^2+p_2^2\big)+ \kappa(u_1(x_2)- u_2(x_1))+ V_1(x_1)+ V_2(x_2).
\end{gather}
For the sake of clarity let us refer here to the Hamiltonian of Case I as $\mathcal H$.
Since $p_3$ is an integral of motion for~\eqref{Sep1}, setting $p_3=\kappa$ gives $\mathcal H_0^{\kappa}= \mathcal H(x_1,x_2,x_3,p_1,p_2,\kappa)-\frac{1}{2}\kappa^2$. Both systems have a pair of second-order integrals corresponding to separation: $X_j$ as in~\eqref{CartInt1} for $\mathcal H$ and clearly
\begin{gather*}
\mathcal I_j^{\kappa}= X_j(x_1,x_2,p_1,p_2,\kappa),\qquad j=1,2
\end{gather*}
for \eqref{2dof}.
If \eqref{Sep1} possesses any additional integral $X_3$ independent of the variable $x_3$, then
$\mathcal I_3^{\kappa}(x_1,x_2,p_1,\allowbreak p_2)=X_3(x_1,x_2,p_1,p_2,\kappa)$ would be an integral for~\eqref{2dof}. And vice versa, any additional integral~$\mathcal I_3^{\kappa}$ of~\eqref{2dof} would correspond to an integral $X_3$ of~\eqref{Sep1}, obtained by simply replacing~$\kappa$ by~$p_3$, i.e., $X_3(x_1,x_2,x_3,p_1,p_2,p_3)=\mathcal I_3^{p_3}(x_1,x_2,p_1,p_2)$. Indeed,
\begin{gather*}
\{ \mathcal H, X_3 \}=\sum_{i=1}^2\left(\frac{\partial \mathcal H_0^{p_3} }{\partial {x_i}}\frac{\partial \mathcal I_3^{p_3}}{\partial {p_i}}-\frac{\partial\mathcal I_3^{p_3} }{\partial {x_i}}\frac{\partial \mathcal H_0^{p_3} }{\partial {p_i}}\right) + \frac12 \big\{p_3^2, X_3 \big\}=0,
\end{gather*}
where $\{\,,\,\}$ is the Poisson bracket on the phase space $\mathbb{R}^6$.
The expression on the right-hand side vanishes because neither $\mathcal H$ nor $X_3$ depends on $x_3$ and $\mathcal I_3^{p_3}$ is an integral of $\mathcal H_{0}^{p_3}$. Thus, we arrive at the following immediate conclusion
\begin{Proposition}\label{lemma_minimally}
Let us consider the Hamiltonian system defined by \eqref{Hamiltonian} on the phase space $(x_1,x_2,x_3,p_1,p_2,p_3)$ with magnetic field and effective potential as in \eqref{Omega_sep1}. Such system admits an additional independent integral $I_3$ such that $\{I_3,p_3\}=0$ if and only if \eqref{2dof} is superintegrable on the phase space $(x_1,x_2,p_1,p_2)$.
\end{Proposition}
Therefore all the systems of the form \eqref{Omega_sep1} that are minimally superintegrable, with an additional integral independent of the Cartesian coordinate $x_3$, can be deduced from 2D natural superintegrable systems of the form \eqref{2dof}. And vice versa, every two-dimensional superintegrable system of the form \eqref{2dof} can be extended to a minimally superintegrable system in three degrees of freedom with magnetic field. Superintegrable systems of the form \eqref{2dof} have been widely studied. In particular, they have been completely classified for integrals up to third order \cite{MiPoWin}. Concerning higher-order integrals, many examples are known, including the harmonic oscillator and the caged oscillator~\cite{EvaVe,RTW}, and a wide class of so-called exotic potentials~\cite{EscLVWin1,EscWinYur,MarSajWin}.
\subsection{Example: extension of 2D second-order superintegrable systems}
Table \ref{table:2Dquad} contains all three-dimensional systems that can be proven to be (at least) minimally quadratically superintegrable by applying Proposition \ref{lemma_minimally} to 2D superintegrable systems that separate in Cartesian coordinates and have integrals at most quadratic in the momenta. The list of 2D systems is taken from \cite{MiPoWin}, from which we consider only the systems on real phase space. To obtain the most general family of systems (and recalling that the Hamiltonian must depend linearly on $\kappa$), we renamed all the parameters as $c_j=a_j \kappa + b_j$, with the $a_j$ not all vanishing, then set $p_3=\kappa$ and applied Proposition \ref{lemma_minimally}. The third integral, leading to superintegrability, can then be found from the integral $\mathcal I_{3}$ of the 2D system by substituting $c_j=a_j p_3 + b_j$. Since the dependence on the constants $c_j$ is linear, the order of the so-obtained integral remains quadratic.
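As a concrete illustration of the construction, one can verify numerically that for the system $\mathcal E_1$ of Table~\ref{table:2Dquad} the substitution $c_j=a_j p_3+b_j$ in $\mathcal I_3$ indeed yields an integral of the three-dimensional Hamiltonian. The finite-difference check below uses arbitrary parameter values and a generic phase-space point (both purely illustrative):

```python
# Numerical check for system E1 of Table 1: the substitution
# c_j -> a_j p3 + b_j in I3 yields an integral of the 3D Hamiltonian.
# The values of a_j, b_j are arbitrary (only "a_j not all zero" is needed).
a = (0.7, -0.3, 0.5)
b = (0.2, 1.1, -0.4)

def H(q):
    # 3D extension of E1 in the gauge of Case I
    x1, x2, x3, p1, p2, p3 = q
    r2 = x1**2 + x2**2
    return (0.5 * (p1**2 + p2**2 + p3**2)
            + (a[0] * r2 + a[1] / x1**2 + a[2] / x2**2) * p3
            + b[0] * r2 + b[1] / x1**2 + b[2] / x2**2)

def X3(q):
    # I3 of E1 with c_j replaced by a_j p3 + b_j
    x1, x2, x3, p1, p2, p3 = q
    c2 = a[1] * p3 + b[1]
    c3 = a[2] * p3 + b[2]
    L3 = x1 * p2 - x2 * p1
    return L3**2 + 2.0 * (c2 * x2**2 / x1**2 + c3 * x1**2 / x2**2)

def d(f, q, i, h=1e-6):
    # central finite difference in the i-th phase-space coordinate
    qp, qm = list(q), list(q)
    qp[i] += h
    qm[i] -= h
    return (f(qp) - f(qm)) / (2.0 * h)

def bracket(f, g, q):
    # Poisson bracket on R^6 with q = (x1, x2, x3, p1, p2, p3)
    return sum(d(f, q, i) * d(g, q, i + 3) - d(f, q, i + 3) * d(g, q, i)
               for i in range(3))

q0 = (1.3, 0.8, -0.2, 0.4, -0.6, 0.9)
print(abs(bracket(H, X3, q0)))  # ~0 up to finite-difference error
```

Since $X_3$ depends on $p_3$ only through the constants $c_2$, $c_3$ and the 2D bracket vanishes identically in the $c_j$, the full three-dimensional bracket vanishes as well, in agreement with Proposition~\ref{lemma_minimally}.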
\begin{sidewaystable}
\centering
\caption{3D (at least) minimally quadratically superintegrable extensions of 2D quadratically superintegrable systems that separate in Cartesian coordinates. For the reader's convenience, we give the Hamiltonian expressed in the gauge choice~\eqref{Sep1}, but also the functions $u_j$ and $V_j$ that allow one to find the magnetic field $\vec B$ and potential $W$ as in the more general gauge invariant form~\eqref{Omega_sep1}. In the integrals, $L_3$ denotes the angular momentum on the plane, $L_3=x_1 p_2- x_2 p_1$. }\label{table:2Dquad}
\vspace{2mm}
\begin{tabular}{|cc|c|}
\hline
& 2D system and its third integral & 3D system \\
\hline
$\mathcal E_1$: & $\begin{array}{c}
\mathcal H_0= \frac12\big(p_1^2+p_2^2\big)+ c_1\big(x_1^2+ x_2^2\big)+ \frac{c_2}{x_1^2}+ \frac{c_3}{x_2^2}\\
\mathcal I_3= L_3^2 +2\big(c_2\frac{x_2^2}{x_1^2}+ c_3\frac{x_1^2}{x_2^2}\big)\end{array}$ & $\begin{array}{c}
\mathcal H=\frac12\big(p_1^2+p_2^2+p_3^2\big)+\big(a_1\big(x_1^2+ x_2^2\big)+ \frac{a_2}{x_1^2}+ \frac{a_3}{x_2^2}\big) p_3+ b_1\big(x_1^2+ x_2^2\big)+ \frac{b_2}{x_1^2}+ \frac{b_3}{x_2^2} \tsep{2pt}\\
u_1(x_2)= a_1 x_2^2+\frac{a_3}{x_2^2},\qquad u_2(x_1)= -a_1 x_1^2-\frac{a_2}{x_1^2} \\ V_1(x_1)= b_1 x_1^2+\frac{b_2}{x_1^2},\qquad V_2(x_2)= b_1 x_2^2+\frac{b_3}{x_2^2} \end{array}$ \bsep{2pt}\\
\hline
$\mathcal E_2$: & $\begin{array}{c}
\mathcal H_0=\frac12\big(p_1^2+p_2^2\big)+c_1\big(4 x_1^2+ x_2^2\big) + c_2 x_1 + \frac{c_3}{x_2^2}\\
\mathcal I_3= p_2 L_3-x_2^2\big( 2 c_1 x_1+ \frac{c_2}{2}\big)+ 2c_3\frac{x_1}{x_2^2}
\end{array}$ & $\begin{array}{c}
\mathcal H=\frac12\big(p_1^2+p_2^2+ p_3^2\big)+\big(a_1\big(4 x_1^2+ x_2^2\big) + a_2 x_1 + \frac{a_3}{x_2^2}\big)p_3+ b_1\big(4 x_1^2+ x_2^2\big) + b_2 x_1 + \frac{b_3}{x_2^2} \tsep{2pt}\\
u_1(x_2)=a_1 x_2^2 +\frac{a_3}{x_2^2},\qquad u_2(x_1)=-4 a_1 x_1^2-a_2 x_1 \\ V_1(x_1)=4 b_1 x_1^2+b_2 x_1,\qquad V_2(x_2) = b_1 x_2^2+\frac{b_3}{x_2^2} \end{array}$ \bsep{2pt}\\
\hline
$\mathcal E_3$: & $\begin{array}{c}
\mathcal H_0=\frac12\big(p_1^2+p_2^2\big)+c_1\big(x_1^2 + x_2^2\big)+ c_2 x_1+ c_3 x_2\\
\mathcal I_3=p_1 p_2 + 2 c_1 x_1 x_2 + c_2 x_2+ c_3 x_1
\end{array}$ & $ \begin{array}{c}
\mathcal H=\frac12\big(p_1^2+p_2^2+ p_3^2\big)+ \big(a_1\big(x_1^2 + x_2^2\big)+ a_2 x_1+ a_3 x_2\big) p_3 +b_1\big(x_1^2 + x_2^2\big)+ b_2 x_1+ b_3 x_2 \tsep{2pt}\\
u_1(x_2)=a_1 x_2^2 + a_3 x_2,\qquad u_2(x_1)=-a_1 x_1^2-a_2 x_1 \\ V_1(x_1)=b_1 x_1^2+ b_2 x_1,\qquad V_2 (x_2)=b_1 x_2^2 + b_3 x_2
\end{array}$ \bsep{2pt}\\
\hline
\end{tabular}
\end{sidewaystable}
\subsection[Example: a family of higher-order superintegrable systems from the 2D caged oscillator]{Example: a family of higher-order superintegrable systems\\ from the 2D caged oscillator}\label{sec:minimally_higher}
Let us consider the two-dimensional caged anisotropic oscillator
\begin{gather}\label{2Dcage}
\mathcal {H}_0=\frac12\big(p_1^2+p_2^2\big)+\omega\big(\ell^2 x_1^2+m^2 x_2^2\big)+ \frac{\alpha}{x_1^2}+\frac{\beta}{x_2^2}
\end{gather}
for $\omega\in\mathbb{R}\setminus\{0\}$, $\ell$, $m$ nonvanishing integers and $\alpha,\beta\in\mathbb{R}$. The system is well known to be superintegrable if $\frac{\ell}{m}$ is rational~\cite{EvaVe,RTW}. A first straightforward extension to a~3D superintegrable system is given by
\begin{gather}\label{extcage1}
\mathcal {H}=\frac12\big(p_1^2+p_2^2+ p_3^2\big)+\big(\ell^2 x_1^2+m^2 x_2^2\big)p_3+ \frac{\alpha}{x_1^2}+\frac{\beta}{x_2^2},
\end{gather}
that can be transformed into \eqref{2Dcage} by simply reducing $p_3=\omega$.
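This reduction is immediate to verify symbolically; in the following SymPy sketch (our illustration, with symbol names chosen ad hoc), the difference between the reduced Hamiltonian and \eqref{2Dcage} is the irrelevant additive constant $\omega^2/2$:

```python
import sympy as sp

x1, x2, p1, p2, p3 = sp.symbols('x1 x2 p1 p2 p3')
w, ell, m, alpha, beta = sp.symbols('omega ell m alpha beta')

# 3D extension (extcage1) and 2D caged oscillator (2Dcage)
H3 = (p1**2 + p2**2 + p3**2)/2 + (ell**2*x1**2 + m**2*x2**2)*p3 \
     + alpha/x1**2 + beta/x2**2
H2 = (p1**2 + p2**2)/2 + w*(ell**2*x1**2 + m**2*x2**2) \
     + alpha/x1**2 + beta/x2**2

# reducing p3 = omega leaves only an additive constant
print(sp.simplify(H3.subs(p3, w) - H2))   # omega**2/2
```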
A more general extension can be constructed as in the previous example. Let us set
\begin{gather}\label{newparameters2}
\ell^2= \ell_1 \kappa +\ell_2,\qquad \alpha=\alpha_1\kappa+\alpha_2, \qquad
m^2=m_1 \kappa+m_2 ,\qquad \beta=\beta_1\kappa+\beta_2.
\end{gather}
The system \eqref{2Dcage} can then be seen as the 2D reduction of
\begin{gather}
\mathcal {H} = \frac12\big(p_1^2+p_2^2+p_3^2\big)+\left(\omega\big(\ell_1 x_1^2+m_1 x_2^2\big)+\frac{\alpha_1}{x_1^2}+\frac{\beta_1}{x_2^2}\right)p_3 \nonumber \\
\hphantom{\mathcal {H} =}{} + \omega\big(\ell_2 x_1^2+m_2 x_2^2\big) + \frac{\alpha_2}{x_1^2}+\frac{\beta_2}{x_2^2},\label{extcage2}
\end{gather}
by substituting $p_3=\kappa$. We obtain in this way the three-dimensional integrable system \eqref{extcage2} that becomes superintegrable when the frequency ratio of \eqref{2Dcage} (where \eqref{newparameters2} has to be taken into account) is a rational number, i.e., when
\begin{gather}\label{ratio1}
\frac{\ell_1 p_3+\ell_2}{m_1 p_3+ m_2 }=\frac{\ell^2}{m^2},\qquad \frac{\ell}{m}\in\mathbb{Q},
\end{gather}
for every possible value of the phase space variable~$p_3$. Equivalently, \eqref{ratio1} can be written as
\begin{gather*
\big(m^2 \ell_1-\ell^2 m_1\big)p_3+ m^2\ell_2-m_2\ell^2=0, \qquad \frac{\ell}{m}\in\mathbb{Q}.
\end{gather*}
The above equation contains a polynomial in $p_3$ that must be identically zero. This is possible only when the coefficient of each power of~$p_3$ vanishes. Namely, when
\begin{gather}\label{ratio_cond}
\frac{\ell_1}{m_1}=\frac{\ell_2}{m_2}=\frac{\ell^2}{m^2},\qquad \frac{\ell}{m}\in\mathbb{Q}.
\end{gather}
Thus, the family of systems~\eqref{extcage2} is superintegrable if and only if its parameters satisfy \eqref{ratio_cond} (and in that case also \eqref{2Dcage} is superintegrable). If $\ell_j=m_j=0$ for one index $j$ (but not for both $j=1,2$), the previous condition reduces to the single equation for the remaining index $k\neq j$,
\begin{gather*}
\frac{\ell_k}{m_k}=\frac{\ell^2}{m^2},\qquad \frac{\ell}{m}\in\mathbb{Q}.
\end{gather*}
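The condition can also be read off from a one-line symbolic computation (a SymPy sketch of ours): the derivative of the frequency ratio in \eqref{ratio1} with respect to $p_3$ has numerator $\ell_1 m_2-\ell_2 m_1$, whose vanishing is exactly the proportionality \eqref{ratio_cond}:

```python
import sympy as sp

p3, l1, l2, m1, m2 = sp.symbols('p3 ell1 ell2 m1 m2')

ratio = (l1*p3 + l2)/(m1*p3 + m2)

# the ratio is independent of p3 iff this numerator vanishes,
# i.e., iff ell1*m2 == ell2*m1
num = sp.numer(sp.cancel(sp.diff(ratio, p3)))
print(sp.expand(num))
```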
For $\alpha_1=\beta_1=\ell_2=m_2=0$ we have the simpler system~\eqref{extcage1}.
The case $\alpha_j=\beta_j=0$, $\ell_j=m_j=\pm1$, $j=1,2$, was studied in~\cite{MS}, where it is shown to be minimally quadratically superintegrable, with a first-order fourth independent integral (besides the two Cartesian ones) inherited from the 2D caged oscillator. In the more general case \eqref{extcage2}, the order of the fourth integral can be arbitrarily high, depending on the value of $\frac{\ell}{m}$. Notice that all the systems in Table~\ref{table:2Dquad} are contained in the family \eqref{extcage2}, except the systems $\mathcal{E}_2$ and $\mathcal{E}_3$ in the special case $a_1=b_1=0$ (i.e., $c_1=0$), in which the linear terms in the space variables cannot be eliminated by translation, due to the absence of quadratic terms.
\section[Maximal superintegrable class canonically conjugated to natural 2D systems]{Maximal superintegrable class canonically conjugated\\ to natural 2D systems}\label{sec:maximally_higher}
Let us consider the system whose magnetic field and effective potential read
\begin{gather}\label{magfieldB}
\vec B(\vec x)=(0, \gamma,0),\qquad \gamma\in\mathbb{R}\setminus\{0\}
\end{gather}
and
\begin{gather}\label{potB}
W(\vec x)= V(x_2),
\end{gather}
respectively. This system can be written in the form~\eqref{Sep1}, with the gauge chosen as
\begin{gather*
\vec A(\vec x)=(0,0, -\gamma x_1).
\end{gather*}
Its Hamiltonian reads
\begin{gather}\label{HB}
H=\frac12\big(p_1^2+p_2^2+p_3^2\big)-\gamma x_1 p_3 +\frac{\gamma^2}{2} x_1^2+ V(x_2).
\end{gather}
Actually, by a different choice of gauge and a canonical permutation of the variables $x_1$ and $x_2$, we see that the system also belongs to Case~II.
The Hamiltonian \eqref{HB} admits three independent first-order integrals~\cite{MS}
\begin{gather}\label{integralsB}
I_1 = p_1-\gamma x_3,\qquad I_2 = p_3, \qquad I_3 = 2l_2 +\gamma \big(x_1^2-x_3^2\big).
\end{gather}
Out of them, we can construct two Cartesian-type integrals,
\begin{gather}\label{CartIntCan}
X_1=I_1^2+\gamma I_3,\qquad X_2=2 H-I_1^2-I_2^2 -\gamma I_3.
\end{gather}
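These statements can be checked directly; the following SymPy sketch (our verification, with the second Cartesian combination taken as $X_2=2H-I_1^2-I_2^2-\gamma I_3$) confirms that $I_1$, $I_2$, $I_3$ Poisson-commute with $H$ and that $X_1$, $X_2$ are the separated Cartesian blocks:

```python
import sympy as sp

x1, x2, x3, p1, p2, p3, g = sp.symbols('x1 x2 x3 p1 p2 p3 gamma')
V = sp.Function('V')
q, p = [x1, x2, x3], [p1, p2, p3]

def pb(f, h):
    return sum(sp.diff(f, q[j])*sp.diff(h, p[j])
               - sp.diff(h, q[j])*sp.diff(f, p[j]) for j in range(3))

H = (p1**2 + p2**2 + p3**2)/2 - g*x1*p3 + g**2*x1**2/2 + V(x2)

I1 = p1 - g*x3
I2 = p3
l2 = x3*p1 - x1*p3                     # second angular momentum component
I3 = 2*l2 + g*(x1**2 - x3**2)

print([sp.simplify(pb(H, J)) for J in (I1, I2, I3)])   # [0, 0, 0]

# Cartesian-type blocks built from the first-order integrals
X1 = sp.expand(I1**2 + g*I3)                 # p1**2 - 2*g*x1*p3 + g**2*x1**2
X2 = sp.expand(2*H - I1**2 - I2**2 - g*I3)   # p2**2 + 2*V(x2)
```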
The system can be reduced to two degrees of freedom through the following canonical transformation
\begin{gather}\label{reducingtrB}
x_1= X+ \frac{P_3}{\gamma},\qquad x_2=Y,\qquad x_3=Z+\frac{1}{\gamma}P_1,\qquad p_j=P_j,\qquad j=1,2,3,
\end{gather}
with the second type generating function
\begin{gather*}
G(\vec x,\vec P)=\left(x_1-\frac {1}{\gamma} P_3\right) P_1+ x_2 P_2+ x_3 P_3.
\end{gather*}
The Hamiltonian in the new coordinates reads
\begin{gather}\label{HB2}
\mathcal K(\vec X,\vec P)=\frac12\big(P_1^2+P_2^2\big)+ \frac12 \gamma^2 X^2+ V(Y),
\end{gather}
i.e., it is effectively in two degrees of freedom and without magnetic field. The Hamiltonian \eqref{HB2} does not depend on two of the phase-space variables $(\vec X, \vec P)$, namely $Z$ and $P_3$, which are therefore both integrals. Expressed in the original variables, these integrals correspond to~$p_3$ and $\frac{I_1}{\gamma}$ as in~\eqref{integralsB}. Moreover,~\eqref{HB2} separates in the Cartesian coordinates $(X,Y)$, and the corresponding Cartesian-type integrals, $\mathcal I_1$, $\mathcal I_2$, once written in the original coordinates, provide~\eqref{CartIntCan}. Thus we have
\begin{Proposition}\label{lemma:HBmaxsuper}
The system with the magnetic field~\eqref{magfieldB} and potential~\eqref{potB} is maximally superintegrable if and only if~\eqref{HB2}, seen as a system in two degrees of freedom on the phase space $(X,Y,P_1,P_2)$, has one additional integral of motion independent of $\mathcal I_1$ and $\mathcal I_2$.
\end{Proposition}
Therefore the problem of maximal superintegrability of \eqref{HB} has been reduced to the two-dimensional problem of superintegrability of \eqref{HB2}. In particular, all the potentials $V(Y)$ that make~\eqref{HB2} superintegrable give (by simply replacing $Y=x_2$) the effective potentials that render~\eqref{HB} superintegrable.
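The reduction can be verified symbolically; in the SymPy sketch below (our illustration), applying the canonical transformation \eqref{reducingtrB} to \eqref{HB} reproduces \eqref{HB2} for an arbitrary potential $V$:

```python
import sympy as sp

x1, x2, x3, p1, p2, p3, g = sp.symbols('x1 x2 x3 p1 p2 p3 gamma')
X, Y, Z, P1, P2, P3 = sp.symbols('X Y Z P1 P2 P3')
V = sp.Function('V')

H = (p1**2 + p2**2 + p3**2)/2 - g*x1*p3 + g**2*x1**2/2 + V(x2)

# canonical transformation (reducingtrB)
K = sp.expand(H.subs({x1: X + P3/g, x2: Y, x3: Z + P1/g,
                      p1: P1, p2: P2, p3: P3}))

print(K)   # equals P1**2/2 + P2**2/2 + gamma**2*X**2/2 + V(Y)
```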
The cases
\begin{gather}\label{1/x2potential}
V(Y)=\frac{c}{Y^2}+\frac{\gamma^2 Y^2}{8},
\end{gather}
and
\begin{gather}\label{g22z2potential}
V(Y)=\frac{\gamma^2}{2} Y^2,
\end{gather}
that correspond to 3D superintegrable systems with additional second-order integral have already been found in~\cite{MS} with a different approach.
All the potentials $V(Y)$ that lead to second and third-order superintegrability in 2D have been classified~\cite{MiPoWin}.
If we focus on second-order integrals, they are listed in Table~\ref{table:2Dquad}.
The systems that can be obtained from it by applying the transformation~\eqref{reducingtrB}, and that remain quadratically superintegrable, are given by~\eqref{1/x2potential} and
\begin{gather*}
V(Y)=\frac{\gamma^2}{2} Y^2 + c Y,
\end{gather*}
that, since $\gamma\neq0$, can be reduced to \eqref{g22z2potential} by translation in~$Y$.
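The completing-the-square step is elementary; as a one-line check (SymPy, ours), translating $Y\to Y-c/\gamma^2$ removes the linear term at the cost of an irrelevant additive constant:

```python
import sympy as sp

Y, g, c = sp.symbols('Y gamma c')

V = g**2/2*Y**2 + c*Y
shifted = sp.expand(V.subs(Y, Y - c/g**2))
print(shifted)   # the quadratic term plus a constant; the linear term is gone
```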
However, higher-order superintegrable systems can be generated, e.g., from
\begin{gather}\label{B4thOrder}
V(Y)=\frac{c}{Y^2}+\frac{\gamma^2 Y^2}{2},\qquad c\geq 0.
\end{gather}
The additional integral of \eqref{HB2} is second order and reads (see Table \ref{table:2Dquad})
\begin{gather*}
\mathcal X_4= \mathcal L_3^2+ 2c\frac {X^2}{Y^2}.
\end{gather*}
Here $\mathcal L_3$ denotes the third component of the angular momentum with respect to the coordinates $(X,Y,Z, P_1,P_2,P_3)$.
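That $\mathcal X_4$ is indeed an integral of the reduced system with potential \eqref{B4thOrder} can be confirmed symbolically (a SymPy sketch of ours, on the 2D phase space $(X,Y,P_1,P_2)$):

```python
import sympy as sp

X, Y, P1, P2, g, c = sp.symbols('X Y P1 P2 gamma c')

def pb2(f, h):
    # 2D canonical Poisson bracket
    return (sp.diff(f, X)*sp.diff(h, P1) - sp.diff(h, X)*sp.diff(f, P1)
            + sp.diff(f, Y)*sp.diff(h, P2) - sp.diff(h, Y)*sp.diff(f, P2))

# reduced Hamiltonian (HB2) with V(Y) = c/Y**2 + gamma**2*Y**2/2
K = (P1**2 + P2**2)/2 + g**2*X**2/2 + c/Y**2 + g**2*Y**2/2
L3 = X*P2 - Y*P1
X4 = L3**2 + 2*c*X**2/Y**2

print(sp.simplify(pb2(K, X4)))   # 0
```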
Inverting the transformation \eqref{reducingtrB}, it gives the fourth-order integral
\begin{gather*}
X_4=\frac{1}{\gamma^2}\left(\big(p_2 ^{A} p_3 ^{A}+\gamma p_1^{A} x_2\big)^2+2 c \frac{\big(p_3^{A}\big)^2}{x_2^{2}}\right).
\end{gather*}
Actually, by polynomial combinations with the other integrals, it can be reduced to the third order one
\begin{gather*}
X_5 = 2 \gamma p_2^{A} p_3^{A} l_3^{A}+\gamma^2 \Big( x_1^2\big( p_2^{A}\big)^2 + x_2^2\Big( \big(p_3^{A}\big)^2-\big(p_1^{A}\big)^2 \Big)\Big)+ 2\gamma \frac{x_1}{x_2^2}\big(\gamma^2 x_2^4+ 2 c\big) p_3^A \\
\hphantom{X_5 =}{}+\gamma^2\frac{x_1^2}{x_2^2}\big(\gamma^2 x_2^4+2 c\big),
\end{gather*}
that cannot be further reduced to lower order by using any of the integrals~\eqref{integralsB} or~\eqref{CartIntCan}.
A more general 3D infinite family of maximally superintegrable systems, including the previous cases \eqref{1/x2potential} and~\eqref{B4thOrder} and the one found in \cite{MS2}, corresponds to the caged oscillator
\begin{gather}\label{cagedOsc}
V(Y)=\frac{c}{Y^2}+\frac{ m^2}{\ell^2} \gamma^2 Y^2,\qquad \ell,m\in\mathbb{N}.
\end{gather}
If we compare it with \eqref{extcage2}, we see that for $\gamma^2=\omega \ell_2^2$, $\alpha_2=0$, $\beta_2=c$ and $m_2$ satisfying \eqref{ratio_cond} the two obtained 3D families would have the same scalar potential. However, the magnetic fields differ, rendering~\eqref{cagedOsc} maximally superintegrable, while~\eqref{extcage2}~-- as far as we can see~-- is only minimally superintegrable.
\section{Second-order integrals}\label{sec:second_ord_int}
Any second-order integral of motion can be written as
\begin{gather}\label{classint}
X= \sum_{j=1}^{3} h_j(\vec x) p_j^A p_j^A + \sum_{j,k,l=1}^{3} \frac{1}{2} |\epsilon_{jkl}| n_j(\vec x) p_k^A p_l^A + \sum_{j=1}^{3} s_j(\vec x) p_j^A+m(\vec x),
\end{gather}
where $\epsilon_{jkl}$ is the completely antisymmetric tensor with $\epsilon_{123}=1$.
The condition that the Poisson bracket
\begin{gather*
\{a(\vec x,\vec p),b(\vec x,\vec p)\}=\sum_{j=1}^{3}\left(\frac{\partial a}{\partial {x_j}} \frac{\partial b}{\partial {p_j}} - \frac{\partial b}{\partial {x_j}} \frac{\partial a}{\partial {p_j}} \right)
\end{gather*}
of the integral \eqref{classint} with the Hamiltonian~\eqref{Hamiltonian} vanishes
\begin{gather*}
\{ H,X\}=0
\end{gather*}
seen as a polynomial in the momenta leads to the determining equations for the unknown functions $h_j$, $n_j$, $s_j$, $j=1,2,3$ and $m$ in the integral. Order by order (from the third to the zeroth) they read (cf.~\cite{MSW}):
\begin{gather}\label{3ordcond}
\begin{aligned}
&\partial_{x_1} h_1 = 0, \qquad && \partial_{x_2} h_1 = -\partial_{x_1} n_3 , \qquad && \partial_{x_3} h_1 =- \partial_{x_1} n_2 ,&\\
& \partial_{x_1} h_2 =-\partial_{x_2} n_3, \qquad && \partial_{x_2} h_2 =0, \qquad && \partial_{x_3} h_2 =-\partial_{x_2} n_1 ,& \\
& \partial_{x_1} h_3 =- \partial_{x_3} n_2 , \qquad && \partial_{x_2} h_3 =- \partial_{x_3} n_1 , \qquad && \partial_{x_3} h_3 = 0,\\
& \nabla \cdot \vec n =0 ,&& && &
\end{aligned}\\
\nonumber \partial_{x_1} s_1 = n_2 B_2-n_3 B_3, \\
\nonumber \partial_{x_2} s_2 = n_3 B_3-n_1 B_1, \\
\nonumber \partial_{x_3} s_3 = n_1 B_1-n_2 B_2, \\
\label{2ordcond} \partial_{x_2} s_1 + \partial_{x_1} s_2 =n_1 B_2 -n_2 B_1+2 (h_1 - h_2) B_3, \\
\nonumber \partial_{x_3} s_1+\partial_{x_1} s_3 = n_3 B_1-n_1 B_3+2 (h_3 - h_1) B_2, \\
\nonumber \partial_{x_2} s_3+\partial_{x_3} s_2 = n_2 B_3-n_3 B_2+2 (h_2 - h_3) B_1,
\\
\nonumber \partial_{x_1} m = 2 h_1 \partial_{x_1} W+ n_3 \partial_{x_2} W+ n_2 \partial_{x_3} W+s_3 B_2-s_2 B_3, \\
\label{1ordcond} \partial_{x_2} m = n_3 \partial_{x_1} W+2 h_2 \partial_{x_2} W+ n_1 \partial_{x_3} W+s_1 B_3-s_3 B_1, \\
\nonumber \partial_{x_3} m = n_2 \partial_{x_1} W+ n_1 \partial_{x_2} W+2 h_3 \partial_{x_3} W+s_2 B_1-s_1 B_2,\\
\label{0ordcond}
\vec s \cdot \nabla W = 0.
\end{gather}
The equations \eqref{3ordcond} prescribe that the functions $h_j$, $n_j$ are such that the highest-order terms in the integral \eqref{classint} are linear combinations of products of the generators $p_1$, $p_2$, $p_3$, $l_1$, $l_2$, $l_3$ of the Euclidean group, where $l_j=\sum\limits_{k,l} \epsilon_{jkl} x_k p_l$~\cite{MSW}. Explicitly, in terms of the expressions~\eqref{covarP}, we have
\begin{gather}\label{classintUEA}
X =\sum_{i,j\colon i\leq j}\alpha_{ij}l_i^A l_j^A+ \sum_{i,j}\beta_{ij}p_i^A l_j^A+\sum_{i,j\colon i\leq j} \gamma_{ij}p_i^A p_j^A + \sum_{j=1}^{3} s_j(\vec x) p_j^A+m(\vec x),
\end{gather}
where $l_j^A=\sum\limits_{k,l} \epsilon_{jkl} x_k p_l^A$.
By subtracting the Hamiltonian and the two Cartesian integrals we can a priori set $\gamma_{11}=\gamma_{22}=\gamma_{33}=0$.
There are compatibility conditions on equations~\eqref{2ordcond}, which are a consequence of the following relations among the derivatives of the functions $s_j$, namely,
\begin{gather}
\partial^2_{x_2}\partial_{x_1}s_1+ \partial^2_{ x_1} \partial_{x_2}s_2=\partial_{x_1}\partial_{ x_2}(\partial_{x_2} s_1+ \partial_{x_1} s_2),\nonumber\\
\partial^2_{x_3}\partial_{x_1}s_1+ \partial^2_{ x_1} \partial_{x_3}s_3=\partial_{x_1}\partial_{x_3}(\partial_{x_3} s_1+ \partial_{x_1} s_3),\nonumber\\
\partial^2_{x_3} \partial_{x_2}s_2+ \partial^2_{ x_2} \partial_{x_3} s_3=\partial_{x_2}\partial_{x_3}(\partial_{x_3} s_2+ \partial_{x_2} s_3),\nonumber\\
\partial_{x_1}\partial_{x_3}(\partial_{x_2} s_1+\partial_{x_1} s_2)=2\partial_{x_2}\partial_{x_3}(\partial_{x_1}s_1)-\partial_{x_1}\partial_{x_2}(\partial_{x_3} s_1+\partial_{x_1} s_3)+\partial^2_{x_1}(\partial_{x_3} s_2+\partial_{x_2} s_3),\nonumber\\
\partial_{x_2}\partial_{x_3}(\partial_{x_2} s_1+\partial_{x_1} s_2)=2\partial_{x_1}\partial_{x_3}(\partial_{x_2}s_2)-\partial_{x_1}\partial_{x_2}(\partial_{x_3} s_2+\partial_{x_2} s_3)+\partial^2_{x_2}(\partial_{x_3} s_1+\partial_{x_1} s_3),\nonumber\\
\partial_{x_2}\partial_{x_3}(\partial_{x_3} s_1+\partial_{x_1} s_3)=2\partial_{x_1}\partial_{x_2}(\partial_{x_3}s_3)-\partial_{x_1}\partial_{x_3}(\partial_{x_3} s_2+\partial_{x_2} s_3)+\partial^2_{x_3}(\partial_{x_2} s_1+\partial_{x_1} s_2).\!\!\!\!\label{comps}
\end{gather}
These translate into compatibility conditions on the magnetic field and the constants in the coefficients of the second-order terms.
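As written, the relations \eqref{comps} are identities satisfied by any smooth functions $s_j$; the constraints on the magnetic field arise only after the first derivatives of the $s_j$ are replaced using \eqref{2ordcond}. A quick symbolic check of two representative identities (a SymPy sketch of ours; the remaining ones check the same way):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
s1, s2, s3 = [sp.Function(f's{j}')(x1, x2, x3) for j in (1, 2, 3)]
D = sp.diff

# first and fourth relations of (comps), as identities of mixed partials
id1 = D(s1, x2, x2, x1) + D(s2, x1, x1, x2) \
      - D(D(s1, x2) + D(s2, x1), x1, x2)
id4 = D(D(s1, x2) + D(s2, x1), x1, x3) \
      - (2*D(s1, x1, x2, x3) - D(D(s1, x3) + D(s3, x1), x1, x2)
         + D(D(s2, x3) + D(s3, x2), x1, x1))

print(sp.simplify(id1), sp.simplify(id4))   # 0 0
```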
Further compatibility constraints come from \eqref{1ordcond}, as a consequence of
\begin{gather}\label{compm}
\partial_{x_i}\partial_{x_j} m=\partial_{x_j}\partial_{x_i} m,\qquad i,j=1,2,3, \quad i\neq j.
\end{gather}
\section{A necessary condition for second-order superintegrability}\label{sec:necessary}
Both classes of systems that separate in Cartesian coordinates have at least one first-order integral, and it is always possible to choose a gauge so that this integral reads as one of the linear momenta. To fix ideas, let us work in such a gauge and assume that the conserved momentum is $p_3$. If a second-order integral $X$ exists, then $K_1=\{X,p_3\}$ is still an integral, at most of second order, or a constant. Since the highest-order terms in $X$ are as in~\eqref{classintUEA}, they can be at most quadratic in $x_3$. This means that if $K_1$ is quadratic in the momenta, its second-order terms are at most linear in $x_3$, since $K_1=\{X,p_3\}=\frac{\partial X}{\partial x_3}$. Thus, $K_2=\{K_1,p_3\}$ can be, as above, either an integral at most quadratic or a constant. If $K_2$ is again quadratic, $K_3=\{K_2,p_3\}$ can be at most linear in the momenta, since the highest-order terms in $K_2$ do not depend on $x_3$. Therefore, we conclude that if a second-order independent integral $X$ exists, then necessarily there must exist a second-order integral (which could be $X$ itself) such that $\{X,p_3\}$ is at most linear in the momenta. In general, for a conserved momentum~$p_j$, the result is the same; it is enough to replace $x_3$ by $x_j$ in the argument above. Thus, we obtain the following
\begin{Proposition}\label{lemma:linear}
Let the system defined by $H$ as in \eqref{Hamiltonian} separate in Cartesian coordinates and have a quadratic integral $\mathcal I$ independent of the Cartesian integrals. Then there exists a second-order integral~$X$, not necessarily different from $\mathcal I$, such that $\{X,p_j\}$ is a polynomial expression in the momenta of at most first order, for some~$j$.
\end{Proposition}
Thus, to answer the question on the existence of an additional second-order integral for the class of systems we are considering here, we can start by answering the simpler question on the existence of the necessary integral $X$ that satisfies the above property. This is done in the following Sections~\ref{sec:case1} and~\ref{sec:case2} and Appendices~\ref{appendix:case1},~\ref{appendix:case1-part2},~\ref{appendix:case2}.
Since we found that the special case in which the magnetic field is constant and the functions~$V_j$ are second-order polynomials in the respective variables appears several times in the computation therein, we discuss it at once in the separate Section~\ref{sec:pol2}.
\section{Quadratic superintegrability in Case I}\label{sec:case1}
We start with the class of systems in~\eqref{Omega_sep1}. To fix ideas, let us choose a gauge as in~\eqref{Sep1} and assume that there exists a quadratic independent integral~$\mathcal I$. Thus, by Proposition~\ref{lemma:linear} there exists another quadratic integral~$X$ such that $\{X,p_3\}$ is at most first order as a polynomial in the momenta. Here we consider only the case in which the two Cartesian-type integrals do not reduce to first-order integrals. If one of them does, the system lies at the intersection of Case~I and Case~II (up to a permutation of indices) and is treated at once in Section~\ref{sec:case2}. Moreover, we assume there does not exist a linear integral other than~$p_3$. If one exists, the corresponding systems can be found in~\cite{MS}, which contains a complete study of quadratically superintegrable systems with Cartesian integrals and one independent first-order integral.
We can have several cases:
\begin{itemize}\itemsep=0pt
\item [(i)] $\{X,p_3\}$ is at most linear and not vanishing. Thus the only possibility of finding something new is to assume that $\{X,p_3\}$ is a dependent integral or a constant (we have excluded the case in which there is an independent first-order integral). We therefore look for a quadratic integral $X$ such that
\begin{gather}\label{linearint0}
\partial_{x_3} X=\{X,p_3\}=c_1 p_3+ c_0,\qquad c_j\in\mathbb{R},
\end{gather}
and $c_j$ not both vanishing, $j=0,1$.
\item [(ii)] $\{X,p_3\}=0$ and there exists no quadratic integral independent of the Cartesian integrals and commuting with~$p_3$. Then~$X$ is trivial, in the sense that it depends on the Cartesian integrals and~$p_3$. However, to have a quadratically superintegrable system, a quadratic integral~$\mathcal I$ as in Proposition~\ref{lemma:linear} must exist. Without loss of generality, we can assume $X=\{ \mathcal I, p_3\}$ with
\begin{gather}\label{quadint00}
\{\mathcal I, p_3\}=a_0 p_3^2 + a_1 X_1+ a_2 X_2 + c_1 p_3 + c_0,
\end{gather}
where $X_1$ and $X_2$ are as in~\eqref{CartInt1}, $a_0,a_1,a_2,c_0,c_1\in\mathbb{R}$, not all $a_j$ vanishing, otherwise we are in the previous case~(i).
\item [(iii)] $\{ X,p_3\}=0$ and $X$ is independent of the Cartesian integrals. Since $X$ commutes with $p_3$, it satisfies the assumptions of Proposition~\ref{lemma_minimally}. Thus, the corresponding systems can be found in Table~\ref{table:2Dquad}.
If an additional quadratic independent integral exists, then its Poisson bracket with~$p_3$ cannot vanish. This is a consequence of the fact that the 2D system~\eqref{2dof} cannot have more than~$3$ independent integrals.
However, as in the previous point, there could exist a quadratic independent integral $\mathcal I$ such that $\{ \mathcal I, p_3\}$ depends on the others, namely
\begin{gather}\label{quadint0}
\{\mathcal I, p_3\}=a_0 p_3^2 + a_1 X_1+ a_2 X_2 + a_3 X_3+ c_1 p_3 + c_0,
\end{gather}
where $a_0,a_1,a_2,a_3,c_0,c_1\in\mathbb{R}$ and not all~$a_j$ are vanishing (otherwise we are in case~(i)), $X_1$, $X_2$ as in~\eqref{CartInt1} and $X_3=X$.
\end{itemize}
Let us investigate the possibilities for $X_3$ in \eqref{quadint0}. Its highest-order terms should come from a Poisson bracket of the quadratic terms of~$\mathcal I$ with~$p_3$, i.e., their derivatives with respect to~$x_3$. Moreover, by assumption~$X_3$ does not depend on~$x_3$. Thus, its second-order terms can arise only by taking derivatives of a second-order polynomial that contains terms of the form $p_i\cdot l_j$, $i=1,2,3$, $j=1,2$. By computing their Poisson bracket with $p_3$, we see that the only outcome (for an integral~$X_3$ independent of~$X_1$ and~$X_2$) consists of terms of the type~$p_i p_{\ell}$, $i\neq\ell$. Looking at the integrals of the 2D systems in Table~\ref{table:2Dquad}, and the dependent integrals obtained by their Poisson bracket with the Cartesian integrals, we see that the only possibility is~\eqref{quad_maximally} below.
Now that we have outlined all the possibilities, we need to solve the determining equations \eqref{2ordcond}--\eqref{0ordcond} for the different cases. For this, it is necessary to work in the gauge covariant setting.
The conditions \eqref{linearint0}, \eqref{quadint00} and \eqref{quadint0} can be written together as (we can now set $a_3=0$):
\begin{gather}\label{linearint}
\partial_{x_3} X=a_0 \big(p_3^A-u_1(x_2)+ u_2 (x_1)\big) ^2 + a_1 X_1+ a_2 X_2+ c_1 \big(p_3^A-u_1(x_2)+ u_2 (x_1)\big) + c_0,\!\!\!
\end{gather}
where, with an abuse of notation, we denoted by~$X$ the unknown independent integral~$\mathcal I$ we are looking for, with $a_j,c_j\in\mathbb{R}$ not all vanishing. For $a_j=0$, $j=0,1,2$, we are in case~(i).
Equation \eqref{linearint} implies the following values for the second-order terms of $X$ as in \eqref{classintUEA}:
\begin{gather}
\alpha_{11}=\alpha_{22}=\alpha_{12}=\alpha_{13}=\alpha_{23}=\beta_{31}=\beta_{32}=0, \qquad \beta_{11}=\beta_{22},\nonumber\\
a_0=0,\qquad a_1=\beta_{12},\qquad a_2=-\beta_{21}.\label{coeffcond1}
\end{gather}
Moreover, since $\vec p\cdot \vec L=0$ we can set $\beta_{22}=0$ (and consequently also $\beta_{11}=0$).
Concerning the lower-order terms, by integrating the right-hand side of~\eqref{linearint}, we obtain the following restriction on the structure of~$X$:
\begin{gather}
s_j =S_j(x_1,x_2),\qquad j=1,2,\nonumber\\
s_3 =S_3(x_1,x_2)- (2\beta_{12} u_2(x_1)+2\beta_{21}u_1(x_2)-c_1) x_3,\nonumber\\
m =c_0 x_3 + (u_1(x_2)-u_2(x_1)) \left((2\beta_{12}u_2(x_1)+2\beta_{21}u_1(x_2)- c_1) x_3\right)\nonumber\\
\hphantom{m =}{} +(2\beta_{12} V_1(x_1)- 2\beta_{21} V_2(x_2))x_3 + M(x_1,x_2).\label{CaseIsm}
\end{gather}
With these simplifications at hand, we can solve equations \eqref{2ordcond}--\eqref{0ordcond}.
Let us assume that $a_1$ and $a_2$ in \eqref{linearint} are not both zero; e.g., let $a_1\neq0$. Then we can shift both the potential $V_1(x_1)$ and the third component of the vector potential by a constant, thus absorbing the constants~$c_0$ and~$c_1$.
Similarly, if $a_2\neq0$ we could use~$X_2$.
Therefore, by~\eqref{coeffcond1}, we see that if either $\beta_{12}\neq0$ or $\beta_{21}\neq0$, we can proceed in the solution of \eqref{2ordcond}--\eqref{0ordcond} as if $c_1=c_0=0$. We obtain that no new superintegrable system can be found in this case.
The details of the computation are in Appendix~\ref{appendix:case1-part2}.
For $\beta_{12}=\beta_{21}=0$ we find it convenient to start from~\eqref{2ordcond}, in which the third equation simplifies to
\begin{gather}
(\beta_{33} x_1 + \gamma_{23})u_1'(x_2)+ (\beta_{33} x_2 - \gamma_{13}) u_2'(x_1)-c_1 =0.\label{eqtostart}
\end{gather}
The above equation may or may not be trivially satisfied, depending on the functions $u_j$. This determines a major splitting in the computation. For the details see Appendix~\ref{appendix:case1}; the resulting list of systems is given in the conclusions, Section~\ref{Conclusion1}.
\section{Quadratic superintegrability in Case II}\label{sec:case2}
For the class of systems \eqref{Omega_sep2} we can choose a gauge so that there are two mutually orthogonal conserved linear momenta. Let us assume that they are $p_2$ and $p_3$ as in~\eqref{CartInt2}. As above, we assume there exists an independent quadratic integral. Thus, by Proposition~\ref{lemma:linear} we can
have two possibilities:
\begin{itemize}\itemsep=0pt
\item [(i)] there exists a quadratic integral $X$ such that $\{X,p_2\}=\{X,p_3\}=0$. Then $X$ is an integral of the reduced system obtained from~\eqref{Omega_sep2} by setting the conserved momenta to constants, i.e., function of the 1-dimensional Hamiltonian. Thus, it is dependent on the Hamiltonian and the conserved momenta. The only hope to find something interesting is to look for a~quadratic integral $\mathcal I$ such that $ \{\mathcal I, p_j\}=X$ for some~$j$.
\item [(ii)] There exists a quadratic integral $X$ such that $\{X,p_j\}$ is linear and not vanishing for at least one $p_j$, $j=2,3$. Without loss of generality we can assume that $\{X,p_3\}\neq 0$, otherwise we permute the coordinates~$x_2$ and~$x_3$.
\end{itemize}
Let us set $j=3$ in both cases and with an abuse of notation let us rename~$\mathcal I$ in case~(i) as~$X$. Thus, we look for a quadratic integral~$X$ such that{\samepage
\begin{gather}\label{Comm2}
\{X, p_3\}= 2 a_0 \big(H-X_1^2-X_2^2\big)+a_1 X_1^2+a_2 X_2^2+a_3 X_1 X_2 + c_0 + c_1 X_1+ c_2 X_2,
\end{gather}
$X_1$, $X_2$ as in~\eqref{CartInt2}. For $a_j=0$, $j=0,\dots,3$, we have case~(ii).}
Equation \eqref{Comm2} implies the following conditions on the coefficients of the higher-order terms of the integral, expressed as in \eqref{classintUEA} (again, we use the condition $\vec p \cdot \vec L=0$)
\begin{gather}
\alpha_{11} =\alpha_{22}=\alpha_{12}=\alpha_{13}=\alpha_{23}=\beta_{11}=\beta_{22}=\beta_{32}=0,\nonumber\\
a_0 = \beta_{12},\qquad a_1=-\beta_{21},\qquad a_2=0,\qquad a_3=-\beta_{31}.\label{alphaComm2}
\end{gather}
Moreover, by subtracting $X_1X_2$ from $X$, we can set $\gamma_{23}=0$.
Still as a consequence of \eqref{Comm2}, we have further conditions on the coefficients of the lower-order terms
\begin{gather*}
s_1=S_1(x_1,x_2),\qquad s_2=S_2(x_1,x_2)+ (2(\beta_{12}+ \beta_{21}) u_3(x_1)-\beta_{31} u_2(x_1)+ c_1) x_3,\nonumber\\
s_3=S_3(x_1,x_2)+ ( \beta_{31} u_3(x_1)-2 \beta_{12}u_2(x_1)+c_2) x_3
\end{gather*}
and
\begin{gather*}
m =M(x_1,x_2) -\big((2\beta_{12}+\beta_{21})u_3(x_1)^2- \beta_{31}u_2(x_1)u_3(x_1) \nonumber\\
\hphantom{m =}{}+ c_1 u_3(x_1)- c_2 u_2(x_1)+2\beta_{12}u_2(x_1)^2 -2\beta_{12} V_1(x_1)- c_0\big)x_3.
\end{gather*}
With these simplifications at hand, we are able to solve the determining
equations~\eqref{3ordcond}--\eqref{0ordcond}.
Let us perform the substitution
\begin{gather}\label{USub}
u_j(x_1)=U_j'(x_1),\qquad j=2,3.
\end{gather}
Since the $u_j$ are defined in~\eqref{Omega_sep1} up to the addition of arbitrary constants and~$U_j$ is defined as in~\eqref{USub}, in the following we can set to zero all the coefficients of the first- and zeroth-order powers of~$x_1$ in the solutions for~$U_j$.
From \eqref{2ordcond} we find
\begin{gather*}
S_1(x_1,x_2) =s_1(x_2)+\beta_{12}U_2(x_1)+ (\beta_{13}-2\alpha_{33} x_2)U_3(x_1) -(\beta_{12} x_1 +\beta_{33}x_2-\gamma_{13})U_2'(x_1)\nonumber\\
\hphantom{S_1(x_1,x_2) =}{} + (2\alpha_{33} x_1 x_2-\beta_{13}x_1+\beta_{23} x_2- \gamma_{12})U_3'(x_1),\nonumber\\
S_2(x_1,x_2)= s_2(x_1)- \left(\alpha_{33}x_1 x_2^2-\beta_{13}x_1 x_2+\frac{1}{2}\beta_{23} x_2^2-\gamma_{12} x_2 \right)U_3''(x_1),\nonumber\\
S_3(x_1,x_2)=s_3(x_1)+ c_1 x_2+\beta_{31}x_2 U_2'(x_1) -2 (\beta_{12}+\beta_{21})x_2 U_3'(x_1)\nonumber\\
\hphantom{S_3(x_1,x_2)=}{} + \left(\alpha_{33} x_1 x_2^2-\beta_{13} x_1 x_2 +\frac12 \beta_{23} x_2^2-\gamma_{12} x_2 \right)U_2''(x_1)\nonumber\\
\hphantom{S_3(x_1,x_2)=}{}- \left(\beta_{12} x_1 x_2 +\frac12 \beta_{33}x_2^2-\gamma_{13} x_2\right)U_3''(x_1),
\end{gather*}
where $U_j$ and $s_{\ell}$ must satisfy the third, fourth and fifth equations of~\eqref{2ordcond}.
Let us continue by considering the third of these equations, namely
\begin{gather}
U_2''(x_1) (\beta_{12}x_1+\beta_{33} x_2-\gamma_{13})+2 \beta_{12} U_2'(x_1)-\beta_{31}U_3'(x_1)-c_2=0 \label{eqtostart2},
\end{gather}
together with the compatibility conditions~\eqref{comps}. The first one is trivially satisfied, while the remaining five read
\begin{gather}
\beta_{33} U_2'''(x_1)=0,\nonumber\\
(2\alpha_{33} x_1+\beta_{23}) U_2'''(x_1)+6\alpha_{33}U_2''(x_1)-\beta_{33} U_3'''(x_1)=0,\nonumber\\
(\beta_{12}x_1+\beta_{33}x_2- \gamma_{13}) U_2^{(4)}(x_1)+4 \beta_{12} U_2'''(x_1)-\beta_{31} U_3'''(x_1)=0,\nonumber\\
-(2\alpha_{33} x_1 x_2 -\beta_{13} x_1 +\beta_{23} x_2- \gamma_{12})U_3^{(4)}(x_1)+
4(\beta_{13}-2 \alpha_{33} x_2) U_3'''(x_1)-\beta_{21} U_2'''(x_1)=0,\nonumber\\
(8 \alpha_{33}x_2-4\beta_{13}-\beta_{31})U_2'''(x_1)- (4\beta_{12} +\beta_{21}) U_3'''(x_1) \label{COmegaU2}\\
\qquad{} +(2 \alpha_{33} x_1 x_2-\beta_{13}x_1+\beta_{23}x_2- \gamma_{12}) U_2^{(4)}(x_1)-
(\beta_{12}x_1 +\beta_{33}x_2-\gamma_{13}) U_3^{(4)}(x_1)=0.\nonumber
\end{gather}
We can have different subcases according to whether or not equations \eqref{eqtostart2} and~\eqref{COmegaU2} are trivially satisfied for some of the functions $U_j$. This determines a major splitting in the computation. The details are given in Appendix \ref{appendix:case2}.
\section[Constant magnetic field and second-order polynomial potentials]{Constant magnetic field\\ and second-order polynomial potentials}\label{sec:pol2}
Let us consider the particular case in which the magnetic field is constant
\begin{gather}\label{Bpol2}
\vec B(\vec x)=(a_1,a_2,0)
\end{gather}
and in \eqref{Omega_sep1} we have
\begin{gather*}
V_1(x)= v_{11} x_1 + v_{12} x_1^2,\qquad V_2(x_2)=v_{21}x_2 + v_{22} x_2^2,\qquad u_1= a_1 x_2,\qquad u_2= -a_2 x_1.
\end{gather*}
This system appears in various branches of calculation in the appendices; thus we find it practical to discuss it separately here.
Notice that since the magnetic field is constant, by a rotation around the $x_3$-axis we could reduce it to the case in which it is aligned with one of the Cartesian axes. However, the system would no longer separate in the corresponding rotated Cartesian coordinates, therefore we prefer not to perform such a rotation.
Let us also point out that if $V_1(x_1)=0$, for constant magnetic field a rotation around $x_2$ brings the system~\eqref{Omega_sep1} into~\eqref{Omega_sep2}. Thus, what we will deduce in the following for $V_1=0$ applies also for~\eqref{Omega_sep2}.
\subsection[$v_{12}$ and $v_{22}$ both not vanishing]{$\boldsymbol{v_{12}}$ and $\boldsymbol{v_{22}}$ both not vanishing}
Let us assume $v_{12}$ and $v_{22}$ are both not vanishing. Then by the translation of the coordinate system we can set $v_{11}=v_{21}=0$ without loss of generality.
Then, similarly to Section~\ref{sec:maximally_higher}, we can reduce to a natural Hamiltonian system through canonical transformations. Namely, let us take as the generating function
\begin{gather}\label{v12v22cantransf}
G= \left(x_1-\frac{a_2 P_3}{2 v_{12}}\right)P_1+\left(\frac{a_1 P_3}{2v_{22}}+x_2\right)P_2+x_3 P_3,
\end{gather}
so that $p_j=P_j$, $j=1,2,3$ and
\begin{gather*}
x_1= X+\frac{a_2 P_3 }{2 v_{12}},\qquad x_2=Y-\frac{a_1 P_3}{2v_{22}}, \qquad x_3= Z+\frac{a_2 v_{22} P_1-a_1 v_{12} P_2}{2 v_{12}v_{22}}.
\end{gather*}
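As a quick consistency check (not part of the original derivation), one can verify numerically that the generating function above produces the quoted point transformation via $p_j=\partial G/\partial x_j$ and $(X,Y,Z)=(\partial G/\partial P_1,\partial G/\partial P_2,\partial G/\partial P_3)$; the parameter values in this sketch are arbitrary nonzero choices made only for illustration.

```python
# Numerical check of the type-2 generating function G(x, P):
#   p_j = dG/dx_j,   (X, Y, Z) = (dG/dP_1, dG/dP_2, dG/dP_3).
# Central differences are exact here since G is bilinear in its arguments.
import random

a1, a2, v12, v22 = 0.7, -1.3, 0.9, 2.1   # arbitrary nonzero parameters

def G(x1, x2, x3, P1, P2, P3):
    return ((x1 - a2*P3/(2*v12))*P1
            + (a1*P3/(2*v22) + x2)*P2
            + x3*P3)

def partial(f, args, i, h=1e-6):
    a = list(args); a[i] += h
    b = list(args); b[i] -= h
    return (f(*a) - f(*b)) / (2*h)

pt = [random.uniform(-1, 1) for _ in range(6)]
x1, x2, x3, P1, P2, P3 = pt

# Old momenta p_j = dG/dx_j coincide with the new momenta P_j.
for i, P in enumerate((P1, P2, P3)):
    assert abs(partial(G, pt, i) - P) < 1e-6

# New coordinates X_j = dG/dP_j; invert for the old coordinates.
X, Y, Z = (partial(G, pt, i) for i in (3, 4, 5))
assert abs(x1 - (X + a2*P3/(2*v12))) < 1e-6
assert abs(x2 - (Y - a1*P3/(2*v22))) < 1e-6
assert abs(x3 - (Z + (a2*v22*P1 - a1*v12*P2)/(2*v12*v22))) < 1e-6
print("transformation formulas verified")
```

The assertions confirm the inversion formulas stated above at a random phase-space point.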
After the transformation, with gauge chosen as in~\eqref{Sep1}, the Hamiltonian reads
\begin{gather*}
H=\frac12\left(P_1^2+P_2^2+ \left(1-\frac{a_1^2}{2v_{22}}-\frac{a_2^2}{2v_{12}}\right)P_3^2\right)+ v_{12} X^2 +v_{22} Y^2.
\end{gather*}
If $\frac{a_1^2}{2v_{22}}+\frac{a_2^2}{2v_{12}}\neq1$ we can, by a canonical transformation
\begin{gather*}
P_3= \frac{1}{\lambda} \tilde{P}_3, \qquad Z= \lambda \tilde{Z}, \qquad \lambda^2= \left| 1-\frac{a_1^2}{2v_{22}}-\frac{a_2^2}{2v_{12}} \right|
\end{gather*}
rescale the $P_3^2$ term so that the Hamiltonian takes the form
\begin{gather*}
H=\frac12\big(P_1^2+P_2^2 \pm \tilde{P}_3^2\big)+ v_{12} X^2 +v_{22} Y^2.
\end{gather*}
The system can therefore be reduced to a system determined by a two-dimensional, possibly inverted, anisotropic harmonic oscillator and free motion along the $Z$-direction. The original 3D system is minimally superintegrable if and only if the corresponding 2D oscillator is superintegrable as a system in the $(X,Y,P_1,P_2)$ space. If $v_{12}=v_{22}$ we have a special case of the system~$\mathcal{E}_3$ in Table~\ref{table:2Dquad}. If $\frac{v_{12}}{v_{22}} \in \mathbb{Q}$, $\frac{v_{12}}{v_{22}}\neq 1$ we have a higher-order integral when expressed in the variables $x_j$, $p_j$.
If $\frac{a_1^2}{2v_{22}}+\frac{a_2^2}{2v_{12}}=1$, the coordinate $P_3$ becomes cyclic and its conjugate variable $Z$ is an independent constant of motion. In this case the system \eqref{Omega_sep1} becomes at least minimally superintegrable. It is maximally superintegrable if and only if its reduction to the $(X,Y,P_1,P_2)$ space is superintegrable. Indeed, we have reduced to the system \eqref{HB2} for $V(Y)=v_{22} Y^2$ and $2v_{12}=\gamma^2=a_2^2$. Its maximally superintegrable exception is included in the family of systems~\eqref{cagedOsc}.
\subsection[$v_{22}=0$ and $v_{12}$ not vanishing]{$\boldsymbol{v_{22}=0}$ and $\boldsymbol{v_{12}}$ not vanishing}
In this case by translation in $x_1$ we can still set $v_{11}=0$. Then by a canonical transformation such that
\begin{gather*}
x_1=X + \frac{a_2}{2 v_{12} } P_3,\qquad x_2=Y,\qquad x_3=Z+ \frac{a_2}{2 v_{12} }P_1,\qquad p_j=P_j,\qquad j=1,2,3,
\end{gather*}
we obtain the system
\begin{gather*}
H=\frac12 \big(P_1^2 + P_2^2 + P_3^2\big) + v_{12} X^2 + a_1 P_3 Y + v_{21} Y.
\end{gather*}
If $a_1=0$ we have reduced to a natural system. By reducing with respect to the integral~$P_3$ we obtain a~2D system that, to our knowledge, is not superintegrable.
If $a_1\neq0$ we have reduced to the case with the magnetic field aligned along one axis. The effective potential of the resulting system reads
\begin{gather}\label{Wpol21}
W=v_{12} X^2 + v_{21} Y - \big(a_1^2 Y^2\big)/2.
\end{gather}
Thus, by the translation
\begin{gather*}
Y\rightarrow Y+ \frac{v_{21}}{a_1^2}
\end{gather*}
we can eliminate the linear term from the effective potential. A~shift of the vector potential by a constant, i.e.,
\begin{gather*}
A_3\rightarrow A_3-\frac{v_{21}}{a_1},
\end{gather*}
gives
\begin{gather*}
H=\frac12 \big(P_1^2 + P_2^2 + P_3^2\big) + v_{12} X^2 + a_1 P_3 Y.
\end{gather*}
Thus, without loss of generality, we can set $v_{21}=0$. By plugging~\eqref{Wpol21} and~\eqref{Bpol2} with these simplifications into the determining equations for a second-order integral as in~\eqref{classintUEA}, we find that they have no solution.
\subsection[$v_{12}=v_{22}=0$]{$\boldsymbol{v_{12}=v_{22}=0}$}
We have a subcase of the system $\mathcal{E}_3$ in Table~\ref{table:2Dquad}.
Thus, the system admits a second-order integral. With the gauge chosen as in~\eqref{Sep1}, that integral reads
\begin{gather*}
X_3=p_1 p_2+ (v_{11}- a_2 p_3)x_2+ (v_{21}+a_1 p_3) x_1.
\end{gather*}
Equivalently, in gauge covariant form, we have
\begin{gather*}
X_3=p_1^A p_2^A+ (a_1 x_1-a_2 x_2) p_3^A- \big(a_1^2+a_2^2\big)x_1 x_2+a_1 a_2\big(x_1^2+x_2^2\big)+v_{11}x_2+v_{21}x_1,
\end{gather*}
corresponding to the fact that the system actually separates in any rotated system of Cartesian coordinates, since the Hamiltonian is linear in the space variables. Without altering the structure of the Cartesian-type integrals, we can therefore, by a rotation, align the magnetic field along one Cartesian axis, say the $x_2$-axis. Thus, without loss of generality, let us assume $a_1=0$. The determining equations for an additional second-order integral can be solved. We find for $v_{11}=0$ one maximally superintegrable system:
\begin{gather}\label{quad_maximally}
\vec B(\vec x)=(0,a_2,0),\qquad W(\vec x)=v_{21} x_2-\frac12 a_2^2 x_1^2,
\end{gather}
with the integral
\begin{gather*}
X_4 =3p_3^A l_1 ^A-p_1^A l_3^A-\frac{3v_{21}}{a_2} l_2^A + a_2 x_1 x_2 p_3^A + 3 a_2 x_1 l_1^A + v_{21} x_1^2 + a_2^2 x_1^2 x_2\nonumber\\
\hphantom{X_4}{} =3p_3 l_1- p_1 l_3-\frac{3}{a_2}\big(3 v_{21} l_2+2 a_2^2 x_1 x_2 p_3 +2 a_2 v_{21} x_1^2\big).
\end{gather*}
\section{Conclusions}\label{Conclusion}
Let us summarize our results. We have provided an exhaustive determination of quadratically superintegrable systems which separate in the Cartesian coordinates with magnetic field. In addition, we have found classes of systems minimally and maximally superintegrable with higher-order integrals. We list them below for the reader's convenience.
\subsection{Superintegrable systems with second-order integrals}\label{Conclusion1}
We have constructed an exhaustive list of quadratically superintegrable systems with nonvanishing magnetic field which separate in Cartesian coordinates. Under the assumption that there is no independent first-order integral other than the Cartesian ones (in that case we refer the reader to our previous work~\cite{MS,MSW}) we have found~8 classes of minimally superintegrable systems, among which one contains a quadratically maximally superintegrable subclass, cf.~\eqref{quad_maximally}.
For brevity, we write here the magnetic field, the electrostatic potential and the leading order terms in the integral(s) together with the reference to the equation in which the system was introduced. We refer the reader to the relevant formulas therein encoding the complete information about the integral(s).
\begin{description}
\item[Case I,] i.e., the magnetic field and potential are of the form~\eqref{Omega_sep1} and the Cartesian integrals as in~\eqref{CartInt1}. The superintegrable systems read
\begin{description}
\item[(a)] \begin{gather*}
\vec B(\vec x) = \big( a {\mathrm{e}}^{b x_2}, c, 0\big),\qquad
W(\vec x)= a \left( w +\frac{ c}{b} x_1 \right){\mathrm{e}}^{b x_2} -\frac{a^2}{2 b^2}{\mathrm{e}}^{2 b x_2},
\end{gather*}
$ X_3 = p_1^A p_3^A + \cdots$, cf.~\eqref{BW31}.
\item[(b)]
\begin{gather*}
\vec B(\vec x) = 2 \left(a_1 x_2-\frac{a_3}{x_2^3}, -a_1 x_1+ \frac{a_2}{x_1^3}, 0 \right),\nonumber \\ \nonumber
W(\vec x) = -\frac{1}{2} a_1^2 \big(x_1^2+x_2^2\big)^2-\frac{a_2^2}{2 x_1^4}-\frac{a_3^2}{2 x_2^4}-
a_1 \left( a_2 \frac{x_2^2}{x_1^2}+ a_3\frac{x_1^2}{x_2^2}\right) \\
\hphantom{W(\vec x) =}{} -\frac{a_2 a_3}{x_2^2 x_1^2}+\frac{b_3}{x_2^2}+b_1 \big(x_1^2+x_2^2\big) +\frac{b_2}{x_1^2},
\end{gather*}
$X_3=\big(l_3^A\big)^2+\cdots$, cf. Table~\ref{table:2Dquad}.
\item[(c)] \begin{gather*}
\vec B(\vec x) = \left( 2 \left( a_1 x_2- \frac{a_3}{x_2^3} \right),-8 a_1 x_1-a_2,0\right), \nonumber \\
W(\vec x) = -\frac{a_1^2}{2} \big(4 x_1^2+x_2^2\big)^2 -\frac{a_2^2}{2} x_1^2 -\frac{a_3^2}{2 x_2^4} -a_2 a_3 \frac{x_1}{x_2^2} \\
\hphantom{W(\vec x) =}{} - a_1 a_2 x_1 \big(4 x_1^2+x_2^2\big)-4 a_1 a_3\frac{x_1^2}{x_2^2} + \frac{b_3}{x_2^2}+ b_1 \big(4 x_1^2+x_2^2\big)+b_2 x_1,
\end{gather*}
$X_3= p_2^A l_3^A +\cdots$, cf.\ Table~\ref{table:2Dquad}.
\item[(d)]
\begin{gather*}
\vec B(\vec x) = ( 2 a_1 x_2+a_3, -2 a_1 x_1-a_2,0 ), \\
W(\vec x) = -\frac{a_1^2}{2} \left(x_1^2+x_2^2\right)^2-\frac{a_3^2}{2} x_2^2-\frac{a_2^2}{2} x_1^2 - a_2 a_1 x_1 \left(x_1^2+x_2^2\right) \\
\hphantom{W(\vec x) =}{} -a_2 a_3 x_1 x_2-a_1 a_3 x_2 \big(x_1^2+x_2^2\big) + b_1 \big(x_1^2+x_2^2\big)+b_2 x_1+b_3 x_2,
\end{gather*}
$X_3=p_1^A p_2^A+\cdots$, cf.\ Table~\ref{table:2Dquad}.
When $a_1=a_3=0$ and $b_1=b_2=0$ the system becomes maximally superintegrable, with the additional integral of the form $X_4= p_1^A l_3^A-3p_3^A l_1 ^A+\cdots$, cf.~\eqref{quad_maximally}.
\end{description}
\item[Case II,] i.e., the magnetic field and potential are of the form~\eqref{Omega_sep2} and the Cartesian integrals as in~\eqref{CartInt2}. The superintegrable systems read
\begin{description}
\item[(a)] \begin{gather*}
\vec B(\vec x)=\big(0,a {\rm e}^{b x_1},0\big),\qquad W(\vec x)= w x_1+ c\, {\rm e}^{b x_1}-\frac12 \frac{a^2}{b^2} {\rm e}^{2 b x_1},
\end{gather*}
$X_3= p_1^A p_2^A - b p_3^A l_1^A +\cdots$, cf.~\eqref{BWSCa},
\item[(b)] \begin{gather*}
\vec B(\vec x)=\big(0,a (b-2) x_1^{b-3},0\big),\qquad W(\vec x)=-\frac{a^2 x_1^{2(b-2)}}{2}+ a (b-2) c x_1^{b-2}+ \frac{w}{x_1^2},
\end{gather*}
$X_3= p_1^A l_3^A - b p_3^A l_1^A +\cdots$, cf.~\eqref{BWSCb},
\item[(c)] \begin{gather*}
\vec B(\vec x)=\left(0,\frac{a}{x_1},0\right), \qquad W(\vec x)=-\frac12 a^2 \left(\ln |x_1|\right)^2 + b \ln |x_1|+ \frac{w}{x_1^2},
\end{gather*}
$X_3= 2 p_1^A l_3^A - p_3^A l_1^A +\cdots$, cf.~\eqref{BWSCc},
\item[(d)] \begin{gather*}
\vec B(\vec x) = \left( 0, 0, \frac{a}{x_1^3} \right), \qquad W(\vec x) = -\frac{a b \ln|x_1|}{x_1^2}-\frac{a^2}{8 x_1^4}+\frac{w}{x_1^2},
\end{gather*}
$X_3= p_1^A l_2^A +\cdots$, cf.~\eqref{C42UBW}.
\end{description}
\end{description}
Our approach also demonstrates that any quadratically maximally superintegrable system with magnetic field which separates in Cartesian coordinates would necessarily appear at the intersection of the presented classes. Given the different structure of the magnetic field in each of the cases we find only a few potential candidates. One is the intersection of Case~I.d and Case~II.b which, as we already observed, leads to the maximally superintegrable system~\eqref{quad_maximally}. Another is the system Case I.a which for $c=0$ reduces to the system Case~II.a (upon interchange of the~$x_1$ and~$x_2$ coordinates and momenta). However, the integral $X_3$ of Case~I.a when $c=0$ becomes a function of the two first-order Cartesian integrals, i.e., it is not independent anymore. Last but not least, the systems Case~I.b, Case~I.c, Case~II.b and Case~II.d (after a permutation of coordinates) overlap for $a_1=a_2=0$ (Case I.b/c) and $b=0$ (Case~II.b/d) but the integrals again turn out to be dependent.
Thus we conclude that no quadratically maximally superintegrable systems separating in Cartesian coordinates exist other than~\eqref{quad_maximally} and the ones found in~\cite{MS}.
\subsection{Superintegrable systems with higher-order integrals}\label{Conclusion2}
Above we have provided a complete answer to the problem of quadratic superintegrability for the considered classes of systems~\eqref{Omega_sep1} and~\eqref{Omega_sep2}. As we have seen, maximal superintegrability via at most quadratic integrals is very rare in the presence of magnetic field, as opposed to numerous purely scalar maximally superintegrable systems discussed, e.g., in~\cite{Evans1, MaSmoVaWin}. Thus one should consider also the possible existence of higher-order integrals. However, these are computationally very difficult to find. In this paper we have presented
two propositions, namely Propositions~\ref{lemma_minimally} and~\ref{lemma:HBmaxsuper} which can be used to construct three-dimensional maximally superintegrable systems with magnetic field out of two-dimensional scalar ones. In particular, Proposition~\ref{lemma:HBmaxsuper} states that a system with
\begin{gather*}
\vec B(\vec x)=(0, \gamma,0),\qquad \gamma\neq 0, \qquad W(\vec x)= V(x_2),
\end{gather*}
is maximally superintegrable if and only if the two-dimensional system with the Hamiltonian
\begin{gather*}
\mathcal K(\vec X,\vec P)=\frac12\big(P_1^2+P_2^2\big)+ \frac12 \gamma^2 X^2+ V(Y)
\end{gather*}
is superintegrable (where $V$ is the same function of a single variable).
Using Proposition~\ref{lemma:HBmaxsuper} we have arrived at an explicit example of a maximally superintegrable system with
\begin{gather*}
\vec B(\vec x)=(0, \gamma,0), \qquad W(\vec x)= \frac{c}{x_2^2}+\frac{ m^2}{\ell^2} \gamma^2 x_2^2,\qquad \ell,m\in\mathbb{N}, \qquad c\in \mathbb{R},
\end{gather*}
cf.~\eqref{cagedOsc}, with three first-order integrals~\eqref{integralsB} and an additional integral coming from the integral of two-dimensional caged oscillator through the change of variables~\eqref{reducingtrB}.
Similarly, Proposition~\ref{lemma_minimally} led us to minimally superintegrable systems~\eqref{extcage2}
\begin{gather*}
\vec B(\vec x) = 2 \left( \omega m_1 x_2-\frac{\beta_1}{x_2^3}, -\omega \ell_1 x_1+\frac{\alpha_1}{x_1^3}, 0\right), \\
W(\vec x) = -\frac{\omega^2}{2} \big(\ell_1 x_1^2+m_1 x_2^2\big)^2+\omega \left( \ell_2 x_1^2+ m_2 x_2^2- \alpha_1 m_1 \frac{x_2^2}{x_1^2} - \beta_1 \ell_1 \frac{x_1^2}{x_2^2} \right) \\
\hphantom{W(\vec x) =}{} +\frac{\alpha_2}{x_1^2}+\frac{\beta_2}{x_2^2}-\frac{1}{2}\left(\frac{\alpha_1}{x_1^2}+\frac{\beta_1}{x_2^2}\right)^2, \qquad \frac{\ell_1}{m_1}=\frac{\ell_2}{m_2}=\frac{\ell^2}{m^2},\qquad \ell,m\in\mathbb{Z}.
\end{gather*}
Systems~I.b and~I.c of Section~\ref{Conclusion1} with $a_1\neq 0$ are special subcases of it in which the integral~$X_3$ becomes a second-order one.
Another class of minimally superintegrable systems with
\begin{gather*}
\vec B(\vec x)=(a_1,a_2,0), \qquad W(\vec x)= v_{12} x_1^2 + v_{22} x_2^2 - \frac{1}{2} \left( a_2 x_1 + a_1 x_2\right)^2,\\
\frac{v_{12}}{v_{22}} \in \mathbb{Q}, \qquad v_{12},v_{22}\neq 0
\end{gather*}
can be constructed out of anisotropic harmonic oscillator in two dimensions through the canonical transformation~\eqref{v12v22cantransf}.
Of course, more efficient and widely applicable tools for the construction of higher-order superintegrable systems are needed. Given the recent rapid progress on a similar problem for scalar potentials~\cite{EscWinYur, Mar1,Mar2,MarSajWin2,PoWi} (see also references in~\cite{MiPoWin}) we hope that in the foreseeable future we will be able to report on further development also in the case with magnetic field.
\section{Selection of recurrence threshold}
\paragraph*{Threshold selection.} The crucial algorithmic parameter of recurrence-based time series analysis is $\varepsilon$. Several invariants of a dynamical system (e.g., the 2nd-order R\'enyi entropy $K_2$) can be estimated from its recurrence properties in the limit $\varepsilon\to 0$ \cite{Marwan2007}, which suggests that for a feasible analysis of recurrence networks, a low $\varepsilon$ is preferable as well. This is supported by the analogy to complex networks based on spatially extended systems, where attention is usually restricted to the strongest links between individual vertices (i.e., observations from different spatial coordinates) for retrieving meaningful information about relevant aspects of the systems' dynamics \cite{Zhou2006,Donges2009}. In contrast, a high edge density
\begin{equation}
\rho(\varepsilon)=\frac{2E(\varepsilon)}{N(N-1)}
\end{equation}
\noindent
(with $E(\varepsilon)$ being the total number of edges for a chosen $\varepsilon$) does not yield feasible information about the actually relevant structures, because these are hidden in a large set of mainly less important edges.
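For illustration, the edge density of an $\varepsilon$-recurrence network can be computed directly from the pairwise distances of the observed states; the stdlib-only sketch below uses a logistic-map series merely as a stand-in signal (all names and parameter values are ours, not from the cited works).

```python
# Minimal sketch: build an epsilon-recurrence network from a scalar series
# and compute its edge density rho(eps) = 2E / (N(N-1)).  The logistic map
# serves as a stand-in signal; any phase-space trajectory works the same way.
N, eps = 500, 0.05

# generate a chaotic logistic-map series (illustrative choice)
x, series = 0.4, []
for _ in range(N + 100):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
series = series[100:]          # drop transient

# adjacency: vertices i, j are linked iff |x_i - x_j| < eps
E = sum(1 for i in range(N) for j in range(i + 1, N)
        if abs(series[i] - series[j]) < eps)

rho = 2.0 * E / (N * (N - 1))
print(f"edges: {E}, edge density rho = {rho:.3f}")
```

For higher-dimensional states one only has to replace the absolute difference by the chosen phase-space metric.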
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.40]{manhaton_dist.eps}
\includegraphics[scale=0.40]{euclidean_dist.eps}
\includegraphics[scale=0.40]{maximum_dist.eps}
\caption{\small {(Color online) Effects of different metrics and embeddings on the $\rho(\varepsilon)$ relationship, expressed in terms of the corresponding first derivative. (A,B,C,D): Manhattan distance; (E,F,G,H): Euclidean distance; (I,J,K,L): maximum distance. (A,E,I): Lorenz system $\left( \dot{x}=10(y-x),\ \dot{y}=x(28-z),\ \dot{z}=xy-\frac{8}{3}z\right)$ with original components at three different randomly chosen initial conditions. (B,F,J): Same Lorenz system embedded from the $x$ component with embedding delays $\tau_1=5, \tau_2=15$, and $\tau_3 = 20$. (C,G,K): same as (A,E,I) for the R\"ossler system $\left(\dot{x}=-y-z,\ \dot{y}=x+0.2y,\ \dot{z}=z(x-5.7)\right)$. (D,H,L): same as (B,F,J) for the R\"ossler system with $\tau_1=10, \tau_2=15$ and $\tau_3=20$. Circles indicate the respective maxima. In all cases, time series of $N=1,000$ points with a sampling time of $\Delta t=0.05$ have been used, obtained with a 4th-order Runge-Kutta integrator with fixed step width $h=0.01$. The values of $\tau_2$ are guided by the first zeros of the corresponding auto-correlation functions.} \label{fig1_metric} }
\end{figure*}
As a consequence, only those states should be connected in a recurrence network that are closely neighbored in phase space, leading
to rather sparse networks. Following a corresponding rule of thumb recently confirmed for recurrence quantification analysis
\cite{Schinkel2008}, we suggest choosing $\varepsilon$ as corresponding to an edge density $\rho\lesssim 0.05$
\cite{Marwan2009,Donner2009}, which yields neighborhoods covering appropriately small regions of phase space. Note that since many
topological features of recurrence networks are closely related to the local phase space properties of the underlying
attractor~\cite{Donner2009}, the corresponding information is best preserved for such low $\varepsilon$ unless the presence of
noise requires higher $\varepsilon$ \cite{Schinkel2008}.
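In practice, a threshold matching a prescribed edge density can be read off as the corresponding quantile of the pairwise-distance distribution. A stdlib-only sketch (the circular test trajectory and all function names are purely illustrative):

```python
# Sketch: pick the recurrence threshold eps that yields a prescribed edge
# density rho_target, as the rho_target-quantile of all pairwise distances.
import math

def eps_for_density(points, rho_target):
    """points: list of state vectors; returns eps with rho(eps) ~= rho_target."""
    d = sorted(math.dist(p, q)
               for i, p in enumerate(points)
               for q in points[i + 1:])
    k = max(0, min(len(d) - 1, int(rho_target * len(d))))
    return d[k]

# toy trajectory on a circle (illustrative stand-in for an attractor)
pts = [(math.cos(0.1 * t), math.sin(0.1 * t)) for t in range(200)]
eps = eps_for_density(pts, 0.05)

# check the achieved edge density
E = sum(1 for i, p in enumerate(pts) for q in pts[i + 1:] if math.dist(p, q) < eps)
rho = 2 * E / (len(pts) * (len(pts) - 1))
print(f"eps = {eps:.4f}, achieved rho = {rho:.3f}")
```

Ties in the distance distribution can make the achieved density deviate slightly from the target, which is harmless at the densities considered here.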
Recently, a heuristic criterion has been proposed by Gao and Jin, which selects $\varepsilon$ as the (supposedly unique) turning
point $\varepsilon_{crit}$ in the $\rho(\varepsilon)$ relationship of certain dynamical systems \cite{Gao2009}, formally reading
\begin{equation}
\left. \frac{d\rho}{d\varepsilon}\right|_{\varepsilon=\varepsilon_{crit}}=\max!, \quad \left. \frac{d^2\rho}{d\varepsilon^2}\right|_{\varepsilon=\varepsilon_{crit}}=0.
\label{tpc}
\end{equation}
\noindent
In contrast to our above considerations, for different realizations of the Lorenz system, this turning point criterion yields link
densities of $\rho_{crit}=\rho(\varepsilon=\varepsilon_{crit})\sim 0.15\dots 0.3$ \cite{Gao2009}, implying that rather large
regions of the attractor are covered by the corresponding neighborhoods. In such cases, it is however \textit{not} possible to
attribute certain network features to specific \textit{small-scale} attractor properties in phase space. More generally,
$\varepsilon$ should be chosen in such a way that small variations in $\varepsilon$ do not induce large variations in the results
of the analysis. In contrast, the turning point criterion (\ref{tpc}) explicitly selects $\varepsilon$ such that small
perturbations in its value will result in a maximum variation of the results. Moreover, besides our general considerations
supporting low $\varepsilon$, application of the turning point criterion leads to serious pitfalls:
(i) $\varepsilon_{crit}$ and, hence, $\rho_{crit}$ depend on the specific metric used for defining distances in phase space
(Fig.~\ref{fig1_metric}). Moreover, experimental time series often contain only a single scalar variable, so that embedding might
be necessary. Since the detailed shape of the attractor in phase space is affected by the embedding parameters, changing the
embedding delay has a substantial effect on $\varepsilon_{crit}$, which is particularly visible in the R\"ossler system (see
Fig.~\ref{fig1_metric} (D,H,L)). { An improper choice of embedding parameters would further increase the variance of $\varepsilon_{crit}$ and would generally not yield meaningful results.} In a similar way, depending on the choice of the other parameters, the sampling time of the time
series may also influence the recurrence properties~\cite{Facchini2007} (and, hence, $\varepsilon_{crit}$), since temporal
coarse-graining can cause a loss of detections of recurrences.
(ii) The $\varepsilon$-selection should be as independent as possible of the particular realization of the studied system,
especially of the initial conditions and the length $N$ of the time series. The turning point $\varepsilon_{crit}$ according to
conditions (\ref{tpc}) is however \textit{not} independent of the specific initial conditions (Fig.~\ref{fig1_metric} (A,E,I) and
(C,G,K)): while its \textit{average} value does not change much with changing $N$, there is a large variance among the individual
trajectories that converges only slowly with increasing $N$ (Fig.~\ref{lorros_dist_length}). Hence, for the same system and the
same network size, already slightly different initial conditions may yield strong differences in $\varepsilon_{crit}$ and
$\rho_{crit}$ (Fig.~\ref{lorros_dist_length}, inset) and, hence, the topological features of the resulting networks.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{length_dependence.eps}
\caption{Mean values (squares) and range (shaded areas) of turning points $\varepsilon_{crit}$ in 200 independent realizations of
the Lorenz (A) and the R\"ossler system (B) in dependence on the network size $N$ (Euclidean distance, $\Delta t=0.05$). The insets
show the corresponding link densities $\rho_{crit}$ for the same range of $N$.}
\label{lorros_dist_length}
\end{figure}
(iii) One has to emphasize that the turning point criterion is \textit{not} generally applicable, since there are various typical
examples of both discrete and continuous dynamical systems that are characterized by \textit{several} maxima of
$d\rho(\varepsilon)/d\varepsilon$ (Fig.~\ref{dist_stdmap}).
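For concreteness, the quantities entering the turning point criterion can be estimated numerically: tabulate $\rho(\varepsilon)$ on a grid and locate the maximum of its first derivative by central differences. The sketch below (logistic-map series as a stand-in signal; grid and names are ours) illustrates the procedure:

```python
# Sketch of the turning-point criterion: tabulate rho(eps) on a grid and
# locate the maximum of d rho / d eps by central differences.
import bisect

N = 400
x, series = 0.4, []
for _ in range(N + 100):                 # logistic map as a stand-in signal
    x = 4.0 * x * (1.0 - x)
    series.append(x)
series = series[100:]                    # drop transient

dists = sorted(abs(series[i] - series[j])
               for i in range(N) for j in range(i + 1, N))
M = len(dists)

def rho(eps):
    # edge density = fraction of state pairs with distance < eps
    return bisect.bisect_left(dists, eps) / M

grid = [0.01 * k for k in range(1, 100)]
rhos = [rho(e) for e in grid]
drho = [(rhos[k + 1] - rhos[k - 1]) / (grid[k + 1] - grid[k - 1])
        for k in range(1, len(grid) - 1)]
k_max = max(range(len(drho)), key=drho.__getitem__)
eps_crit = grid[k_max + 1]
print(f"eps_crit ~= {eps_crit:.2f}, rho(eps_crit) ~= {rho(eps_crit):.2f}")
```

Running the same sketch for different initial conditions directly exhibits the realization dependence of $\varepsilon_{crit}$ discussed in (ii), and for signals with several maxima of $d\rho/d\varepsilon$ the selected point is ambiguous, as in (iii).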
The above considerations are mainly of concern when studying properties of (known) dynamical systems. In applications to real-world
time series with typically a small number of data or even non-stationarities, it is still possible to derive meaningful
\textit{qualitative} results from small time series networks. However, for a detailed system-theoretic interpretation the use of
smaller recurrence thresholds is recommended~\cite{Marwan2009}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{bimodal_dist.eps}
\caption{Examples for multiple turning points of $d\rho/d\varepsilon$: (A) quasiperiodic trajectory of a continuous system (torus)
and (B) a weakly chaotic orbit of the standard map ($x_{n+1}=x_n+5\sin(y_n) \mod 1$, $y_{n+1}=y_n+x_{n+1} \mod 1$, see
\cite{Zou2007}).}
\label{dist_stdmap}
\end{figure}
\paragraph*{Topology of recurrence networks.} The topological features of recurrence networks are closely related to invariant
properties of the observed dynamical system \cite{Marwan2009,Donner2009,Gao2009}. However, a system-theoretic interpretation of the
resulting network characteristics is feasible only based on a careful choice of $\varepsilon$, avoiding the pitfalls outlined
above. For example, many paradigmatic network models as well as real-world systems have been reported to possess small-world
properties (i.e., a high clustering coefficient $\mathcal{C}$ and low average path length $\mathcal{L}$). However, it can be shown
that $\mathcal{C}$ and $\mathcal{L}$ are both functions of $\varepsilon$. In particular, $\mathcal{L}\sim 1/\varepsilon$ (for given
$N$), since spatial distances are approximately conserved in recurrence networks, whereas the specific $\varepsilon$-dependence of
$\mathcal{C}$ varies between different systems.
In addition to the aforementioned global network characteristics, specific vertex properties characterize the local attractor
geometry in phase space in some more detail, where the spatial resolution is determined by $\varepsilon$. In particular, the
\textit{local} clustering coefficient $\mathcal{C}_v$, which quantifies the relative amount of triangles centered at a given vertex
$v$, gives important information about the geometric structure of the attractor within the $\varepsilon$-neighborhood of $v$ in
phase space. Specifically, if the neighboring states form a lower-dimensional subset than the attractor, it is more likely that
closed triangles emerge than for a neighborhood that is more uniformly filled with states \cite{Dall2002}. Hence, high values of
$\mathcal{C}_v$ indicate lower-dimensional structures that may correspond to laminar regimes \cite{Marwan2009} or dynamically
invariant objects like unstable periodic orbits (UPOs) \cite{Donner2009}. The relationship with UPOs follows from the fact that
trajectories tend to stay in the vicinity of such orbits for a finite time~\cite{Lathrop_pra_1989}, which leads to a certain amount
of states being accumulated along the UPO with a distinct spatial geometry that differs from that in other parts of a chaotic
attractor. However, since there are infinitely many UPOs embedded in chaotic attractors, such objects (even of a low order) can
hardly be detected using large $\varepsilon$ (where the resulting neighborhoods cover different UPOs) and short time series as
recently suggested \cite{Gao2009}. In contrast, they may be well identified using low $\varepsilon$ and long time series
\cite{Donner2009}.
Another intensively studied vertex property is betweenness centrality $b_v$, which quantifies the relative number of shortest paths
in a network that include a given vertex $v$ \cite{Freeman1979}. In a recurrence network, vertices with high $b_v$ correspond to
regions with low phase space density that are located between higher density regions. Hence, $b_v$ yields information about the
local fragmentation of an attractor. In particular, since phase space regions close to the outer boundaries of the corresponding
attractors do not contribute to many shortest paths, the vertices located in these regions are characterized by low $b_v$, an effect
that is (at least for the Lorenz oscillator) further enhanced by the lower state density. For the sharp inner boundary of the R\"ossler
oscillator, one may observe the opposite behavior. For phase space regions close to low-period UPOs, one also finds lower values of
$b_v$ due to the accumulation of states along these structures (many alternative paths). As the distribution of $b_v$
(Fig.~\ref{lorros_cc_bc} (A,B)) suggests, these features are robust for low $\varepsilon$, but may significantly change if
$\varepsilon$ gets too large (i.e., $\rho=0.2$).
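The local clustering coefficient can be evaluated directly from the neighborhood sets of the recurrence network; a stdlib-only sketch (again with a logistic-map series as a stand-in signal, and names chosen by us):

```python
# Sketch: local clustering coefficient C_v of a recurrence network,
# computed from neighbourhood sets.  High C_v flags locally
# lower-dimensional structures, cf. the discussion in the text.
N, eps = 300, 0.1
x, s = 0.4, []
for _ in range(N + 100):                 # logistic map as a stand-in signal
    x = 4.0 * x * (1.0 - x)
    s.append(x)
s = s[100:]

nbrs = [{j for j in range(N) if j != i and abs(s[i] - s[j]) < eps}
        for i in range(N)]

def clustering(v):
    # fraction of linked pairs among the neighbours of v
    k = len(nbrs[v])
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs[v] for w in nbrs[v] if u < w and w in nbrs[u])
    return 2.0 * links / (k * (k - 1))

C = [clustering(v) for v in range(N)]
print(f"mean local clustering: {sum(C) / N:.3f}")
```

Betweenness centrality requires a shortest-path computation (e.g., Brandes' algorithm, available in standard network libraries) and is omitted here for brevity.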
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{dist_cc_bc_ros_lrz.eps}
\caption{(Color online) Probability distribution function of betweenness centrality $b_v$ (in logarithmic scale) for different edge
densities ($\rho_1=0.005$, $\rho_2=0.01$, $\rho_3=0.015$, $\rho_4=0.2$) for the Lorenz (A, $N=20,000$) and R\"ossler system (B,
$N=10,000$), and corresponding relationships between local clustering coefficient $\mathcal{C}_v$ and betweenness centrality $b_v$
(C: Lorenz, D: R\"ossler, $\rho=0.01$) obtained from the original data using the Euclidean distance ($\Delta t=0.05$). }
\label{lorros_cc_bc}
\end{figure}
We conclude that in a recurrence network, both $\mathcal{C}_v$ and $b_v$ are sensitive to the presence of UPOs, but resolve
complementary aspects (see Fig.~\ref{lorros_cc_bc}). For the R\"ossler system, we find two distinct maxima in the betweenness
distribution, which are related to the inner and outer parts of the attractor, respectively. In particular, the abundance of low
values is promoted by a high state density at the outer boundary of the attractor near the $x$-$y$ plane, which coincides with a
period-3 UPO \cite{Thiel2003}. In contrast, for the Lorenz system there is no second maximum of $p(b_v)$, since the outer parts
of the attractor are more diffuse and characterized by a considerably lower phase space density than in the R\"ossler attractor. In
both cases, vertices with a high clustering coefficient $\mathcal{C}_v$ are characterized by a broad continuum of betweenness
values, which suggests that $b_v$ is not a universal indicator for the presence of UPOs, whereas $\mathcal{C}_v$ allows an approximate
detection of at least low-periodic UPOs in phase space.
In summary, transforming time series into complex networks yields
complementary measures for characterizing phase space properties
of dynamical systems. This work has provided empirical arguments
that the recently suggested approach based on the recurrence
properties in phase space allows a detailed characterization of
dynamically relevant aspects of phase space properties of the
attractor, given that (i) the considered time series is long
enough to be representative for the system's dynamics and (ii) the
threshold distance $\varepsilon$ in phase space for defining a
recurrence is chosen small enough to resolve the scales of
interest. In particular, using the network-theoretic measures
discussed here, the turning point criterion for threshold
selection~\cite{Gao2009} often does \textit{not} allow feasible
conclusions about dynamically relevant structures in phase space.
In contrast, for sufficiently low recurrence thresholds (we
suggest $\rho\lesssim 0.05$ as a rule of thumb), small-scale
structure may be resolved appropriately by complex network
measures, which allow identification of invariant objects such as
UPOs by purely geometric means. { We emphasize that
although our presented considerations have been restricted to
paradigmatic example systems, recurrence networks and related
methods have already been successfully applied to real-world data,
e.g., a paleoclimate record~\cite{Marwan2009} or seismic
activity~\cite{Davidsen2008}. Since these examples are typically
characterized by non-stationarities and non-deterministic
components, we conclude that recurrence networks are promising for
future applied research on various interdisciplinary problems
(e.g.,~\cite{Perc2005}).
As a final remark, we note that the problem of parameter selection
arises for most other network-based methods of time series
analysis (see~\cite{Donner2009} for a detailed comparison).
Important examples include cycle networks~\cite{Zhang2006} with a
correlation threshold, and $k$-nearest neighbor
networks~\cite{Xu2008} with a fixed number $k$ of neighbors as
free parameters, respectively. A general framework for parameter
selection in the context considered here would consequently be
desirable. Other methods are parameter-free, but may suffer from
conceptual limitations and strong intrinsic assumptions. For
example, the currently available visibility graph
concepts~\cite{Lacasa2008} are restricted to univariate time
series.}
\textit{Acknowledgments.} This work has been financially supported
by the German Research Foundation (SFB 555, project C4 and DFG
project no. He 2789/8-2), the Max Planck Society, {the
Leibniz Society (project ECONS),} and the Potsdam Research Cluster
PROGRESS (BMBF). All complex networks measures have been
calculated using the \texttt{igraph} package \cite{Csardi2006}.
\section{Introduction}
There has been considerable interest in the recent discovery of a new family of quasi-one-dimensional (quasi-1D) unconventional superconductors A$_{2}$Cr$_{3}$As$_{3}$ (A = K, Rb, Cs) at ambient pressure with $T_c$ up to 6.1 K \cite{GHCao_K,GHCao_Rb,GHCao_Cs},
because of the exotic properties revealed in the experiments summarized below.
(1) In the normal state, the resistivity in polycrystalline samples follows a linear temperature dependence, $\rho (T)=\rho _{0}+AT$, in a wide temperature region, different from the usual Fermi liquid behavior $\rho _{0}+AT^{2}$\cite{GHCao_K,GHCao_Rb,GHCao_Cs}.
On the other hand, the transport measurement in single crystalline samples indicates that the normal state is a smectic metal, namely, it behaves as a metal along the $c$-axis and a semiconductor in the $ab$-plane \cite{GHCaoabc}.
(2) Nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR) measurements on K$_{2}$Cr$_{3}$As$_{3}$ show a non-integer power-law temperature dependence $1/T_{1}\sim T^{0.75} $ above $T_{c}$,
which is neither $1/T_{1}\sim T$ for a Fermi liquid nor Curie-Weiss behavior $1/T_{1}T\sim C/(T+\theta )$ for a ferromagnet or antiferromagnet \cite{TImai_K}.
Meanwhile, NMR and NQR experiments on Rb$_{2}$Cr$_{3}$As$_{3}$ show a critical spin fluctuation above $T_c$, $1/T_{1}T\sim a + b/(T+\theta )$, where $\theta\sim 0$K \cite{GQZheng_Rb}.
The Hebel-Slichter coherence peak of $1/T_{1}$ is absent in both compounds.
(3) K$_{2}$Cr$_{3}$As$_{3}$ possesses a large upper critical field $H_{c2}$, which exceeds the BCS weak-coupling Pauli limit field by 3-4 times \cite{GHCao_K,PCCanfield_K,RDMcDonald_K}.
The angle resolved $H_{c2}$ measurement demonstrates strong anisotropy and reveals dominant spin-triplet SC pairing \cite{ZWZhu_K},
which is consistent with the observation of a very weak spontaneous internal magnetic field near $T_c$ in the muon spin relaxation/rotation ($\mu$SR) experiment \cite{muSR_K}.
(4) London penetration depth measurement for K$_{2}$Cr$_{3}$As$_{3}$ shows linear temperature dependence, $\Delta\lambda (T)\sim T$, at temperatures $T\ll T_{c}$, indicating the existence of line nodes in the SC gap \cite{HQYuan_K}.
(5) Doping nonmagnetic impurities in K$_{2}$Cr$_{3}$As$_{3}$ will reduce $T_c$ significantly, which indicates non-$s$-wave superconductivity \cite{GHCao_impurity}.
There have also been a series of theoretical studies. (1) The electronic structure of K$_{2}$Cr$_{3}$As$_{3}$ has been investigated by Jiang \textit{et al.}\cite{CCao_K} using density functional theory (DFT),
which is confirmed by later calculation \cite{JPHu_magnetism}. The band calculations show that Cr-3$d$ orbitals dominate the electronic states near the Fermi level,
and there exist three energy bands at the Fermi level: two quasi-1D $\alpha $- and $\beta $-bands with flat Fermi surfaces, and a 3D $\gamma $-band.
(2) Zhou \textit{et al.} proposed a minimum effective model based on three molecular orbitals on a hexagonal lattice with $D_{3h}$ symmetry \cite{YZhou_threeband}.
They found that for small Hubbard $U$ and moderate Hund's coupling $J$, the pairing arises from the 3D $\gamma$ band and has a spatial symmetry $f_{y\left(3x^{2}-y^{2}\right)}$, which gives line nodes in the gap function,
while for large $U$, a fully gapped $p$-wave state, $p_{z}\hat{z}$ dominates at the quasi-1D $\alpha$-band. The spin-triplet SC pairing is driven by the Hund's coupling.
Similar three-band and six-band models were also proposed by Wu \textit{et al.} \cite{JPHu_threeband,JPHu_sixband,JPHu_experiment}.
The dominant SC instability channels are found as $p_z$ and $f_{y\left(3x^{2}-y^{2}\right)}$ for weak and strong Hund's coupling respectively.
(3) Zhong \textit{et al.} carried out DFT calculation on a single [CrAs]$_{\infty}$ tube to construct an effective three-band Hubbard model \cite{JHDai_TLL}. Possible Tomonaga-Luttinger liquid instabilities have been proposed based on such a three-band Hubbard chain.
Besides its possible exotic superconductivity, K$_{2}$Cr$_{3}$As$_{3}$ provides a platform for us to study 1D correlated electrons apart from carbon nanotubes and cuprate ladders.
The key building block of K$_{2}$Cr$_{3}$As$_{3}$ is the 1D [(Cr$_3$As$_3$)$^{2-}$]$_{\infty}$ double-walled subnanotubes, which are separated by columns of K$^{+}$ ions, in contrast to the layered iron-pnictide and
copper-oxide high-$T_c$ superconductors \cite{GHCao_K}. These [(Cr$_3$As$_3$)$^{2-}$]$_{\infty}$ tubes together with K$^{+}$ ions form a noncentrosymmetric hexagonal lattice with $D_{3h}$ point group \cite{GHCao_K}.
The quasi-one-dimensionality can be also seen from its electronic structure, say, the existence of two quasi-1D electron bands \cite{CCao_K,JPHu_magnetism}.
Experimentally, both the smectic metallic transport \cite{GHCaoabc} and the non-integer power-law temperature dependence in NMR $1/T_{1}\sim T^{0.75}$ \cite{TImai_K} imply a Tomonaga-Luttinger liquid (TLL) normal state.
The question is how this three-band TLL normal state gives rise to the unconventional SC states below $T_c$. This motivates us to study possible instabilities of three-band TLLs in this paper. Our analysis on the TLLs in this class of materials may also help us understand their normal state properties.
It is noted that two-leg Hubbard ladders and two-orbital Hubbard chains have already been investigated \cite{Chudzinski,AJMillis_twoband}, and that the
SC instability caused by electron-phonon coupling in three-band metallic nanotubes has also been theoretically studied \cite{EOrignac_threeband}.
In this paper, we shall focus on electron interactions in a 1D three-band Hubbard model.
This paper is organized as follows. We present the electronic model Hamiltonian in Section~\ref{model}.
In Section~\ref{g-ology}, the low-energy scattering processes near the Fermi points are classified by using generalized $g$-ology.
In Section~\ref{bosonization}, we take the continuum limit and use bosonization technique to transform the fermionic Hamiltonian into bosonic Hamiltonian. The non-interacting part describes a three-band TLL, and
the remaining terms describe the bosonic interactions.
In Section~\ref{order}, order parameters are defined to characterize ordered states.
In Section~\ref{RG}, we utilize renormalization group (RG) to analyze these bosonic interactions. The RG equations are derived by operator product expansion (OPE) method.
The relevant terms lead to different instabilities in different parameters regions.
Section~\ref{conclusion} is devoted to discussions and conclusions.
\section{model Hamiltonian}\label{model}
We consider a single fermionic chain with a unit cell (per Cr$_6$As$_6$ cluster) containing three molecular orbitals.
One of the three orbitals belongs to one-dimensional irreducible representation $A_{1}^{\prime}$ of $D_{3h}$ group, and the other two are in the two-dimensional irreducible representation $E^{\prime}$ \cite{YZhou_threeband}.
Without loss of generality, the fermionic Hamiltonian consists of two parts,
\begin{subequations}
\begin{equation}
\begin{aligned}
H^{F}=H_{0}^{F}+H_{int}^{F},
\end{aligned}
\end{equation}
where the non-interacting part $H_{0}^{F}$ is a three-band tight-binding Hamiltonian describing the electron hopping, while the interacting part $H_{int}^{F}$ originates from the electron-electron interaction.
The $D_{3h}$ lattice symmetry does not allow mixing between the $A_1^{\prime}$ state and the $E^{\prime}$ states along the $c$-direction. The absence of such hybridization is also seen from the DFT calculation,
where the $\beta$ and $\gamma$ bands are degenerate along the $\Gamma-A$ line.
Neglecting the inter-chain coupling, we have the following $H^F_0$ in such a 1D system,
\begin{equation}\label{Eq:tight-binding}
\begin{aligned}
H_{0}^{F}=\sum_{km\sigma}\xi_{km}c_{km\sigma}^{\dagger}c_{km\sigma},
\end{aligned}
\end{equation}
where $\sigma=\uparrow,\downarrow$ is the spin index, and the orbital (or band) index $m=0$ refers to the $A_1^{\prime}$ state and $m=\pm 1$ refer to $E^{\prime}$ states.
$c_{km\sigma}$($c_{km\sigma}^{\dagger}$) is electron annihilation (creation) operator for orbital $m$ and spin $\sigma$.
The band structure from tight-binding model\cite{YZhou_threeband} is plotted in Fig.~\ref{fig:dispersion}, where the linearized energy dispersion near the Fermi energy is shown in the inset.
\begin{figure}[hptb]
\includegraphics[width=9.2cm]{dispersion.eps}
\caption {Band structure from tight-binding model. The $A_1^{\prime}$ band is nondegenerate and the $E^{\prime}$ band is two-fold degenerate. $\Gamma=(0,0,0)$ and $A=(0,0,\pi)$ in the reciprocal space.
Inset shows the linearized energy dispersion near the Fermi energy.}
\label{fig:dispersion}
\end{figure}
The interacting part $H_{int}^{F}$ describes electron interactions. In the Hubbard approximation, we only retain on-site Coulomb repulsion. The interaction Hamiltonian contains four terms
\begin{eqnarray}\label{Eq:Hubbard}
H_{int}^{F} & = & \frac{1}{2}\sum_{im}\sum_{\sigma\neq\sigma'}Un_{im\sigma}n_{im\sigma'}+\frac{1}{2}\sum_{i\sigma\sigma'}\sum_{m\neq m'}U'n_{im\sigma}n_{im'\sigma'}\nonumber \\
& - & \sum_{i}\sum_{m\neq m'}J\left(\vec{S}_{im}\cdot\vec{S}_{im'}+\frac{1}{4}n_{im}n_{im'}\right)\nonumber \\
& + & \frac{1}{2}\sum_{i\sigma}\sum_{m\neq m'}J'c_{im\sigma}^{\dagger}c_{im\bar{\sigma}}^{\dagger}c_{im'\bar{\sigma}}c_{im'\sigma},
\end{eqnarray}
\end{subequations}
where $n_{im\sigma}=c_{im\sigma}^{\dagger}c_{im\sigma}$, $n_{im}=\sum_{\sigma}n_{im\sigma}$, $\vec{S}_{im}=\frac{1}{2}\sum_{\alpha\beta}c_{im\alpha}^{\dagger}\vec{\tau}_{\alpha\beta}c_{im\beta}$,
$\vec{\tau}$ is a vector with three components of Pauli matrices, and $\bar{\sigma}=-\sigma$ is the opposite spin to $\sigma$.
$U$ is the intra-orbital repulsion, $U'$ is the inter-orbital repulsion, $J$ is the Hund's coupling, and $J'$ is the pair-hopping.
Note that we have chosen Wannier functions to be real. The two degenerate orbitals $m=\pm1$ transform as $x$ and $y$ under $D_{3h}$ symmetry operations, respectively.
We also assume that
\begin{equation}\label{Eq:J}
J'=J>0,
\end{equation}
so that the following relation
\begin{equation}\label{Eq:U-J}
U=U'+2J
\end{equation}
follows from the rotational symmetry of the Coulomb interaction.
It is noted that similar models for three coupled chains \cite{EArrigoni_threechain} and three-leg ladders \cite{TKimura_threeleg} have been investigated using renormalization group.
The important difference between these existing models and the present model is that two of the three bands are degenerate or nearly degenerate in our case, which plays a crucial role for superconducting instabilities, as we will see in the following sections.
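The constraints of Eq.~(\ref{Eq:J}) and Eq.~(\ref{Eq:U-J}) can be verified numerically. The following sketch (ours, not from the paper; the parameter values are illustrative) builds the on-site interaction of Eq.~(\ref{Eq:Hubbard}) restricted to the two degenerate $E^{\prime}$ orbitals in the 16-dimensional Fock space of a single site, and checks that it commutes with the generator of rotations between the two real orbitals precisely when $J'=J$ and $U=U'+2J$:

```python
import numpy as np
from functools import reduce

# One lattice site with two degenerate orbitals (x, y) and spin, i.e. four
# fermionic modes: 0 = (x,up), 1 = (x,dn), 2 = (y,up), 3 = (y,dn).
# The Jordan-Wigner construction gives exact anticommutation relations.
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])                 # single-mode annihilator

def c(mode, n=4):
    return reduce(np.kron, [Z] * mode + [a] + [I2] * (n - mode - 1))

cs = [c(i) for i in range(4)]                          # annihilation operators
cd = [op.conj().T for op in cs]                        # creation operators
num = [cd[i] @ cs[i] for i in range(4)]                # number operators

def h_int(U, Up, J, Jp):
    """On-site interaction of Eq. (Hubbard), two E' orbitals only."""
    xu, xd, yu, yd = 0, 1, 2, 3
    H = U * (num[xu] @ num[xd] + num[yu] @ num[yd])
    H = H + Up * sum(num[i] @ num[j] for i in (xu, xd) for j in (yu, yd))
    # Hund term: the ordered-pair sum gives -2J (S_x . S_y + n_x n_y / 4)
    Szx, Szy = 0.5 * (num[xu] - num[xd]), 0.5 * (num[yu] - num[yd])
    Spx, Spy = cd[xu] @ cs[xd], cd[yu] @ cs[yd]        # spin-raising operators
    SS = Szx @ Szy + 0.5 * (Spx @ Spy.conj().T + Spx.conj().T @ Spy)
    nx, ny = num[xu] + num[xd], num[yu] + num[yd]
    H = H - 2 * J * (SS + 0.25 * nx @ ny)
    # Pair hopping between the two orbitals
    P = cd[xu] @ cd[xd] @ cs[yd] @ cs[yu]
    return H + Jp * (P + P.conj().T)

# Generator of SO(2) rotations between the two real orbitals x and y
L = sum(cd[i] @ cs[i + 2] - cd[i + 2] @ cs[i] for i in (0, 1))

U, J = 1.0, 0.2                                        # illustrative values
H = h_int(U, U - 2 * J, J, J)                          # J' = J and U = U' + 2J
assert np.linalg.norm(H @ L - L @ H) < 1e-10           # rotationally invariant
H_broken = h_int(U, U - 2 * J, J, 0.0)                 # drop the pair hopping
assert np.linalg.norm(H_broken @ L - L @ H_broken) > 1e-6
```

Dropping the pair-hopping term (last two lines) breaks the invariance, confirming that both relations are needed simultaneously.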
\section{Continuum limit and the $g$-ology}\label{g-ology}
Now we introduce electron fields $c_{m\sigma}(x)$ to study the low energy physics in the continuum limit, hereafter $x$ denotes the coordinate along the chain ($c$-direction).
In a 1D system, Fermi points fall into two categories characterized by chirality $p=R,L$, which represents right and left-moving electrons, respectively. Thus the electron field $c_{m\sigma}(x)$ can be decomposed into two parts
\begin{equation}\label{Eq:c-field}
c_{m\sigma}(x)=\psi_{Rm\sigma}(x)+\psi_{Lm\sigma}(x)
\end{equation}
in low energies.
In order to classify various scattering processes in such a three-band system, we shall generalize the conventional $g$-ology \cite{TGiamarchi_bosonization,AJMillis_twoband} for single-band spinless fermions, which now includes chirality, band and spin indices.
For single-band spinless fermions, there are four possible scattering processes between the two chiralities because of lattice momentum conservation.
All these scattering processes are illustrated in Fig.~\ref{fig:g-ology},
back scattering $g^{(1)}\psi_{p}^{\dagger}\psi_{\bar{p}}^{\dagger}\psi_{p}\psi_{\bar{p}}$,
double-chirality forward scattering $g^{(2)}\psi_{p}^{\dagger}\psi_{\bar{p}}^{\dagger}\psi_{\bar{p}}\psi_{p}$,
umklapp scattering $g^{(3)}\psi_{p}^{\dagger}\psi_{p}^{\dagger}\psi_{\bar{p}}\psi_{\bar{p}}$ and single-chirality forward scattering $g^{(4)}\psi_{p}^{\dagger}\psi_{p}^{\dagger}\psi_{p}\psi_{p}$,
where $\bar{p}$ is the opposite chirality to $p$.
\begin{figure}[hptb]
\begin{center}
\includegraphics[width=8.4cm]{g-ology.eps}
\end{center}
\caption{
Four possible scattering processes for single-band spinless fermions:
$g^{(1)}\psi_{p}^{\dagger}\psi_{\bar{p}}^{\dagger}\psi_{p}\psi_{\bar{p}}$,
$g^{(2)}\psi_{p}^{\dagger}\psi_{\bar{p}}^{\dagger}\psi_{\bar{p}}\psi_{p}$, $g^{(3)}\psi_{p}^{\dagger}\psi_{p}^{\dagger}\psi_{\bar{p}}\psi_{\bar{p}}$
and $g^{(4)}\psi_{p}^{\dagger}\psi_{p}^{\dagger}\psi_{p}\psi_{p}$, where $\bar{p}$ is the opposite chirality to $p$.}
\label{fig:g-ology}
\end{figure}
\begin{table*}[htpb]
\caption{$g$-ology for the three-band spinful fermion system.}
\label{table:g-gology}
\begin{center}
\begin{tabular}{|c|l|c|c|c|c|c|l|}
\hline
\multicolumn{1}{|c}{} & chirality & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{band} & & \multicolumn{1}{c}{} & spin\tabularnewline
\hline
$g(f)^{(1)}$ & $\psi_{p}^{\dagger}\psi_{\bar{p}}^{\dagger}\psi_{p}\psi_{\bar{p}}$ & $g_{1}$ & $\psi_{m}^{\dagger}\psi_{\bar{m}}^{\dagger}\psi_{m}\psi_{\bar{m}}$ & $f_{1}$ & $\psi_{m}^{\dagger}\psi_{0}^{\dagger}\psi_{m}\psi_{0}+h.c.$ & $g(f)_{\parallel}$ & $\psi_{\sigma}^{\dagger}\psi_{\sigma}^{\dagger}\psi_{\sigma}\psi_{\sigma}$\tabularnewline
\hline
$g(f)^{(2)}$ & $\psi_{p}^{\dagger}\psi_{\bar{p}}^{\dagger}\psi_{\bar{p}}\psi_{p}$ & $g_{2}$ & $\psi_{m}^{\dagger}\psi_{\bar{m}}^{\dagger}\psi_{\bar{m}}\psi_{m}$ & $f_{2}$ & $\psi_{m}^{\dagger}\psi_{0}^{\dagger}\psi_{0}\psi_{m}+h.c.$ & $g(f)_{\perp}$ & $\psi_{\sigma}^{\dagger}\psi_{\bar{\sigma}}^{\dagger}\psi_{\bar{\sigma}}\psi_{\sigma}$\tabularnewline
\hline
$g(f)^{(3)}$ & $\psi_{p}^{\dagger}\psi_{p}^{\dagger}\psi_{\bar{p}}\psi_{\bar{p}}$ & $g_{3}$ & $\psi_{m}^{\dagger}\psi_{m}^{\dagger}\psi_{\bar{m}}\psi_{\bar{m}}$ & $f_{3}$ & $\psi_{m}^{\dagger}\psi_{m}^{\dagger}\psi_{0}\psi_{0}+h.c.$ & & \tabularnewline
\hline
$g(f)^{(4)}$ & $\psi_{p}^{\dagger}\psi_{p}^{\dagger}\psi_{p}\psi_{p}$ & $g_{4}$ & $\psi_{m}^{\dagger}\psi_{m}^{\dagger}\psi_{m}\psi_{m}$ & $g$ & $\psi_{0}^{\dagger}\psi_{0}^{\dagger}\psi_{0}\psi_{0}$ & & \tabularnewline
\hline
\end{tabular}
\end{center}
\end{table*}
For the three-band spinful fermions, we introduce an additional notation $f$ and two additional subscripts to describe the scattering processes associated with the band and spin degrees of freedom, as summarized in Table \ref{table:g-gology}.
One of the subscripts is for spin degrees of freedom, namely, ``$\parallel$" denotes spin parallel scattering and ``$\perp$" denotes spin anti-parallel scattering.
The other subscript is associated with the notations $g$ and $f$.
Now the notation $g$ is used only for the scattering processes within the same $D_{3h}$ irreducible representation, which includes the scattering between two $E^{\prime}$ bands with $m=\pm 1$ and the scattering within the $A_1^{\prime}$ band with $m=0$.
It is similar to $g^{1,2,3,4}$ for two chiralities that we use $g_{1} \psi_{m}^{\dagger}\psi_{\bar{m}}^{\dagger}\psi_{m}\psi_{\bar{m}}$,
$g_{2} \psi_{m}^{\dagger}\psi_{\bar{m}}^{\dagger}\psi_{\bar{m}}\psi_{m}$,
$g_{3} \psi_{m}^{\dagger}\psi_{m}^{\dagger}\psi_{\bar{m}}\psi_{\bar{m}}$,
and $g_{4} \psi_{m}^{\dagger}\psi_{m}^{\dagger}\psi_{m}\psi_{m}$ for the scatterings between two $E^{\prime}$ bands, where $\bar{m}$ is the opposite orbital to $m$.
We also use $g \psi_{0}^{\dagger}\psi_{0}^{\dagger}\psi_{0}\psi_{0}$ for the scattering within the $A_1^{\prime}$ band by neglecting the subscript.
On the other hand, the new notation $f$ describes the scattering between $E^{\prime}$ and $A^{\prime}_1$ bands, including
$f_{1}(\psi_{m}^{\dagger}\psi_{0}^{\dagger}\psi_{m}\psi_{0}+h.c.)$, $f_{2}(\psi_{m}^{\dagger}\psi_{0}^{\dagger}\psi_{0}\psi_{m}+h.c.)$, and $f_{3}(\psi_{m}^{\dagger}\psi_{m}^{\dagger}\psi_{0}\psi_{0}+h.c.)$, where $m=\pm 1$.
Four typical scattering processes are plotted in Fig.~\ref{fig:example}, which are all the dominant scattering processes at incommensurate filling as we will discuss later.
\begin{figure*}[hptb]
\begin{center}
\subfigure[~$g_{1\perp}^{(2)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\bar{\sigma}}^{\dagger}\psi_{\bar{p}m\bar{\sigma}}\psi_{p\bar{m}\sigma}$]{\includegraphics[width=6.4cm]{g1.eps}}
\subfigure[~$g_{2\parallel}^{(1)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\sigma}^{\dagger}\psi_{p\bar{m}\sigma}\psi_{\bar{p}m\sigma}$]{\includegraphics[width=6.4cm]{g2.eps}}\\
\subfigure[~$g_{3\parallel}^{(1)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}m\sigma}^{\dagger}\psi_{p\bar{m}\sigma}\psi_{\bar{p}\bar{m}\sigma}$]{\includegraphics[width=6.4cm]{g3.eps}}
\subfigure[~$f_{3\parallel}^{(1)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}m\sigma}^{\dagger}\psi_{p0\sigma}\psi_{\bar{p}0\sigma}$]{\includegraphics[width=6.4cm]{f3.eps}}
\end{center}
\caption{ Four dominant scattering processes at incommensurate filling: (a) $g_{1\perp}^{(2)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\bar{\sigma}}^{\dagger}\psi_{\bar{p}m\bar{\sigma}}\psi_{p\bar{m}\sigma}$,
(b) $g_{2\parallel}^{(1)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\sigma}^{\dagger}\psi_{p\bar{m}\sigma}\psi_{\bar{p}m\sigma}$,
(c) $g_{3\parallel}^{(1)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}m\sigma}^{\dagger}\psi_{p\bar{m}\sigma}\psi_{\bar{p}\bar{m}\sigma}$
and (d) $f_{3\parallel}^{(1)}\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}m\sigma}^{\dagger}\psi_{p0\sigma}\psi_{\bar{p}0\sigma}$.}
\label{fig:example}
\end{figure*}
The long wavelength physics is dominated by low energy scattering processes near the Fermi points.
These $g$-ology classified scattering processes serve as building blocks for the low energy effective theory. We can decouple the microscopic Hamiltonian in terms of these processes to obtain the effective theory.
For instance, the inter-band Hubbard repulsive interaction $U'$ between the two degenerate $E^{\prime}$ bands $m=\pm1$ can be decoupled as follows,
\begin{eqnarray}
& & U'\sum_{m\sigma\sigma'}c_{im\sigma}^{\dagger}c_{im\sigma}c_{i\bar{m}\sigma'}^{\dagger}c_{i\bar{m}\sigma'}\nonumber \\
& = & U'\sum_{pm\sigma}\left(\psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\sigma}^{\dagger}\psi_{p\bar{m}\sigma}\psi_{\bar{p}m\sigma} + \psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\sigma}\psi_{pm\sigma}\right.\nonumber\\
& & + \psi_{pm\sigma}^{\dagger}\psi_{p\bar{m}\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\sigma}\psi_{\bar{p}m\sigma} + \psi_{pm\sigma}^{\dagger}\psi_{p\bar{m}\sigma}^{\dagger}\psi_{p\bar{m}\sigma}\psi_{pm\sigma}\nonumber \\
& & + \psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\bar{\sigma}}^{\dagger}\psi_{p\bar{m}\bar{\sigma}}\psi_{\bar{p}m\sigma} + \psi_{pm\sigma}^{\dagger}\psi_{\bar{p}\bar{m}\bar{\sigma}}^{\dagger}\psi_{\bar{p}\bar{m}\bar{\sigma}}\psi_{pm\sigma}\nonumber\\
& & \left. + \psi_{pm\sigma}^{\dagger}\psi_{p\bar{m}\bar{\sigma}}^{\dagger}\psi_{\bar{p}\bar{m}\bar{\sigma}}\psi_{\bar{p}m\sigma} + \psi_{pm\sigma}^{\dagger}\psi_{p\bar{m}\bar{\sigma}}^{\dagger}\psi_{p\bar{m}\bar{\sigma}}\psi_{pm\sigma}\right).
\end{eqnarray}
The initial values of the coupling constants ($f$'s and $g$'s) in the effective theory are determined by the microscopic Hamiltonian $H^{F}$.
Decoupling all the terms in $H^{F}_{int}$ in Eq.~(\ref{Eq:Hubbard}) and collecting all the scattering processes, we obtain the values of nonzero coupling constants,
\begin{subequations}\label{Eq:couplings}
\begin{align}
g_{1\perp}^{(1)}&=g_{1\perp}^{(2)}=g_{3\perp}^{(1)}=g_{3\perp}^{(2)}\nonumber\\
&=f_{1\perp}^{(1)}=f_{1\perp}^{(2)}=f_{3\perp}^{(1)}=f_{3\perp}^{(2)}=J, \\
g_{4\perp}^{(1)}&=g_{4\perp}^{(2)}=g_{\perp}^{(1)}=g_{\perp}^{(2)}=U, \\
g_{2\perp}^{(1)}&=g_{2\perp}^{(2)}=f_{2\perp}^{(1)}=f_{2\perp}^{(2)}=U-2J, \\
g_{2\parallel}^{(1)}&=g_{2\parallel}^{(2)}=f_{2\parallel}^{(1)}=f_{2\parallel}^{(2)}=U-3J,
\end{align}
\end{subequations}
where the relations Eq.~(\ref{Eq:J}) and Eq.~(\ref{Eq:U-J}) have been used in deriving the above equations. Finally, we shall take the continuum limit
by using Eq.~(\ref{Eq:c-field}) to obtain the fermion field theory.
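To make the initial conditions of the subsequent RG flow concrete, the following helper (ours; the function name and parameter values are not from the paper) tabulates Eq.~(\ref{Eq:couplings}) and highlights that the spin-parallel forward couplings $g_{2\parallel}^{(1,2)}$ and $f_{2\parallel}^{(1,2)}$ turn negative, i.e.\ effectively attractive, once $J>U/3$:

```python
# Nonzero initial coupling constants of Eq. (couplings) as a function of the
# intra-orbital repulsion U and the Hund's coupling J (with U' = U - 2J, J' = J).
def initial_couplings(U, J):
    vals = {}
    for name in ("g1_perp", "g3_perp", "f1_perp", "f3_perp"):
        vals[f"{name}(1)"] = vals[f"{name}(2)"] = J
    for name in ("g4_perp", "g_perp"):
        vals[f"{name}(1)"] = vals[f"{name}(2)"] = U
    for name in ("g2_perp", "f2_perp"):
        vals[f"{name}(1)"] = vals[f"{name}(2)"] = U - 2 * J
    for name in ("g2_par", "f2_par"):
        vals[f"{name}(1)"] = vals[f"{name}(2)"] = U - 3 * J
    return vals

# For strong Hund's coupling (J > U/3, illustrative values below) the
# spin-parallel couplings start out negative while the others stay repulsive.
c = initial_couplings(1.0, 0.4)
assert c["g2_par(1)"] < 0 and c["g2_perp(1)"] > 0
```

All other couplings in Table~\ref{table:g-gology} start at zero; as noted later, some of them are generated under the RG flow.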
\section{bosonization}\label{bosonization}
To study low energy effective theory, we shall utilize the standard bosonization technique to analyze the continuum fermion model.
In abelian bosonization, the fermion operators can be expressed in terms of boson operators as follows \cite{TGiamarchi_bosonization},
\begin{subequations}\label{Eq:bosonization}
\begin{equation}
\psi_{pm\sigma}=\frac{\eta_{m\sigma}}{\sqrt{2\pi a}}e^{ipk_{Fm}x}e^{-ip\varphi_{pm\sigma}},
\end{equation}
where $k_{Fm}$ is the Fermi momentum for band $m$, $a$ is the cutoff which can be chosen as the lattice constant, and $p=1(-1)$ stands for $R(L)$ branch.
The Klein factors $\eta_{m\sigma}$ ensure the fermionic statistics and obey the anticommutation relations
\begin{equation}
\left\{ \eta_{m\sigma},\eta_{m'\sigma'}\right\} =2\delta_{mm'}\delta_{\sigma\sigma'}.
\end{equation}
For the four-fermion interactions, there remain gauge degrees of freedom in choosing the values of products of two Klein factors with different band indices $m$.
In this paper, we adopt the convention
\begin{align}
\eta_{m\sigma}\eta_{\bar{m}\sigma}&=\eta_{0\sigma}\eta_{m\sigma}=im\sigma, \\
\eta_{m\sigma}\eta_{m\bar{\sigma}}&=\eta_{0\sigma}\eta_{0\bar{\sigma}}=i\sigma, \\
\eta_{m\sigma}\eta_{\bar{m}\bar{\sigma}}&=\eta_{0\sigma}\eta_{m\bar{\sigma}}=im,
\end{align}
where $m=\pm1$ and $\sigma=+1\left(-1\right)$ for spin up(down). As we will see later in this section, these products of two Klein factors will determine the sign of coupling constants in the bosonic interacting Hamiltonian.
The chiral fields $\varphi_{pm\sigma}$ can be written in terms of two non-chiral fields $\phi_{m\sigma}$ and $\theta_{m\sigma}$ through
\begin{equation}
\varphi_{pm\sigma}=\phi_{m\sigma}-p\theta_{m\sigma}.
\end{equation}
Their gradients are proportional to fermionic density and current operator respectively,
\begin{align}
\nabla\phi_{m\sigma}\varpropto n_{m\sigma}&=\psi_{Rm\sigma}^{\dagger}\psi_{Rm\sigma}+\psi_{Lm\sigma}^{\dagger}\psi_{Lm\sigma}, \\
\nabla\theta_{m\sigma}\varpropto j_{m\sigma}&=\psi_{Rm\sigma}^{\dagger}\psi_{Rm\sigma}-\psi_{Lm\sigma}^{\dagger}\psi_{Lm\sigma}.
\end{align}
Thus the four-fermion density-density and current-current interaction can be bosonized into quadratic terms in the bosonic Hamiltonian.
Furthermore, the fields $\phi_{m\sigma}$ and $\theta_{m\sigma}$ can be decomposed into their charge and spin degrees of freedom,
\begin{align}
\phi_{m\sigma}=\frac{1}{\sqrt{2}}\left(\phi_{cm}+\sigma\phi_{sm}\right), \\
\theta_{m\sigma}=\frac{1}{\sqrt{2}}\left(\theta_{cm}+\sigma\theta_{sm}\right).
\end{align}
\end{subequations}
Since both charge and spin are conserved, $\phi_{cm}(\theta_{cm})$ and $\phi_{sm}(\theta_{sm})$ can be diagonalized separately in the quadratic part of the bosonic Hamiltonian $H_0^B$.
The diagonalization can be carried out explicitly by the following transformation,
\begin{equation}\label{Eq:diagonalize-HB0}
\left(\begin{array}{c}
\phi(\theta)_{\mu+1}\\
\phi(\theta)_{\mu-1}\\
\phi(\theta)_{\mu0}
\end{array}\right)=\left(\begin{array}{ccc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}}\\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}}\\
0 & -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}}
\end{array}\right)\left(\begin{array}{c}
\tilde{\phi}(\tilde{\theta})_{\mu+1}\\
\tilde{\phi}(\tilde{\theta})_{\mu-1}\\
\tilde{\phi}(\tilde{\theta})_{\mu0}
\end{array}\right),
\end{equation}
where $\mu=c,s$ refers to charge and spin components.
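The $3\times3$ matrix in Eq.~(\ref{Eq:diagonalize-HB0}) is orthogonal, which is what guarantees that the canonical commutation relations between the $\tilde{\phi}$ and $\tilde{\theta}$ fields are preserved under the change of basis. A quick numerical check (ours):

```python
import numpy as np

# Transformation matrix of Eq. (diagonalize-HB0): the columns are the
# antisymmetric E' combination, the symmetric-traceless combination, and
# the total (center-of-mass) mode.
M = np.array([
    [ 1 / np.sqrt(2),  1 / np.sqrt(6), 1 / np.sqrt(3)],
    [-1 / np.sqrt(2),  1 / np.sqrt(6), 1 / np.sqrt(3)],
    [ 0.0,            -2 / np.sqrt(6), 1 / np.sqrt(3)],
])

# Orthogonality (M^T M = 1) preserves the bosonic commutation relations.
assert np.allclose(M.T @ M, np.eye(3))
assert np.isclose(np.linalg.det(M), 1.0)   # a proper rotation
```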
Near the Fermi points, the energy dispersion $\xi_{km}$ can be linearized as
\begin{equation}
\xi_{km}=v_{Fm}\left(k-k_{Fm}\right),
\end{equation}
where $v_{Fm}$ is the Fermi velocity and $k_{Fm}$ is the Fermi momentum. According to the DFT calculation, the difference between $v_{F0}$ and $v_{F\pm1}$ is small.
As we will show below, the Fermi velocity is renormalized by the forward scattering, thus this small difference is inessential and we will approximate $v_{F0}=v_{F\pm1}=v_{F}$ at first.
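The linearization step can be illustrated on a generic 1D band; the hopping amplitude and chemical potential below are hypothetical stand-ins, not the DFT-derived parameters of the three-band model:

```python
import numpy as np

# Hypothetical 1D cosine band: epsilon(k) = -2 t cos(k), chemical potential mu.
t, mu = 1.0, -0.5
kF = np.arccos(-mu / (2 * t))       # Fermi momentum from epsilon(kF) = mu
vF = 2 * t * np.sin(kF)             # Fermi velocity, d(epsilon)/dk at kF

# The linearized dispersion xi(k) = vF (k - kF) is accurate near kF:
k = kF + 0.01
exact = -2 * t * np.cos(k) - mu     # xi(k) = epsilon(k) - mu
linear = vF * (k - kF)
assert abs(exact - linear) < 1e-3   # deviation is O((k - kF)^2)
```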
The bosonized Hamiltonian $H^B$ also consists of two parts,
\begin{equation}\label{Eq:Hamiltonian_B}
H^{B}=H_{0}^{B}+H_{int}^{B}.
\end{equation}
$H_{0}^{B}$ is the quadratic or non-interacting part, and $H_{int}^B$ is the interacting part.
The non-interacting part $H_{0}^{B}$ can be diagonalized by Eq.~(\ref{Eq:diagonalize-HB0}), resulting in
\begin{equation}\label{Eq:H0B}
H_{0}^{B}=\frac{1}{2\pi}\int dx\sum_{\mu\nu}
v_{\mu\nu}\left[K_{\mu\nu}\left(\nabla\tilde{\theta}_{\mu\nu}\right)^{2}+\frac{1}{K_{\mu\nu}}\left(\nabla\tilde{\phi}_{\mu\nu}\right)^{2}\right]
\end{equation}
where $\mu=c,s$ and $\nu=0,\pm1$. The renormalized Fermi velocity $v_{\mu\nu}$ and Tomonaga-Luttinger parameters $K_{\mu\nu}$ are given by
\begin{subequations}\label{Eq:v-K}
\begin{align}
\frac{v_{c\left(s\right)\pm1}}{v_{F}} &= \sqrt{1-\frac{\left[+\left(-\right)g_{4\perp}^{(2)}-\left(g_{2\parallel}^{(2)}+\left(-\right)g_{2\perp}^{(2)}\right)\right]^{2}}{\left(2\pi v_{F}\right)^{2}}}, \\
\frac{v_{c\left(s\right)0}}{v_{F}} &= \sqrt{ 1-\frac{\left[+\left(-\right)g_{4\perp}^{(2)}+2\left(g_{2\parallel}^{(2)}+\left(-\right)g_{2\perp}^{(2)}\right)\right]^{2}}{\left(2\pi v_{F}\right)^{2}}}, \\
K_{c\left(s\right)\pm1} &= \sqrt{ \frac{1-\frac{1}{2\pi v_{F}}\left[+\left(-\right)g_{4\perp}^{(2)}-\left(g_{2\parallel}^{(2)}+\left(-\right)g_{2\perp}^{(2)}\right)\right]}{1+\frac{1}{2\pi v_{F}}\left[+\left(-\right)g_{4\perp}^{(2)}-\left(g_{2\parallel}^{(2)}+\left(-\right)g_{2\perp}^{(2)}\right)\right]}}, \\
K_{c\left(s\right)0} &= \sqrt{ \frac{1-\frac{1}{2\pi v_{F}}\left[+\left(-\right)g_{4\perp}^{(2)}+2\left(g_{2\parallel}^{(2)}+\left(-\right)g_{2\perp}^{(2)}\right)\right]}{1+\frac{1}{2\pi v_{F}}\left[+\left(-\right)g_{4\perp}^{(2)}+2\left(g_{2\parallel}^{(2)}+\left(-\right)g_{2\perp}^{(2)}\right)\right]}}.
\end{align}
\end{subequations}
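Equations~(\ref{Eq:v-K}) are straightforward to evaluate once the initial couplings of Eq.~(\ref{Eq:couplings}) are inserted. The following sketch (ours; the function name and parameter values are illustrative) does so and reproduces the expected qualitative behavior for repulsive interactions, e.g.\ $K_{c0}<1$ in the charge sector and $K_{s\pm1}>1$ in the spin sector:

```python
import numpy as np

def luttinger_params(U, J, vF=1.0):
    """Evaluate Eqs. (v-K) with the initial couplings of Eq. (couplings):
    g4_perp^(2) = U, g2_perp^(2) = U - 2J, g2_par^(2) = U - 3J.
    The upper (lower) sign corresponds to the charge (spin) sector."""
    g4, g2perp, g2par = U, U - 2 * J, U - 3 * J
    out = {}
    for mu, s in (("c", +1), ("s", -1)):
        brackets = {
            "pm1": s * g4 - (g2par + s * g2perp),      # nu = +-1 sectors
            "0":   s * g4 + 2 * (g2par + s * g2perp),  # nu = 0 sector
        }
        for nu, A in brackets.items():
            x = A / (2 * np.pi * vF)
            out[f"v_{mu}{nu}"] = vF * np.sqrt(1 - x**2)
            out[f"K_{mu}{nu}"] = np.sqrt((1 - x) / (1 + x))
    return out

# Illustrative values in units where vF = 1 (valid while couplings are weak):
p = luttinger_params(0.5, 0.1)
assert p["K_c0"] < 1.0 < p["K_spm1"]
```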
The spin-charge separation is reflected in $v_{cm}\neq v_{sm}$, which is similar to single band Tomonaga-Luttinger liquids.
The difference between single-band and three-band model is the following. For the single-band model, all the forward scattering processes contribute to the bosonic non-interacting Hamiltonian $H_0^B$.
However, for the three-band model, some $g^{(2)}$ forward scattering processes contribute to the interacting part $H_{int}^B$ but not the non-interacting part $H_0^B$, which can be seen in Eq.~(\ref{Eq:HB-int}) below.
This difference is due to the fact that only part of the forward scattering processes can be expressed in the form of density-density or current-current interactions, which renormalize the Tomonaga-Luttinger parameters $K_{\mu\nu}$.
Note that we have omitted the coupling constants with zero initial values in the derivation of $H_0^B$. These coupling constants do not flow under the RG transformation.
We have also dropped all the $g^{(3)}$ umklapp scattering processes, which are negligible when the fermion system is away from half-filling.
All the $g^{(4)}$ and $f^{(4)}$ scattering processes occur within the same chirality and have small momentum transfer. These small-momentum-transfer terms are irrelevant in the RG sense.
In fact, the $g^{(4)}$ terms will renormalize both $v_{\mu\nu}$ and $K_{\mu\nu}$. However, if one expands $v_{\mu\nu}$ and $K_{\mu\nu}$ in powers of $g^{(4)}$, the first-order terms vanish.
Therefore we can safely neglect $g^{(4)}$ and $f^{(4)}$ in both $H_0^B$ and $H_{int}^B$ in perturbative RG, which will not change the conclusions of our perturbative RG analysis in the remaining parts of this paper.
The bosonic interacting Hamiltonian $H_{int}^B$ is given by
\begin{widetext}
\begin{eqnarray}\label{Eq:HB-int}
H_{int}^{B}= & - & g_{1\perp}^{(1)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(\frac{2}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)\cos\left(2\tilde{\theta}_{s+1}\right)\nonumber \\
& + & g_{2\parallel}^{(1)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\phi}_{c+1}\right)\cos\left(2\tilde{\phi}_{s+1}\right)\nonumber \\
& + & g_{2\perp}^{(1)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\phi}_{c+1}\right)\cos\left(\frac{2}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)\nonumber \\
& + & g_{3\parallel}^{(1)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\theta}_{c+1}\right)\cos\left(2\tilde{\theta}_{s+1}\right)\nonumber \\
& + & g_{3\perp}^{(1)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\theta}_{c+1}\right)\cos\left(\frac{2}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)\nonumber \\
& + & g_{4\perp}^{(1)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\phi}_{s+1}\right)\cos\left(\frac{2}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)\nonumber \\
& - & g_{1\perp}^{(2)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\phi}_{c+1}\right)\cos\left(2\tilde{\theta}_{s+1}\right)\nonumber \\
& + & g_{3\perp}^{(2)}\frac{4}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\theta}_{c+1}\right)\cos\left(2\tilde{\phi}_{s+1}\right)\nonumber \\
& - & f_{1\perp}^{(1)}\frac{8}{\left(2\pi a\right)^{2}}\int dx\left[\cos\tilde{\phi}_{s+1}\cos\left(-\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)\cos\tilde{\theta}_{s+1}\cos\sqrt{3}\tilde{\theta}_{s-1}+\left(\cos\rightarrow\sin\right)\right]\nonumber \\
& + & f_{3\parallel}^{(1)}\frac{8}{\left(2\pi a\right)^{2}}\int dx\left[\cos\tilde{\theta}_{c+1}\cos\sqrt{3}\tilde{\theta}_{c-1}\cos\tilde{\theta}_{s+1}\cos\sqrt{3}\tilde{\theta}_{s-1}+\left(\cos\rightarrow\sin\right)\right]\nonumber \\
& + & f_{3\perp}^{(1)}\frac{8}{\left(2\pi a\right)^{2}}\int dx\left[\cos\tilde{\theta}_{c+1}\cos\sqrt{3}\tilde{\theta}_{c-1}\cos\tilde{\phi}_{s+1}\cos\left(-\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)+\left(\cos\rightarrow\sin\right)\right]\nonumber \\
& + & f_{3\perp}^{(2)}\frac{8}{\left(2\pi a\right)^{2}}\int dx\left[\cos\tilde{\theta}_{c+1}\cos\sqrt{3}\tilde{\theta}_{c-1}\cos\tilde{\phi}_{s+1}\cos\sqrt{3}\tilde{\phi}_{s-1}+\left(\cos\rightarrow\sin\right)\right]\nonumber \\
& + & g_{\perp}^{(1)}\frac{2}{\left(2\pi a\right)^{2}}\int dx\cos\left(-\frac{4}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right).
\end{eqnarray}
\end{widetext}
Unlike the non-interacting situation in $H_0^B$, here we retain the terms with zero initial values of coupling constants in $H_{int}^B$, namely, $g_{3\parallel}^{(1)}$ and $f_{3\parallel}^{(1)}$.
These terms will be automatically generated in the one-loop RG to form the closed algebra of operator product expansion (OPE).
Since different scattering processes may give rise to the same form in the bosonized Hamiltonian, we have incorporated them into a single term, e.g. $g_{3\parallel}^{(1)}$ and $g_{3\parallel}^{(2)}$ terms.
As mentioned before, both the forward and back scattering processes will contribute to $H_{int}^B$ in the three-band case, which is different from the single-band case.
The non-interacting Hamiltonian $H_0^B$ describes a three-band Tomonaga-Luttinger liquid, which is a Gaussian fixed point under RG transformation.
In the remaining sections, we shall treat the interacting part $H_{int}^B$ as a perturbation and perform RG analysis to investigate its relevance.
The most relevant terms in $H_{int}^B$ determine the low-energy effective field theories. In such effective field theories, the fields $\phi_{\mu\nu}$ and $\theta_{\mu\nu}$
are locked around saddle points, i.e., the extrema of the cosine functions in Eq.~(\ref{Eq:HB-int}), which gives rise to ordered states or relevant instabilities.
To classify such orders or instabilities, we first introduce order parameters in the next section.
\section{order parameter}\label{order}
To characterize different effective field theories in low energies, we shall introduce order parameters in this section.
In general, an order parameter can be defined as a fermionic bilinear, or more precisely, as the long-ranged correlation of bilinear fermionic operators.
By this definition, there are two classes of order parameters in such a three-band system. One is defined in the particle-hole channels,
\begin{subequations}\label{Eq:OP-def}
\begin{equation}
O_{ph}^{ij} = \sum_{mm'\sigma\sigma'}\lambda_{mm'}^{i}\tau_{\sigma\sigma'}^{j}\psi_{Rm\sigma}^{\dagger}\psi_{Lm'\sigma'},
\end{equation}
and the other is defined in particle-particle channels (or their Hermitian conjugates in hole-hole channels),
\begin{equation}
O_{pp}^{ij} = \sum_{mm'\sigma\sigma'}\sigma\lambda_{mm'}^{i}\tau_{\sigma\sigma'}^{j}\psi_{Rm\sigma}^{\dagger}\psi_{Lm'\bar{\sigma'}}^{\dagger}.
\end{equation}
\end{subequations}
Here $\lambda^{i}$ ($i=1,\cdots,8$) are the Gell-Mann matrices and $\tau^{j}$ ($j=1,2,3$) are the Pauli matrices. We have also defined $\lambda^{0}$ and $\tau^0$ as the $3\times 3$ and $2\times 2$ unit matrices, respectively.
$\psi_{pm\sigma}(\psi_{pm\sigma}^{\dagger})$ is the electron annihilation (creation) operator with chirality $p$, band $m$ and spin $\sigma$.
Note that we keep only the opposite-chirality terms, $\psi_R^{\dagger}\psi_L$ and $\psi_R^{\dagger}\psi_L^{\dagger}$, in Eq.~(\ref{Eq:OP-def}) and ignore equal-chirality terms
such as $\psi_R^{\dagger}\psi_R$, $\psi_L^{\dagger}\psi_L$, $\psi_R^{\dagger}\psi_R^{\dagger}$, and $\psi_L^{\dagger}\psi_L^{\dagger}$. This is because four-fermion operators within a single chirality,
e.g. $\psi_R^{\dagger}\psi_R^{\dagger}\psi_{R}\psi_R$, are all irrelevant in the RG sense.
Physically, all the familiar ordered states, including charge density wave (CDW), spin density wave (SDW), and superconducting (SC) states, arise from scattering or pairing between opposite chiralities.
Therefore, Eq.~(\ref{Eq:OP-def}) contains all the physically relevant order parameters constructed from fermionic bilinears.
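As an illustrative sketch (not part of the derivation), one can build the $9\times4=36$ vertex matrices $\lambda^{i}\otimes\tau^{j}$ explicitly and check that they are mutually orthogonal under the trace inner product, so that each defines an independent order parameter:

```python
import numpy as np

# Gell-Mann matrices lambda^1..lambda^8, plus lambda^0 = 3x3 identity.
lam = [np.eye(3, dtype=complex)] + [np.zeros((3, 3), dtype=complex) for _ in range(8)]
lam[1][0, 1] = lam[1][1, 0] = 1
lam[2][0, 1] = -1j; lam[2][1, 0] = 1j
lam[3][0, 0] = 1; lam[3][1, 1] = -1
lam[4][0, 2] = lam[4][2, 0] = 1
lam[5][0, 2] = -1j; lam[5][2, 0] = 1j
lam[6][1, 2] = lam[6][2, 1] = 1
lam[7][1, 2] = -1j; lam[7][2, 1] = 1j
lam[8] = np.diag([1, 1, -2]) / np.sqrt(3)

# Pauli matrices tau^1..tau^3, plus the 2x2 identity in spin space.
tau = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# The 9 x 4 = 36 particle-hole vertices lambda^i (x) tau^j.
vertices = [np.kron(l, t) for l in lam for t in tau]

# Mutual orthogonality under the trace inner product: each vertex
# therefore defines an independent order parameter.
for a in range(len(vertices)):
    for b in range(a + 1, len(vertices)):
        assert abs(np.trace(vertices[a].conj().T @ vertices[b])) < 1e-12
```

The matrix conventions are the standard ones; only the counting and orthogonality are used in the text.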
We shall identify the physical ordered state for each order parameter in Eq.~(\ref{Eq:OP-def}) and bosonize them.
For particle-hole channels, we find that $O_{ph}^{i0}$ refers to CDW and $O_{ph}^{i1-3}$ refer to the three components of SDW.
There are in total $9\times 4=36$ order parameters in the particle-hole channels. Below we list only the four $\lambda^1$-components as examples, which involve only the two $E^{\prime}$ bands with $m=\pm 1$.
After bosonization, these four order parameters read
\begin{widetext}
\begin{subequations}\label{Eq:OP-ph}
\begin{eqnarray}
O_{ph}^{10} & \propto & e^{-i2k_{F}x+i\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{c0}\right)}\left[\cos\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0}\right)\cos\tilde{\theta}_{c+1}\sin\tilde{\theta}_{s+1}+i\left(\cos\leftrightarrow\sin\right)\right],\\
O_{ph}^{11} & \propto & e^{-i2k_{F}x+i\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{c0}\right)}\left[\cos\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{s0}\right)\sin\tilde{\theta}_{c+1}\cos\tilde{\phi}_{s+1}+i\left(\cos\leftrightarrow\sin\right)\right],\\
O_{ph}^{12} & \propto & e^{-i2k_{F}x+i\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{c0}\right)}\left[\sin\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{s0}\right)\sin\tilde{\theta}_{c+1}\cos\tilde{\phi}_{s+1}-i\left(\cos\leftrightarrow\sin\right)\right],\\
O_{ph}^{13} & \propto & e^{-i2k_{F}x+i\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{c0}\right)}\left[\cos\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0}\right)\sin\tilde{\theta}_{c+1}\cos\tilde{\theta}_{s+1}+i\left(\cos\leftrightarrow\sin\right)\right],
\end{eqnarray}
\end{subequations}
\end{widetext}
where $2k_{F}=k_{F+1}+k_{F-1}$, and $(\cos \leftrightarrow \sin)$ means replacing all the cosine functions by sine functions and vice versa.
For particle-particle channels, we find that $O_{pp}^{i0}$ serves as singlet superconducting (SSC) pairing order parameter and $O_{pp}^{i1-3}$ serve as three components of triplet superconducting (TSC) pairing order parameters.
The bosonization for $\lambda^2$-components is the following,
\begin{widetext}
\begin{subequations}\label{Eq:OP-pp}
\begin{eqnarray}
O_{pp}^{20} & \propto & e^{i\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{c0}\right)}\left[\cos\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0}\right)\sin\tilde{\phi}_{c+1}\sin\tilde{\theta}_{s+1}-i\left(\cos\leftrightarrow\sin\right)\right],\\
O_{pp}^{21} & \propto & e^{i\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{c0}\right)}\left[\cos\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{s0}\right)\cos\tilde{\phi}_{c+1}\cos\tilde{\phi}_{s+1}-i\left(\cos\leftrightarrow\sin\right)\right],\\
O_{pp}^{22} & \propto & e^{i\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{c0}\right)}\left[\sin\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{s0}\right)\cos\tilde{\phi}_{c+1}\cos\tilde{\phi}_{s+1}+i\left(\cos\leftrightarrow\sin\right)\right],\\
O_{pp}^{23} & \propto & e^{i\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{c0}\right)}\left[\cos\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0}\right)\cos\tilde{\phi}_{c+1}\cos\tilde{\theta}_{s+1}-i\left(\cos\leftrightarrow\sin\right)\right].
\end{eqnarray}
\end{subequations}
\end{widetext}
Each bosonized order parameter contains two parts, which are related to each other by interchanging $\cos\leftrightarrow\sin$.
If one shifts the bosonic fields $\theta_{\mu+1}$ and $\phi_{\mu+1}$ by $\frac{\pi}{\sqrt{2}}$,
\begin{equation}\label{Eq:shift-1}
\phi(\theta)_{\mu+1} \to \phi(\theta)_{\mu+1} + \frac{\pi}{\sqrt{2}},
\end{equation}
while leaving the other $\theta_{\mu\nu}$'s and $\phi_{\mu\nu}$'s unchanged, the diagonalized fields $\tilde\theta_{\mu\nu}$ and $\tilde\phi_{\mu\nu}$ transform accordingly,
\begin{eqnarray}
\tilde{\phi}(\tilde{\theta})_{\mu+1} & \to & \tilde{\phi}(\tilde{\theta})_{\mu+1} + \frac{\pi}{2}, \nonumber\\
\tilde{\phi}(\tilde{\theta})_{\mu-1} & \to & \tilde{\phi}(\tilde{\theta})_{\mu-1} + \frac{\pi}{2\sqrt{3}}, \nonumber\\
\tilde{\phi}(\tilde{\theta})_{\mu0} & \to & \tilde{\phi}(\tilde{\theta})_{\mu0}+ \frac{\pi}{\sqrt{6}}. \nonumber
\end{eqnarray}
so that
\begin{equation*}
\frac{1}{\sqrt{3}}\tilde{\phi}(\tilde{\theta})_{\mu-1}+\frac{2}{\sqrt{6}}\tilde{\phi}(\tilde{\theta})_{\mu0} \to \frac{1}{\sqrt{3}}\tilde{\phi}(\tilde{\theta})_{\mu-1}+\frac{2}{\sqrt{6}}\tilde{\phi}(\tilde{\theta})_{\mu0}+{\pi\over 2}.
\end{equation*}
This means that the order parameters $O_{ph}^{i1-3}$ and $O_{pp}^{i1-3}$ remain unchanged under the phase shift given in Eq.~(\ref{Eq:shift-1}),
which can also be verified directly from the bosonization formula, Eq.~(\ref{Eq:bosonization}).
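A quick arithmetic check of the induced shift (illustrative only), confirming that the combination $\frac{1}{\sqrt{3}}\tilde{\phi}_{\mu-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{\mu0}$ picks up exactly $\pi/2$:

```python
import math

pi = math.pi
# Shifts of the diagonalized fields induced by phi(theta)_{mu+1} -> + pi/sqrt(2):
d_p1 = pi / 2                    # tilde phi_{mu+1}
d_m1 = pi / (2 * math.sqrt(3))   # tilde phi_{mu-1}
d_0 = pi / math.sqrt(6)          # tilde phi_{mu 0}

# The combination entering the order parameters shifts by exactly pi/2:
combo = d_m1 / math.sqrt(3) + 2 * d_0 / math.sqrt(6)
assert abs(combo - pi / 2) < 1e-12
```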
In the following RG analysis, the coupling constants flow to zero if they are irrelevant and to the strong coupling limit if they are relevant.
The relevant coupling constants lock the corresponding bosonic fields $\theta_{\mu\nu}$ and $\phi_{\mu\nu}$ around saddle points, i.e., at the extrema of the cosine or sine functions, so as to minimize the action.
Substituting these saddle-point values of the bosonic fields into the order parameters then yields the nonzero order parameters. For instance, the saddle point
\begin{equation}
\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0},\,\tilde{\phi}_{c+1},\,\tilde{\theta}_{s+1}\right)=\left(0,\,0,\,0\right)
\end{equation}
will give rise to nonzero amplitude for order parameter $O_{pp}^{23}$ in Eq.~(\ref{Eq:OP-pp}). The remaining phase factor
$$
e^{i\left(\frac{1}{\sqrt{3}}\tilde{\theta}_{c-1}+\frac{2}{\sqrt{6}}\tilde{\theta}_{c0}\right)}=e^{\frac{i}{\sqrt{2}}\left(\theta_{c+1}+\theta_{c-1}\right)}
$$
reflects the $U(1)$ gauge symmetry, which will be spontaneously broken when the SC long ranged order is established.
\section{Renormalization-Group analysis}\label{RG}
The quadratic part of the Hamiltonian, $H_0^B$, is a well-defined Gaussian fixed point under RG, describing the three-band TLLs at high temperatures, and serves as a good starting point for our study.
In this section, we begin with the quadratic (non-interacting) part $H_0^B$ and treat the nonquadratic (interacting) part $H_{int}^B$ perturbatively by an RG method.
We shall use the OPE method \cite{JCardy_OPE} to derive the RG equations for the 13 coupling constants in Eq.~(\ref{Eq:HB-int}) up to one loop.
The general form of the one-loop perturbative RG equations reads
\begin{equation}\label{Eq:general-RGE}
\frac{dg_{k}}{dl}=\left(d-\Delta_{k}\right)g_{k}-\sum_{ij}C_{ij}^{k}g_{i}g_{j},
\end{equation}
where $g_k$ represents the coupling constants ($g$'s and $f$'s) in $H_{int}^B$ in Eq.~(\ref{Eq:HB-int}). The linear term in Eq.~(\ref{Eq:general-RGE}) is the tree-level contribution
and depends on the space-time dimension $d$ and the scaling dimension $\Delta_{k}$. The quadratic terms are the one-loop contributions. The coefficients $C_{ij}^{k}$ are the structure constants of the OPE,
which can be obtained by fusing any two terms in $H_{int}^B$. This process generates new terms, absent in the original microscopic Hamiltonian, until all terms form a closed algebra.
This is the reason why we retain the terms with zero initial values of coupling constants in Eq.~(\ref{Eq:HB-int}).
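To make the structure of Eq.~(\ref{Eq:general-RGE}) concrete, consider a single coupling with a toy flow $dg/dl=\epsilon g-Cg^{2}$ (illustrative values only, not taken from the model): it runs from weak coupling to the fixed point $g^{*}=\epsilon/C$. A minimal Euler-integration sketch:

```python
# Toy flow for a single coupling: dg/dl = eps*g - C*g**2.
# For g(0) > 0 it runs from weak coupling to the fixed point g* = eps/C.
eps, C = 0.5, 1.0   # illustrative values, not taken from the model
g = 0.01            # small initial coupling (weak-coupling regime)
dl = 1e-3
for _ in range(100_000):   # integrate the flow up to l = 100
    g += dl * (eps * g - C * g * g)

g_star = eps / C
assert abs(g - g_star) < 1e-6
```

With a negative tree-level slope the same flow would instead relax back to $g=0$, which is the distinction between relevant and irrelevant couplings used throughout this section.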
In the spirit of perturbation theory, we first derive and analyze the RG equations at tree level, and then carry out the one-loop analysis in the remainder of this section.
\subsection{Tree-level RG}\label{sec:tree-RG}
To simplify the notation, we introduce the dimensionless coupling constants
\begin{subequations}\label{Eq:xyi}
\begin{align}
y_{i}&=\frac{g_{i}}{\pi v_{F}}, \\
x_{i}&=\frac{f_{i}}{\pi v_{F}}.
\end{align}
\end{subequations}
As shown in Appendix \ref{App:tree-RG},
the tree-level RG equations in weak coupling can be written in terms of $x_i$ and $y_i$'s,
\begin{subequations}\label{Eq:tree-RG2}
\begin{align}
\frac{dy_{1\perp}^{(1)}}{dl}&=\left(y_{2\parallel}^{(2)}-y_{2\perp}^{(2)}\right)y_{1\perp}^{(1)},
\end{align}
\begin{align}
\frac{dy_{2\parallel}^{(1)}}{dl}&=-y_{2\parallel}^{(2)}y_{2\parallel}^{(1)},
\end{align}
\begin{align}
\frac{dy_{2\perp}^{(1)}}{dl}&=-y_{2\perp}^{(2)}y_{2\perp}^{(1)},
\end{align}
\begin{align}
\frac{dy_{3\parallel}^{(1)}}{dl}&=y_{2\parallel}^{(2)}y_{3\parallel}^{(1)},
\end{align}
\begin{align}
\frac{dy_{3\perp}^{(1)}}{dl}&=\left(-y_{4\perp}^{(2)}+y_{2\parallel}^{(2)}\right)y_{3\perp}^{(1)},
\end{align}
\begin{align}
\frac{dy_{4\perp}^{(1)}}{dl}&=-y_{2\perp}^{(2)}y_{4\perp}^{(1)},
\end{align}
\begin{align}
\frac{dy_{1\perp}^{(2)}}{dl}&=\left(y_{4\perp}^{(2)}-y_{2\perp}^{(2)}\right)y_{1\perp}^{(2)},
\end{align}
\begin{align}
\frac{dy_{3\perp}^{(2)}}{dl}&=\left(-y_{4\perp}^{(2)}+y_{2\perp}^{(2)}\right)y_{3\perp}^{(2)},
\end{align}
\begin{align}
\frac{dx_{1\perp}^{(1)}}{dl}&=\left(y_{2\parallel}^{(2)}-y_{2\perp}^{(2)}\right)x_{1\perp}^{(1)},
\end{align}
\begin{align}
\frac{dx_{3\parallel}^{(1)}}{dl}&=y_{2\parallel}^{(2)}x_{3\parallel}^{(1)},
\end{align}
\begin{align}
\frac{dx_{3\perp}^{(1)}}{dl}&=\left(-y_{4\perp}^{(2)}+y_{2\parallel}^{(2)}\right)x_{3\perp}^{(1)},
\end{align}
\begin{align}
\frac{dx_{3\perp}^{(2)}}{dl}&=\left(-y_{4\perp}^{(2)}+y_{2\perp}^{(2)}\right)x_{3\perp}^{(2)},
\end{align}
\begin{align}
\frac{dy_{\perp}^{(1)}}{dl}&=-y_{4\perp}^{(2)}y_{\perp}^{(1)}.
\end{align}
\end{subequations}
In the formulation of Abelian bosonization, Eqs.~(\ref{Eq:bosonization}), the variables $y_{2\parallel}^{(2)}$, $y_{2\perp}^{(2)}$ and $y_{4\perp}^{(2)}$ in Eqs.~(\ref{Eq:K-y}) and (\ref{Eq:tree-RG2})
appear only in the quadratic part $H_0^B$ of the original microscopic Hamiltonian and thus do not flow under the RG transformation, so we use their initial values
$y_{2\parallel}^{(2)}=\frac{U-3J}{\pi v_F}$, $y_{2\perp}^{(2)}=\frac{U-2J}{\pi v_F}$ and $y_{4\perp}^{(2)}=\frac{U}{\pi v_F}$ in the tree-level analysis.
However, this formulation does not conserve spin rotational symmetry. We shall discuss how to restore spin $SU(2)$ symmetry in the next subsection, where $y_{2\parallel}^{(2)}$, $y_{2\perp}^{(2)}$ and $y_{4\perp}^{(2)}$
are expressed in terms of the 13 coupling constants in Eq.~(\ref{Eq:HB-int}), making the RG equations closed.
The slopes $\{\frac{1}{x_i}\frac{d x_i}{dl},\frac{1}{y_i}\frac{d y_i}{dl}\}$ around the Tomonaga-Luttinger liquid fixed point determine which coupling constants are relevant.
In weak coupling, these slopes are fixed by the initial values of the coupling constants in Eq.~(\ref{Eq:couplings}), i.e., by the microscopic Hamiltonian with the two parameters $U>0$ and $J>0$. We find that there exist three parameter regions.
(1) For $0<J<U/3$, the coupling constants $x_{3\parallel}^{(1)}$, $y_{3\parallel}^{(1)}$ and $y_{1\perp}^{(2)}$ are relevant,
while all other coupling constants are irrelevant.
(2) For $U/3<J<U/2$, there are only two relevant coupling constants, $y_{2\parallel}^{(1)}$ and $y_{1\perp}^{(2)}$.
(3) For the unphysical region $J>U/2$, there are four relevant coupling constants: $y_{2\parallel}^{(1)}$, $y_{2\perp}^{(1)}$, $y_{4\perp}^{(1)}$ and $y_{1\perp}^{(2)}$.
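This tree-level classification can be reproduced numerically. The sketch below (with our own shorthand names for the couplings, and the common factor $1/(\pi v_F)$ dropped since only the signs of the slopes matter) encodes the slopes read off from Eqs.~(\ref{Eq:tree-RG2}) with the quoted initial values, and recovers the three regions:

```python
def relevant_couplings(U, J):
    """Tree-level slopes from Eqs. (tree-RG2), with the initial values
    y2par(2) = U - 3J, y2perp(2) = U - 2J, y4perp(2) = U
    (the common factor 1/(pi vF) is dropped; only signs matter)."""
    a, b, c = U - 3 * J, U - 2 * J, U
    slopes = {
        "y1perp(1)": a - b, "y2par(1)": -a,     "y2perp(1)": -b,
        "y3par(1)": a,      "y3perp(1)": a - c, "y4perp(1)": -b,
        "y1perp(2)": c - b, "y3perp(2)": b - c,
        "x1perp(1)": a - b, "x3par(1)": a,      "x3perp(1)": a - c,
        "x3perp(2)": b - c, "yperp(1)": -c,
    }
    return {name for name, slope in slopes.items() if slope > 0}

# The three parameter regions quoted above (U = 3 throughout):
assert relevant_couplings(3, 0.5) == {"x3par(1)", "y3par(1)", "y1perp(2)"}        # 0 < J < U/3
assert relevant_couplings(3, 1.2) == {"y2par(1)", "y1perp(2)"}                    # U/3 < J < U/2
assert relevant_couplings(3, 2.0) == {"y2par(1)", "y2perp(1)", "y4perp(1)", "y1perp(2)"}  # J > U/2
```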
However, the above analysis relies largely on tree-level RG equations. We now proceed to one-loop RG equations for further study.
\subsection{One-loop RG}\label{sec:oneloop-RG}
With the help of spin $SU(2)$ symmetry and the microscopic Hamiltonian, we are able to derive one-loop RG equations (see Appendix~\ref{App:oneloop-RG}) as follows,
\begin{subequations}\label{Eq:one-loopRGE}
\begin{align}\label{Eq:y1perp1}
\frac{dy_{1\perp}^{(1)}}{dl} = -\left(y_{1\perp}^{(1)}\right)^{2}-y_{2\perp}^{(1)}y_{1\perp}^{(2)}+y_{3\parallel}^{(1)}y_{3\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:y2para1}
\frac{dy_{2\parallel}^{(1)}}{dl} = \frac{1}{2}y_{1\perp}^{(1)}y_{2\parallel}^{(1)}-y_{2\perp}^{(1)}y_{4\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:y2perp1}
\frac{dy_{2\perp}^{(1)}}{dl} = -\frac{1}{2}y_{1\perp}^{(1)}y_{2\perp}^{(1)}-y_{1\perp}^{(1)}y_{1\perp}^{(2)} -y_{2\parallel}^{(1)}y_{4\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:y3para1}
\frac{dy_{3\parallel}^{(1)}}{dl} = -\frac{1}{2}y_{1\perp}^{(1)}y_{3\parallel}^{(1)}+y_{1\perp}^{(1)}y_{3\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:y3perp1}
\frac{dy_{3\perp}^{(1)}}{dl} = & -\left(y_{4\perp}^{(1)}+\frac{1}{2}y_{1\perp}^{(1)}\right)y_{3\perp}^{(1)} +y_{1\perp}^{(1)}y_{3\parallel}^{(1)}\nonumber\\
& -y_{4\perp}^{(1)}y_{3\perp}^{(2)},
\end{align}
\begin{align}\label{Eq:y4perp1}
\frac{dy_{4\perp}^{(1)}}{dl} = \frac{1}{2}y_{1\perp}^{(1)}y_{4\perp}^{(1)}-y_{2\parallel}^{(1)}y_{2\perp}^{(1)} -y_{3\perp}^{(1)}y_{3\perp}^{(2)},
\end{align}
\begin{align}\label{Eq:y1perp2}
\frac{dy_{1\perp}^{(2)}}{dl} = \left(y_{4\perp}^{(1)}-\frac{1}{2}y_{1\perp}^{(1)}\right)y_{1\perp}^{(2)} -y_{1\perp}^{(1)}y_{2\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:y3perp2}
\frac{dy_{3\perp}^{(2)}}{dl} = \left(-y_{4\perp}^{(1)}+\frac{1}{2}y_{1\perp}^{(1)}\right)y_{3\perp}^{(2)} -y_{3\perp}^{(1)}y_{4\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:x1perp1}
\frac{dx_{1\perp}^{(1)}}{dl} = -\left(x_{1\perp}^{(1)}\right)^{2}+x_{3\parallel}^{(1)}x_{3\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:x3para1}
\frac{dx_{3\parallel}^{(1)}}{dl} = -\frac{1}{2}x_{1\perp}^{(1)}x_{3\parallel}^{(1)}+x_{1\perp}^{(1)}x_{3\perp}^{(1)},
\end{align}
\begin{align}\label{Eq:x3perp1}
\frac{dx_{3\perp}^{(1)}}{dl} = -\left(y_{\perp}^{(1)}+\frac{1}{2}x_{1\perp}^{(1)}\right)x_{3\perp}^{(1)} +x_{1\perp}^{(1)}x_{3\parallel}^{(1)}-y_{\perp}^{\left(1\right)}x_{3\perp}^{\left(2\right)},
\end{align}
\begin{align}\label{Eq:x3perp2}
\frac{dx_{3\perp}^{(2)}}{dl} = \left(-y_{\perp}^{(1)}+\frac{1}{2}x_{1\perp}^{(1)}\right)x_{3\perp}^{(2)}-y_{\perp}^{\left(1\right)}x_{3\perp}^{\left(1\right)},
\end{align}
\begin{align}\label{Eq:y1perp}
\frac{dy_{\perp}^{(1)}}{dl} = -\left(y_{\perp}^{(1)}\right)^{2}-x_{3\perp}^{\left(1\right)}x_{3\perp}^{\left(2\right)}.
\end{align}
\end{subequations}
The above 13 RG equations can be classified into two categories. The first eight equations, Eq.~(\ref{Eq:y1perp1}) to Eq.~(\ref{Eq:y3perp2}), describe the RG flow of coupling constants within the two degenerate $E^{\prime}$ bands,
which coincide with those derived in the two-leg-ladder model\cite{EOrignac_twochain}.
The last five equations, Eq.~(\ref{Eq:x1perp1}) to Eq.~(\ref{Eq:y1perp}), couple the two $E^{\prime}$ bands to the non-degenerate $A_{1}^{\prime}$ band.
Note that the last five RG equations decouple from the first eight, which greatly simplifies our analysis. Such decoupling originates from the particular form of the Hamiltonian \eqref{Eq:Hubbard}, which satisfies
Eq.~\eqref{constraint_added}.
The key to analyzing these one-loop RG equations is to find the fixed points, at which the coupling constants no longer flow under the RG transformation \cite{AAltland_RG}. We rewrite the RG equations in vector form
\begin{equation}
\frac{d\vec{y}}{dl}\equiv \vec{R}\left(\vec{y}\right),
\end{equation}
where $\vec{y}=\left\{ y_{i}\right\}$ is the vector of 13 running coupling constants, and $\vec{R}\left(\vec{y}\right)$ is a vector function of $\vec{y}$. By definition, the fixed points $\vec{y}=\vec{y}^{*}$ are given by
\begin{equation}\label{Eq:fixed point}
\vec{R}\left(\vec{y}^{*}\right)=0.
\end{equation}
It is obvious that $\vec{y}^{*}=0$ is the trivial Tomonaga-Luttinger liquid fixed point. Nontrivial fixed points $\vec{y}^{*}\neq 0$ can be found perturbatively in two different parameter regions of the microscopic Hamiltonian.
(1) For $0<J<U/3$, we have nontrivial fixed points characterized by the following nonvanishing coupling constants,
\begin{eqnarray}
y_{3\parallel}^{(1)} & = & y_{3\parallel}^{(1)*},\nonumber \\
y_{1\perp}^{(2)} & = & y_{1\perp}^{(2)*}\nonumber, \\
x_{3\parallel}^{(1)} & = & x_{3\parallel}^{(1)*},
\end{eqnarray}
while other coupling constants are all zero.
(2) For $J>U/3$, nontrivial fixed points are given by
\begin{eqnarray}
y_{2\parallel}^{(1)} & = & y_{2\parallel}^{(1)*},\nonumber \\
y_{1\perp}^{(2)} & = & y_{1\perp}^{(2)*},
\end{eqnarray}
while other coupling constants equal zero.
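Both families can be checked directly against Eqs.~(\ref{Eq:one-loopRGE}). The sketch below (with our own shorthand names and arbitrary nonzero fixed-point values) transcribes the 13 beta functions and verifies that $\vec{R}(\vec{y}^{*})=0$:

```python
import numpy as np

# Couplings ordered as in Eqs. (one-loopRGE); "(1)"/"(2)" label the superscripts.
NAMES = ["y1perp(1)", "y2par(1)", "y2perp(1)", "y3par(1)", "y3perp(1)",
         "y4perp(1)", "y1perp(2)", "y3perp(2)",
         "x1perp(1)", "x3par(1)", "x3perp(1)", "x3perp(2)", "yperp(1)"]

def beta(y):
    """Right-hand sides of the 13 one-loop RG equations, transcribed term by term."""
    y1, y2a, y2p, y3a, y3p, y4p, z1, z3, x1, x3a, x3p, x3q, w = y
    return np.array([
        -y1**2 - y2p*z1 + y3a*y3p,
        0.5*y1*y2a - y2p*y4p,
        -0.5*y1*y2p - y1*z1 - y2a*y4p,
        -0.5*y1*y3a + y1*y3p,
        -(y4p + 0.5*y1)*y3p + y1*y3a - y4p*z3,
        0.5*y1*y4p - y2a*y2p - y3p*z3,
        (y4p - 0.5*y1)*z1 - y1*y2p,
        (-y4p + 0.5*y1)*z3 - y3p*y4p,
        -x1**2 + x3a*x3p,
        -0.5*x1*x3a + x1*x3p,
        -(w + 0.5*x1)*x3p + x1*x3a - w*x3q,
        (-w + 0.5*x1)*x3q - w*x3p,
        -w**2 - x3p*x3q,
    ])

def y_star(nonzero):
    """Build a candidate fixed point with the given nonvanishing components."""
    y = np.zeros(13)
    for name, val in nonzero.items():
        y[NAMES.index(name)] = val
    return y

# Family (1), 0 < J < U/3: y3par(1)*, y1perp(2)*, x3par(1)* arbitrary and nonzero.
fp1 = y_star({"y3par(1)": 0.3, "y1perp(2)": 0.7, "x3par(1)": 0.4})
# Family (2), J > U/3: y2par(1)*, y1perp(2)* arbitrary and nonzero.
fp2 = y_star({"y2par(1)": -0.5, "y1perp(2)": 0.7})
assert np.allclose(beta(fp1), 0) and np.allclose(beta(fp2), 0)
```

Every term on the right-hand sides contains at least one vanishing factor at these points, so the beta functions vanish identically, not just for the particular values chosen here.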
These nontrivial fixed points form hypersurfaces in the 13-dimensional parameter space of coupling constants. By examining the RG flow around these hypersurfaces, we find that these fixed points are phase transition points
rather than stable fixed points describing stable phases. We therefore analyze the RG flow near the fixed points using the one-loop RG equations to determine which instabilities are favored.
In the vicinity of the fixed points, the RG equations can be expanded to linear order
\begin{equation}
\vec{R}\left(\vec{y}\right)=\vec{R}\left(\left(\vec{y}-\vec{y}^{*}\right)+\vec{y}^{*}\right)\simeq W\left(\vec{y}-\vec{y}^{*}\right),
\end{equation}
where the $W$ matrix is defined as
\begin{equation}\label{eq:defW}
W_{ab}=\frac{\partial R_{a}}{\partial y_{b}}|_{\vec{y}=\vec{y}^{*}}.
\end{equation}
We diagonalize the $W$ matrix with the left eigenvectors $\phi_{\alpha}$,
\begin{equation}
\phi_{\alpha}^{T}W=\lambda_{\alpha}\phi_{\alpha}^{T},
\end{equation}
where $\lambda_{\alpha}$ are the corresponding eigenvalues. The scaling fields are defined as
\begin{equation}
v_{\alpha}=\phi_{\alpha}^{T}\left(\vec{y}-\vec{y}^{*}\right).
\end{equation}
Under RG these scaling fields show different behaviors,
\begin{eqnarray}
\frac{dv_{\alpha}}{dl} & = &\phi_{\alpha}^{T}\frac{d}{dl}\left(\vec{y}-\vec{y}^{*}\right)=\phi_{\alpha}^{T}W\left(\vec{y}-\vec{y}^{*}\right) = \lambda_{\alpha}\phi_{\alpha}^{T}\left(\vec{y}-\vec{y}^{*}\right) \nonumber\\
& = &\lambda_{\alpha}v_{\alpha},
\end{eqnarray}
which become relevant, irrelevant and marginal when $\lambda_{\alpha}>0$, $\lambda_{\alpha}<0$ and $\lambda_{\alpha}=0$, respectively.
Note that $\{y_{1\perp}^{(1)}, y_{2\parallel}^{(1)}, y_{2\perp}^{(1)}, y_{3\parallel}^{(1)}, y_{3\perp}^{(1)}, y_{4\perp}^{(1)}, y_{1\perp}^{(2)}, y_{3\perp}^{(2)}\}$
and $\{x_{1\perp}^{(1)}, x_{3\parallel}^{(1)}, x_{3\perp}^{(1)},x_{3\perp}^{(2)},y_{\perp}^{(1)} \}$ form two separated sets in Eqs.~(\ref{Eq:one-loopRGE}). The $W$ matrix is block diagonal as follows,
\begin{equation}\label{Eq:W12}
W=\left(\begin{array}{cc}
W_1 & 0 \\
0 & W_2
\end{array}\right),
\end{equation}
where $W_1$ is an $8\times 8$ matrix and $W_2$ is a $5\times 5$ matrix. The generic forms of $W_1$ and $W_2$ can be found in Appendix \ref{App:dual}.
To illustrate how to carry out the analysis, we first consider a simplified case, namely the special fixed points for $J>U/3$,
\begin{equation}
y_{1\perp}^{(2)}=y_{1\perp}^{(2)*},
\end{equation}
with all other coupling constants equal to zero. In this case, $W_2=0$, and the $W_1$ matrix reads
\begin{equation}
W_1=\left(\begin{array}{cccccccc}
0 & 0 & -y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\frac{1}{2}y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & y_{1\perp}^{(2)*} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right).
\end{equation}
For this non-symmetric matrix, we find only two nonzero eigenvalues: $-y_{1\perp}^{(2)*}$ with left eigenvector $y_{1\perp}^{(1)}+y_{2\perp}^{(1)}$,
and $y_{1\perp}^{(2)*}$ with left eigenvector $y_{1\perp}^{(1)}-y_{2\perp}^{(1)}$.
According to the microscopic model, the initial value is $g_{1\perp}^{(2)}=J>0$, so we expect $y_{1\perp}^{(2)*}>0$ under the RG. Thus the relevant scaling field is given by the left eigenvector corresponding to the eigenvalue $y_{1\perp}^{(2)*}$, namely
\begin{equation}
y_{1\perp}^{(1)}-y_{2\perp}^{(1)}.
\end{equation}
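These statements can be checked numerically. The following sketch (with an arbitrary positive value for $y_{1\perp}^{(2)*}$) builds the $W_1$ matrix above and confirms the two nonzero eigenvalues together with their left eigenvectors:

```python
import numpy as np

ystar = 0.8   # arbitrary positive value for y1perp(2)*
# Row/column order: (y1perp1, y2par1, y2perp1, y3par1, y3perp1, y4perp1, y1perp2, y3perp2)
W1 = np.zeros((8, 8))
W1[0, 2] = -ystar
W1[2, 0] = -ystar
W1[6, 0] = -0.5 * ystar
W1[6, 5] = ystar

# Only two nonzero eigenvalues, -y* and +y*; a loose tolerance absorbs the
# numerical noise of the (defective) zero eigenvalues.
evals = np.sort(np.linalg.eigvals(W1).real)
assert np.allclose(evals, [-ystar] + [0] * 6 + [ystar], atol=1e-6)

# Left eigenvectors: e0 + e2 (eigenvalue -y*) and e0 - e2 (eigenvalue +y*),
# i.e. the scaling fields y1perp(1) +/- y2perp(1).
e0, e2 = np.eye(8)[0], np.eye(8)[2]
assert np.allclose((e0 + e2) @ W1, -ystar * (e0 + e2))
assert np.allclose((e0 - e2) @ W1, ystar * (e0 - e2))
```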
We can then extract the relevant terms from the bosonic Hamiltonian $H_{int}^B$, and the low-energy effective interacting Hamiltonian becomes
\begin{widetext}
\begin{eqnarray}
H_{int}^{B} & = & -\left(g_{1\perp}^{(1)}-g_{2\perp}^{(1)}\right)\frac{2}{\left(2\pi a\right)^{2}}\int dx\cos\left(\frac{2}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right)\cos\left(2\tilde{\theta}_{s+1}\right)\nonumber \\
& & -\left(g_{1\perp}^{(1)}-g_{2\perp}^{(1)}\right)\frac{2}{\left(2\pi a\right)^{2}}\int dx\cos\left(2\tilde{\phi}_{c+1}\right)\cos\left(\frac{2}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{4}{\sqrt{6}}\tilde{\phi}_{s0}\right).
\end{eqnarray}
\end{widetext}
When $J>U/3$, the initial value of $y_{1\perp}^{(1)}-y_{2\perp}^{(1)} \propto 3J-U$ is positive. This relevant scaling field will flow to strong coupling and lock the corresponding bosonic fields around the saddle points,
\begin{equation}\label{Eq:saddle-point1}
\begin{array}{ccc}
\left(\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0},\,\tilde{\phi}_{c+1},\,\tilde{\theta}_{s+1}\right) & = & \left(0,\,0,\,0\right)\\
& \mbox{or} & \left(\frac{\pi}{2},\,\frac{\pi}{2},\,\frac{\pi}{2}\right).
\end{array}
\end{equation}
As discussed following Eq.~(\ref{Eq:shift-1}), these two saddle points will give rise to the same physical states.
The nonzero order parameter corresponding to these locked bosonic fields is $O_{pp}^{23}$, which describes a TSC phase.
Let us now turn to generic situations. For brevity, we neglect the vanishing components of $\vec{y}$ and denote $\vec{y}$ as
$\left(y_{3\parallel}^{(1)},y_{1\perp}^{(2)},x_{3\parallel}^{(1)}\right)$ and $\left(y_{1\perp}^{(2)},y_{2\parallel}^{(1)}\right)$ for $0<J<U/3$ and $J>U/3$, respectively.
(1) For $J>U/3$, all the fixed points are in a plane. We can generalize the above analysis for fixed points with two nonzero components, $\left(y_{2\parallel}^{(1)},y_{1\perp}^{(2)}\right)=\left(y_{2\parallel}^{(1)*},y_{1\perp}^{(2)*}\right)$.
In this situation, we still have $W_2=0$, while $W_1$ becomes
\begin{equation}
W_1=\left(\begin{array}{cccccccc}
0 & 0 & -y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & 0 \\
\frac{1}{2}y_{2\parallel}^{(1)*} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & -y_{2\parallel}^{(1)*} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -y_{2\parallel}^{(1)*} & 0 & 0 & 0 & 0 & 0 \\
-\frac{1}{2}y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & y_{1\perp}^{(2)*} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right).
\end{equation}
Now there are four eigenvectors: two corresponding to the zero eigenvalue and the other two to nonzero eigenvalues. The two nonzero eigenvalues are $\pm \sqrt{\left(y_{2\parallel}^{(1)*}\right)^2+\left(y_{1\perp}^{(2)*}\right)^2}$.
The left eigenvector corresponding to the positive eigenvalue $\sqrt{\left(y_{2\parallel}^{(1)*}\right)^2+\left(y_{1\perp}^{(2)*}\right)^2}$ is
\begin{align}
y_{1\perp}^{(2)*} y_{1\perp}^{(1)} - \sqrt{\left(y_{2\parallel}^{(1)*}\right)^2+\left(y_{1\perp}^{(2)*}\right)^2} y_{2\perp}^{(1)} + y_{2\parallel}^{(1)*} y_{4\perp}^{(1)}.
\end{align}
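This can again be verified numerically. The sketch below builds the $W_1$ matrix above for arbitrary $y_{2\parallel}^{(1)*}<0$ and $y_{1\perp}^{(2)*}>0$ and checks the nonzero eigenvalues together with the left eigenvector of the positive one:

```python
import numpy as np

a, b = 0.7, -0.5   # arbitrary y1perp(2)* > 0 and y2par(1)* < 0
# Row/column order: (y1perp1, y2par1, y2perp1, y3par1, y3perp1, y4perp1, y1perp2, y3perp2)
W1 = np.zeros((8, 8))
W1[0, 2] = -a
W1[1, 0] = 0.5 * b
W1[2, 0] = -a
W1[2, 5] = -b
W1[5, 2] = -b
W1[6, 0] = -0.5 * a
W1[6, 5] = a

s = np.sqrt(a**2 + b**2)
# Nonzero eigenvalues +/- sqrt(a^2 + b^2); loose tolerance for the
# numerical noise of the (defective) zero eigenvalues.
evals = np.sort(np.linalg.eigvals(W1).real)
assert np.allclose(evals, [-s] + [0] * 6 + [s], atol=1e-6)

# Left eigenvector of the positive eigenvalue: a*e0 - s*e2 + b*e5,
# i.e. the scaling field a*y1perp(1) - s*y2perp(1) + b*y4perp(1).
phi = a * np.eye(8)[0] - s * np.eye(8)[2] + b * np.eye(8)[5]
assert np.allclose(phi @ W1, s * phi)
```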
Considering the initial value $g_{2\parallel}^{(1)}=U-3J<0$, we expect $y_{2\parallel}^{(1)*}<0$ by the same perturbative argument.
In the limit $y_{1\perp}^{(2)*}\to 0$, the relevant scaling field becomes $y_{2\perp}^{(1)}+ y_{4\perp}^{(1)}$, which also flows to the strong coupling limit.
The corresponding saddle point gives rise to a SDW state with order parameter $O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$.
In the other limit, $y_{2\parallel}^{(1)*}\to 0$, the eigenvectors become $ y_{1\perp}^{(1)}\pm y_{2\perp}^{(1)}$, and we recover the simplified situation,
where the TSC instability dominates with the order parameter $O_{pp}^{23}$.
Starting from fixed points between the above two limits, which form a quarter plane $\left(y_{2\parallel}^{(1)*}<0,y_{1\perp}^{(2)*}>0\right)$, the RG trajectory flows to one of the two strong coupling limits, SDW or TSC.
There must then be a phase boundary separating the SDW phase (with order parameter $O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$) from the TSC phase (with order parameter $O_{pp}^{23}$).
The RG flow diagram is sketched in Fig.~\ref{fig:RG_flow}, and all the possible ordered ground states for $J>U/3$ are summarized in Table~\ref{table:$U<3J$}.
\begin{figure}[hptb]
\begin{centering}
\includegraphics[width=8.0cm]{flow.eps}
\caption{(Color online) Sketched RG flow for $J>U/3$. Fixed points ($y_{2\parallel}^{(1)*}<0$ and $y_{1\perp}^{(2)*}>0$) form a quarter plane. The origin is the trivial Tomonaga-Luttinger liquid (TLL) fixed point.
The dashed line denotes the phase boundary separating two phases, TSC and SDW.}
\label{fig:RG_flow}
\end{centering}
\end{figure}
There exist two competing phases, SDW and TSC, when $J>U/3$. Which one wins out is governed by the microscopic model, i.e., by the initial values of the coupling constants.
The most relevant (strongest) instability is given by the largest eigenvalue of the $W$ matrix. Assuming that $\vec{y}^{*}$ is close to the initial value of $\vec{y}$,
we estimate that the SDW state ($O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$) dominates when $J>U/2$,
while the TSC state ($O_{pp}^{23}$) becomes dominant in the region $U/3<J<U/2$.
\begin{table*}[hptb]
\begin{center}
\caption{Possible ordered ground states from one-loop RG analysis when $J>U/3$.}
\label{table:$U<3J$}
\begin{tabular}{|c|c|c|}
\hline
\rule{0pt}{1.5em} Scaling field & $y_{2\perp}^{(1)}+y_{4\perp}^{(1)}$ & $y_{1\perp}^{(1)}-y_{2\perp}^{(1)}$\tabularnewline
\hline
\rule{0pt}{1.5em} Instability & SDW & TSC\tabularnewline
\hline
\rule{0pt}{1.5em} Order parameter & $O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$ & $O_{pp}^{23}$\tabularnewline
\hline
\rule{0pt}{3.5em} Saddle point & $\begin{array}{ccc}
\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0} & = & 0 \left(\frac{\pi}{2}\right)\\
\tilde{\phi}_{c+1} & = & \frac{\pi}{2} \left(0\right)\\
\tilde{\phi}_{s+1} & = & \frac{\pi}{2} \left(0\right)
\end{array}$ & $\begin{array}{ccc}
\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0} & = & 0 \left(\frac{\pi}{2}\right)\\
\tilde{\phi}_{c+1} & = & 0 \left(\frac{\pi}{2}\right)\\
\tilde{\theta}_{s+1} & = & 0 \left(\frac{\pi}{2}\right)
\end{array}$\tabularnewline
\hline
\end{tabular}
\end{center}
\end{table*}
(2) For $0<J<U/3$, we have three non-vanishing components $\left(y_{3\parallel}^{(1)},y_{1\perp}^{(2)},x_{3\parallel}^{(1)}\right)$ in $\vec{y}^{*}$, which form a three dimensional hypersurface.
In this case, we have $W_1$ and $W_2$ matrices as follows,
\begin{subequations}
\begin{equation}
W_1=\left(\begin{array}{cccccccc}
0 & 0 & -y_{1\perp}^{(2)*} & 0 & y_{3\parallel}^{(1)*} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\frac{1}{2}y_{3\parallel}^{(1)*} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
y_{3\parallel}^{(1)*} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\frac{1}{2}y_{1\perp}^{(2)*} & 0 & 0 & 0 & 0 & y_{1\perp}^{(2)*} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right),
\end{equation}
and
\begin{equation}
W_2=\left(\begin{array}{ccccc}
0 & 0 & x_{3\parallel}^{(1)*} & 0 & 0 \\
-\frac{1}{2}x_{3\parallel}^{(1)*} & 0 & 0 & 0 & 0 \\
x_{3\parallel}^{(1)*} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{array}\right).
\end{equation}
\end{subequations}
These matrices have five nonzero eigenvalues, three from $W_1$ and two from $W_2$. The three nonzero eigenvalues from $W_1$ are $y_{1\perp}^{(2)*}$ and $\pm\sqrt{\left(y_{3\parallel}^{(1)*}\right)^2+\left(y_{1\perp}^{(2)*}\right)^2}$,
and the two from $W_2$ are $\pm x_{3\parallel}^{(1)*}$. Since the related initial values are $g_{3\parallel}^{(1)}=0$, $g_{1\perp}^{(2)}=J$, and $f_{3\parallel}^{(1)}=0$, we expect $y_{1\perp}^{(2)*}>0$ in perturbation theory.
Moreover, considering the one-loop RG flow around the TLL fixed point, we deduce that $y_{3\parallel}^{(1)*}>0$ and $x_{3\parallel}^{(1)*}>0$.
A similar analysis can be carried out as in the case $J>U/3$. The RG trajectory flows to three strong coupling limits with the relevant scaling fields $x_{1\perp}^{(1)}+x_{3\perp}^{(1)}$, $y_{1\perp}^{(1)}+y_{3\perp}^{(1)}$, and $y_{1\perp}^{(1)}-y_{2\perp}^{(1)}$.
These relevant scaling fields are associated with the positive eigenvalues $x_{3\parallel}^{(1)*}$, $y_{3\parallel}^{(1)*}$ and $y_{1\perp}^{(2)*}$ of the $W$ matrix, respectively.
They give rise to two different SDW states (one with order parameters $O_{ph}^{43}$ and $O_{ph}^{63}$, the other with order parameter $O_{ph}^{13}$) and one spin-singlet SC (SSC) state (with order parameter $O_{pp}^{20}$).
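As a check on the $W_2$ block given above, the following sketch confirms that its nonzero eigenvalues are $\pm x_{3\parallel}^{(1)*}$ and that the left eigenvector of the positive one is the scaling field $x_{1\perp}^{(1)}+x_{3\perp}^{(1)}$:

```python
import numpy as np

x = 0.4   # arbitrary positive value for x3par(1)*
# Row/column order: (x1perp1, x3par1, x3perp1, x3perp2, yperp1)
W2 = np.zeros((5, 5))
W2[0, 2] = x
W2[1, 0] = -0.5 * x
W2[2, 0] = x

evals = np.sort(np.linalg.eigvals(W2).real)
assert np.allclose(evals, [-x, 0, 0, 0, x], atol=1e-6)

# Left eigenvector of the positive eigenvalue x3par(1)*: e0 + e2,
# i.e. the scaling field x1perp(1) + x3perp(1).
e0, e2 = np.eye(5)[0], np.eye(5)[2]
assert np.allclose((e0 + e2) @ W2, x * (e0 + e2))
```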
All these possible ordered ground states for $0<J<U/3$ and corresponding order parameters are summarized in Table~\ref{table:$U>3J$}.
We then compare these three instabilities to find the strongest one, which is determined by the largest eigenvalue of the $W$ matrix.
In the spirit of perturbation theory, we again assume that $\vec{y}^{*}$ is close to the initial value of $\vec{y}$.
Noting that the related initial values are $g_{3\parallel}^{(1)}=f_{3\parallel}^{(1)}=0$ and $g_{1\perp}^{(2)}=J$, we conclude that the SSC state with order parameter $O_{pp}^{20}$ dominates among these three possible ground states.
\begin{table*}[hptb]
\begin{center}
\caption{Possible ordered ground states from one-loop RG analysis when $0<J<U/3$.}
\label{table:$U>3J$}
\begin{tabular}{|c|c|c|c|}
\hline
\rule{0pt}{1.5em} Scaling field & $x_{1\perp}^{(1)}+x_{3\perp}^{(1)}$ & $y_{1\perp}^{(1)}+y_{3\perp}^{(1)}$ & $y_{1\perp}^{(1)}-y_{2\perp}^{(1)}$\tabularnewline
\hline
\rule{0pt}{1.5em} Instability & SDW & SDW & SSC\tabularnewline
\hline
\rule{0pt}{1.5em} Order parameter & $O_{ph}^{43},O_{ph}^{63}$ & $O_{ph}^{13}$ & $O_{pp}^{20}$\tabularnewline
\hline
\rule{0pt}{3.5em} Saddle point & $\begin{array}{ccc}
\frac{1}{2}\tilde{\phi}_{s+1}-\frac{1}{\sqrt{12}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0} & = & 0 \left(\frac{\pi}{2}\right)\\
\frac{1}{2}\tilde{\theta}_{c+1}+\frac{3}{\sqrt{12}}\tilde{\theta}_{c-1} & = & \frac{\pi}{2} \left(0\right)\\
\frac{1}{2}\tilde{\theta}_{s+1}+\frac{3}{\sqrt{12}}\tilde{\theta}_{s-1} & = & 0 \left(\frac{\pi}{2}\right)
\end{array}$ & $\begin{array}{ccc}
\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0} & = & 0 \left(\frac{\pi}{2}\right)\\
\tilde{\theta}_{c+1} & = & \frac{\pi}{2} \left(0\right)\\
\tilde{\theta}_{s+1} & = & 0 \left(\frac{\pi}{2}\right)
\end{array}$ & $\begin{array}{ccc}
\frac{1}{\sqrt{3}}\tilde{\phi}_{s-1}+\frac{2}{\sqrt{6}}\tilde{\phi}_{s0} & = & 0 \left(\frac{\pi}{2}\right)\\
\tilde{\phi}_{c+1} & = & \frac{\pi}{2} \left(0\right)\\
\tilde{\theta}_{s+1} & = & \frac{\pi}{2} \left(0\right)
\end{array}$\tabularnewline
\hline
\end{tabular}
\end{center}
\end{table*}
Let us now summarize the one-loop RG analysis based on the OPE and present the phase diagram; the possible ordered states are listed in Tables~\ref{table:$U<3J$} and \ref{table:$U>3J$}.
In the region $0<J<U/3$, the most relevant instability is the spin-singlet SC instability with order parameter $O_{pp}^{20}$. At $U/3<J<U/2$, the spin-triplet SC instability with order parameter $O_{pp}^{23}$ is favored.
In the region $J>U/2$ (since $U=U^{\prime}+2J$, $U^{\prime} <0$ in this region), the SDW instability with order parameter $O_{ph}^{03}$ will dominate. The phase diagram is shown in Fig.~\ref{fig:phase_diagram1}.
\begin{figure}
\begin{center}
\includegraphics[width=8.4cm]{phase1.eps}
\caption{Phase diagram for the three-band Hubbard model with two degenerate $E^{\prime}$ orbitals, $k_{F+1}=k_{F-1}$.}
\label{fig:phase_diagram1}
\end{center}
\end{figure}
It is worth noting that the one-loop RG Eqs.~\eqref{Eq:one-loopRGE} have been obtained and solved perturbatively in this subsection.
For comparison, we have reproduced established results for the two-degenerate-band model\cite{AJMillis_twoband} to examine the validity of this method.
Indeed, there exist other fixed points that are beyond this perturbative approach. An example of a non-perturbative solution is given in Appendix \ref{App:dual}.
We shall also solve the one-loop RG Eqs.~\eqref{Eq:one-loopRGE} numerically in Appendix \ref{App:NRG} to further confirm present results.
\section{Discussions and conclusions}\label{conclusion}
Now we shall relate our theory to experimental results on K$_{2}$Cr$_{3}$As$_{3}$. Firstly, we would like to discuss the NMR and NQR experiments.
The spin-lattice relaxation rate $1/T_{1}$ in an NMR experiment measures the local spin correlation summed over $q$ in momentum space. The dominant contributions come from the $q\sim 0$ and $q\sim 2k_F$ components.
For a three-band Tomonaga-Luttinger liquid governed by the Hamiltonian $H_0^B$ in Eq.~(\ref{Eq:H0B}), we have the following temperature dependence of $1/T_{1}$ (see Appendix~\ref{App:T1} for details),
\begin{equation}
\frac{1}{T_{1}}\propto A~T + B~T^{1-\frac{U}{2\pi v_F}},
\end{equation}
where $U$ is the effective on-site intra-orbital electron interaction.
The first, linearly temperature-dependent term follows the Korringa law, as in Fermi liquids. The second term follows a power law with a non-integer exponent as long as $U\neq 0$.
When electron Coulomb repulsion governs the system, $U$ is positive and the dominant contribution at low temperatures comes from the second term.
The spin-lattice relaxation rate $1/T_{1}$ will exhibit non-integer power law temperature dependence.
However, $U$ may become effectively negative, e.g., when the electron-phonon interaction dominates over the Coulomb repulsion. In this case, $1/T_{1}$ becomes linearly temperature dependent at low temperatures, as in Fermi liquids.
This is consistent with the well-known single-band result that SDW becomes irrelevant when $U<0$. In the NQR experiment on K$_{2}$Cr$_{3}$As$_{3}$, $1/T_{1}$ exhibits a non-integer power-law temperature dependence with
$1-\frac{U}{2\pi v_F}\sim 0.75$. However, the NQR experiment on Rb$_{2}$Cr$_{3}$As$_{3}$ shows a linear temperature dependence at high temperature, while critical spin fluctuations appear near the SC transition temperature $T_c$.
These diversified $1/T_1$ behaviors in K$_{2}$Cr$_{3}$As$_{3}$ and Rb$_{2}$Cr$_{3}$As$_{3}$ imply different effective electron interactions in the two systems. The Rb compound has a larger unit-cell volume than the K compound,
resulting in a smaller electron repulsion, which is consistent with the larger exponent, $1-\frac{U}{2\pi v_F} \sim 1$, in $1/T_1$ in Rb$_{2}$Cr$_{3}$As$_{3}$.
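The competition between the two terms in the $1/T_1$ expression can be sketched numerically. The coefficients $A$, $B$ and the ratio $U/(2\pi v_F)$ below are illustrative choices, not values fitted to the experiments discussed in the text; only the exponents $\sim 0.75$ and $\sim 1$ are taken from the quoted measurements.

```python
# Hedged numerical sketch of 1/T1 = A*T + B*T**(1 - U/(2*pi*v_F)).
# A, B and the temperature are arbitrary illustrative choices.
A, B = 1.0, 1.0
alpha_K  = 0.75   # exponent quoted for K2Cr3As3 (U > 0)
alpha_Rb = 1.0    # exponent quoted for Rb2Cr3As3 (smaller effective U)

def inv_T1(T, alpha):
    """Two-term relaxation rate: Korringa term plus power-law term."""
    return A * T + B * T**alpha

T = 0.01  # low temperature, arbitrary units
# For alpha < 1 the power-law term dominates at low T:
ratio_K = (B * T**alpha_K) / (A * T)   # = T**(alpha-1) = 0.01**(-0.25) ~ 3.2
print(ratio_K)
```

The ratio grows as $T^{\alpha-1}$ when $T\to 0$, so any non-integer exponent $\alpha<1$ (i.e., $U>0$) eventually dominates over the Korringa term, while for $\alpha=1$ the two terms simply add to a linear law.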
We next discuss two possible SC ground states in physical parameter regions $0<J<U/3$ and $U/3<J<U/2$.
(1) At $0<J<U/3$, the order parameter $O_{pp}^{20}$ indicates that the SC pairing is spin-singlet and orbital antisymmetric, and the pairing electrons come from the two degenerate $E^{\prime}$ bands.
(2) While for $U/3<J<U/2$, the order parameter $O_{pp}^{23}$ gives rise to spin-triplet ($\left|\uparrow\downarrow\right\rangle+\left|\downarrow\uparrow\right\rangle$) and orbital antisymmetric SC pairing,
and the pairing electrons come from the $E^{\prime}$ bands too. This kind of even-parity, spin-triplet, orbital-antisymmetric SC pairing was first proposed by Dai \textit{et al.} in the context of iron pnictides \cite{DaiX08}.
Note that the degeneracy of two $E^{\prime}$ bands plays a crucial role in the formation of SC ground states.
The role of the two degenerate $E^{\prime}$ bands can be also seen from the effective Hamiltonian $H_{int}^{B}$ and order parameters for different ground states.
To do this, we consider the situation where the two-fold degeneracy is slightly lifted, for instance, by inter-chain coupling.
In this case, we have $k_{F+1}\neq k_{F-1}$. Introducing $\Delta k_F = k_{F+1}-k_{F-1}$, we can generalize the bosonic interacting Hamiltonian in Eq.~(\ref{Eq:HB-int}) to the expression in Eq.~(\ref{Eq:HB-int2}) in Appendix~\ref{App:HB-int},
where an additional phase factor $2\Delta k_F x$ appears in the $g_{2\parallel}^{(1)}$, $g_{2\perp}^{(1)}$ and $g_{1\perp}^{(2)}$ terms. These terms are thus suppressed by the phase factor in the integrand. Consequently,
both the spin-singlet SC order parameter $O_{pp}^{20}$ (for $0<J<U/3$) and the spin-triplet SC order parameter $O_{pp}^{23}$ (for $U/3<J<U/2$) will be suppressed and modulated by the phase factor $2\Delta k_F x$,
indicating a possible FFLO state when $\Delta k_F \neq 0$ \cite{FF,LO}. This is because both $O_{pp}^{20}$ and $O_{pp}^{23}$ arise from inter-orbital pairing, namely, pairing between $\pm k_{F+1}$ and $\mp k_{F-1}$.
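The suppression mechanism can be illustrated with a one-line average: a coupling that acquires the phase factor $e^{i\,2\Delta k_F x}$ averages toward zero over distances much longer than $1/(2\Delta k_F)$, while the same coupling at $\Delta k_F=0$ survives unreduced. The values of $\Delta k_F$ and the system length below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative parameters (not taken from the text):
Dk = 0.1     # Delta k_F
L = 200.0    # averaging length, with 2*Dk*L >> 1
x = np.linspace(0.0, L, 200001)

# Degenerate case (Dk = 0): the coupling contributes with full weight.
avg_degenerate = np.ones_like(x).mean()          # -> 1.0
# Lifted degeneracy: the oscillating phase washes the coupling out.
avg_lifted = np.cos(2.0 * Dk * x).mean()         # -> ~0

print(avg_degenerate, abs(avg_lifted))
```

The residual average is of order $1/(2\Delta k_F L)$, which is why the inter-orbital pairing terms, and the order parameters built from them, are either suppressed or left spatially modulated by $2\Delta k_F x$.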
Next we investigate how the lifted degeneracy affects the SDW ground states characterized by the order parameters $O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$, $O_{ph}^{43}(O_{ph}^{63})$, and $O_{ph}^{13}$, respectively.
The expressions for $O_{ph}^{43}(O_{ph}^{63})$ and $O_{ph}^{13}$ do not change as we turn on $\Delta k_F$, since the SDW instabilities in these states come from scattering from $\pm k_{F+1}$ to $\mp k_{F-1}$.
However, the order parameter $O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$, arising from intra-orbital scattering, say, from $\pm k_{F+1}$ to $\pm k_{F-1}$, will be suppressed and modulated by the phase factor $2\Delta k_F x$ too.
Thus, we expect that (1) for $0<J<U/3$, the SDW states will win out since the SSC state is suppressed;
(2) for $U/3<J<U/2$, the TSC state will survive and be modulated by a phase factor $2\Delta k_F x$, since the possible competing SDW order ($O_{ph}^{03}+\frac{\sqrt{3}}{2}O_{ph}^{83}$) will be suppressed too;
(3) for the unphysical region $J>U/2$, the SDW state will be modulated by a phase factor $2\Delta k_F x$. The new phase diagram is illustrated in Fig.~\ref{fig:phase_diagram2}.
\begin{figure}
\begin{center}
\includegraphics[width=8.4cm]{phase2.eps}
\caption{Phase diagram for the three-band Hubbard model with lifted degeneracy in $E^{\prime}$ orbitals, $k_{F+1}\neq k_{F-1}$.}
\label{fig:phase_diagram2}
\end{center}
\end{figure}
Finally, we would like to point out that these ordered states cannot survive in a single chain due to strong quantum fluctuations,
as stated by the Mermin-Wagner-Hohenberg theorem \cite{Mermin-Wagner,Hohenberg}. However, these instabilities are enhanced at low temperatures, so that small inter-chain couplings will stabilize these ordered states. Moreover, the inter-chain couplings will also determine the spatial pairing symmetry of the SC states. Work along this line is in progress.
In summary, we have studied a three-band Hubbard model at incommensurate filling with intra-orbital electron repulsion $U$, inter-orbital electron repulsion $U^{\prime}=U-2J$, and Hund's coupling $J>0$.
With the help of bosonization and RG, we find that the Tomonaga-Luttinger fixed point gives rise to the experimentally observed normal state at high temperature.
The ground state instability depends on the ratio $J/U$ and the degeneracy of $E^{\prime}$ bands. When the two $E^{\prime}$ bands are degenerate,
for $0<J<U/3$, the ground state is a spin-singlet SC state; for $U/3<J<U/2$, a spin-triplet SC state is favored;
an SDW state can be achieved in the parameter region $J>U/2$. However, when the two-fold degeneracy of the $E^{\prime}$ bands is lifted,
the phase diagram changes. In the physically relevant regions, an SDW state may dominate instead of the spin-singlet SC state when $0<J<U/3$; the spin-triplet SC state is still favored when $U/3<J<U/2$,
but the SC order parameter will be modulated by a spatially varying phase factor $2\Delta k_F x$. Our theoretical results support the existence of a spin-triplet SC state in K$_{2}$Cr$_{3}$As$_{3}$.
\section{Acknowledgments}
We would like to thank Guang-Han Cao, Chao Cao, Jian-Hui Dai, Xiao-Yong Feng for helpful discussions, and thank Jun-ichi Okamoto and A. J. Millis for the communications on the two-band situation.
This work is partially supported by National Basic Research Program of China
(No.2014CB921201/2014CB921203), National Key R\&D Program of the MOST of China (No.2016YFA0300202), NSFC (No.11374256/11274269), and the Fundamental Research Funds for the Central Universities in China.
\section*{Acknowledgments}
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2033 - 390677874 - RESOLV (C.B., U.B, K.M., J.T.) as well as EXC 2008/1-390540038 - UniSysCat (C.P. and P.S.). The DFG is furthermore acknowledged for funding within Project ID No. 278162697-SFB 1242 (J.T.). C.P. is grateful to the Alexander von Humboldt foundation for financial support within the Feodor Lynen program. We thank M. Meyer for experimental support, as well as M. Wolf and A. Rubio for fruitful discussions.
\section{INTRODUCTION}
Although emission lines
in the nuclei of galaxies were recognized at the beginning
of the twentieth century, a half century more would pass before
active galactic nuclei (AGN) became a focus of intense research effort.
The leisurely pace of optical discoveries
in the first half of the century gave way to the fierce competition
of radio work in the 1950s. The race has never let up. Today,
AGN are a focus of observational effort in every frequency band
from radio to gamma rays. Several of these bands
involve emission lines as well as continuum. AGN theory centers
on extreme gravity and black holes, among the most exotic concepts of
modern astrophysics. Ultrarelativistic particles, magnetic fields,
hydrodynamics, and radiative transfer all come into play. In
addition, AGN relate to the
question of galactic evolution in general. For most
of the time since the
recognition of quasar redshifts in 1963, these objects have reigned as
the most luminous and distant objects in the Universe.
Their use as probes of intervening
matter on cosmic scales adds a further dimension to the
importance of AGN.
For all these reasons, the enormous effort to describe and explain AGN
in all their variety and complexity is quite natural.
We are far from having a detailed and certain understanding of AGN.
However, the working hypothesis that they
involve at their core a supermassive black hole producing energy
by accretion of gas has little serious competition today. If this picture
is confirmed, then the past decade may be seen as a time when
AGN research shifted from guessing the nature of AGN
to trying to prove it.
Although the story is not finished, this seems a good time
to take stock of the progress that has been made. The
present short summary is intended to give students of
AGN an account of some of the key developments
in AGN research.
The goal is to bring the story to the point
where a contemporary review of
some aspect of AGN might begin its detailed discussion.
Thus, various threads typically are followed
to a significant point in the 1980s.
I have attempted to trace
the important developments without excessive technical detail,
relying on published sources, my own recollections,
and conversations with a number of researchers.
The focus is on the actual active nucleus. Fascinating aspects such as
intervening absorption lines, statistical surveys, and links to galactic
evolution receive relatively little discussion. The volume of
literature is such that only a tiny fraction of the important papers can be
cited.
\section{BEGINNINGS}
Early in the twentieth century, Fath (1909) undertook at Lick Observatory
a series of observations aimed at clarifying the
nature of the ``spiral nebulae''. A major question at the time was
whether spirals were relatively nearby, gaseous
objects similar to the Orion nebula, or very distant collections
of unresolved stars. Fath's goal was to test the claim that
spirals show a continuous spectrum consistent with a collection of
stars, rather than the bright line spectrum characteristic of
gaseous nebulae. He constructed a spectrograph
designed to record the spectra of faint objects, mounted it
on the 36-inch Crossley reflector, and guided the long
exposures necessary to obtain photographic spectra of these
objects. For most of his objects, Fath found a continuous spectrum
with stellar absorption lines,
suggestive of an unresolved collection of solar type stars.
However, in the
case of NGC 1068, he observed that the ``spectrum is composite,
showing both bright and absorption lines''. The six bright lines
were recognizable as ones seen in the spectra of gaseous
nebulae.
The bright and dark lines of NGC 1068 were confirmed by
Slipher (1917) with spectra taken in
1913 at Lowell Observatory. In 1917,
he obtained a spectrum with a narrow spectrograph slit, and found
that the emission lines were not images of the slit but rather
``small disks'', i.e., the emission was spread over a substantial
range of wavelengths. (However,
he rejected an
``ordinary radial velocity interpretation'' of the line widths.)
During the following years, several astronomers noted the
presence of nuclear emission lines in the spectra of
some spiral nebulae. For example, Hubble (1926)
mentioned that the relatively rare spirals with stellar
nuclei show a planetary nebula type
spectrum, notably NGC 1068, 4051, and 4151.
The systematic study
of galaxies with nuclear emission lines began with the
work of Seyfert (1943). Seyfert obtained spectrograms
of 6 galaxies with nearly stellar nuclei showing emission lines
superimposed on a normal G-type (solar-type) spectrum: NGC 1068, 1275, 3516,
4051, 4151, and 7469.
The two brightest (NGC 1068, 4151) showed
``all the stronger emission lines ... in planetary nebulae like
NGC 7027.'' Seyfert attributed the large widths of the lines to
Doppler shifts, reaching up to 8,500 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi\ for the hydrogen lines
of NGC 3516 and 7469. The emission-line profiles differed from
line to line and from object to object, but two patterns
were to prove typical of this class of galaxy. The
forbidden and permitted lines in NGC 1068 had roughly similar
profiles with widths of $\sim$3000 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi. In contrast, NGC 4151
showed relatively narrow forbidden lines, and corresponding
narrow cores of the permitted lines; but the hydrogen lines
had very broad
(7500 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi) wings that were absent from the profiles of the
forbidden lines. Seyfert contrasted these spectra with the narrow
emission lines
of the diffuse nebulae (H II regions) seen in irregular
galaxies and in the arms of spiral galaxies.
Galaxies with high excitation nuclear emission lines are
now called ``Seyfert galaxies''. However, Seyfert's paper
was not enough to launch the study of AGN as a major focus
of astronomers' efforts. The impetus for this came from a
new direction -- the development of radio astronomy.
Jansky (1932), working at the Bell Telephone Laboratories,
conducted a study of the sources of static affecting
trans-Atlantic radio communications. Using a rotatable
antenna and a short-wave receiver operating at a wavelength of
14.6 m, he systematically measured the intensity of the
static arriving from all directions throughout the day.
From these records, he identified three types of static: (1) static
from local thunderstorms, (2) static from distant thunderstorms,
and (3) ``a steady hiss type static of unknown origin''. The
latter seemed to be somehow associated with the sun (Jansky 1932). Continuing
his measurements throughout the year, Jansky (1933) observed
that the source of the static moved around in azimuth every
24 hours, and the time and direction of maximum changed gradually
throughout the year in a manner consistent with the earth's
orbital motion around the sun. He inferred that the radiation
was coming from the center of the Milky Way galaxy. After
further study of the data, Jansky (1935) concluded that
the radiation came from the entire disk of the Milky Way, being
strongest in the direction of the Galactic center.
Few professional astronomers took serious note of Jansky's work,
and it fell
to an engineer, working at home in his spare time, to advance
the subject of radio astronomy. Reber (1940a,b) built
a 31 foot reflector in his backyard
near Chicago.
He published
a map of the radio sky at 160 MHz showing several local
maxima, including one in the constellation Cygnus
that would prove important for AGN studies (Reber 1944). He also noted
that the ratio of radio radiation to optical light
was vastly larger for the Milky Way than the sun.
With the end of World War II, several groups of radio engineers
turned their efforts to the study of radio
astronomy. Notable among these were the groups at Cambridge
and Manchester in England and at CSIRO
in Australia. The study
of discrete sources began with the accidental discovery of
a small, fluctuating source in Cygnus by Hey, Parsons, and Phillips
(1946) in the course of a survey of the Milky Way at 60 MHz. With
their 6 degree beam, they set an upper limit of 2 degrees on the
angular diameter of the source. The intensity fluctuations,
occurring on a time scale of seconds, were proved a few years
later to originate in the earth's ionosphere; but at first they served
to suggest that the radiation ``could only originate from
a small number of discrete sources''. The discrete nature of
the Cygnus source was confirmed by Bolton and Stanley (1948),
who used a sea-cliff interferometer to set an upper limit
of 8 arcmin to the width of the source. These authors deduced a
brightness temperature of more than $4 \times 10^6$ K at 100 MHz and
concluded that a thermal origin of the noise was ``doubtful''.
Bolton (1948) published a catalog of 6 discrete sources
and introduced the nomenclature Cyg A, Cas A, etc.
Ryle and Smith (1948)
published results from
a radio interferometer at Cambridge analogous to the optical interferometer
used by Michelson at Mt. Wilson to measure stellar diameters.
Observing at 80 MHz, they set an upper limit of 6 arcmin
to the angular diameter of the source in Cygnus.
Optical identifications of discrete sources (other than the sun)
were finally achieved by Bolton, Stanley, and
Slee (1949).
Aided by more accurate positions from sea cliff observations,
they identified Taurus A with the Crab Nebula supernova
remnant (M 1); Virgo A with M 87, a large elliptical galaxy
with an optical jet; and Centaurus A with NGC 5128,
an elliptical galaxy with a prominent dust lane.
The partnership
of optical and radio astronomy was underway.
The early 1950s saw progress in radio surveys, position
determinations, and optical identifications.
A class of sources fairly uniformly distributed over the sky
was shown
by the survey by Ryle, Smith, and Elsmore (1950) based on observations
with the Cambridge interferometer.
Smith (1951) obtained accurate positions
of four discrete sources, Tau A, Vir A, Cyg A, and Cas A.
Smith's positions enabled Baade and Minkowski (1954)
to make optical identifications
of Cas A and Cyg A in 1951 and 1952.
At the position of
Cyg A, they found an object with a distorted morphology, which they
proposed was two galaxies in
collision. Baade and Minkowski found emission lines of
[Ne V], [O II], [Ne III], [O III], [O I], [N II], and H$\alpha$,
with widths of about 400 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi. The redshift of 16,830 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi\
implied a large distance, 31 Mpc, for the assumed
Hubble constant of $\ifmmode {\rm H_0} \else H$_0$\fi = 540~\kmpspmpc.$
The large distance
of Cyg A implied an enormous luminosity, $8 \times 10^{42}$ \ifmmode \rm erg~s^{-1} \else $\rm erg~s^{-1}$ \fi\
in the radio, larger than the optical
luminosity of $6 \times 10^{42}$ \ifmmode \rm erg~s^{-1} \else $\rm erg~s^{-1}$ \fi. (Of course, these values
are larger for a modern
value of \ifmmode {\rm H_0} \else H$_0$\fi.)
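The distance and the rescaling for a modern Hubble constant follow directly from the Hubble law $d = v/H_0$; the value $H_0 = 70~\kmpspmpc$ below is an illustrative modern choice, not a number from the text.

```python
# Worked numbers from Baade & Minkowski's Cyg A identification.
v = 16830.0          # recession velocity, km/s (from the text)
H0_1954 = 540.0      # km/s/Mpc, the value assumed in the text
H0_modern = 70.0     # km/s/Mpc, illustrative modern value (assumption)

d_1954 = v / H0_1954        # -> ~31 Mpc, as quoted
d_modern = v / H0_modern    # -> ~240 Mpc

# Luminosities scale as d^2, so the quoted radio luminosity grows by:
scale = (d_modern / d_1954) ** 2   # = (540/70)^2 ~ 60
print(d_1954, d_modern, scale)
```

This is why the quoted radio and optical luminosities become roughly sixty times larger for a modern value of $H_0$.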
This period also saw progress in the measurement of the structure
of radio sources.
Hanbury Brown, Jennison,
and Das Gupta (1952) reported results from the new
intensity interferometer
developed at Jodrell Bank, including a demonstration that
Cyg A was elongated, with dimensions roughly 2 arcmin
by 0.5 arcmin.
Interferometer measurements of Cyg A
by Jennison and Das Gupta (1952) showed two
equal components separated by 1.5 arcmin that straddled the
optical image, a puzzling morphology that proved to be common for
extragalactic radio sources.
Radio sources were categorized as
`Class I' sources, associated
with the plane of the Milky Way, and `Class II' sources, isotropically
distributed and possibly mostly extragalactic (e.g., Hanbury Brown 1959).
Some of the latter had very small angular sizes, encouraging
the view that many were ``radio stars'' in our Galaxy.
Morris, Palmer, and Thompson (1957) published upper limits
of 12 arcsec on the size of 3 class II sources, implying
brightness temperatures in excess of $2 \times 10^7$ K. They suggested
that these were extragalactic sources of the Cyg A type.
Theoretically, Whipple and Greenstein (1937)
attempted to explain the Galactic radio background measured
by Jansky in terms of thermal emission by interstellar dust,
but the expected dust temperatures were far too low
to give the observed radio brightness. Reber
(1940a) considered free-free emission by ionized gas in
the interstellar medium. This process was considered more
accurately by Henyey and Keenan (1940) and Townes (1947), who
realized that Jansky's brightness temperature of $\sim 10^5~K$
could not be reconciled with thermal emission from interstellar
gas believed to have a temperature $\sim 10,000\ K$.
Alfv\'en and Herlofson (1950) proposed
that ``radio stars'' involve cosmic ray electrons in a magnetic
field emitting by the synchrotron process.
This quickly led Kiepenheuer (1950) to
explain the Galactic radio background in terms of synchrotron
emission by cosmic rays in the general Galactic magnetic field.
He showed order-of-magnitude agreement between the observed
and predicted intensities, supported by a more careful
calculation by Ginzburg (1951).
The synchrotron explanation became accepted for extragalactic
discrete sources by the end of the 1950's. The theory
indicated enormous energies, up to
$\sim 10^{60}$ ergs for the ``double lobed'' radio galaxies
(Burbidge 1959). The confinement of the plasma in these lobes
would later be attributed to ram pressure as the material
tried to expand into the intergalactic medium
(De Young and Axford 1967). A mechanism for production of
bipolar flows to power the lobes was given by the
``twin exhaust model'' of Blandford and Rees (1974).
The third Cambridge (3C) survey at 159 MHz
(Edge ~et al.\ 1959) was followed by the revised 3C survey
at 178 MHz (Bennett 1962). Care was taken to
to minimize the confusion problems
of earlier surveys, and many radio sources
came to be known by their 3C numbers.
These and the
surveys that soon followed provided many
accurate radio positions
as the search for
optical identifications accelerated.
(AGN were also discovered in optical searches based
on morphological ``compactness'' [Zwicky 1964] and strong
ultraviolet continuum [Markarian 1967] and later infrared
and X-ray surveys.)
Source counts as a function of
flux density (``log N -- log S'') showed
a steeper increase in numbers with decreasing flux density
than expected for a homogeneous, nonevolving universe with Euclidean
geometry (e.g., Mills, Slee, and Hill 1958;
Scott and Ryle 1961). This was used to argue against the
``steady state'' cosmology (Ryle and Clark 1961),
although some disputed such
a conclusion (e.g., Hoyle and Narlikar 1961).
\section{THE DISCOVERY OF QUASARS}
Minkowski's
studies of radio galaxies culminated with
identification of 3C 295 with a member of a cluster
of galaxies at the unprecedented
redshift of 0.46 (Minkowski 1960).
Allan Sandage of the Mt. Wilson
and Palomar Observatories and Maarten Schmidt of
the California Institute of Technology (Caltech) then took
up the quest for optical identifications and redshifts
of radio galaxies. Both worked with Thomas A. Matthews, who obtained
accurate radio positions with the new interferometer at the
Owens Valley Radio Observatory operated by Caltech.
In 1960, Sandage obtained a
photograph of 3C 48 showing a $16^m$ stellar object with
a faint nebulosity. The spectrum of
the object showed broad emission lines at unfamiliar wavelengths,
and photometry showed the object to be variable
and to have an excess of ultraviolet emission compared with
normal stars. Several other apparently star-like images
coincident with radio sources were found to show strange,
broad emission lines. Such objects
came to be known as quasi-stellar radio sources (QSRS),
quasi-stellar sources (QSS), or quasars. Sandage reported
the work on 3C 48 in an unscheduled paper in the December, 1960,
meeting of the AAS (summarized by the editors of
{\it Sky and Telescope} [Matthews et al. 1961]). There was a
``remote possibility that it may be a distant galaxy of stars''
but ``general agreement'' that it was ``a relatively nearby star
with most peculiar properties.''
The breakthrough came on February 5, 1963,
as Schmidt was pondering the spectrum
of the quasar 3C 273. An accurate position had been obtained
in August, 1962 by Hazard, Mackey, and Shimmins (1963), who used the
210 foot antenna at the Parkes station in Australia to
observe a lunar occultation of 3C 273. From the precise time
and manner in which the source disappeared and reappeared,
they determined that the source had two components.
3C 273A had a fairly typical class II radio
spectrum, $F_{\nu} \sim
\nu^{-0.9}$; and it was separated by 20 seconds of arc from
component `B', which had a size less than 0.5 arcsec and
a ``most unusual'' spectrum, $f_{\nu} \sim \nu^{0.0}$.
Radio positions B and A, respectively,
coincided with those of a 13$^m$ star like
object and with a faint wisp or jet pointing away from
the star. At first suspecting
the stellar object to be a foreground star, Schmidt
obtained spectra of it at
the 200-inch telescope in late December, 1962.
The spectrum showed broad emission lines at unfamiliar wavelengths,
different from those of 3C 48.
Clearly, the object was no ordinary star.
Schmidt noticed that four emission lines
in the optical spectrum showed a pattern of decreasing
strength and spacing toward the blue, reminiscent of the
Balmer series of hydrogen. He found that the four lines
agreed with the expected wavelengths of H$\beta$, H$\gamma$,
H$\delta$, and H$\epsilon$\ with a redshift of z = 0.16. This redshift
in turn allowed him to identify a line in the ultraviolet part
of the spectrum with Mg II $\lambda$ 2798.
Schmidt consulted with his colleagues, Jesse L. Greenstein and
J. B. Oke.
Oke had obtained
photoelectric spectrophotometry of 3C 273 at the 100-inch telescope,
which revealed an emission-line in the infrared at $\lambda$ 7600. With
the proposed redshift, this feature
agreed with the expected wavelength of H$\alpha$.
Greenstein's spectrum of 3C 48 could likewise be interpreted with a redshift of z = 0.37,
supported by the presence of Mg II in both objects.
The riddle of the spectrum of quasars was solved.
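Schmidt's identification can be checked arithmetically: the observed wavelengths are the laboratory wavelengths times $(1+z)$. The sketch below uses $z = 0.158$ (the precise value for 3C 273; the text rounds it to 0.16) and standard laboratory wavelengths in angstroms.

```python
# Verify the Balmer-series identification of the 3C 273 emission lines.
z = 0.158  # precise redshift of 3C 273 (text rounds to 0.16)
rest = {  # laboratory wavelengths, angstroms
    "Hbeta": 4861.3, "Hgamma": 4340.5, "Hdelta": 4101.7,
    "Hepsilon": 3970.1, "MgII": 2798.0, "Halpha": 6562.8,
}

observed = {name: lam * (1.0 + z) for name, lam in rest.items()}

# H-alpha lands at ~7600 A, matching the infrared feature in Oke's
# spectrophotometry; Mg II 2798 shifts into the optical near 3240 A.
print(round(observed["Halpha"]))   # -> 7600
```

The redshifted H$\alpha$ wavelength reproduces the $\lambda$7600 feature exactly, which is the consistency check that clinched the interpretation.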
These results were published in {\it Nature} six weeks later
in adjoining papers
by Hazard et al. (1963); Schmidt (1963); Oke (1963); and Greenstein
and Matthews (1963). The objects might be galactic stars with a very
high density, giving a large
gravitational redshift. However, this explanation was
difficult to reconcile with the widths of the
emission lines and the presence of forbidden
lines. The ``most direct and least objectionable''
explanation was that the objects were extragalactic,
with redshifts reflecting the Hubble expansion. The redshifts
were large but not unprecedented;
that of 3C 48 was second only to that of 3C 295.
The radio
luminosities of the two quasars were comparable with those
of Cyg A and 3C 295. However, the optical luminosities
were staggering,
``10 - 30 times brighter than the brightest
giant ellipticals''; and the radio surface brightness
was larger than for the radio galaxies. The redshift of 3C 273
implied a velocity of 47,400 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi\ and a distance of
about 500 Mpc (for $\ifmmode {\rm H_0} \else H$_0$\fi~\approx 100~\kmpspmpc$). The nuclear
region would then be less than 1 kpc in diameter. The jet would be
about 50 kpc away, implying a timescale greater than $10^5$ years
and a total energy radiated of at least $10^{59}$ ergs.
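The quoted velocity, distance, and jet timescale follow from simple estimates: $v \approx cz$ (adequate at this redshift), $d = v/H_0$, and a minimum jet age set by the light-travel time over 50 kpc. Again $z = 0.158$ is the precise redshift behind the rounded 0.16 in the text.

```python
# Back-of-envelope check of the 3C 273 numbers quoted in the text.
c = 2.998e5          # speed of light, km/s
z = 0.158            # precise redshift (text rounds to 0.16)
H0 = 100.0           # km/s/Mpc, as assumed in the text

v = c * z            # -> ~47,400 km/s, non-relativistic approximation
d = v / H0           # -> ~474 Mpc, i.e. "about 500 Mpc"

# Jet 50 kpc from the nucleus -> minimum (light-travel) timescale:
pc_in_ly = 3.2616                  # light-years per parsec
t_jet_yr = 50.0e3 * pc_in_ly       # -> ~1.6e5 yr, i.e. > 1e5 yr
print(round(v), round(d), round(t_jet_yr))
```

All three quoted figures come out of these one-line estimates, which is why the $10^5$-year timescale is a lower bound: the jet material cannot have traveled faster than light.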
Before the redshift of 3C 273 was announced,
Matthews and Sandage (1963) had submitted a paper
identifying 3C 48,
3C 196 and 3C 286 with stellar optical objects.
They explored the popular notion that these
objects were some kind of Galactic star, arguing
from their isotropic distribution on the sky and
lack of observed proper motion that the most
likely distance from the sun was about 100 pc.
The objects had peculiar colors, and 3C 48 showed
light variations of 0.4 mag. In a section added following the
discovery of the redshifts of
3C 273 and 3C 48, they pointed out that the size limit of $\le$0.15
pc implied by the optical light variations was important in the
context of the huge distance and luminosity implied by
taking the redshift to result from the Hubble expansion.
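The size limit is the standard causality argument: coherent variations on a timescale $\Delta t$ bound the emitting region to $R \lesssim c\,\Delta t$. The half-year timescale below is an assumption chosen to reproduce the 0.15 pc limit; the text does not state the timescale explicitly.

```python
# Light-travel-time size limit R <~ c * dt for a variable source.
pc_in_ly = 3.2616            # light-years per parsec
dt_yr = 0.49                 # ASSUMED variability timescale, years
R_pc = dt_yr / pc_in_ly      # c*dt in light-years, converted to parsecs
print(round(R_pc, 2))        # -> ~0.15 pc
```

A sub-parsec emitting region combined with a quasar luminosity is exactly the tension that made the cosmological interpretation so startling.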
A detailed analysis of 3C48 and 3C 273 was published
by Greenstein and Schmidt (1964). They considered
explanations of the redshift involving (1) rapid motion
of objects in or near the Milky Way, (2) gravitational
redshifts, and (3) cosmological redshifts. If 3C 273
had a transverse velocity comparable with the radial
velocity implied by its redshift, the lack of an observed
proper motion implied a distance of at least 10 Mpc
(well beyond the nearest galaxies).
The corresponding absolute magnitude was
closer to the luminosity of galaxies than stars.
The four quasars with known velocities were all receding;
and accelerating a massive, luminous
object to an appreciable fraction of the speed of light
seemed difficult. Regarding gravitational redshifts,
Greenstein and Schmidt argued that the widths of the
emission lines required the line emitting gas to be
confined to a small fractional radius around the massive
object producing the redshift. The observed symmetry
of the line profiles seemed unnatural in a gravitational
redshift model. For a 1~\ifmmode {\rm M_\odot} \else M$_\odot$\fi\ object,
the observed H$\beta$\ flux
implied an electron density
$N_e \approx 10^{19}$ cm$^{-3}$, incompatible with the observed
presence of forbidden lines in the spectrum. The emission-line
constraint, together with a requirement that the massive
object not disturb stellar orbits in the Galaxy, required
a mass $\ge 10^9$ \ifmmode {\rm M_\odot} \else M$_\odot$\fi. The stability of such a
``supermassive star'' seemed doubtful in the light of theoretical
work by Hoyle and Fowler (1963a), who had examined such objects
as possible sources for the energy requirements of extragalactic
radio sources. Adopting the cosmological explanation of the
redshift, Greenstein and Schmidt derived radii for
a uniform spherical emission-line region of 11 and 1.2 pc for 3C 48 and
3C 273, respectively. This was based on the H$\beta$\ luminosities
and electron densities estimated from the H$\beta$, [O II], and [O
III] line ratios. Invoking light travel time constraints based on
the observed optical variability (Matthews and Sandage 1963; Smith
and Hoffleit 1963),
they proposed a model
in which a central source of optical continuum was surrounded by
the emission-line region, and a still larger radio emitting region.
They suggested that a central mass of order $10^9$ \ifmmode {\rm M_\odot} \else M$_\odot$\fi\
might provide adequate energy for the lifetime of $\ge 10^6$ yr
implied by the jet of 3C 273 and the nebulosity of 3C 48.
This mass was about right to confine the line emitting gas,
which would disperse quickly if it expanded at the observed
speeds of 1000 \ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi\ or more. Noting that such a mass would
correspond to a Schwarzschild radius of $\sim 10^{-4}$ pc,
they observed that ``It would be important to know whether
continued energy and mass input from such a `collapsed'
region are possible''. Finally, they noted that there
could be galaxies around 3C 48 and 3C 273
hidden by the glare of the nucleus. Many features of this
analysis are recognizable in current thinking about AGN.
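The Schwarzschild radius quoted by Greenstein and Schmidt is easy to verify from $R_s = 2GM/c^2$; a short check with standard values of the constants:

```python
# Check of the Schwarzschild radius for the 10^9 solar mass
# central object considered by Greenstein and Schmidt.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

M = 1e9 * M_SUN
r_s = 2 * G * M / C**2              # Schwarzschild radius in metres
print(f"R_s = {r_s / PC:.1e} pc")   # ~1e-4 pc, as in the text
```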
The third and fourth quasar redshifts were published by
Schmidt and Matthews (1964), who found
z = 0.425 and 0.545 for
3C 47 and 3C 147, respectively.
Schmidt (1965) published redshifts for 5 more quasars.
For 3C 254, a redshift z = 0.734, based
on several familiar lines, allowed the identification
of C III] $\lambda$ 1909 for the first time. This in turn allowed
the determination of redshifts of 1.029 and 1.037 from
$\lambda$ 1909 and $\lambda$ 2798 in 3C 245 and CTA 102,
respectively. (CTA is a radio source list from
the Caltech radio observatory.) For
3C 287, a redshift of 1.055 was found from $\lambda$ 1909,
$\lambda$ 2798, and another first, C IV $\lambda$ 1550.
Finally, a dramatically higher redshift of 2.012 was
determined for 3C 9 on the basis of $\lambda$ 1550 and the
first detection of the Lyman $\alpha$ line
of hydrogen at $\lambda$ 1215.
The redshifts were large enough
that the absolute luminosities depended significantly
on the cosmological model used.
Sandage (1965) reported the discovery of a large population
of radio quiet objects that otherwise appeared to resemble quasars.
Matthews and Sandage (1963) had found that quasars
showed an ``ultraviolet excess'' when compared with
normal stars on a color-color (U-B, B-V)
diagram. This led to a search technique in which
exposures in U and B were recorded on the same photographic
plate, with a slight positional offset, allowing rapid
identification of objects with strong ultraviolet continua.
Sandage noticed a number of such objects that did not
coincide with known radio sources. These he called ``interlopers'',
``blue stellar objects'' (BSO),
or ``quasi-stellar galaxies'' (QSG).\footnote[1]{
Here we adopt the now common practice of using the
term ``quasi-stellar object'' (QSO) to refer to
these objects regardless of radio luminosity
(Burbidge and Burbidge 1967).}
Sandage found that at magnitudes fainter than 15,
the UV excess objects
populated the region occupied by quasars on the color-color
diagram, whereas brighter objects typically
had the colors of main sequence
stars. The number counts of the BSOs as a function of apparent
magnitude also showed a change of slope at $\sim 15^m$,
consistent with an extragalactic population of objects at
large redshift. Spectra showed that many of these objects
indeed had spectra with large redshifts, including
z = 1.241 for BSO 1. Sandage estimated that
the QSGs outnumbered the radio loud quasars by a factor $\sim 500$,
but this was reduced by later work (e.g., Kinman 1965;
Lynds and Villere 1965).
The large redshifts of QSOs immediately made them potential tools
for the study of cosmological questions.
The rough similarity of the emission-line strengths of QSOs to
those observed, or theoretically predicted, for planetary nebulae
suggested that the chemical abundances were
roughly similar to those in our Galaxy (Shklovskii 1964; Osterbrock
and Parker 1966). Thus these objects, suspected by many astronomers
to lie in the nuclei of distant galaxies, had reached fairly
``normal'' chemical compositions when the Universe was considerably
younger than today.
The cosmological importance of redshifts high enough
to make
L$\alpha$\ visible was quickly recognized. Hydrogen gas in intergalactic
space would remove light from the quasar's spectrum at
the local cosmological redshift, and continuously distributed
gas would erase a wide band of continuum to the short wavelength
side of the L$\alpha$\ emission line (Gunn and
Peterson 1965; Scheuer 1965). Gunn and Peterson set a tight
upper limit to the amount of neutral hydrogen in intergalactic
space, far less than the amount that would significantly retard the
expansion of the Universe.
The study of discrete absorption features in quasar spectra
also began to develop. An unidentified sharp line was observed
in the spectrum of 3C 48 by Greenstein and Schmidt (1964). Sandage
(1965) found that the $\lambda$ 1550 emission line of BSO 1 was ``bisected
by a sharp absorption feature''.
The first quasar found with a rich absorption spectrum was 3C 191
(Burbidge, Lynds, and Burbidge 1966; Stockton and Lynds 1966).
More than a dozen sharp lines were identified, including L$\alpha$\
and lines of C II, III, and IV and Si II, III, and IV.
A rich set of narrow absorption lines was also observed
in the spectrum of PKS 0237-23, whose emission-line
redshift, z = 2.223, set a record at the time. Arp, Bolton, and
Kinman (1967) and Burbidge (1967a) respectively proposed
absorption line redshifts of z = 2.20 and 1.95 for this object, but
each value left many lines without satisfactory identifications.
It turned out that
both redshifts were present (Greenstein and Schmidt 1967).
All these absorption systems had
z$_{abs}$ $<$ z$_{em}$. They could be interpreted
as intervening clouds imposing absorption spectra at the appropriate
cosmological redshift, as had been anticipated theoretically
(Bahcall and Salpeter 1965). Alternatively,
they might represent material expelled from the quasar,
whose outflow velocity is subtracted from the cosmological
velocity of the QSO. However, PKS 0119-04 was found to have
z$_{abs}$ $>$ z$_{em}$, implying material that was in some sense falling
into the QSO from the near side with a relative velocity of 10$^3$
\ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi\ (Kinman and Burbidge 1967). Today, a large fraction
of the narrow absorption lines with z$_{abs}$\ substantially
less than z$_{em}$\ are believed to result from intervening
material. This includes the so-called ``Lyman alpha forest''
of closely spaced, narrow L$\alpha$\ lines that punctuate the
continuum to the short wavelength side of the L$\alpha$\ emission line,
especially in high redshift QSOs. The study of intervening
galaxies and gas clouds by means of absorption lines in the
spectra of background QSOs is now a major branch of astrophysics.
A different kind of absorption
was discovered in the spectrum of PHL 5200 by Lynds (1967).
This object showed broad absorption bands on the short wavelength
sides of the L$\alpha$, N V $\lambda$ 1240, and C IV $\lambda$ 1550 emission
lines, with a sharp boundary between the emission and absorption.
Lynds interpreted this in terms of an expanding shell of gas around
the central object. Seen in about 10 percent of radio quiet QSOs (Weymann
~et al.\ 1991), these broad absorption lines (BALs) are among the many
dramatic but poorly understood aspects of AGN.
The huge luminosity of QSOs, rapid
variability, and implied small size caused some astronomers to
question the cosmological nature of the redshifts.
Terrell (1964) considered the possibility that the objects were
ejected from the center of our galaxy. Upper limits on the proper
motion of 3C 273, together with a Doppler interpretation of the
redshift, then implied a distance of at least 0.3 Mpc and an age at
least 5 million years. Arp (1966), pointing to close pairs of
peculiar galaxies and QSOs on the sky, argued for noncosmological
redshifts that might result from ejection from the peculiar galaxies
at high speeds or an unknown cause. Setti and Woltjer (1966) noted
that ejection from the Galactic center would imply for the QSO
population an explosion with energy at least $10^{60}$ ergs, and
more if ejected from nearby radio galaxies such as Cen A
as suggested by Hoyle and Burbidge (1966).
Furthermore, Doppler boosting would cause us to see more blueshifts
than redshifts if the objects were ejected from nearby galaxies
(Faulkner, Gunn, and Peterson 1966). Further evidence for
cosmological redshifts was provided by Gunn (1971), who showed
that two clusters of galaxies containing QSOs had the same
redshifts as the QSOs. Also, Kristian (1973) showed that the
``fuzz'' surrounding the quasistellar image of a sample of QSOs was
consistent with the presence of a host galaxy.
\section{CHARTING THE TERRAIN}
At this stage, a number of properties of AGN were recognized.
Most astronomers
accepted the cosmological redshift of QSOs,
and the parallel between Seyfert galaxies and
QSOs suggested a common physical phenomenon.
Questions included the nature of the energy source,
the nature of the continuum source and emission-line regions,
and the factors that produce an AGN in some galaxies and not others.
\subsection{Emission Lines}
The basic parameters of the region of gas emitting the narrow emission
lines were fairly quickly established.
In one of the first physical analyses of ``emission nuclei'' in galaxies,
Woltjer (1959) derived a density $\rm N_e \approx 10^4~\ifmmode \rm cm^{-3} \else $\rm cm^{-3}$\fi$
and temperature $T \approx 20,000$~K from
the [S II] and [O III] line ratios of Seyfert galaxies.
The region emitting the narrow lines was just resolved
for the nearest Seyfert galaxies,
giving a diameter of order 100 pc (e.g., Walker 1968; Oke and Sargent 1968).
Oke and Sargent derived a mass of $\sim 10^5~\ifmmode {\rm M_\odot} \else M$_\odot$\fi$
and a small volume filling factor
for the narrow line gas in NGC 4151.
Burbidge, Burbidge, and Prendergast (1958) found
that the nuclear emission lines
of NGC 1068 were much broader than could be accounted for by the rotation
curve of the galaxy, and concluded that the material was in a state
of expansion.
A key question was why, in objects showing broad wings,
these were seen on the
permitted lines but not the forbidden lines.
(Seyfert galaxies
with broad wings
came to be called ``Seyfert 1'' or ``Sy 1'' and those without them ``Sy 2''
[Khachikian and Weedman 1974].)
Were these
wings emitted by the same gas that emits the narrow lines? Woltjer (1959)
postulated a separate region of fast moving, possibly gravitationally bound
gas to produce the broad Balmer line wings of Seyfert galaxies.
Souffrin (1969a) adopted such a model in her analysis of NGC 3516 and NGC
4151. Alternatively, broad Balmer line wings might
be produced by electron scattering (Burbidge ~et al.\
1966). Oke and Sargent (1968) supported this possibility for NGC 4151.
Their analysis of the emission-line region gave an electron scattering optical
depth $\tau_e \sim 0.1$. Multiple scattering of Balmer line photons
by the line opacity might increase the effective
electron scattering probability,
explaining the presence of wings only on the permitted lines.
However, analysis of
electron scattering profiles by other authors (e.g.,
Weymann 1970) indicated the need for a dense region only a tiny fraction
of a light year across. Favoring mass motions were
the irregular broad
line profiles in some objects (Anderson 1971),
which demonstrated the presence of bulk velocities of the needed
magnitude.
In addition, Shklovskii (1964) had argued for an electron scattering optical
depth $\tau_{es} < 1$ in 3C 273 to avoid excessive smoothing of the continuum
light variations. The picture of broad lines from
a small region of dense,
fast moving clouds (``Broad Line Region'' or BLR) and narrow lines
from a larger region of slower moving, less dense clouds (``Narrow
Line Region'' or NLR) found support from photoionization
models (Shields 1974).
Early workers (e.g.,
Seyfert 1943) had noted that the narrow line intensities
resembled those of planetary nebulae,
and photoionization was an obvious candidate
for the energy input to the emitting gas
for both the broad and narrow lines.
For 3C 273, Shklovskii (1964) noted that the kinetic energy of the
emission-line gas could power the line emission only for a very short time, whereas
the extrapolated power in ionizing ultraviolet radiation was in rough
agreement with the emission line luminosities. Osterbrock and Parker
(1965) argued against photoionization because of the observed weakness of the
Bowen O III fluorescence lines. Also eliminating thermal collisional
ionization because of the observed wide range of ionization stages, they
proposed ionization and heating by fast protons resulting from high velocity
cloud collisions. Souffrin (1969b) rejected this on the basis of
thermal equilibrium considerations,
and argued along with Williams and Weymann (1968)
that thermal collisional ionization was inconsistent with
observed temperatures.
Noting that an optical-ultraviolet continuum of roughly the needed power is
observed, and that the thermal equilibrium gives roughly the observed
temperature, Souffrin concluded that
a nonthermal ultraviolet continuum was
``the only important source of ionization''.
Searle and Sargent (1968) likewise noted that the
equivalent widths of the broad
H$\beta$\ emission lines were similar among AGN over a wide range of luminosity
and were consistent with an extrapolation of the observed ``nonthermal''
continuum as a power law to ionizing frequencies.
Detailed models of gas clouds photoionized by a power-law continuum were
calculated with the aid of electronic computers, with application to the Crab
nebula, binary X-ray sources, and AGN (Williams 1967; Tarter and
Salpeter 1969; Davidson 1972; MacAlpine 1972).
Such models showed that photoionization can account
for the intensities of the strongest optical
and ultraviolet emission lines.
In particular, the penetrating high frequency photons can
explain the simultaneous presence of
very high ionization stages and strong emission
from low ionization stages, in the context of a ``nebula'' that is optically
thick to the ionizing continuum. Photoionization quickly became accepted as
the main source of heating and ionization in the emission-line gas.
Attention then focussed on improving photoionization models and understanding
the geometry and dynamics of the gas emitting the broad lines. It was clear
that the emitting gas had only a tiny volume filling factor, and one
possible geometry was the traditional nebular picture of clouds or
``filaments'' scattered through the BLR volume.
Photoionization models typically assumed a slab geometry representing the
ionized face of a cloud that was optically thick to the Lyman continuum.
Model parameters included the density and chemical composition of the gas and
the intensity and energy distribution of the incident ionizing continuum.
Various line ratios, such as C III]/C IV, were used to constrain the
``ionization parameter'', i.e., the ratio of ionizing photon density to gas
density. Chemical abundances were assumed to be
approximately solar but were hard to determine because the high densities
prevented a direct measurement of the electron temperature from available
line ratios.
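Quantitatively, the ionization parameter referred to above is usually defined (the exact normalization varies between authors) as the ratio of ionizing photon density to hydrogen density at the illuminated face of the cloud:
\begin{equation}
U = \frac{Q({\rm H})}{4\pi r^{2} c\, n_{\rm H}},
\end{equation}
where $Q({\rm H})$ is the emission rate of hydrogen-ionizing photons from the central source and $r$ is the distance from the source to the cloud.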
A challenge for photoionization models was the discovery that the
L$\alpha$/H$\alpha$\ ratio was an order of magnitude smaller than the value $\sim
50$ predicted by photoionization models at the time (Baldwin 1977a; Davidsen,
Hartig, and Fastie 1977). This stimulated models
with an improved treatment of radiative transfer in optically thick hydrogen
lines (e.g., Kwan and Krolik 1979). These models found strong Balmer line
emission from a ``partially ionized zone'' deep in the cloud, heated by
penetrating X-rays, from which Lyman line emission was unable to escape.
The models still did not do a perfect job of explaining the observed
ratios (e.g., Lacy ~et al.\
1982) of the Paschen, Balmer, and Lyman lines. Models by
Collin-Souffrin, Dumont, and Tully (1982) and Wills, Netzer, and Wills (1985)
suggested the need for densities as high as $N_e \approx 10^{11}~\ifmmode \rm cm^{-3} \else $\rm cm^{-3}$\fi$ to
explain the H$\alpha$/H$\beta$\ ratio.
The X-ray heated region also was important for the formation of the strong Fe
II multiplet blends observed in the optical and ultraviolet. Theoretical
efforts by several authors culminated in models involving thousands of Fe
lines, with allowance for the fluorescent interlocking of different lines
(Wills ~et al.\ 1985). These models enjoyed some success in
explaining the relative line intensities, but the total energy in the Fe II
emission was less than observed. Although some of this discrepancy might
involve the iron abundance, Collin-Souffrin ~et al.\ (1980) proposed a
separate Fe II emitting region with a high density ($N_e \approx
10^{11}~\ifmmode \rm cm^{-3} \else $\rm cm^{-3}$\fi$) heated by some means other than photoionization. This
region might be associated with an accretion disk. The Fe II emission and
the Balmer continuum emission that combined to form the 3000~\AA\ ``little
bump'' still are not fully explained, nor is the tendency for radio loud AGN
to have weaker Fe II and steeper Balmer decrements than radio quiet objects
(Osterbrock 1977).
A tendency for the equivalent width of the C IV emission line to decrease
with increasing luminosity was found by Baldwin (1977b). Explanations of
this involved a possible decrease, with increasing luminosity, in the
ionization parameter and in the ``covering factor'', i.e., the fraction
($\Omega/4\pi$) of the ionizing continuum intercepted by the BLR gas
(Mushotzky and Ferland 1984). The ionization parameter was also the
leading candidate to explain the difference in ionization level between
classical Seyfert galaxies and the ``low ionization nuclear emission regions''
or ``LINERs'' (Heckman 1980; Ferland and Netzer 1983; Halpern and Steiner
1983).
The geometry and state of motion of the BLR
gas has been a surprisingly stubborn problem. If the BLR was a swarm of
clouds, they might be falling in (possibly related to the accretion supply),
orbiting, or flying out. Alternatively, the gas might be associated with an
accretion disk irradiated by the ionizing continuum (e.g., Shields 1977;
Collin-Souffrin 1987). Except for the BAL QSOs,
there was little evidence for blueshifted absorption analogous to the P Cygni
type line profiles of stars undergoing vigorous mass loss. The approximate
symmetry of optically thick lines such as L$\alpha$\ and
H$\alpha$\ suggested that the motion was circular or random rather than
predominantly radial (e.g., Ferland, Netzer, and Shields 1979). However,
for orbiting (or infalling) gas, the line widths implied rather large masses
for the central object, given prevailing estimates of the BLR radius. In
addition, gas in Keplerian orbit seemed likely to give a double peaked line
profile or to have other problems (Shields 1978a). In the face of these
conflicting indications, the most common assumption was that the gas took the
form of clouds flying outward from the central object. The individual clouds
would disperse quickly unless confined by some intercloud medium, and a
possible physical model was provided by the two-phase medium discussed by
Krolik, McKee, and Tarter (1981). Radiation pressure of the ionizing
continuum, acting on the bound-free opacity of the gas, seemed capable of
producing the observed velocities and giving a natural explanation of the
``logarithmic'' shape of the observed line profiles (Mathews 1974;
Blumenthal and Mathews 1975). Interpretation of the line profiles was
complicated by the recognition of systematic offsets in velocity between the
high and low ionization lines (Gaskell 1982; Wilkes and Carswell 1982; Wilkes
1984).
A powerful new tool was provided by the use of
``echo mapping'' or ``reverberation mapping'' of the BLR. Echo
mapping relies on the time delays between the continuum and line variations
caused by the light travel time across the BLR (Blandford and McKee
1982). Early results showed that the BLR is smaller and denser than most
photoionization models had indicated (Ulrich ~et al.\ 1984; Peterson ~et al.\
1985). Masses of the central object,
by this time assumed to be a black hole, could be derived with increased
confidence. The smaller radii implied smaller masses that seemed
reasonable in the light of other considerations, and the idea of
gravitational motions for the BLR gained in popularity. This was supported
by the rough tendency of the line profiles to vary symmetrically,
consistent with ``chaotic'' or circular motions (e.g., Ulrich
~et al.\ 1984).
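The logic of echo mapping can be reduced to two relations: the lag $\tau$ between continuum and line variations fixes the radius, $r = c\tau$, and the line width then yields a virial mass, $M \approx v^2 r / G$. A sketch with purely illustrative round numbers (the 20-day lag and 3000 km/s velocity width below are assumptions for illustration, not measurements from the text):

```python
# Illustrative echo-mapping mass estimate: r = c*tau, M ~ v^2 * r / G.
# The lag and velocity width are hypothetical round numbers.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
PC = 3.086e16          # parsec, m
DAY = 86400.0          # seconds per day

tau = 20 * DAY         # assumed continuum-to-line lag
v = 3.0e6              # assumed BLR velocity width, m/s (3000 km/s)

r = C * tau            # BLR radius (~0.02 pc)
m = v**2 * r / G       # virial mass of the central object, kg

print(f"r = {r / PC:.3f} pc")
print(f"M = {m / M_SUN:.1e} M_sun")
```

Numbers of this order, a light-weeks-scale BLR and a central mass of a few $\times 10^7~\rm M_\odot$, are what made gravitational motions for the BLR seem plausible.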
\subsection{Energy Source}
The question of the ultimate energy source for AGN stimulated creativity even
before the discovery of QSO redshifts. The early concept of radio galaxies
as galaxies in collision gave way to the recognition of galactic nuclei as
the sites of concentrated, violent activity. Burbidge (1961) suggested
that a chain reaction of supernovae (SN) could occur in a dense star cluster
in a galactic nucleus. Shock waves from one SN would compress neighboring
stars, triggering them to explode in turn. Cameron (1962) considered
a coeval star cluster leading to a rapid succession of SN as the massive stars
finished their short lives. Spitzer and Saslaw (1966),
building on earlier suggestions, developed another model involving a dense star
cluster. The cluster core would evolve to higher star densities through
gravitational ``evaporation'', and this would lead to frequent stellar
collisions and tidal encounters, liberating large amounts of gas.
Additional ideas involving dense star clusters included pulsar
swarms (Arons, Kulsrud, and Ostriker 1975) and starburst models
(Terlevich and Melnick 1985).
Hoyle and Fowler (1963a,b) discussed the idea of a supermassive
star (up to $\sim10^8~\ifmmode {\rm M_\odot} \else M$_\odot$\fi$) as a source of gravitational and thermonuclear
energy. In addition to producing large amounts of energy per unit mass,
all these models seemed capable of accelerating particles to relativistic
energies and producing gas clouds ejected at speeds of
$\sim 5000~\ifmmode \rm km~s^{-1}\else $\rm km~s^{-1}$\fi$, suggestive of the broad emission-line wings of Seyfert
galaxies. In this
regard, Hoyle and Fowler (1963a) suggested that ``a magnetic
field could be wound toroidally between the central star and a surrounding
disk.'' The field could store a large amount of energy, leading to powerful
``explosions'' and jets like that of M87. Hoyle and Fowler (1963b) suggested
that ``only through the contraction of a mass of $10^7 - 10^8~\ifmmode {\rm M_\odot} \else M$_\odot$\fi$ to the
relativistic limit can the energies of the strongest sources be obtained.''
Soon after, Salpeter (1964) and
Zeldovich (1964) proposed the idea of QSO energy production from
accretion onto a supermassive black hole. For material gradually spiraling to
the innermost stable orbit of a nonrotating black hole at $r =
6GM/c^2$, the energy released per unit mass would be $0.057c^2$, enough to
provide the energy of a luminous QSO from a reasonable mass. Salpeter imagined
some kind of turbulent transport of angular momentum, allowing
the matter to move
closer to the hole, which would grow in mass during the accretion process.
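The efficiency figure of $0.057c^2$ is the specific binding energy at the innermost stable circular orbit of a nonrotating black hole, $r = 6GM/c^2$, where the orbital energy per unit rest mass is $\sqrt{8/9}\,c^2$; the released fraction is therefore $1 - \sqrt{8/9}$:

```python
# Specific energy released by matter spiraling to the innermost stable
# circular orbit (r = 6GM/c^2) of a Schwarzschild black hole:
# E_released / mc^2 = 1 - sqrt(8/9) ~ 0.057, as quoted by Salpeter.
from math import sqrt

efficiency = 1 - sqrt(8 / 9)
print(f"efficiency = {efficiency:.4f}")   # 0.0572
```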
The black hole model received limited attention until Lynden-Bell (1969)
argued that dead quasars in the form of ``collapsed bodies''
(black holes) should
be common in galactic nuclei, given the lifetime energy output of quasars and
their prevalence at earlier times in the history of the universe.
Quiescent ones
might be detectable through their effect on the mass-to-light ratio of nearby
galactic nuclei. Lynden-Bell explored the thermal radiation and fast particle
emission to be expected in a disk of gas orbiting the hole, with energy
dissipation related to magnetic and turbulent processes. For QSO
luminosities, the disk would have a maximum effective temperature of $\sim
10^5~\rm K$, possibly leading to photoionization and broad line emission. He
remarked that ``with different values of the [black hole mass and accretion
rate] these disks are capable of providing an explanation for a large
fraction of the incredible phenomena of high energy astrophysics, including
galactic nuclei, Seyfert galaxies, quasars and cosmic rays''.
Further evidence for relativistic conditions in AGN came from other
theoretical arguments. Hoyle,
Burbidge, and Sargent (1966) noted that relativistic electrons emitting
optical and infrared
synchrotron radiation would also Compton scatter ambient photons,
boosting their
energy by large factors. This would lead to ``repeated stepping
up of the energies of quanta'', yielding a divergence that came to be known
as the ``inverse Compton catastrophe''. This would be attended by
rapid quenching of the energy of the electrons. They argued that
this supported the
idea of noncosmological redshifts. In response, Woltjer (1966) invoked a model
with electrons streaming radially on field lines, which could greatly reduce
Compton losses. He further noted that because ``the relativistic electrons and
the photons they emit both move nearly parallel to the line of sight, the time
scale of variations in emission can be much shorter than the size of the region
divided by the speed of light.'' The emission would also likely be anisotropic,
reducing the energy requirements for individual objects.
\subsection{Superluminal Motion}
Dramatic confirmation of the suspected relativistic motions came from
the advancing technology of radio astronomy.
Radio astronomers using conventional interferometers had shown that
many sources had structure on a sub-arcsec scale.
Scintillation of the radio signal from some AGN, caused by the
interplanetary medium of our solar system, also implied sub-arcsec dimensions
(Hewish, Scott, and Wills 1964). The compact radio sources in
some AGN showed flat spectrum components and
variability on timescales of months (Dent 1965; Sholomitsky 1965).
The variability suggested milliarcsec dimensions on the basis of light
travel time arguments. The spectral shape and evolution found explanation
in terms of multiple, expanding components that were optically thick to
synchrotron self-absorption, which causes a low frequency cutoff in the
emitted continuum (Pauliny-Toth and Kellermann 1966, and
references therein). Such models had interesting theoretical consequences,
including angular sizes (for cosmological redshifts) as small as
$10^{-3}$ arcsec, and large amounts of energy in relativistic electrons, far
exceeding the energy in the magnetic field.
These inferences made clear the need for angular resolution finer than was
practical with conventional radio interferometers connected by wires or
microwave links. This was achieved by recording
the signal from the two antennas separately on magnetic tape, and correlating
the recorded signals later by analog or digital means. This technique came to
be known as ``very long baseline interferometry'' (VLB, later VLBI).
After initial difficulties finding ``fringes'' in
the correlated signal, competing groups in Canada and the United States
succeeded in observing several AGN in the spring of 1967, over baselines of
roughly 200~km (see Cohen ~et al.\
1968). The U.S. experiments typically used the 140 foot antenna at the
National Radio Astronomy Observatory in Green Bank, West Virginia, in
combination with increasingly remote
antennas in Maryland, Puerto Rico,
Massachusetts, California, and Sweden. The latter gave an angular resolution
of 0.0006 arcsec. Within another year, observations were made between Owens
Valley, California, and Parkes, Australia, a baseline exceeding 10,000 km or
80 percent of the earth's diameter. A number of AGN showed components
unresolved on a scale of $10^{-3}$ arcsec.
On October 14 and 15, 1970, Knight ~et al.\ (1971) observed quasars at
7840 MHz with the Goldstone, California - Haystack, Massachusetts
``Goldstack'' baseline. 3C 279 showed fringes consistent with a symmetrical
double source separated by
$(1.55 \pm 0.03) \times 10^{-3}$ arcsec. Later observations on February 14
and 26, 1971, by Whitney ~et al.\ (1971) showed a double source structure at
the same position angle, but separated by a
distinctly larger angle of $(1.69 \pm 0.02)
\times 10^{-3}$ arcsec. Given the distance implied by the redshift of 0.538,
this rate of angular separation corresponded to a linear separation rate of
ten times the speed of light! Cohen ~et al.\ (1971), also using Goldstack data,
observed ``superlight expansion'' in 3C 273 and 3C 279. Whitney
~et al.\ and Cohen ~et al.\ considered a number of interpretations of their
observations, including multiple components that blink on and off (the
``Christmas tree model'') and noncosmological redshifts. However, most
astronomers quickly leaned toward an explanation involving motion of emitting
clouds ejected from the central object at speeds close to, but not exceeding,
the speed of light.
Rees (1966) had
calculated the appearance of relativistically expanding sources, and
apparent expansion speeds faster than that of light were predicted. A picture
emerged in which a stationary component was associated with the central object,
and clouds were ejected at intervals of several years along a fairly stable
axis. (Repeat ejections were observed in the course of time by VLBI
experiments.) If this ejection occurred in both
directions, it could supply energy to the extended double
sources. The receding components would be greatly
dimmed by special relativistic
effects, while the approaching components were brightened. The two observed
components are then associated with the central object
and the approaching cloud,
respectively. The fact that the two observed components had roughly equal
luminosities found an explanation in the relativistic jet model
of Blandford and K\"onigl (1979).
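The geometry behind such apparent speeds is straightforward special relativity: a blob moving at $\beta = v/c < 1$ at a small angle $\theta$ to the line of sight has apparent transverse speed $\beta_{\rm app} = \beta\sin\theta/(1 - \beta\cos\theta)$, which exceeds unity for sufficiently fast, well-aligned motion. A minimal numerical sketch ($\beta = 0.995$ is an illustrative choice that yields roughly the $10c$ quoted for 3C 279):

```python
# Apparent transverse speed of a relativistically moving blob:
# beta_app = beta*sin(theta) / (1 - beta*cos(theta)),
# maximized at cos(theta) = beta, where beta_app = gamma*beta.
from math import sin, cos, acos, sqrt

def beta_apparent(beta, theta):
    return beta * sin(theta) / (1 - beta * cos(theta))

beta = 0.995                 # illustrative blob speed (v/c)
theta_max = acos(beta)       # viewing angle of maximum apparent speed
gamma = 1 / sqrt(1 - beta**2)

print(f"beta_app = {beta_apparent(beta, theta_max):.1f}")  # ~gamma*beta ~ 10
```

No component ever exceeds $c$; the apparent superluminal speed is a light travel time effect, as Rees had anticipated.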
Apparent superluminal motion has now been seen in a number of quasars and
radio galaxies, and a possibly analogous phenomenon has been
observed in connection with black hole systems of stellar mass in our Galaxy
(Mirabel and Rodriguez 1994).
\subsection{X-rays from AGN}
On June 18, 1962, an Aerobee sounding rocket blasted skyward from White
Sands Proving Ground in New Mexico. It carried a Geiger counter designed
to detect astronomical sources of X-rays.
The experiment,
carried out by Giacconi ~et al.\ (1962), discovered
an X-ray background and a ``large peak'' in a 10 degree error box near the
Galactic center and the constellation Scorpius. A rocket experiment by Bowyer
~et al.\ (1964) also found an isotropic background, confirmed the Scorpius source,
and detected X-rays from the Crab nebula. Friedman
and Byram (1967) identified X-rays from the active galaxy M 87. A
rocket
carrying collimated proportional counters sensitive in the 1 to 10 keV energy
range found sources coincident with 3C 273, NGC 5128 (Cen A), and M87 (Bowyer,
Lampton, and Mack 1970). The positional error box for
3C 273 was small enough to
give a probability of less than
$10^{-3}$ of a chance coincidence. The X-ray luminosity, quoted as $\sim
10^{46}~\ifmmode \rm erg~s^{-1} \else $\rm erg~s^{-1}$ \fi$, was comparable with the quasar's optical luminosity.
The first dedicated X-ray astronomy satellite, {\em Uhuru}, was launched in
1970. Operating until 1973, it made X-ray work
a major branch of astronomy. X-rays were reported from the Seyfert galaxies
NGC 1275 and NGC 4151 (Gursky ~et al.\ 1971).
The spectrum of NGC 5128 was consistent with a power law
of energy index $\alpha = -0.7$, where $\rm L_\nu \propto \nu^\alpha$; and
there was low energy absorption corresponding to
a column density of $9 \times 10^{22} ~\rm atoms~cm^{-2}$,
possibly caused by gas
in the nucleus (Tucker
~et al.\ 1973).
Early variability studies were hampered by the need to compare results from
different experiments, but Winkler and White (1975) found a large change
in the flux from Cen A in only 6 days from {\em OSO-7} data. Using {\em Ariel
V} observations of NGC 4151, Ives ~et al.\ (1976) found a significant increase
in flux over earlier {\em Uhuru} measurements. Marshall ~et al.\ (1981),
using {\em Ariel V} data on AGN gathered over a 5 year period, found that roughly
half of the sources varied by up to a factor of 2 on times less than or equal
to a year. A number of sources varied in times of 0.5 to 5 days. Marshall
~et al.\ articulated the importance of X-ray variability observations, which
show that the X-rays ``arise deep in the nucleus'' and ``relate therefore to
the most fundamental aspect of active galaxies, the nature of the central
`power house'.''
Strong X-ray emission as a characteristic of Sy 1 galaxies was established
by Martin Elvis and his coworkers from {\em Ariel V} data (Elvis ~et al.\ 1978).
This work increased to 15 the number of known Seyfert X-ray sources, of which
at least three were variable. Typical luminosities were $\sim 10^{42.5}$ to
$10^{44.5}~\ifmmode \rm erg~s^{-1} \else $\rm erg~s^{-1}$ \fi$. The X-ray power correlated with the infrared and optical
continuum and H$\alpha$\ line. Seyfert galaxies evidently made a significant
contribution to the X-ray background, and limits could be set on the
evolution of Seyfert galaxy number densities and X-ray luminosities in order
that they not exceed the observed background. Elvis ~et al.\ considered thermal
bremsstrahlung ($10^7~\rm K$), synchrotron, and synchrotron self-Compton
models of the X-ray emission.
{\em HEAO-1}, the first of the {\em High Energy Astronomy Observatories},
was an X-ray facility that
operated from 1977 to 1979. It gathered data on a sufficient sample of
objects to allow comparisons of different classes of AGN and to construct a
$\log N$--$\log S$ diagram and improved luminosity function.
{\em HEAO-1} provided broad-band X-ray spectral information for a
substantial set of AGN, showing spectral indices $\alpha \approx -0.7$,
with rather little scatter, and absorbing columns
$<5\times10^{22}~\rm cm^{-2}$ (Mushotzky ~et al.\ 1980).
The {\em Einstein
Observatory (HEAO-2)} featured grazing incidence focusing optics allowing
detection of sources as faint as $\sim 10^{-7}$ the intensity of the Crab
nebula. Tananbaum ~et al.\ (1979) used {\em Einstein} data to study QSOs as a
class of X-ray emitters. Luminosities of $10^{43}$ to $10^{47}~\ifmmode \rm erg~s^{-1} \else $\rm erg~s^{-1}$ \fi$
(0.5 to 4.5 keV) were found. OX169 varied substantially in under 10,000 s,
indicating a small source size. This suggested a black hole mass not
greater than
$2 \times 10^8~\ifmmode {\rm M_\odot} \else M$_\odot$\fi$, if the X-rays came from the inner portion of an
accretion flow. By this time, strong X-ray emission was established as a
characteristic of all types of AGN and a valuable diagnostic of their
innermost workings.
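The mass limit quoted for OX169 can be reconstructed from a simple causality argument (the round numbers here are illustrative, not taken from Tananbaum ~et al.): variability on a timescale $\Delta t$ restricts the source size to
\begin{equation}
R \lesssim c\,\Delta t \approx (3\times10^{10}~{\rm cm~s^{-1}})(10^{4}~{\rm s}) = 3\times10^{14}~{\rm cm} .
\end{equation}
If the X-rays arise within $\sim 10$ gravitational radii, where $R_g = GM/c^2 \approx 1.5\times10^{13}\,(M/10^{8}~{\rm M_\odot})$~cm, then $10\,R_g \lesssim 3\times10^{14}$~cm gives $M \lesssim 2\times10^{8}~{\rm M_\odot}$, as stated.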
\subsection{The Continuum}
Today, the word ``continuum'' in the context of AGN might bring to mind
anything from radio to gamma ray frequencies. However, in the early
days of QSO studies, the term generally meant the optical continuum, extending
to the ultraviolet and infrared as observations in these bands became
available. Techniques of photoelectric
photometry and spectrum scanning were becoming established as QSO studies
began. The variability of QSOs, including 3C 48 and 3C 273 (e.g., Sandage
1963), was known and no doubt contributed to astronomers' initial hesitation
to interpret QSO spectra in terms of large redshifts. In his contribution to
the four discovery papers on 3C 273, Oke (1963) presented spectrophotometry
showing a continuum slope $L_\nu \propto \nu^{+0.3}$ in the optical,
becoming redder toward the near infrared. He noted that the energy
distribution did not resemble a black body, and inferred that there must be a
substantial contribution of synchrotron radiation.
A key issue for continuum studies has been the relative importance of thermal
and nonthermal emission processes in various wavebands. Early work tended to
assume synchrotron radiation, or ``nonthermal emission'', in the absence of
strong evidence to the contrary. The free-free and bound-free emission from
the gas producing the observed emission lines was generally a small
contribution. The possibility of thermal emission from very hot gas was
considered for some objects such as the flat blue continuum of 3C 273 (e.g.,
Oke 1966). The energy distributions tend
to slope up into the infrared; and for thermal emission from optically thin
gas, this would have required a rather low temperature and an excessive
Balmer continuum jump. This left the possibilities of nonthermal emission or
thermal emission from warm dust, presumably heated by the ultraviolet
continuum.
Observational indicators of thermal or nonthermal emission include
broad features in the energy distribution,
variability, and polarization. For the infrared, one also has correlations
with reddening, the silicate
absorption and emission features, and possible angular
resolution of the source (Edelson ~et al.\ 1988). For some objects, rapid
optical variability implied brightness temperatures that clearly required a
nonthermal emission mechanism. For example, Oke (1967) observed day-to-day
changes of 0.25 and 0.1 mag for 3C 279 and 3C 446, respectively. For many
objects, the energy distributions were roughly consistent with a power law of
slope near $\nu^{-1.2}$. Power laws of similar slopes were familiar
from radio galaxies and the Crab nebula, where the emission extended through
the optical band. These spectra were interpreted in terms of synchrotron
radiation with power-law energy distributions for the radiating,
relativistic electrons. Such a power-law energy distribution was also
familiar from studies of cosmic rays, and thus power laws seemed natural in
the context of high energy phenomena like AGN. In addition to simple
synchrotron radiation, there might be a hybrid
process involving synchrotron emission in the submillimeter and far infrared,
with some of these photons boosted to the optical by ``inverse'' Compton
scattering (Shklovskii 1965). The idea of a nonthermal continuum in the
optical, whose high frequency extrapolation provided the ionizing radiation
for the emission-line regions, was widely held for many years. This was
invoked not only for QSOs but also for Seyfert galaxies, where techniques such
as polarization were used to separate the ``nonthermal" and galaxy components
(e.g., Visvanathan and Oke 1968).
Infrared observations were at first plagued by low sensitivity and inadequate
telescope apertures.
Measurements of 3C 273 in the K filter (2.2 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi), published by
Johnson (1964) and Low and Johnson (1965), showed a continuum steeply rising
into the infrared.
Infrared
radiation from NGC 1068 was observed by Pacholczyk and Wisniewski (1967),
also with a flux density ($F_\nu$) strongly rising to the longest wavelength
observed (``N'' band, or 10
\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi). The infrared radiation dominated the power output of this object.
Becklin ~et al.\ (1973) found that much of the 10 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\
emission from NGC 1068 came from a resolved source 1 arcsec (90 pc) across and
concluded that most of the emission was not synchrotron emission. In
contrast, variability of the 10 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ emission from 3C 273 (e.g., Rieke
and Low 1972) pointed to a strong nonthermal component. Radiation from hot
dust has a minimum source size implied by the black body limit on the surface
brightness, and this is more stringent for longer wavelengths radiated by
cooler dust. This in turn implies a minimum variability timescale as a
function of wavelength. The near infrared emission of NGC 1068 was found to
be strongly polarized (Knacke and Capps 1974).
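The black body limit argument can be made explicit with an illustrative sketch: dust at temperature $T$ cannot exceed the black body surface brightness, so a source of luminosity $L$ must satisfy
\begin{equation}
L \lesssim 4\pi R^{2}\sigma T^{4} \quad\Longrightarrow\quad R \gtrsim \left(\frac{L}{4\pi\sigma T^{4}}\right)^{1/2}, \qquad t_{\rm var} \gtrsim \frac{R}{c} .
\end{equation}
Because longer wavelengths are radiated by cooler dust (smaller $T$), the minimum radius, and hence the minimum variability timescale, grows toward the far infrared; rapid variability at a given wavelength, as seen in 3C 273 at 10~$\mu$m, therefore argues against dust and for a nonthermal origin.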
Improving infrared technology, and optical instruments such as the
multichannel spectrometer on the 200-inch telescope (Oke 1969), led
to larger and better surveys of the AGN continuum.
Oke, Neugebauer, and Becklin (1970) reported observations of 28 QSOs from 0.3
to 2.2~\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi. The energy distributions were similar in radio loud
and radio quiet QSOs. They found that the energy
distributions could generally be described as a power law (index
$-0.2$ to $-1.6$ for $F_\nu\propto\nu^\alpha$) and that they remained ``sensibly
unchanged'' during the variations of highly variable objects. Penston
~et al.\ (1974) studied the continuum from 0.3 to 3.4
\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ in 11 bright Seyfert galaxies. All turned up toward the
infrared, and consideration of the month-to-month
variability pointed to different
sources for the infrared and optical continua. From an extensive survey of
Seyfert galaxies, Rieke (1978) concluded that strong infrared emission was a
``virtually universal'' feature, and that the energy distributions in general
did not fit a simple power law. The amounts of dust required were roughly
consistent with the expected dust in the emission-line gas of the active nucleus
and the surrounding interstellar medium. A consensus emerged that the infrared
emission of Seyfert 2's was thermal dust emission, but the situation for Seyfert
1's was less clear (e.g., Neugebauer
~et al.\ 1976, Stein and Weedman 1976). From a survey of the optical and
infrared energy distribution of QSOs, Neugebauer ~et al.\ (1979) concluded that
the slope was steeper in the 1-3
\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ band than in the 0.3-1 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ band, and that an apparent broad
bump around 3 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ might be dust emission. Neugebauer ~et al.\ (1987)
obtained energy distributions from 0.3 to 2.2 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ for the complete set
of quasars in the Palomar-Green (PG) survey (Green, Schmidt, and Liebert
1986) as well as some longer wavelength observations. A majority of objects
could be fit with two power laws ($\alpha \approx -1.4$ at lower frequencies,
$\alpha \approx -0.2$ at higher frequencies) plus a ``3000 \AA\ bump''.
Measurements at shorter and
longer wavelengths were facilitated by the {\em International Ultraviolet
Explorer} (IUE) and the {\em Infrared Astronomical Satellite} (IRAS),
launched in 1978 and 1983, respectively. Combining such measurements with
ground based data, Edelson and Malkan (1986) studied the spectral energy
distribution of AGN over the wavelength range 0.1-100
\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi. The 3-5 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ ``bump'' was present in most Seyferts and QSOs,
involving up to 40 percent of the luminosity between 2.5 and 10 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi. All
Sy 1 galaxies without large reddening appeared to require a hot thermal
component, identified with the increasingly popular concept of emission from
an accretion disk. Edelson and Malkan (1987) used IRAS observations to study
the variability of AGN in the far infrared. The high polarization objects
varied up to a factor 2 in a few months, but no variations greater than 15
percent were observed for ``normal'' quasars or Seyfert galaxies. The former
group was consistent with a class of objects known as
``blazars'' that are dominated at all wavelengths by a variable, polarized
nonthermal continuum.
Blazars were found to be highly variable at all wavelengths, but most AGN
appeared to be systematically less variable in the far infrared than at
higher frequencies. This supported the idea of thermal emission from dust in
the infrared. This was further supported by observations at submillimeter
wavelengths that showed a very steep decline in flux longward of the infrared
peak at around 100 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi. For example, an upper limit on the flux from NGC
4151 at 438
\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ (Edelson ~et al.\ 1988) was so far below the measured flux at 155
\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ as to require a slope steeper than
$\nu^{+2.5}$, the steepest that can be obtained from a self-absorbed
synchrotron source without special geometries. Dust emission could explain a
steeper slope because of the decreasing efficiency of emission toward longer
wavelengths.
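The contrast between the two mechanisms can be stated quantitatively (standard results, summarized here for clarity): a homogeneous self-absorbed synchrotron source rises no faster than
\begin{equation}
F_\nu \propto \nu^{5/2},
\end{equation}
whereas optically thin dust with emissivity $Q_\nu \propto \nu^{\beta}$ ($\beta \approx 1$--$2$), radiating on the Rayleigh--Jeans tail, gives $F_\nu \propto \nu^{2+\beta}$, i.e., slopes of $\nu^{3}$ to $\nu^{4}$, steep enough to accommodate the submillimeter cutoff observed in NGC 4151.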
Sanders ~et al.\ (1989)
presented measurements of 109 QSOs from 0.3 nm to 6 cm ($10^{10} - 10^{18}$
Hz). The gross shape of the energy distributions was quite similar for most
objects, excepting the flat spectrum radio loud objects such as 3C 273. This
typical energy distribution could be fit by
a hot accretion disk at shorter
wavelengths and heated dust at longer wavelengths.
Warping of the disk at larger radii was invoked to give the needed amount of
reprocessed radiation as a function of radius. As noted by Rees
~et al.\ (1969) and others, the rather steep slope in the infrared, giving rise
to an apparent minimum in the flux around 1 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi, could be explained
naturally by the fact that grains evaporate if heated to temperatures above
about 1500~K. Sanders ~et al.\ saw ``no convincing evidence for
energetically significant nonthermal radiation'' in the wavelength range 3 nm
to 300 \ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ in the continua of radio quiet and steep-spectrum
radio-loud quasars.
This paper marked the culmination of a gradual shift of sentiment
from nonthermal to thermal explanations for the continuum of non-blazar AGN.
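The location of the 1~\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ flux minimum follows directly from the evaporation temperature via a simple Wien-law estimate: the hottest surviving grains, at $T \approx 1500$~K, radiate most strongly near
\begin{equation}
\lambda_{\rm peak} \approx \frac{2898~\mu{\rm m\,K}}{1500~{\rm K}} \approx 1.9~\mu{\rm m},
\end{equation}
so dust contributes little shortward of $\sim 1$--$2$~\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi, naturally carving out the observed dip between the disk and dust components.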
The blazar family comprised ``BL Lac objects'' and ``Optically Violent
Variable'' (OVV) QSOs. BL Lac objects, named after the prototype object
earlier listed in catalogs of variable stars, had a nonthermal continuum
but little or no line emission. OVVs have the emission lines of QSOs.
These objects all show a continuum that is fairly well described as a power
law extending from X-ray to infrared frequencies. They typically show rapid
(sometimes day-to-day) variability and strong, variable polarization. The
continuum in blazars is largely attributed to nonthermal processes
(synchrotron emission and inverse Compton scattering). 3C 273
seems to be a borderline OVV (Impey, Malkan, and Tapia 1989). The need for
relativistic motions, described above, arises in connection with this class
of objects. A comprehensive study of the energy distributions of blazars
from $10^8$ to
$10^{18}$ Hz was given by Impey and Neugebauer (1988). Bolometric
luminosities ranged from $10^9$ to $10^{14}~\ifmmode {\rm L_\odot} \else L$_\odot$\fi$, dominated by the 1 to 100
~\ifmmode {\mu \rm m} \else {$\mu \rm m$}\fi\ band. There was evidence for a thermal infrared component in many of
the less luminous objects, and an ultraviolet continuum bump
associated with the presence of emission lines. When gamma rays are observed
from AGN (e.g., Swanenburg ~et al.\ 1978), they appear to be associated with the
beamed nonthermal continuum. The relationship of blazars to ``normal'' AGN is
a key question in the effort to unify the diverse appearance of AGN.
{\it IRAS}\ revealed a large population of galaxies whose luminosity was strongly
dominated by the far infrared (Soifer, Houck, and Neugebauer 1987).
(Rieke [1972] had found early indications of a class of ultraluminous
infrared galaxies.) The infrared emission is thermal emission from dust,
energized in many cases by star formation but in some cases by an AGN. One
suggested scenario was that some event, possibly a galactic merger, injected
large quantities of gas and dust into the nucleus. This fueled a luminous
episode of accretion onto a black hole, at first enshrouded by
the dusty gas, whose dissipation revealed the AGN at optical and
ultraviolet wavelengths (Sanders ~et al.\ 1988).
\subsection{The Black Hole Paradigm}
The intriguing paper by Lynden-Bell (1969) still did not launch a widespread
effort to understand AGN in terms of accretion disks around black holes.
Further impetus came from the discovery of black holes of stellar
mass in our Galaxy. Among the objects discovered by {\em Uhuru} and
other early X-ray experiments were sources involving binary star systems
with a neutron star or black hole. ``X-ray pulsars'' emitted regular
pulses of X-rays every few seconds as the neutron star turned on its axis.
The X-ray power was essentially thermal emission from gas transferred from the
companion star, impacting on the neutron star with sufficient velocity to
produce high temperatures. Another class of source, exemplified by Cyg X-1,
showed no periodic variations but a rapid flickering (Oda ~et al.\ 1971)
indicating a very small size.
Analysis of the orbit gave a mass too large to be a neutron star or white
dwarf, and the implication was that the system contained a black hole
(Webster and Murdin 1972; Tananbaum ~et al.\ 1972). The X-ray emission was
attributed to gas from the companion O-star heated to very high
temperatures as it spiraled into the black hole by way of a disk (Thorne and
Price 1975).
Galactic X-ray sources, along with cataclysmic variable stars, protostars, and
AGN, stimulated efforts to develop the theory of accretion disks. In many
cases, the disk was expected to be geometrically thin, and the structure in
the vertical and radial directions could be analyzed separately. A key
uncertainty was the mechanism by which angular momentum is transported
outward as matter spirals inward. In a highly influential paper, Shakura and
Sunyaev (1973) analyzed disks in terms of a dimensionless parameter $\alpha$
that characterized the stresses that led to angular momentum transport and
local energy release. General relativistic corrections were added by
Novikov and Thorne (1973). This ``$\alpha$-model'' remains the
standard approach to disk theory, and only recently have detailed mechanisms
for dissipation begun to gain favor (Balbus and Hawley 1991). The
$\alpha$-model gave three radial zones characterized by the relative
importance of radiation pressure, gas pressure, electron scattering, and
absorption opacity. The power producing regions of AGN disks would fall in
the ``inner'' zone dominated by radiation pressure and electron scattering.
Electron scattering would dominate in the atmosphere as well as the interior,
and modify the local surface emission from an approximate black body
spectrum. The ``inner'' disk zone suffers both thermal and viscous
instabilities (Pringle 1976; Lightman and Eardley 1974), but the
ultimate consequence of these was unclear. A model in which the ions and
electrons had different, very high temperatures was proposed for Cyg X-1 by
Eardley, Lightman, and Shapiro (1975). This led to models of ``ion supported
tori'' for AGN (Rees ~et al.\ 1982). The related idea
of ``advection dominated accretion
disks'' or ``ADAFs'' (Narayan and Yi 1994) recently has attracted attention.
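The $\alpha$-prescription itself can be stated compactly (a textbook summary, not a derivation from the papers above): the vertically averaged viscous stress is taken proportional to the total pressure,
\begin{equation}
t_{r\phi} = \alpha P, \qquad 0 < \alpha \lesssim 1,
\end{equation}
equivalent to an effective kinematic viscosity $\nu \approx \alpha c_s H$, where $c_s$ is the sound speed and $H$ the disk scale height. The instability of the ``inner'' zone arises because, with radiation pressure dominant, viscous heating rises more steeply with temperature than radiative cooling can compensate.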
A key question was, do expected physical processes in disks explain the
phenomena observed in AGN? In broad terms, this involved producing the
observed continuum and, at least in some objects, generating relativistic
jets, presumably along the rotation axis. Shields (1978b) proposed that the
flat blue continuum of 3C 273 was thermal emission from the
surface of an accretion disk around a black hole. For a mass $\sim
10^9~\ifmmode {\rm M_\odot} \else M$_\odot$\fi$ and accretion rate $\sim 3~\ifmmode {\rm M_\odot~yr^{-1}}\else${\rm M_\odot~yr^{-1}}$\fi$, the size and temperature of the
inner disk was consistent with the observed blue continuum.
This component dominated an assumed nonthermal power law, which would explain
the infrared upturn and the X-rays. Combining optical, infrared, and
ultraviolet observations, Malkan (1983) successfully fitted the continua of a
number of QSOs with accretion disk models. Czerny and Elvis (1987)
suggested that the soft X-ray excess of some AGN could be the high frequency
tail of the thermal disk component or ``Big Blue Bump'', which appeared to
dominate the luminosity of some objects.
Problems confronted the simple picture of thermal emission from a disk
radiating its locally produced energy. Correlated continuum variations
at different wavelengths were
observed in the optical and ultraviolet on timescales shorter than the
expected timescale for viscous or thermal processes to modify the
surface temperature distribution in an AGN disk
(e.g., Clavel, Wamsteker, and Glass
1989; Courvoisier and Clavel 1991). This suggested that reprocessing of X-rays
incident on the disk made a substantial contribution to the optical and
ultraviolet continuum (Collin-Souffrin 1991). Also troublesome
was the low optical polarization observed in normal QSOs, typically one
percent or less. The polarization generally is oriented parallel to the disk
axis, when this can be inferred from jet structures (Stockman, Angel, and Miley
1979). Except for face on disks, electron
scattering in disk atmospheres should produce strong polarization oriented
perpendicular to the axis. Yet another problem was the prediction of strong
Lyman edge absorption features, given effective temperatures similar to those
of O stars (Kolykhalov and Sunyaev 1984). These issues remain
under investigation today.
The question of fueling a black hole in a galactic nucleus has been
difficult. Accretion rates of only a few solar masses a year suffice to
power a luminous quasar, and even a billion solar masses is a small
fraction of the mass of a QSO host galaxy. However, the specific
angular momentum of gas orbiting a black hole at tens or hundreds of
gravitational radii is tiny compared to that of gas moving with normal speeds
even in the central regions of a galaxy.
The angular momentum must be removed if the gas is to feed the black
hole. Moreover, some galaxies with massive central black holes are not
currently shining. Indeed, the rapid increase in the number of quasars with
increasing look back time (Schmidt 1972) implies that there are many
dormant black holes in galactic nuclei. What caused some to blaze forth as
QSOs while others are inert? A fascinating possibility was the tidal
disruption of stars orbiting close to the black hole (Hills 1975).
However, the rate at which new stars would have their orbits evolve into
disruptive ones appeared to be too slow to maintain a QSO luminosity (Frank
and Rees 1976). The probability of an AGN in a galaxy appeared to be
enhanced if it was interacting with a nearby galaxy (Adams 1977; Dahari 1984),
which suggested that tidal forces could induce gas to sink into the galactic
nucleus. There, unknown processes might relieve it of its angular
momentum and allow it to sink closer and closer to the black hole.
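The statement that a few solar masses a year suffice can be checked with the standard accretion efficiency relation $L = \eta \dot{M} c^{2}$ (the round numbers are illustrative): for $\eta \approx 0.1$,
\begin{equation}
L \approx 0.1 \times (2~{\rm M_\odot\,yr^{-1}})\,c^{2} \approx 1\times10^{46}~{\rm erg~s^{-1}},
\end{equation}
comparable to a luminous quasar. Over $10^{8}$~yr of such activity the hole grows by only $\sim 2\times10^{8}~{\rm M_\odot}$, indeed a small fraction of the mass of a host galaxy.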
The growing acceptance of the black hole model resulted, not from any one
compelling piece of evidence, but rather from the accumulation of observational
and theoretical arguments suggestive of black holes and from the lack of viable
alternatives (Rees 1984).
\subsection{Unified Models}
After the discovery of QSOs, the widely different appearances of different
AGN became appreciated. The question arose, what aspects of this diversity
might result from the observer's location relative to the AGN? A basic
division was between radio loud and radio quiet objects. Since the extended
radio sources radiate fairly isotropically, their presence or absence could
not be attributed to orientation. Furthermore, radio loud objects seemed
to be associated with elliptical galaxies, and radio quiet AGN with
spiral galaxies. The huge range of luminosities from Seyferts to QSOs
clearly was largely intrinsic. However, some aspects could be a
function of orientation.
Blandford and
Rees (1978) proposed that BL Lac objects were radio galaxies viewed down the
axis of a relativistic jet. Relativistic beaming caused the nonthermal
continuum to be very bright when so viewed, and the emission lines (emitted
isotropically) would be weak in comparison. The same object, viewed from
the side, would have normal emission-line equivalent widths, and the radio
structure would be dominated by the extended lobes rather than the core.
A key breakthrough occurred as a result of advances in the techniques of
spectropolarimetry. Rowan-Robinson (1977) had raised
the possibility that the BLR of Seyfert 2
galaxies was obscured by dust, rather than being truly absent.
Using a sensitive spectropolarimeter on the 120-inch Shane telescope at Lick
Observatory, Antonucci and Miller (1985) found that the polarized flux of NGC
1068, the prototype Seyfert 2, had the appearance of a normal Seyfert 1
spectrum. This was interpreted in terms of a BLR and central continuum
source obscured from direct view by an opaque, dusty torus. Electron
scattering material above the nucleus near the axis of the torus scattered
the nuclear light to the observer, polarizing it in the process. This
allowed Seyfert 2's to have a detectable but unreddened continuum. However,
the broad lines had escaped notice because the scattered light was feeble
compared with the narrow lines from the NLR, which was outside the presumed
obscuring torus. The same object, viewed face on, would be a Seyfert 1.
Such a picture had also been proposed by Antonucci (1984) for the broad line
radio galaxy 3C 234. Various forms of toroidal geometry had been anticipated
by Osterbrock (1978) and others, and the idea received support from the
discovery of ``ionization cones'' in the nuclei of some AGN (Pogge 1988).
Orientation indicators were developed involving the ratio of the core and
extended radio luminosities (Orr and Browne 1982; Wills and Browne 1986). The
concepts of a beamed nonthermal continuum and an obscuring, equatorial torus
remain fundamental to current efforts to unify AGN.
Consideration of the obscuring torus supports the idea that the
X-ray background is produced mostly by AGN (Setti and Woltjer 1989).
\section{THE VIEW FROM HERE}
The efforts described above led to many of the observational and theoretical
underpinnings of our present understanding of AGN. The enormous
effort devoted to AGN in recent years has led to many further
discoveries and posed exciting challenges.
Massive
international monitoring campaigns (Peterson 1993) have revealed that
the BLR is stratified in ionization with respect to radius, that the
BLR radius increases with luminosity, and that the gas is not
predominantly in a state of radial flow inwards or outwards. This
suggests the likelihood of orbiting material. Models involving a mix of
gas with a wide range of densities and radii may give a natural
explanation of AGN line ratios (Baldwin ~et al.\ 1995). Chemical
abundances in QSOs have been analyzed in the context of galactic chemical
evolution (Hamann and Ferland 1993). Recent theoretical work indicates
that the observed, centrally peaked line profiles can be obtained from a
wind leaving the surface of a Keplerian disk (Murray and Chiang 1997).
Efforts to understand the broad absorption lines (BALs) of QSOs have
intensified in recent years. The geometry and acceleration mechanism
are still unsettled, although disk winds may be involved here too
(Murray ~et al.\ 1995). Partial coverage of the continuum
source by the absorbing clouds complicates the effort to determine
chemical abundances (e.g., Arav 1997).
The black hole model has gained support from indirect evidence for massive
black holes in the center of the Milky Way and numerous nearby galaxies
(see Rees 1997). This includes the remarkable ``H$_2$O megamaser'' VLBI
measurements of the Seyfert galaxy NGC 4258 (Miyoshi ~et al.\ 1995), which give
strong evidence for a black hole of mass $4\times10^7~\ifmmode {\rm M_\odot} \else M$_\odot$\fi$.
X-ray observations suggest reflection of X-rays incident on an
accretion disk (Pounds ~et al.\ 1989), and extremely broad
Fe K$\alpha$ emission lines may give a direct look at material
orbiting close to the black hole (Tanaka ~et al.\ 1998).
These
results reinforce the black hole picture, but much remains to be done to
understand the physical processes at work in AGN. In spite of
much good work, the origin and fueling of
the hole, the physics of the disk, and the jet production mechanism still are
not well understood.
The nature of the AGN continuum remains unsettled; for example, the
contribution of the disk to the optical and ultraviolet continuum is still
debated (Koratkar and Blaes 1999). The primary X-ray emission mechanism and
the precise role of thermal and nonthermal emission in the infrared remain
unclear (Wilkes 1999). Blazars have proved to be strong $\gamma$-ray
sources, with detections up to TeV energies (Punch ~et al.\ 1992).
Radio emission was key to the discovery of quasars, and radio techniques have
seen great progress. The Very Large Array in New Mexico has produced
strikingly detailed maps of radio sources, and shown the narrow channels of
energy from the nucleus to the extended lobes.
Maps of ``head-tail'' sources in clusters of
galaxies show the interplay between the active galaxy
and its environment. The Very
Long Baseline Array (VLBA) will yield improved measurements of structures on
light-year scales in QSOs and provide insights into relativistic motions in
AGN. Likewise, new orbiting X-ray observatories promise great advances in
sensitivity and spectral resolution.
The Hubble Deep Field and other deep galaxy surveys have led to the
measurement of redshifts for galaxies as high as those of QSOs. This is
already stimulating increased efforts to understand the interplay between AGN
and the formation and evolution of galaxies.
The decline of AGN as an active subject of research is nowhere in sight.
\section{BIBLIOGRAPHY}
In addition to the primary literature, I have drawn on a
number of reviews, books, and personal
communications. For the early work in radio
astronomy, the books by Sullivan (1982,
1984) were informative and enjoyable; the former conveniently
reproduces many of the classic papers. The book by Burbidge and Burbidge
(1967) was an invaluable guide. A
brief summary of early studies is contained in the introduction to
Osterbrock's (1989) book.
The {\it Conference on Seyfert Galaxies and
Related Objects} (Pacholczyk and Weymann 1968) makes fascinating
reading today. The status of AGN research in the late 1970s is
indicated by the {\it Pittsburgh Conference on BL Lac
Objects} (Wolfe 1978). Many aspects of AGN are discussed in the
volume in honor of Professor Donald E. Osterbrock (Miller 1985),
which remains of interest both from an historical and a modern
perspective.
Review articles that especially influenced this work include those by
Bregman (1990) on the continuum; Mushotzky, Done, and Pounds
(1993) and Bradt, Ohashi, and Pounds (1992)
on X-rays; and Stein and Soifer (1983) on dust in galaxies. Historical
details of the discovery of QSO redshifts are given by Schmidt (1983, 1990);
and an historical account of early AGN studies is given in the
introduction to the volume by Robinson
et al.\ (1964). A comprehensive early review of AGN was given by Burbidge
(1967b). A review of superluminal radio sources is given by
Kellermann (1985), and the emission-line regions are reviewed by
Osterbrock and Mathews (1986). A succinct review of important papers in the
history of AGN research is given by Trimble (1992).
Recent books on AGN include those of
Krolik (1999), Peterson (1997), and Robson (1996).
Many interesting articles are contained
in the volume edited by Arav et al.\ (1997).
Recent technical reviews include
those by Koratkar and Blaes (1999) on the disk continuum; Antonucci (1993)
and Urry and Padovani (1995) on unified models; Lauroesch et al.\ (1996) on
absorption lines and chemical evolution; Ulrich, Maraschi, and Urry (1997)
on variability; and Hewett
and Foltz (1994) on quasar surveys.
\acknowledgments
The author is indebted to many colleagues
for valuable
communications and comments on the
manuscript, including Stu Bowyer,
Geoff and Margaret Burbidge, Marshall Cohen, Suzy Collin, Martin
Elvis, Jesse Greenstein, Ken Kellermann, Matt Malkan,
Bill Mathews, Richard Mushotzky, Gerry
Neugebauer, Bev Oke, Martin Rees, George Rieke,
Maarten Schmidt, Woody Sullivan, Marie-Helene Ulrich, and
Bev and Derek Wills.
Don Osterbrock was especially
supportive and helpful.
This article was written in part during visits to the Department of
Space Physics and Astronomy,
Rice University; Lick Observatory; and the Institute for
Theoretical Physics, University of California, Santa
Barbara. The hospitality of these institutions is
gratefully acknowledged.
This work was supported in part by The Texas Higher
Education Coordinating Board.
\section{Introduction}
\vspace{5mm}
AdS/CFT correspondence \cite{Maldacena:1997re} between ${\cal N}=4$ super Yang-Mills theory and Type IIB string theory on $AdS_5\times S^5$ has been studied extensively during the last decade. One remarkable result obtained from this study is the exact computation of expectation values of Wilson loop operators at strong coupling \cite{Rey:1998ik}\cite{Maldacena:1998im}. For the half-BPS circular Wilson loop, based on perturbative calculations at weak `t Hooft coupling \cite{Erickson:2000af}, an exact form of the expectation value was conjectured in \cite{Drukker:2000rr}, precisely reproducing the result expected from the string theory computation \cite{Rey:1998ik}, \cite{Maldacena:1998im} and the conformal anomaly therein. The conjecture was confirmed later in \cite{Pestun:2007rz} using a localization technique.
In this paper, we study aspects of half-BPS circular Wilson loops in ${\cal N}=2$ supersymmetric gauge theories. We focus on a class of ${\cal N}=2$ superconformal gauge theories
--- the $A_1$ (quiver) gauge theory with gauge group SU$(N)$ and $2N$ fundamental hypermultiplets and the $\hat{A}_1$ quiver gauge theory with gauge group SU($N)\times$SU$(N)$ and bifundamental hypermultiplets --- and compute the Wilson loop expectation value by adapting the localization technique of \cite{Pestun:2007rz}. We then compare the results with the ${\cal N}=4$ super Yang-Mills theory, which is a special limit of the $\hat{A}_0$ quiver gauge theory with gauge group SU($N$) and an adjoint hypermultiplet. Their quiver diagrams are depicted in Fig. 1.
\vskip1cm
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.68]{quiver.eps}
\caption{\small \sl
Quiver diagram of ${\cal N}=2$ superconformal gauge theories under study: (a) $\hat{A}_0$ theory with $G$ = SU$(N)$ and one adjoint hypermultiplet, (b) $A_1$ theory with $G$=SU($N$) and $2N$ fundamental hypermultiplets, (c) $\hat{A}_1$ theory with $G=$SU($N) \times$ SU($N$) and $2N$ bifundamental hypermultiplets. The $A_1$ theory is obtainable from $\hat{A}_1$ theory by tuning ratio of coupling constants to 0 or $\infty$. See sections 3 and 4 for explanations.}
\label{}
\end{figure}
\vskip1cm
We show that, on general grounds, the path integral of these ${\cal N}=2$ superconformal gauge theories on $\mathbb{S}^4$ is reducible to a finite-dimensional matrix integral. The resulting matrix model turns out to be very complicated, mainly because the one-loop determinant around the localization fixed point is non-trivial. This is in sharp contrast to the ${\cal N}=4$ super Yang-Mills theory, where the one-loop determinant is absent and further evaluation of Wilson loops or correlation functions is a straightforward manipulation in the Gaussian matrix integral.
Nevertheless, in the $N \rightarrow \infty$ planar limit, we show that the expectation value of the half-BPS circular Wilson loop is determinable {\sl provided} the 't Hooft coupling $\lambda$ is large. In the large $\lambda$ limit, the one-loop determinant evaluated by the zeta-function regularization admits a suitable asymptotic expansion. Using this expansion, we can solve the saddle-point equation of the matrix model and obtain the large $\lambda$ behavior of the Wilson loop expectation value. In ${\cal N}=4$ super Yang-Mills theory, it is known that the Wilson loop grows exponentially large $ \sim \exp(\sqrt{2 \lambda})$ as $\lambda$ becomes infinitely strong.
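For later comparison, the ${\cal N}=4$ benchmark can be checked numerically. In the normalization of \cite{Drukker:2000rr}, the planar Gaussian matrix model gives $\langle W\rangle = \frac{2}{\sqrt{\lambda}}\,I_1(\sqrt{\lambda})$; the exponent $\sqrt{2\lambda}$ quoted above corresponds to a rescaled coupling convention. The sketch below (plain numpy; the quadrature resolution is an incidental choice) verifies $\langle W\rangle \to 1$ at weak coupling and $\log\langle W\rangle/\sqrt{\lambda}\to 1$ at strong coupling:

```python
import numpy as np

def bessel_i1(x, n=40000):
    """Modified Bessel function I_1 from its integral representation,
    I_1(x) = (1/pi) * int_0^pi exp(x cos t) cos t dt  (trapezoid rule)."""
    t = np.linspace(0.0, np.pi, n)
    f = np.exp(x * np.cos(t)) * np.cos(t)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)) / np.pi

def wilson_planar(lam):
    """Planar N=4 circular Wilson loop of the Gaussian matrix model,
    <W> = (2/sqrt(lam)) I_1(sqrt(lam)), in the Drukker-Gross normalization."""
    s = np.sqrt(lam)
    return (2.0 / s) * bessel_i1(s)

# Weak coupling: <W> -> 1.  Strong coupling: log<W>/sqrt(lam) -> 1,
# i.e. exponential growth exp(sqrt(lam)) up to power-law prefactors.
print(wilson_planar(1e-8))
for lam in (100.0, 400.0, 1600.0):
    print(lam, np.log(wilson_planar(lam)) / np.sqrt(lam))
```

The printed ratios (approximately $0.63$, $0.76$, $0.86$) drift toward $1$; the deficit is accounted for by the power-law prefactor in the asymptotics of $I_1$.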
In $\hat{A}_0$ gauge theory, we find that the Wilson loop expectation value grows exponentially, exactly as in the ${\cal N}=4$ super Yang-Mills theory.
The result for $A_1$ gauge theory is surprising. We find that the Wilson loop is finite at large $\lambda$. This means that the Wilson loop exhibits {\sl non-exponential} growth.
The $\hat{A}_1$ quiver gauge theory is also interesting. There are two Wilson loops, one associated with each gauge group, or equivalently, one in the untwisted sector and another in the twisted sector. We find that the Wilson loop in the untwisted sector scales exponentially large, coinciding with the behavior of the Wilson loop in ${\cal N}=4$ super Yang-Mills theory and in the $\hat{A}_0$ gauge theory. On the other hand, the Wilson loop in the twisted sector exhibits {\sl non-analytic} behavior with respect to the difference of the two `t Hooft coupling constants. We also find that we can interpolate between the two surprising results in the $A_1$ and $\hat{A}_1$ gauge theories by tuning the two `t Hooft couplings in the $\hat{A}_1$ theory to be hierarchically different.
In all of these considerations, we ignored possible non-perturbative corrections to the Wilson loops. This is because, recalling the fishnet picture for the stringy interpretation of Wilson loops, the perturbative contributions would be the most relevant part for exploring the AdS/CFT correspondence and the holography therein.
We also study how holographic dual descriptions may explain these exact results. The expectation value of the Wilson loop is given by the worldsheet path integral of Type IIB string theory in the dual geometry; when the dual geometry is macroscopically large, such as AdS$_5 \times \mathbb{S}^5$, the path integral is dominated by its saddle-points -- worldsheet configurations of extremal area surface. We first suggest that the non-exponential growth of the $A_1$ Wilson loop arises from delicate cancellation among multiple --- possibly infinitely many --- saddle-points. This implies that the holographic dual geometry of the ${\cal N}=2$ $A_1$ gauge theory ought to be (AdS$_5\times {\cal M}_2) \times {\cal M}$, where the internal space ${\cal M}= [\mathbb{S}^1 \times \mathbb{S}^2]$ necessarily involves a geometry of string scale. The string worldsheet sweeps on average an extremal area surface inside AdS$_5$, but many nearby saddle-point configurations whose worldsheets sweep two-cycles over ${\cal M}$ cancel among the leading, exponential contributions of each. We next suggest that the $\hat{A}_1$ Wilson loop in the untwisted sector is given by a macroscopic string in AdS$_5 \times \mathbb{S}^5/\mathbb{Z}_2$ and hence grows exponentially with the average of the two `t Hooft coupling constants. In the twisted sector, however, it is negligibly small and scales with the difference of the two `t Hooft coupling constants. This is again due to delicate cancellation among multiple worldsheet instantons that sweep around the collapsed two-cycle at the $\mathbb{Z}_2$ orbifold fixed point. We also demonstrate that the Wilson loop expectation values can be interpolated between the $\hat{A}_1$ and $A_1$ behaviors (or vice versa) by tuning the NS-NS 2-form potential on the collapsed two-cycle from $1/2$ to $0, 1$ or vice versa.
This paper is organized as follows. In section \ref{sectionN=2}, we show that the evaluation of the expectation value of the half-BPS circular Wilson loop in a generic ${\cal N}=2$ superconformal gauge theory reduces to a related problem in a one-matrix model. The reduction procedure is based on the localization technique and parallels \cite{Pestun:2007rz}. Compared to \cite{Pestun:2007rz}, our derivations are more direct and elementary, and hence make the subsequent analysis in the planar limit far clearer physics-wise. In section \ref{asymptotic}, we evaluate the Wilson loop in the large `t Hooft coupling limit. Based on a general analysis of the one-matrix model (subsection \ref{generalMM}), we evaluate the matrix model action which is induced by the one-loop determinant (subsection \ref{subseczeta}).
As a result, we obtain a saddle-point equation whose solution provides the large `t Hooft coupling behavior of the Wilson loop (subsection \ref{saddlepoint}).
In section \ref{holography}, we discuss interpretation of these results in holographic dual string theory.
For both $A_1$ and $\hat{A}_1$ types, we argue that worldsheet instanton contributions can explain the non-analytic behavior of the exact gauge theory results.
Section \ref{discuss} is devoted to discussion, including a possible implication of the present results to our previous work \cite{Rey:2008bh} (see also \cite{Drukker:2008zx}\cite{Chen:2008bp}) on ABJM theory \cite{Aharony:2008ug}.
We relegate several technical points to the appendices. In appendix A, we summarize Killing spinors on $\mathbb{S}^4$. In appendix B, we work out the off-shell closure of the supersymmetry algebra. In appendix C, we present the asymptotic expansion of the Wilson loop. In appendix D, we present the detailed computation of $c_1$ that arises in the evaluation of the one-loop determinant.
Results of this work were previously reported at the KEK workshop and at the Strings 2009 conference. For online proceedings, see \cite{suyama} and \cite{rey}, respectively.
\vspace{1cm}
\section{Reduction to One-Matrix Model} \label{sectionN=2}
\vspace{5mm}
The work \cite{Pestun:2007rz} provided a proof of the conjecture \cite{Erickson:2000af, Drukker:2000rr} that the evaluation of the half-BPS Wilson loop in ${\cal N}=4$ super Yang-Mills theory \cite{Rey:1998ik, Maldacena:1998im} reduces to a related problem in a Gaussian Hermitian one-matrix model. In this section, we show that a similar reduction also works for ${\cal N}=2$ superconformal gauge theories of general quiver type. The resulting matrix model is, however, not Gaussian but includes non-trivial vertices due to the non-trivial one-loop determinant.
\vspace{5mm}
\subsection{From ${\cal N}=4$ to ${\cal N}=2$}
\vspace{5mm}
A shortcut route to an ${\cal N}=2$ gauge theory of general quiver type --- with matter in various representations and coupling constants of different values --- is to start with ${\cal N}=4$ super Yang-Mills theory. In this section, for completeness of our treatment, we elaborate on this route. Let $G$ be the gauge group.
The ${\cal N}=4$ theory consists of a gauge field $A_m$ with $m=1,2,3,4$, scalar fields $A_0,A_5,\cdots,A_9$ and an $SO(9,1)$ Majorana-Weyl spinor $\Psi$, all in the adjoint representation of $G$.
The action can be written compactly as
\begin{equation}
S_{{\cal N}=4} = \int_{\mathbb{R}^4} \rmd^4 x\ \, \mbox{Tr}\Bigl( -\frac14F_{MN}F^{MN}-\frac i2\bar{\Psi}\Gamma^M D_M\Psi \Bigr),
\label{N=4}
\end{equation}
where $M,N=0,\cdots,9$ and
\begin{eqnarray}
F_{MN} &=& \partial_MA_N-\partial_NA_M-ig[A_M,A_N], \\
D_M\Psi &=& \partial_M\Psi-ig[A_M,\Psi], \\
\Gamma \Psi &=& +\Psi.
\end{eqnarray}
Note that the metric of the base manifold $\mathbb{R}^4$ is taken in the Euclidean signature, while the ten-dimensional ``metric'' $\eta^{MN}$ is taken Lorentzian with $\eta^{00}=-1$.
As usual in the dimensional reduction, the derivatives other than $\partial_m$ are set to zero.
The action (\ref{N=4}) is invariant under the supersymmetry transformations
\begin{eqnarray}
\delta A_M &=& -i\bar{\xi}\Gamma_M\Psi,
\label{SUSY1} \\
\delta \Psi &=& \frac12F_{MN}\Gamma^{MN}\xi,
\label{SUSY2}
\end{eqnarray}
where $\xi$ is a constant $SO(9,1)$ Majorana-Weyl spinor-valued supersymmetry parameter satisfying the chirality condition $\Gamma\xi=+\xi$.
In what follows, we rewrite the action (\ref{N=4}) so that the resulting action provides a useful guide to deduce the action of an ${\cal N}=2$ gauge theory with hypermultiplet fields of arbitrary representations.
\vspace{5mm}
We first choose which half of the supercharges of the ${\cal N}=4$ supersymmetry is to be preserved.
This choice corresponds to the choice of embedding the SU(2) R-symmetry of ${\cal N}=2$ theory into the SU(4) R-symmetry of the ${\cal N}=4$ theory. Consider one such embedding defined by the matrix
\begin{equation}
M := \left(
\begin{array}{cc}
x_6+ix_7 & -(x_8-ix_9) \\ x_8+ix_9 & \, x_6-ix_7
\end{array}
\right).
\label{M}
\end{equation}
Its determinant is
\begin{equation}
\det M = (x_6)^2+(x_7)^2+(x_8)^2+(x_9)^2,
\end{equation}
so it is obvious that any transformation of the form
\begin{equation}
M \to g_LMg_R, \hspace{10mm} g_L\in \mbox{SU}(2)_L, \hspace{5mm} g_R\in \mbox{SU}(2)_R
\label{SO(4)}
\end{equation}
belongs to the SO(4) transformation acting on $(x_6,\cdots,x_9)\in\mathbb{R}^4$.
Note that this transformation preserves the embedding (\ref{M}).
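This SO(4) statement admits a quick numerical check: $\det M$ equals the Euclidean norm-squared of $(x_6,\ldots,x_9)$, the determinant is invariant under $M \to g_L M g_R$, and the quaternionic form (\ref{M}) is preserved. A minimal sketch (the random seed and the quaternionic SU(2) parametrization are incidental choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x6, x7, x8, x9):
    """The matrix M built from a real 4-vector, as in eq. (M) of the text."""
    return np.array([[x6 + 1j * x7, -(x8 - 1j * x9)],
                     [x8 + 1j * x9,   x6 - 1j * x7]])

def random_su2():
    """Random SU(2) element from a normalized quaternion (a, b, c, d)."""
    v = rng.normal(size=4)
    a, b, c, d = v / np.linalg.norm(v)
    return np.array([[a + 1j * b,  c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

x = rng.normal(size=4)
M = embed(*x)
assert np.allclose(np.linalg.det(M), np.dot(x, x))       # det M = |x|^2

gL, gR = random_su2(), random_su2()
M2 = gL @ M @ gR
assert np.allclose(np.linalg.det(M2), np.linalg.det(M))  # |x|^2 preserved: SO(4)
assert np.allclose(M2[0, 0], np.conj(M2[1, 1]))          # the quaternionic form
assert np.allclose(M2[1, 0], -np.conj(M2[0, 1]))         # of M is preserved
```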
In the ten-dimensional language, SU(4) R-symmetry of the ${\cal N}=4$ theory is realized as the rotational symmetry SO(6) of $\mathbb{R}^6$.
Therefore, one embedding of SU(2) R-symmetry into SU(4) is chosen by selecting SU$(2)_L$ or SU$(2)_R$.
We choose the latter as the R-symmetry of the ${\cal N}=2$ theories.
There is a U(1) subgroup of SU$(2)_L$ generated by $\sigma^3$.
Let $R(\theta)$ be an element of this U(1).
This is $\theta$-rotation in 67-plane and $(-\theta)$-rotation in 89-plane.
In the following, we require that the supercharges preserved in ${\cal N}=2$ theory should be invariant under the $R(\theta)$. For an infinitesimal $\theta$, $R(\theta)$ acts on the supersymmetry transformation parameter $\xi$ as
\begin{equation}
\delta_\theta\xi = -\frac12\theta(\Gamma^6\Gamma^7-\Gamma^8\Gamma^9)\xi.
\end{equation}
Therefore, $\xi$ should satisfy
\begin{equation}
\Gamma^{6789}\xi = -\xi,
\label{half}
\end{equation}
selecting eight components out of the original sixteen ones.
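The counting can be illustrated in a toy 4-component model of the internal Clifford algebra: for any representation of four anticommuting matrices that square to one, the product analogous to $\Gamma^{6789}$ squares to the identity and is traceless, so each eigenvalue $\pm1$ occupies exactly half of the spinor space. A sketch (the Kronecker-product representation below is an arbitrary choice, and 4 components stand in for the 16 chiral components of the text):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# A 4x4 representation of the Euclidean Clifford algebra {G^s, G^t} = 2 d^{st}
G6 = np.kron(s1, I2)
G7 = np.kron(s2, I2)
G8 = np.kron(s3, s1)
G9 = np.kron(s3, s2)
gammas = [G6, G7, G8, G9]

for i, Gi in enumerate(gammas):
    for j, Gj in enumerate(gammas):
        acom = Gi @ Gj + Gj @ Gi
        assert np.allclose(acom, 2 * np.eye(4) if i == j else np.zeros((4, 4)))

G6789 = G6 @ G7 @ G8 @ G9
# G6789 squares to one and is traceless, so eigenvalues +1 and -1 come in
# equal numbers: the condition G6789 xi = -xi keeps exactly half the components.
assert np.allclose(G6789 @ G6789, np.eye(4))
assert abs(np.trace(G6789)) < 1e-12
evals = np.linalg.eigvalsh(G6789)   # G6789 is Hermitian in this representation
assert np.allclose(sorted(evals), [-1, -1, 1, 1])
```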
The scalar fields $A_s$ with $s=6,7,8,9$ can be combined into the doublet $q^\alpha$ ($\alpha = 1, 2$) of SU$(2)_R$ as
\begin{equation}
q^1 := \frac1{\sqrt{2}}(A_6-iA_7), \qquad q^2 := -\frac1{\sqrt{2}}(A_8+iA_9),
\end{equation}
and their conjugates $q_\alpha = (q^\alpha)^\dag$.
Gamma matrices $\gamma^\alpha,\gamma_\alpha$ are defined similarly in terms of $\Gamma^s$.
They satisfy
\begin{equation}
\{\gamma^\alpha,\gamma_\beta\} = 2\delta^\alpha_\beta, \hspace{5mm}
\{\gamma^\alpha,\gamma^\beta\} = 0 = \{\gamma_\alpha,\gamma_\beta\}.
\end{equation}
Note that, for arbitrary vectors $V^s$ and $W^s$, one has
\begin{equation}
V_sW^s = V_\alpha W^\alpha+V^\alpha W_\alpha.
\end{equation}
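This contraction identity follows directly from the doublet definitions and is easy to confirm numerically for real internal vectors (the random inputs below are incidental):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=4)   # components (V^6, V^7, V^8, V^9)
W = rng.normal(size=4)

def doublet(x):
    """SU(2)_R doublet built from a real internal 4-vector:
    q^1 = (x6 - i x7)/sqrt(2),  q^2 = -(x8 + i x9)/sqrt(2)."""
    return np.array([x[0] - 1j * x[1], -(x[2] + 1j * x[3])]) / np.sqrt(2)

Vd, Wd = doublet(V), doublet(W)
lhs = np.dot(V, W)                       # V_s W^s
rhs = np.vdot(Vd, Wd) + np.vdot(Wd, Vd)  # V_alpha W^alpha + V^alpha W_alpha
assert abs(rhs.imag) < 1e-12
assert np.allclose(lhs, rhs.real)
```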
The Majorana-Weyl spinor $\Psi$ is split into the chirality eigenstates with respect to
$\Gamma^{6789}$ as follows:
\begin{equation}
\lambda := \frac{1}{2} (1-\Gamma^{6789}) \ \Psi, \qquad \quad \eta := \frac{1}{2} (1+\Gamma^{6789}) \ \Psi.
\end{equation}
Both fermions are Majorana-Weyl. We further split $\eta$ into $\eta_\pm$, which are eigenstates of
\begin{equation}
\gamma := \frac12[\gamma^\alpha, \gamma_\alpha] = \frac i2(\Gamma^6\Gamma^7-\Gamma^8\Gamma^9) \, .
\end{equation}
Note that $\gamma$ is the generator for $R(\theta)$ and hence satisfies
\begin{equation}
\gamma^2 = \frac12(1+\Gamma^{6789}), \qquad [\gamma,\gamma^\alpha] = + \gamma^\alpha, \qquad
[\gamma,\gamma_\alpha] = -\gamma_\alpha.
\end{equation}
Now, $\eta_\pm$ are not Majorana-Weyl.
In fact, they are related {\sl by charge conjugation}
\begin{equation}
(\eta^A_\pm)^{*} = {\cal C} \, \eta^A_\mp,
\end{equation}
where $A$ is the index for the adjoint representation of $G$ and ${\cal C}$ is the complex conjugation matrix.
So, we shall denote $\eta_-$ by $\psi$. Then, modulo a phase factor, $\eta_+$ is $\psi^\dagger$.
In terms of $A_\mu$ $(\mu=0,\cdots,5)$, $q^\alpha, q_\alpha$, $\lambda$ and $\psi$, the action (\ref{N=4}) can be written as
\begin{eqnarray}
S_{{\cal N}=4}
&=& \int_{\mathbb{R}^4} \rmd^4 x\, \mbox{Tr}\Bigl( -\frac14F_{\mu\nu}F^{\mu\nu}-D_\mu q_\alpha D^\mu q^\alpha
-\frac i2\bar{\lambda}\Gamma^\mu D_\mu\lambda-i\bar{\psi}\Gamma^\mu D_\mu\psi \nonumber \\
& & -g\bar{\lambda}\gamma^\alpha[q_\alpha,\psi]-g\bar{\psi}\gamma_\alpha[q^\alpha,\lambda]
-g^2[q_\alpha,q^\beta][q_\beta,q^\alpha]+\frac12g^2[q_\alpha,q^\alpha][q_\beta,q^\beta] \Bigr),
\label{N=4'}
\end{eqnarray}
with the understanding that the dimensional reduction sets $\partial_\mu = 0$ for $\mu =0,5$.
The supersymmetry transformations (\ref{SUSY1}),(\ref{SUSY2}) can be written as
\begin{eqnarray}
\delta A_\mu &=& -i\bar{\xi}\Gamma_\mu\lambda, \\
\delta q^\alpha &=& -i\bar{\xi}\gamma^\alpha\psi, \\
\delta q_\alpha &=& - i \overline{\psi} \gamma_\alpha \xi, \\
\delta \lambda &=& + \frac12F_{\mu\nu}\Gamma^{\mu\nu}\xi-ig[q_\alpha,q^\beta]\gamma^\alpha{}_\beta\xi, \\
\delta \psi &=& + D_\mu q^\alpha\Gamma^\mu\gamma_\alpha\xi.
\end{eqnarray}
Again, if $\xi$ obeys the projection condition (\ref{half}), the action (\ref{N=4'}) has ${\cal N}=2$ supersymmetry.
\vspace{5mm}
At this stage, we shall be explicit of representation contents of $(q^\alpha, \psi)$ fields and their conjugates.
Let $(T^A)^B_C=-if^{AB}_C$ be the generators of Lie$(G)$ in the adjoint representation. We also impose on $\xi$ the projection condition (\ref{half}). In terms of them, the action (\ref{N=4'}) can be written as
\begin{eqnarray}
S_{{\cal N}=2}
&=& \int_{\mathbb{R}^4} \rmd^4 x \, \Bigl( -\frac14\mbox{tr}(F_{\mu\nu}F^{\mu\nu})
-\frac i2\mbox{tr}(\bar{\lambda}\Gamma^\mu D_\mu\lambda) - D_\mu q_\alpha D^\mu q^\alpha
-i\bar{\psi}\Gamma^\mu D_\mu\psi \nonumber \\
& & \hspace*{0.3cm} +g\bar{\lambda}^A\gamma^\alpha q_\alpha T_A\psi+g\bar{\psi}\gamma_\alpha T_Aq^\alpha \lambda^A
-g^2(q_\alpha T^Aq^\beta)^2+\frac12g^2(q_\alpha T_Aq^\alpha)^2 \Bigr),
\label{N=2}
\end{eqnarray}
where the gauge covariant derivatives are
\begin{eqnarray}
D_\mu q^\alpha &=& \partial_\mu q^\alpha-iA_\mu^AT_Aq^\alpha, \\
D_\mu q_\alpha &=& \partial_\mu q_\alpha+iq_\alpha T_AA_\mu^A, \\
D_\mu\psi &=& \partial_\mu\psi-iA_\mu^AT_A\psi.
\end{eqnarray}
The ${\cal N}=2$ supersymmetry transformation rules are
\begin{eqnarray}
\delta A_\mu &=& -i\bar{\xi}\Gamma_\mu\lambda,
\label{SUSY3} \\
\delta \lambda^A &=& + \frac12F^A_{\mu\nu}\Gamma^{\mu\nu}\xi+iq_\alpha T^Aq^\beta\gamma^\alpha{}_\beta\xi, \\
\delta q^\alpha &=& -i\bar{\xi}\gamma^\alpha\psi, \\
\delta q_\alpha &=& -i \bar{\psi} \gamma_\alpha \xi, \\
\delta \psi &=& + D_\mu q^\alpha\Gamma^\mu\gamma_\alpha\xi.
\label{SUSY4}
\end{eqnarray}
The above action (\ref{N=2}) is equivalent to the original action (\ref{N=4}): we have just rewritten the original action in terms of renamed component fields.
The supersymmetry transformations (\ref{SUSY3})-(\ref{SUSY4}) are also equivalent to
(\ref{SUSY1}) - (\ref{SUSY2}) in so far as $\xi$ is projected to ${\cal N}=2$ supersymmetry as (\ref{half}).
\vspace{5mm}
It turns out that the action (\ref{N=2}) is invariant under ${\cal N}=2$ supersymmetry transformations (\ref{SUSY3})-(\ref{SUSY4}) even for $T^A$ in a generic representation $R$ of the gauge group $G$, which
can also be reducible. Therefore, (\ref{N=2}) defines an ${\cal N}=2$ gauge theory with matter fields $(q^\alpha, \psi)$ in the representation $R$ and their conjugates.
It is also possible to treat $\hat{A}_{k-1}$ quiver gauge theories on the same footing. We embed the orbifold action $\mathbb{Z}_k$ into SU$(2)_L$. In this paper, we shall focus on $\hat{A}_1$ quiver gauge theory. In this case, we should substitute
\begin{eqnarray}
A_\mu = \left(
\begin{array}{cc}
{A_\mu}^{(1)} & \\ & {A_\mu}^{(2)}
\end{array}
\right), &\hspace{5mm}&
\lambda = \left(
\begin{array}{cc}
\lambda^{(1)} \, & \\ & \, \lambda^{(2)}
\end{array}
\right), \nonumber \\
q^\alpha = \left(
\begin{array}{cc}
& q^{(1)\alpha} \\ q^{(2)\alpha} &
\end{array}
\right), &\hspace{5mm}&
\psi = \left(
\begin{array}{cc}
& \psi^{(1)} \\ \psi^{(2)} &
\end{array}
\right).
\end{eqnarray}
into (\ref{N=4'}).
Note that the ${\cal N}=2$ supersymmetry (\ref{SUSY3})-(\ref{SUSY4}) is preserved even when the gauge coupling constant $g$ is replaced with
the matrix-valued one:
\begin{equation}
g = \left(
\begin{array}{cc}
g_1 \, \mathbb{I} & \\ & g_2 \, \mathbb{I}
\end{array}
\right).
\end{equation}
In general, $g_1 \ne g_2$ and can be extended to complex domain. Extension to $\hat{A}_k (k \ge 2)$ is straightforward.
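The block structure of this substitution can be illustrated numerically: conjugation by the $\mathbb{Z}_2$ element $\gamma=\mathrm{diag}(\mathbb{I},-\mathbb{I})$ of the Chan-Paton space leaves the block-diagonal vector multiplet invariant and flips the sign of the block-off-diagonal hypermultiplet; in the orbifold projection that sign is compensated by the $\mathbb{Z}_2\subset$ SU$(2)_L$ action on the R-symmetry index, so both sets of fields survive. A schematic sketch (block size $N=3$ and random entries are incidental choices):

```python
import numpy as np

N = 3
rng = np.random.default_rng(2)
blk = lambda: rng.normal(size=(N, N))
Z = np.zeros((N, N))

# Z2 orbifold element embedded in the SU(N) x SU(N) Chan-Paton space
g = np.block([[np.eye(N), Z], [Z, -np.eye(N)]])

A = np.block([[blk(), Z], [Z, blk()]])   # vector multiplet: block-diagonal
q = np.block([[Z, blk()], [blk(), Z]])   # hypermultiplet: block-off-diagonal

assert np.allclose(g @ g, np.eye(2 * N))  # g is its own inverse
assert np.allclose(g @ A @ g, A)          # vector multiplet is Z2-even
assert np.allclose(g @ q @ g, -q)         # hypermultiplet is Z2-odd;
                                          # compensated by the SU(2)_L action
```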
\vspace{1cm}
\subsection{Superconformal symmetry on $\mathbb{S}^4$}
\vspace{5mm}
Following \cite{Pestun:2007rz}, we now define the ${\cal N}=2$ superconformal gauge theory on $\mathbb{S}^4$ of radius $r$. For definiteness, we consider the round-sphere with the metric $h_{mn}$ induced through the standard stereographic projection. Details are summarized in Appendix \ref{spinor}.
For this purpose, it also turns out convenient to start with ${\cal N}=4$ super Yang-Mills theory defined on $\mathbb{S}^4$. To maintain conformal invariance, the scalars ought to have the conformal coupling to the curvature scalar of $\mathbb{S}^4$. The action thus reads
\begin{equation}
S_{{\cal N}=4} = \int_{\mathbb{S}^4} \rmd^4 x \, \sqrt{h}\ \, \mbox{Tr}\Bigl( -\frac14F_{MN}F^{MN}-\frac1{r^2}A_SA^S -\frac i2\bar{\Psi}\Gamma^M D_M\Psi \Bigr),
\label{N=4SC}
\end{equation}
where $S=0,5,6,\cdots,9$.
The action is invariant under the ${\cal N}=4$ supersymmetry transformations
\begin{eqnarray}
\delta A_M \! &=& -i \, \overline{\xi}\Gamma_M\Psi, \\
\delta \Psi \, &=& + \frac12F_{MN}\Gamma^{MN}\xi-2\Gamma^SA_S\widetilde{\xi} \ ,
\end{eqnarray}
provided that $\xi$ and $\widetilde{\xi}$ satisfy the conformal Killing equations:
\begin{equation}
\nabla_m\xi = \Gamma_m\widetilde{\xi}, \qquad \nabla_m\widetilde{\xi} = -\frac1{4r^2}\Gamma_m\xi.
\label{Killing}
\end{equation}
Explicit form of the solution to these equations are given in Appendix \ref{spinor}.
The action of an ${\cal N}=2$ gauge theory on $\mathbb{S}^4$ with a hypermultiplet of representation $R$
can be deduced easily as in the previous subsection. One obtains
\begin{eqnarray}
S_{{\cal N}=2}
&=& \int_{{\mathbb{ S}}^4} \rmd^4 x\, \sqrt{h} \, \Bigl( -\frac14\mbox{Tr}(F_{\mu\nu}F^{\mu\nu})
-\frac i2\mbox{Tr}(\bar{\lambda}\Gamma^\mu D_\mu\lambda) -\frac1{r^2}\mbox{Tr}(A_aA^a)
\nonumber \\
& & \hspace*{1cm} - \, D_\mu q_\alpha D^\mu q^\alpha -i\bar{\psi}\Gamma^\mu D_\mu\psi -\frac2{r^2}q_\alpha q^\alpha \nonumber \\
& & \hspace*{1cm} + \, g\bar{\lambda}^A\gamma^\alpha q_\alpha T_A\psi+g\bar{\psi}\gamma_\alpha T_Aq^\alpha \lambda^A
-g^2(q_\alpha T^Aq^\beta)^2+\frac12g^2(q_\alpha T_Aq^\alpha)^2 \Bigr),
\label{action-on-s4}
\end{eqnarray}
where $a=0,5$.
The action is invariant under the ${\cal N}=2$ superconformal symmetry
\begin{eqnarray}
\delta A_\mu &=& -i\, \overline{\xi}\Gamma_\mu\lambda, \nonumber \\
\delta \lambda^A &=& + \frac12F^A_{\mu\nu}\Gamma^{\mu\nu}\xi+igq_\alpha T^Aq^\beta\gamma^\alpha{}_\beta\xi
-2\Gamma^aA_a^A\widetilde{\xi}, \nonumber \\
\delta q^\alpha &=& -i\, \overline{\xi}\gamma^\alpha \psi, \nonumber \\
\delta q_\alpha &=& - i \, \overline{\psi} \gamma_\alpha \xi, \nonumber \\
\delta \,\psi \, &=& + D_\mu q^\alpha\Gamma^\mu\gamma_\alpha\xi-2\gamma_\alpha q^\alpha\widetilde{\xi} \, ,
\nonumber
\end{eqnarray}
where $\xi$ satisfies the conformal Killing equations (\ref{Killing}) in addition to the projection condition (\ref{half}). We emphasize that this is the transformation of the ${\cal N}=2$ superconformal symmetry, not just the Poincar\'e part of it. This can be checked explicitly, for example, by examining the commutator of two transformations on the fields.
\vspace{5mm}
We find it convenient to define a fermionic transformation $Q$ corresponding to the above superconformal transformation $\delta$. It is obtained easily by the replacement $\delta\to\theta Q$ and $\xi\to\theta\xi$ with $\theta$ a real Grassmann parameter.
The resulting transformation is
\begin{eqnarray}
Q A_\mu &=& -i\overline{\xi}\Gamma_\mu\lambda, \nonumber \\
Q \lambda^A &=& + \frac12F^A_{\mu\nu}\Gamma^{\mu\nu}\xi+igq_\alpha T^Aq^\beta\gamma^\alpha{}_\beta\xi
-2\Gamma^aA_a^A\widetilde{\xi}, \nonumber \\
Q q^\alpha &=& -i\overline{\xi}\gamma^\alpha\psi, \nonumber \\
Q q_\alpha &=& -i\overline{\psi}\gamma_\alpha\xi, \nonumber \\
Q \psi &=& + D_\mu q^\alpha\Gamma^\mu\gamma_\alpha\xi-2\gamma_\alpha q^\alpha\tilde{\xi},
\end{eqnarray}
where now $\xi$ and $\widetilde{\xi}$ are {\it bosonic} SO(9,1) Majorana-Weyl spinors satisfying ${\cal N}=2$ projection (\ref{half}) and conformal Killing equation (\ref{Killing}).
\vspace{1cm}
\subsection{Localization}
\vspace{5mm}
By extending the localization technique of \cite{Pestun:2007rz}, we now show that the computation of the Wilson loop expectation value in an ${\cal N}=2$ superconformal gauge theory of quiver type can be reduced to the computation of a one-matrix integral.
Let $\mathfrak{Q}$ be a fermionic transformation. Suppose that an action $S$ under consideration is invariant under $\mathfrak{Q}$. Then, the following modification
\begin{equation}
S(t) := S + t\int \rmd^4x \, \sqrt{h}\, \mathfrak{Q} V(x)
\end{equation}
does not change the partition function provided that
\begin{equation}
\int \rmd^4x\, \sqrt{h}\, \mathfrak{Q}^2 V(x) = 0.
\label{Q^2V}
\end{equation}
Likewise, correlation functions remain unchanged if operators under consideration are $\mathfrak{Q}$-invariant.
We shall choose $V(x)$ such that the bosonic part of $\mathfrak{Q} V(x)$ is positive semi-definite.
For this choice, since $t$ can be chosen to be an arbitrary value, we can take the limit $t\to+\infty$ so that the path-integral is localized to configurations where the bosonic part of $\mathfrak{Q} V(x)$ vanishes.
It will turn out later that the vanishing locus of $\mathfrak{Q} V(x)$ is parametrized by a constant matrix. This is why the evaluation of the expectation value of a $\mathfrak{Q}$-invariant operator reduces to a matrix integral. The action of the resulting
matrix model is the sum of $S$ evaluated at the vanishing locus and the one-loop determinant obtained from the quadratic terms of $\mathfrak{Q} V(x)$ when expanded around the vanishing locus.
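The $t$-independence of the partition function can be made explicit in one line. Assuming the path-integral measure is $\mathfrak{Q}$-invariant (and suppressing gauge-fixing terms), the standard manipulation reads

```latex
\begin{equation}
\frac{\rmd}{\rmd t}Z(t)
 = -\int {\cal D}\Phi\ e^{-S(t)}\int \rmd^4x\,\sqrt{h}\,\mathfrak{Q} V
 = -\int {\cal D}\Phi\ \mathfrak{Q}\Bigl( e^{-S(t)}\int \rmd^4x\,\sqrt{h}\, V \Bigr)
 = 0,
\end{equation}
```

where the second equality uses $\mathfrak{Q}S(t) = \mathfrak{Q}S + t\int \rmd^4x\,\sqrt{h}\,\mathfrak{Q}^2V = 0$, which holds by (\ref{Q^2V}), and the last equality holds because the path integral of a total $\mathfrak{Q}$-variation vanishes.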
\vspace{5mm}
One might think that the fermionic transformation $Q$ defined in the previous section can be used as $\mathfrak{Q}$ above. In fact, $Q^2$ is a sum of bosonic transformations, and therefore, (\ref{Q^2V}) appears to hold as long as $V(x)$ is invariant under those transformations. The problem with this choice is that $Q^2$ is such a sum only on-shell. Following \cite{Berkovits:1993zz},\cite{Evans:1994cb} and \cite{Baulieu:2007ew}, $Q$ has to be modified so that the resulting $\mathfrak{Q}$ closes into a sum of bosonic transformations off-shell.
To this end, we introduce auxiliary fields $K^{\dot{m}}$ $(\dot{m}=\hat{2},\hat{3},\hat{4})$, $K^\alpha$ and $K_\alpha$. They transform in the adjoint, $R$ and $\bar{R}$ representations of the gauge group $G$, respectively.
Utilizing them, we modify the action (\ref{action-on-s4}) in a trivial manner:
\begin{eqnarray}
S_{{\cal N}=2}
&=& \int_{\mathbb{S}^4} \rmd^4 x\, \Bigl( -\frac14\mbox{Tr}(F_{\mu\nu}F^{\mu\nu})
-\frac i2\mbox{Tr}(\bar{\lambda}\Gamma^\mu D_\mu\lambda)-\frac1{r^2}\mbox{Tr}(A_aA^a) \nonumber \\
&& \hspace*{1.5cm} -\, D_\mu q_\alpha D^\mu q^\alpha
-i\bar{\psi}\Gamma^\mu D_\mu\psi -\frac2{r^2}q_\alpha q^\alpha \nonumber \\
& & \hspace*{1.5cm} +\, g\bar{\lambda}^A\gamma^\alpha q_\alpha T_A\psi+g\bar{\psi}\gamma_\alpha T_Aq^\alpha \lambda^A -g^2(q_\alpha T^Aq^\beta)^2+\frac12g^2(q_\alpha T_Aq^\alpha)^2 \nonumber \\
& & \hspace*{1.5cm} +\, \frac12K^{\dot{m}}K_{\dot{m}}+K_\alpha K^\alpha \Bigr).
\label{modified}
\end{eqnarray}
Evidently, this action is physically equivalent to the original one.
The modified action (\ref{modified}) is now invariant under the following $\mathfrak{Q}$ transformations:
\begin{eqnarray}
\mathfrak{Q} \, A_\mu \, &=& -i\overline{\xi}\Gamma_\mu\lambda, \nonumber \\
\mathfrak{Q} \, \lambda^A &=& + \frac12F^A_{\mu\nu}\Gamma^{\mu\nu}\xi+igq_\alpha T^Aq^\beta\gamma^\alpha{}_\beta\xi
-2\Gamma^aA_a^A\widetilde{\xi}+K^{\dot{m}A}\nu_{\dot{m}}, \nonumber \\
\mathfrak{Q} \, q^\alpha \, &=& -i\overline{\xi}\gamma^\alpha\psi, \nonumber \\
\mathfrak{Q} \, q_\alpha &=& -i\overline{\psi}\gamma_\alpha\xi, \nonumber \\
\mathfrak{Q} \, \psi \,\, &=& + D_\mu q^\alpha\Gamma^\mu\gamma_\alpha\xi-2\gamma_\alpha q^\alpha\widetilde{\xi}+K^\alpha\nu_\alpha, \nonumber \\
\mathfrak{Q} \, \overline{\psi} \,\, &=& + D_\mu q_\alpha\bar{\xi}\gamma^\alpha\Gamma^\mu+2\bar{\widetilde{\xi}}\gamma^\alpha q_\alpha
+K_\alpha\overline{\nu}^\alpha, \nonumber \\
\mathfrak{Q} K^{\dot{m}A} \!\!\! &=& -\overline{\nu}^{\dot{m}}\Bigl( -i\Gamma^\mu D_\mu\lambda^A+g\gamma^\alpha q_\alpha T^A\psi
-g\gamma_\alpha\psi^*T^Aq^\alpha \Bigr), \nonumber \\
\mathfrak{Q} K^\alpha \, &=& -\overline{\nu}^\alpha\Bigl( -i\Gamma^\mu D_\mu\psi+\gamma_\beta T_Aq^\beta g\lambda^A \Bigr), \nonumber \\
\mathfrak{Q} K_\alpha \, &=& -\Bigl( -iD_\mu\overline{\psi}\Gamma^\mu-g\bar{\lambda}^A\gamma^\beta q_\beta T_A \Bigr) \nu_\alpha.
\end{eqnarray}
To make $\mathfrak{Q}^2$ close to a sum of bosonic transformations off-shell, the spinors $\nu^{\dot{m}}$, $\nu^\alpha$, $\overline{\nu}_\alpha$ should be chosen appropriately out of $\xi, \widetilde{\xi}$. Details on them are summarized in Appendix \ref{spinor2}.
With the correct choice, $\mathfrak{Q}^2$ closes, for example, on $\lambda$ as follows:
\begin{equation}
-i\mathfrak{Q}^2\lambda \,
= \, \left( v^m\nabla_m\lambda-\frac12(\overline{\xi}\Gamma_{mn}\widetilde{\xi})\Gamma^{mn}\lambda-ig[v^\mu A_\mu,\lambda] \right)
+\frac12(\overline{\xi}\Gamma_{st}\widetilde{\xi})\Gamma^{st}\lambda.
\end{equation}
This shows that $\mathfrak{Q}^2$ is a sum of a diffeomorphism on $\mathbb{S}^4$, a $G$ gauge transformation and a global SU$(2)_R$ transformation. In particular, notice that $\overline{\xi}\Gamma_{st}\tilde{\xi}$ turns out to be independent of $x^m$. The action of $\mathfrak{Q}^2$ on the auxiliary fields is slightly different.
For example, on $K^{\dot{m}}$, one obtains
\begin{equation}
-i\mathfrak{Q}^2K^{\dot{m}}
= v^k\nabla_kK^{\dot{m}}-ig[v^\mu A_\mu,K^{\dot{m}}]+\bar{\nu}^{\dot{m}}\Gamma^k\nabla_k\nu_{\dot{n}}K^{\dot{n}}.
\end{equation}
Here, the index $\dot{m}$ does not transform as a part of the four-vector on $\mathbb{S}^4$.
This is not a problem since $K^{\dot{m}}$ is contracted with $\nu_{\dot{m}}$ in $V$ defined below,
and not with some other four-vectors.
The $\mathfrak{Q}$ defined above is the appropriate transformation for the localization procedure.
\vspace{5mm}
We are now in a position to choose $V$. We take
\begin{equation}
V := \mbox{Tr}(V_\lambda\lambda)+V_\psi\psi+\bar{\psi}V_{\bar{\psi}},
\end{equation}
where
\begin{eqnarray}
V_\lambda &=& \frac12F_{\mu\nu}\overline{\xi}\Gamma^0\Gamma^{\mu\nu}+igq_\alpha T^Aq^\beta t_A\overline{\xi}\Gamma^0\gamma^\alpha{}_\beta
+2\overline{\tilde{\xi}}\Gamma^0\Gamma^aA_a+K^{\dot{m}}\overline{\nu}_{\dot{m}}\Gamma^0, \\
V_\psi &=& D_\mu q_\alpha\overline{\xi}\Gamma^0\Gamma^\mu\gamma^\alpha+2\overline{\tilde{\xi}}\Gamma^0\gamma^\alpha q_\alpha
+K_\alpha\overline{\nu}^\alpha\Gamma^0, \\
V_{\bar{\psi}} &=& D_\mu q^\alpha\gamma_\alpha\Gamma^\mu\Gamma^0\xi-2\gamma_\alpha q^\alpha\Gamma^0\tilde{\xi}
+K^\alpha\Gamma^0\nu_\alpha.
\end{eqnarray}
Note that $V$ is a scalar with respect to a particular combination of the diffeomorphism on $\mathbb{S}^4$, the $G$ gauge transformation and the global SU$(2)_R$ transformation. This follows from the identities for the spinors, for example,
\begin{equation}
v^m\nabla_m\xi-\frac12(\overline{\xi}\Gamma_{mn}\widetilde{\xi})\Gamma^{mn}\xi+\frac12(\overline{\xi}\Gamma_{st}
\widetilde{\xi})\Gamma^{st}\xi
= 0,
\end{equation}
and similar ones for $\widetilde{\xi}$ and $\nu^I$ which are summarized in Appendix \ref{spinor} and \ref{spinor2}. Therefore, (\ref{Q^2V}) is satisfied with this choice, as required.
After straightforward but tedious algebra, one obtains the bosonic part of $\mathfrak{Q} V$ expressed as
\begin{eqnarray}
& & \mbox{Tr}(V_\lambda \mathfrak{Q} \lambda)+V_\psi \mathfrak{Q} \psi+\mathfrak{Q} \overline{\psi}V_{\overline{\psi}}\, \Big|_{\rm bosonic} \nonumber \\
&=& \mbox{Tr}\Bigl[ \cos^2\frac\theta2(F^+_{mn}+w^+_{mn}A_5)^2+\sin^2\frac\theta2(F^-_{mn}+w^-_{mn}A_5)^2
-(K^{\dot{m}}-2A_0\overline{\nu}^{\dot{m}}\tilde{\xi})^2 \nonumber \\
& & +D_mA_aD^mA^a-\frac12g^2[A_a,A_b]^2
+g^2t_At_B(2q_\alpha T^Aq^\beta q_\beta T^Bq^\alpha-q_\alpha T^A q^\alpha q_\beta T^B q^\beta)
\Bigr] \nonumber \\
& & +2D_0q_\alpha D^0q^\alpha+2|D_{\dot{\mu}}q^\alpha+\overline{\xi}\Gamma^0{}_{\dot{\mu}}\gamma^\alpha{}_\beta\widetilde{\xi} q^\beta|^2 +\frac3{2r^2}q_\alpha q^\alpha-2K_\alpha K^\alpha,
\end{eqnarray}
where $\theta$ is the polar angle on $\mathbb{S}^4$, $\dot{\mu}=1,2,\cdots,5$ and
\begin{eqnarray}
w^{+}_{mn}
&:=& \frac1{\cos^2\frac\theta2}\overline{\xi}\Gamma^{05}\Gamma_{mn}
\frac{1-\Gamma^{\hat{1}\hat{2}\hat{3}\hat{4}}}2\widetilde{\xi}, \\
w^{-}_{mn}
&:=& \frac1{\sin^2\frac\theta2}\overline{\xi}\Gamma^{05}\Gamma_{mn}
\frac{1+\Gamma^{\hat{1}\hat{2}\hat{3}\hat{4}}}2\widetilde{\xi} \ .
\end{eqnarray}
Here, the hatted indices are the Lorentz ones.
The above expression shows that, after a suitable Wick rotation for $A_0$ and the auxiliary fields, the bosonic part of $\mathfrak{Q} V$ is positive semi-definite.
Therefore, by taking the limit $t\to+\infty$, the path-integral is localized at the vanishing locus of $\mathfrak{Q} V$.
It turns out that, as in \cite{Pestun:2007rz}, non-zero fields at the vanishing locus are
\begin{equation}
A_0 = -\frac i{gr}\Phi, \qquad K^{\hat{2}} = -\frac i{gr^2}\Phi,
\label{locus}
\end{equation}
where $\Phi$ is a {\sl constant} Hermitian matrix.
The coefficients are chosen for later convenience.
Now, the path-integral is reduced to an integral over the Hermitian matrix $\Phi$.
The action of the corresponding matrix model is a sum of the action (\ref{modified})
evaluated at the vanishing locus and the one-loop determinant for the quadratic terms in $\mathfrak{Q} V$.
Note that higher-loop contributions vanish in the large $t$ limit since $t^{-1}$ plays the role of the loop-counting parameter. At the vanishing locus, the action (\ref{modified}) takes the value
\begin{equation}
S = -\int_{\mathbb{S}^4} \rmd^4 x\, \sqrt{h}\,\mbox{Tr}\Bigl( \frac1{r^2}(A_0)^2+\frac12(K^{\hat{2}})^2 \Bigr) = \frac{4\pi^2}{g^2}\, \mbox{Tr}\, \Phi^2.
\end{equation}
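Here we used $(A_0)^2=-\frac{1}{g^2r^2}\Phi^2$ and $(K^{\hat{2}})^2=-\frac{1}{g^2r^4}\Phi^2$, which follow from (\ref{locus}), together with the round-sphere volume $\mbox{Vol}(\mathbb{S}^4)=\frac{8\pi^2r^4}{3}$:
\begin{equation}
S = \frac{3}{2g^2r^4}\,\mbox{Vol}(\mathbb{S}^4)\,\mbox{Tr}\,\Phi^2
= \frac{3}{2g^2r^4}\cdot\frac{8\pi^2r^4}{3}\,\mbox{Tr}\,\Phi^2
= \frac{4\pi^2}{g^2}\,\mbox{Tr}\,\Phi^2.
\end{equation}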
An important difference from the ${\cal N}=4$ super Yang-Mills theory is that the one-loop determinant around the vanishing locus does not cancel and has a complicated functional structure.
In the next section, we show that the presence of the non-trivial one-loop determinant is crucial for determining the large 't Hooft coupling behavior of the half-BPS Wilson loop.
\vspace{5mm}
The half-BPS Wilson loop of ${\cal N}=2$ gauge theory has the following form:
\begin{equation}
W[C] := \mbox{Tr}\, P_s \exp\biggl[ ig\int_0^{2\pi} \rmd s \Bigl( \dot{x}^mA_m(x)+\theta^aA_a(x) \Bigr) \biggr].
\end{equation}
The functions $x^m(s)$, $\theta^a(s)$ are chosen appropriately to preserve half of the ${\cal N}=2$ superconformal symmetry. We shall choose $C$ to be the great circle at the equator of $\mathbb{S}^4$ (i.e. $\theta=\frac\pi2$) specified by
\begin{equation}
(x^1,x^2,x^3,x^4) = (2r\cos s, 2r\sin s,0,0),
\end{equation}
and $\theta^a$ as
\begin{equation}
\theta^0 = r, \hspace{5mm} \theta^5 = 0.
\end{equation}
For this choice, one can show that
\begin{equation}
\dot{x}^mA_m(x)+\theta^aA_a(x) = -rv^\mu A_\mu(x),
\end{equation}
where $v^\mu=\overline{\xi}\Gamma^\mu\xi$.
See Appendix \ref{spinor} for the explicit expressions of $v^\mu$.
This implies that $W[C]$ is invariant under $\mathfrak{Q}$ due to the identity
\begin{equation}
\overline{\xi}\Gamma^\mu\xi\,\overline{\xi}\Gamma_\mu\lambda = 0.
\end{equation}
Thus, we have shown that $\langle W[C] \rangle$ is calculable by a finite-dimensional matrix integral. The operator whose expectation value in the matrix model is equal to $\langle W[C] \rangle$ is
\begin{equation}
\mbox{Tr}\, \exp\Bigl( 2\pi \Phi \Bigr).
\end{equation}
Notice that it is governed solely by the constant Hermitian matrix $\Phi$. This enables us to compute the Wilson loops in terms of a matrix integral. This observation will also play a role in identifying the holographic dual geometry later.
\vspace{1cm}
\section{Wilson Loops at Large 't Hooft Coupling} \label{asymptotic}
\vspace{5mm}
We have shown that the evaluation of the Wilson loop $\langle W[C] \rangle$
reduces to a computation in a one-Hermitian-matrix model.
Still, the matrix model is too complicated to solve exactly.
In the following, we focus on the ${\cal N}=2$ superconformal gauge theories of $A_1$ type with $G=$U$(N)$ coupled to $2N$ fundamental hypermultiplets and of $\hat{A}_1$ type with $G=$U$(N)\times$U$(N)$, both in the large $N$ limit.
For these theories, we show that the large 't Hooft coupling behavior is determined by a few quantities extracted from the one-loop determinant. This allows us to evaluate the Wilson loop $\langle W[C] \rangle$ exactly in the large $N$ and large 't Hooft coupling limit.
\vspace{5mm}
\subsection{General results in one matrix model} \label{generalMM}
\vspace{5mm}
Consider a matrix model for an $N\times N$ Hermitian matrix $X$.
In the large $N$ limit, the expectation value of any operator in this model is determined by the eigenvalue density function $\rho(x)$ of the matrix $X$.
By definition, $\rho(x)$ is normalized by
\begin{equation}
\int \rmd x\, \rho(x) = 1.
\end{equation}
Let $D$ denote the support of $\rho(x)$.
We assume that\footnote{
If $X$ is traceless, the assumption is always valid since $\int \rmd x\, \rho(x) \, x =0$ must hold.
In the large $N$ limit, the contribution from the trace part is negligible. }
\begin{equation}
\min\{ D \} =: b < 0 < a := \max\{ D \}.
\end{equation}
The expectation value of the operator $\frac1N\mbox{Tr}(e^{cX})$ $(c>0)$ is given in terms of $\rho(x)$ as
\begin{eqnarray}
W
&:=& \left\langle \frac1N\mbox{Tr}(e^{cX}) \right\rangle \nonumber \\
&=& \int \rmd x\,\rho(x)\, e^{cx}.
\end{eqnarray}
By the assumption on the support $D$, the value of $W$ is bounded:
\begin{equation}
e^{cb} \le W\le e^{ca}.
\end{equation}
\begin{figure}[ht!]
\vskip1cm
\centering
\includegraphics[scale=0.68]{distribution.eps}
\caption{\small \sl
Typical distribution of the eigenvalue density $\rho$.}
\label{}
\vskip1cm
\end{figure}
We are interested in the behavior of $W$ in the limit $a\to+\infty$.
Introducing the rescaled density function $\tilde{\rho}(x)=a\rho(ax)$, $W$ is written as
\begin{equation}
W = e^{ca}\int_0^{1-\frac ba}\rmd u\,\tilde{\rho}(1-u)e^{-cau} \qquad \mbox{where} \qquad
x=a(1-u).
\end{equation}
At the right edge of the support $D$, we expect that the density cuts off with a power-law tail:
\begin{equation}
\tilde{\rho}(1-u) = \beta u^\alpha+\chi(u) \qquad \mbox{where} \qquad |\chi(u)|\le Ku^{\alpha+\epsilon}, \hspace{5mm}
u\in (0,\delta)
\label{AE}
\end{equation}
for positive constants $K,\epsilon,\delta$. See figure 2. Here, $\alpha > 0$ is the leading power of the fall-off at the right edge, and $\chi$ denotes the sub-leading remainder. Then, for large positive $a$, (\ref{AE}) leads to the following asymptotic behavior:
\begin{equation}
W \sim \beta\Gamma(\alpha+1)(ca)^{-\alpha-1}e^{ca}.
\label{estimate}
\end{equation}
Details of the derivation of (\ref{estimate}) are relegated to Appendix \ref{AEestimate}.
We have found that the large $a$ behavior of $W$ is determined by the functional
form of $\rho(x)$ in the vicinity of the right edge of its support.
In particular, we found that the leading exponential part is determined solely by the location of the right edge of the eigenvalue distribution.
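As an illustrative numerical check of (\ref{estimate}) (ours, not part of the derivation), one can take a density whose edge data are known in closed form. The semicircle density $\rho(x)=\frac{2}{\pi}\sqrt{1-x^2}$ has $a=1$, $\alpha=\frac12$, $\beta=\frac{2\sqrt2}{\pi}$, and its exponential moment is known exactly in terms of a Bessel function, $W(c)=\frac{2}{c}I_1(c)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, ive

# Toy eigenvalue density with a square-root edge: the semicircle on [-1, 1].
# Near the right edge x = 1 it behaves as rho(1-u) ~ beta * u**alpha with
# alpha = 1/2 and beta = 2*sqrt(2)/pi, so the edge formula predicts
#   W(c) := int rho(x) e^{c x} dx  ~  beta * Gamma(alpha+1) * c**(-alpha-1) * e^c .
alpha, beta = 0.5, 2.0 * np.sqrt(2.0) / np.pi
c = 100.0

# Numerical value of e^{-c} W(c) (rescaled to avoid overflow).
num, _ = quad(lambda x: (2.0 / np.pi) * np.sqrt(1.0 - x**2) * np.exp(c * (x - 1.0)),
              -1.0, 1.0, limit=200)

# Prediction from the edge data alone.
pred = beta * gamma(alpha + 1.0) * c**(-alpha - 1.0)

# Exact answer: W(c) = 2 I_1(c)/c, so e^{-c} W(c) = 2 ive(1, c)/c,
# where ive is the exponentially scaled Bessel function.
exact = 2.0 * ive(1, c) / c
print(num, pred, exact)
```

At $c=100$ the edge formula reproduces the exact value to better than one percent, the residual being the expected $O(1/ca)$ correction.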
\vspace{5mm}
For comparison, let us recall the exact form of the Wilson loop in ${\cal N}=4$ super Yang-Mills theory \cite{Erickson:2000af}, which is a special case of the $\hat{A}_0$ gauge theory.
In this case, the eigenvalue density function is given by
\begin{equation}
\rho(x) = \frac{4\pi}{\lambda}\sqrt{\frac{\lambda}{2\pi^2}-x^2} \ , \label{gaussiandensity}
\end{equation}
which is the solution of the saddle-point equation
\begin{equation}
\frac{4\pi^2}{\lambda}\phi = \int\hspace{-3.5mm}-\hspace{2mm}d\phi'\frac{\rho(\phi')}{\phi-\phi'}.
\label{SYM}
\end{equation}
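As a quick numerical sanity check (ours), one can verify that (\ref{gaussiandensity}) indeed solves (\ref{SYM}), using SciPy's principal-value quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check that the semicircle density (gaussiandensity) solves the
# saddle-point equation (SYM):
#   4 pi^2 phi / lambda = P.V. int dphi' rho(phi') / (phi - phi').
lam = 3.7                                # an arbitrary 't Hooft coupling
R = np.sqrt(lam / (2.0 * np.pi**2))      # edge of the eigenvalue support

def rho(x):
    return (4.0 * np.pi / lam) * np.sqrt(max(R**2 - x**2, 0.0))

norm, _ = quad(rho, -R, R)               # should be 1 (unit normalization)

phi = 0.4 * R                            # a point inside the support
# quad with weight='cauchy' computes P.V. int f(x)/(x - wvar) dx;
# the saddle-point integral is minus that.
pv, _ = quad(rho, -R, R, weight='cauchy', wvar=phi)
lhs = 4.0 * np.pi**2 * phi / lam
print(norm, lhs, -pv)
```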
The Wilson loop is evaluated as follows:
\begin{eqnarray}
\langle W[C] \rangle
&=& \frac{4\pi}{\lambda}\int_{-\sqrt{\lambda/2}/\pi}^{+\sqrt{\lambda/2}/\pi} dx\,e^{2\pi x}\sqrt{\frac{\lambda}{2\pi^2}-x^2}
\nonumber \\
&=& \frac2{\sqrt{2\lambda}}I_1(\sqrt{2\lambda}) \nonumber \\
&\sim& \sqrt{\frac2\pi}(2\lambda)^{-\frac34}e^{\sqrt{2\lambda}}.
\end{eqnarray}
We see that this asymptotic behavior is reproduced exactly by (\ref{estimate}) with $\alpha = {1 \over 2}$, the edge exponent
of (\ref{gaussiandensity})\footnote{
Here, the definition of the gauge coupling constant $g$ differs by a factor of 2 from that in \cite{Erickson:2000af}.
}.
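The closed-form evaluation above can be confirmed numerically (a short illustration of ours, not part of the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

# Check <W[C]> = (4 pi / lam) int_{-R}^{R} dx e^{2 pi x} sqrt(lam/(2 pi^2) - x^2)
#              = (2 / sqrt(2 lam)) I_1(sqrt(2 lam)),
# with R = sqrt(lam/(2 pi^2)) the edge of the semicircle support.
lam = 5.0
R = np.sqrt(lam / (2.0 * np.pi**2))

integral, _ = quad(lambda x: np.exp(2.0 * np.pi * x) * np.sqrt(R**2 - x**2), -R, R)
w_int = (4.0 * np.pi / lam) * integral
w_bessel = 2.0 / np.sqrt(2.0 * lam) * iv(1, np.sqrt(2.0 * lam))
print(w_int, w_bessel)
```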
\vspace{5mm}
\subsection{One-loop determinant and zeta function regularization} \label{subseczeta}
\vspace{5mm}
Let us return to the evaluation of $\langle W[C] \rangle$.
To determine the eigenvalue density function $\rho$ of the Hermitian matrix $\Phi$, it is necessary to know the explicit functional form of the one-loop determinant. However, this is a formidable task for a generic ${\cal N}=2$ gauge theory. Fortunately, as shown in the previous subsection, the leading behavior of $\langle W[C] \rangle$ is governed by a small number of data if $a=\mbox{max}\,(D)$ is large.
So, we shall assume that the limit $\lambda\to+\infty$ induces indefinite growth of $a$. This is a reasonable assumption since otherwise $\langle W[C] \rangle$ does not grow exponentially in the limit $\lambda\to+\infty$, implying that any ${\cal N}=2$ gauge theory with such a behavior of the Wilson loop cannot have an AdS dual in the usual sense.
In other words, we assume that the rescaled density function $\lambda^\gamma\rho(\lambda^\gamma x)$ has a
reasonable large $\lambda$ limit for a {\sl positive} $\gamma$.
Under this assumption, we now show that the large $\lambda$ behavior of the Wilson loop is determined by the behavior of the one-loop determinant in the region where the eigenvalues of $\Phi$ are large.
The asymptotic behavior in such a limit is most transparently derivable from the heat-kernel expansion for a certain differential operator in the zeta-function regularization of the one-loop determinant.
\vspace{5mm}
\noindent
$\bullet$ {\sl $A_1$ gauge theory}: \hfill\break
Consider first the $A_1$ gauge theory. There are contributions to the one-loop effective action both from the hypermultiplet and the vector multiplet. We first focus on the hypermultiplet contribution. When $\mathfrak{Q} V$ is expanded around the vanishing locus (\ref{locus}), the quadratic terms of the hypermultiplet scalars become:
\begin{equation}
-q_\alpha(\Delta)^\alpha{}_\beta q^\beta+\frac1{r^2}\Phi^A\Phi^Bq_\alpha T_AT_Bq^\alpha,
\label{quad_boson}
\end{equation}
where
\begin{eqnarray}
(\Delta)^\alpha{}_\beta
&=& (\nabla_m\delta^\alpha_\gamma+V_m{}^\alpha{}_\gamma)(\nabla^m\delta^\gamma_\beta+V^{m\gamma}{}_\beta)
-\frac1{4r^2}(3+\cos^2\theta)\delta^\alpha_\beta, \\
V_m{}^\alpha{}_\beta &=& \bar{\xi}\Gamma^0{}_m\gamma^\alpha{}_\beta\tilde{\xi}.
\end{eqnarray}
If $\Phi$ is diagonalized as $\Phi=\mbox{diag}(\phi_1,\cdots,\phi_N)$, then the second term in (\ref{quad_boson}) can be written
as
\begin{equation}
\frac {2N}{r^2}\sum_{i=1}^N(\phi_i)^2q_{i\alpha}q^\alpha_i.
\end{equation}
Now the quadratic terms decompose into a sum of terms for the individual components $q_i^\alpha$, so the one-loop determinant of the hypermultiplet scalars is the product of the determinants for each component.
Let $F_h^B(\Phi)$ denote the part of the matrix model action induced by the one-loop determinant for the hypermultiplet scalars $q^\alpha$. Its contribution to the effective action can be written as
\begin{equation}
F_h^B(\Phi) = 2N\sum_{i=1}^NF_h^B(\phi_i),
\end{equation}
where $F_h^B(m)$ is formally given as
\begin{equation}
F_h^B(m) := \log\mbox{Det}\Bigl( -\Delta+\frac{m^2}{r^2} \Bigr).
\label{formaldef}
\end{equation}
Notice that the eigenvalues $\phi_i$ enter as masses of $q_i^\alpha$.
Therefore, what we need to analyze is the large $m$ behavior of $F_h^B(m)$.
In terms of Feynman diagrams, evaluating $F_h^B(m)$ in the limit $m \rightarrow \infty$ amounts to expanding
the one-loop determinant in the constant scalar background $(m/r)^2$. Let $D(m)=\mbox{Det}(-\Delta+m^2/r^2)$.
The relation (\ref{formaldef}) is afflicted by ultraviolet infinities, so it should be regularized appropriately. The determinant is formally defined over the space spanned by the normalizable
eigenfunctions of $-\Delta$. Let $\lambda_k$ $(k=0,1,2,\cdots)$ be eigenvalues of $-\Delta$:
\begin{equation}
-\Delta\psi_k = \lambda_k\psi_k.
\end{equation}
Then, $D(m)$ can be formally written as
\begin{equation}
D(m) = \prod_{k=0}^\infty\Bigl( \lambda_k+\frac{m^2}{r^2} \Bigr).
\end{equation}
To make this expression well-defined, let us define a regularized function
\begin{equation}
\zeta(s,m) := r^{-2s}\sum_{k=0}^\infty\frac1{(\lambda_k+m^2/r^2)^s}, \label{regularized}
\end{equation}
where $s$ is a complex variable.
This summation is well-defined for $s$ with sufficiently large Re$(s)$.
One can formally differentiate $\zeta(s,m)$ with respect to $s$ to obtain
\begin{equation}
\partial_s \zeta(s,m)\Big|_{s=0} = -\sum_{k=0}^\infty \log( r^2\lambda_k+m^2 ) = -\log [r^2D(m)].
\end{equation}
Since the left-hand side makes sense via a suitable analytic continuation of (\ref{regularized}), the right-hand side can be regarded as being defined by the left-hand side. Therefore, we define the function
$F_h^B(m)$ via the zeta-function regularization:
\begin{equation}
F_h^B(m) := -\partial_s\zeta(s,m)\Big|_{s=0}.
\end{equation}
The large $m$ behavior of $F_h^B(m)$ is determined as follows. For a suitable range of $s$, $\zeta(s,m)$ can be written as
\begin{equation}
\zeta(s,m) = \frac{r^{-2s}}{\Gamma(s)}\int_0^\infty \rmd t\, t^{s-1}e^{-m^2t/r^2}K(t),
\label{zeta}
\end{equation}
where
\begin{equation}
K(t) := \sum_{k=0}^\infty e^{-\lambda_kt} = \mbox{Tr}(e^{t\Delta})
\end{equation}
is the heat-kernel of $\Delta$. The convergence of this sum is assumed.
The asymptotic expansion of $K(t)$ is known as the heat-kernel expansion.
For a review on this subject, see e.g. \cite{Vassilevich:2003xt}.
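Before using the expansion, one can check the Mellin representation (\ref{zeta}) itself on a toy spectrum (a numerical illustration of ours; the eigenvalues below are arbitrary, and $r=1$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check the Mellin representation (zeta) on a toy finite spectrum (r = 1):
#   zeta(s, m) = sum_k (lam_k + m^2)^{-s}
#              = 1/Gamma(s) int_0^inf dt t^{s-1} e^{-m^2 t} K(t),
# with K(t) = sum_k e^{-lam_k t}.
lam_k = np.array([0.0, 1.0, 3.0, 3.0, 7.5])   # arbitrary toy eigenvalues
s, m = 1.3, 2.0

direct = np.sum((lam_k + m**2) ** (-s))

K = lambda t: np.sum(np.exp(-lam_k * t))      # heat kernel of the toy spectrum
mellin, _ = quad(lambda t: t ** (s - 1.0) * np.exp(-m**2 * t) * K(t), 0.0, np.inf)
mellin /= gamma(s)
print(direct, mellin)
```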
Since $\Delta$ is a differential operator on $\mathbb{S}^4$, the heat-kernel expansion has the form
\begin{equation}
K(t) \sim \sum_{i=0}^\infty t^{i-2}a_{2i}(\Delta).
\label{expansion}
\end{equation}
In the expansion, $a_{2i}(\Delta)$ are known as the heat-kernel coefficients for $\Delta$.
The expression (\ref{zeta}) of $\zeta(s,m)$ is only valid for a range of $s$, but $\zeta(s,m)$ can be analytically continued to the entire complex plane provided that the asymptotic expansion (\ref{expansion}) is known.
In particular, there exists a formula for the asymptotic expansion of $\zeta(s,m)$ in the large $m$ limit \cite{Voros:1986vw}
\begin{equation}
\zeta(s,m) \sim \sum_{i=0}^\infty a_{2i}(\Delta)r^{2i-4}\frac{\Gamma\left( s+i-2 \right)}{\Gamma(s)}m^{-2s-2i+4},
\end{equation}
valid in the entire complex $s$-plane.
Note that $a_{2i}(\Delta)r^{2i-4}$ are dimensionless combinations.
Differentiating with respect to $s$ and setting $s=0$, one obtains
\begin{eqnarray}
F_h^B(m)
&=& \Bigl( \frac12m^4\log m^2-\frac34m^4 \Bigr)a_0(\Delta)r^{-4}-\Bigl( m^2\log m^2-m^2 \Bigr)a_2(\Delta)r^{-2} \nonumber \\
& & +\log m^2\ a_4(\Delta)+O(m^{-2}\log m).
\end{eqnarray}
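The pattern of coefficients can be verified in a toy example of ours: keep only three heat-kernel terms, $K(t)=a_0t^{-2}+a_2t^{-1}+a_4$ with arbitrary $a_{2i}$ (and $r=1$), for which $\zeta(s,m)$ is available in closed form, and compare $-\partial_s\zeta(s,m)|_{s=0}$ against the expansion above:

```python
import numpy as np

# Toy model: K(t) = a0 t^-2 + a2 t^-1 + a4 (r = 1). The Mellin transform gives
# the closed form (analytically continued to the whole s-plane)
#   zeta(s, m) = a0 m^(4-2s)/((s-1)(s-2)) + a2 m^(2-2s)/(s-1) + a4 m^(-2s),
# and F(m) = -d/ds zeta(s, m)|_{s=0} should match the quoted large-m expansion,
# which here truncates exactly after the log m^2 term.
a0, a2, a4 = 1.0, 0.3, -0.7          # arbitrary toy heat-kernel coefficients
m = 50.0

def zeta(s):
    return (a0 * m**(4 - 2 * s) / ((s - 1) * (s - 2))
            + a2 * m**(2 - 2 * s) / (s - 1)
            + a4 * m**(-2 * s))

h = 1e-5
F_num = -(zeta(h) - zeta(-h)) / (2 * h)   # -zeta'(0) by central difference

L = np.log(m**2)
F_exp = (a0 * (0.5 * m**4 * L - 0.75 * m**4)
         - a2 * (m**2 * L - m**2)
         + a4 * L)
print(F_num, F_exp)
```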
The evaluation of the one-loop determinant for the hypermultiplet fermions can be done similarly.
The quadratic terms of the fermions are given by
\begin{equation}
i\bar{\psi}\Gamma^m\nabla_m\psi-\frac ir\bar{\psi}\Gamma^0\Phi^AT_A\psi
+\frac i2(\bar{\xi}\Gamma_{\mu\nu}\tilde{\xi})\bar{\psi}\Gamma^0\Gamma^{\mu\nu}\psi.
\end{equation}
We need to evaluate $-\log\mbox{Det}(iD\hspace*{-2.5mm}/\hspace{1mm})$ where
\begin{equation}
iD\hspace*{-2.5mm}/\hspace{1mm} := i\Gamma^m\nabla_m-\frac mri\Gamma^0+\frac \kappa2(\bar{\xi}\Gamma_{\mu\nu}\tilde{\xi})
\Gamma^0\Gamma^{\mu\nu}
\end{equation}
with $\kappa=i$.
In the following, we will evaluate $-\frac12\log\mbox{Det}(iD\hspace*{-2.5mm}/\hspace{1mm})^2$ with a real $\kappa$, for which
$(iD\hspace*{-2.5mm}/\hspace{1mm})^2$ is non-negative and its heat-kernel is well-defined, and then
substitute $\kappa=i$ into the final expression. The validity of this procedure is justified by convergence of the result.
The explicit form of $(iD\hspace*{-2.5mm}/\hspace{1mm})^2$ is given by
\begin{eqnarray}
(iD\hspace*{-2.5mm}/\hspace{1mm})^2
&=& -(\nabla_m+{\cal V}_m)(\nabla^m+{\cal V}^m)-\frac12\Gamma^{mn}[\nabla_m,\nabla_n]-\frac{3\kappa^2}{4r^2}\sin^2\theta
\nonumber \\
& & -\frac{\kappa^2}4(\bar{\xi}\Gamma_{\mu\nu}\tilde{\xi})(\bar{\xi}\Gamma_{\rho\sigma}\tilde{\xi})
\Gamma^{\mu\nu}\Gamma^{\rho\sigma}
+i\kappa\frac mr(\bar{\xi}\Gamma_{\mu\nu}\tilde{\xi})\Gamma^{\mu\nu}+\frac{m^2}{r^2} \nonumber \\
&=:& -\Delta_F+\frac{m^2}{r^2},
\end{eqnarray}
where
\begin{equation}
{\cal V}_m = i\kappa(\bar{\xi}\Gamma_{m\mu}\tilde{\xi})\Gamma^0\Gamma^\mu.
\end{equation}
The fermion case is slightly different from the scalar case since there is a term linear in $m$ in $-\Delta_F$.
However, the asymptotic expansion of the zeta-function-regularized one-loop determinant can be made in the fermion case as well.
The part $F_h^F(\Phi)$ of the matrix model action due to $\psi$ has a form similar to that of $F_h^B(\Phi)$, with different coefficients.
The total one-loop contribution of hypermultiplet to the effective action is $F_h=F_h^B+F_h^F$.
Because of underlying supersymmetry, the terms of order $m^4$ and $m^4\log m^2$ cancel between $F_h^B$ and $F_h^F$. The resulting expression for $F_h$ is
\begin{eqnarray}
F_h &=& 2N\sum_{i=1}^NF(\phi_i), \\
F(m) &=& c_1m^2\log m^2+c_2m^2+c_3\log m^2+O(m^{-2}\log m).
\label{F(m)}
\end{eqnarray}
The fact that $c_1$ is positive will turn out to be important later, while the exact values of the coefficients are irrelevant for the large 't Hooft coupling behavior of the Wilson loop.
Details of the computation of $c_1$ are presented in Appendix \ref{coeff}.
Notice that, at least up to this order, $F(m)$ is an even function of $m$.
\vspace{5mm}
Obviously, $F_h$ depends on the field content.
The expression for $F_h$ when $R$ is the adjoint representation can be found easily by noticing that, for example, the 'mass' term of $q^\alpha$ can be written as
\begin{equation}
\frac1{r^2}\sum_{i\ne j}(\phi_i-\phi_j)^2q_{ij\alpha}q^{\alpha}_{ji}.
\end{equation}
In this case, $F_h$ is written as
\begin{eqnarray}
F_h \Big\vert_{\rm adj.} = \sum_{i\ne j}F(\phi_i-\phi_j).
\label{f-adj}
\end{eqnarray}
Note that $F(m)$ here is the same function as (\ref{F(m)}).
Direct evaluation of the contribution from the vector multiplet, which we denote as $F_v$, appears more complicated since there are mixing terms between $A_m$ and $A_a$.
Fortunately, it was shown in \cite{Pestun:2007rz} that $F_v$ and $F_h$ cancel each other in ${\cal N}=4$ super Yang-Mills theory. This implies from (\ref{f-adj}) that
\begin{equation}
F_v = -\sum_{i\ne j}F(\phi_i-\phi_j).
\end{equation}
\noindent
$\bullet$ {\sl $\hat{A}_1$ gauge theory}:\hfill\break
We next consider the $\hat{A}_1$ quiver gauge theory. In this case, $q^\alpha$ and $\psi$ are bi-fundamental fields, and $\Phi$ is a block-diagonal matrix:
\begin{equation}
\Phi = \left(
\begin{array}{cc}
\Phi^{(1)} & \\ & \Phi^{(2)}
\end{array}
\right),
\end{equation}
where $\Phi^{(1)}=\mbox{diag}(\phi^{(1)}_1,\cdots,\phi^{(1)}_N)$ and $\Phi^{(2)}=\mbox{diag}(\phi^{(2)}_1,\cdots,\phi^{(2)}_N)$.
Repeating similar computations, one can easily show that $F_h$ has the form
\begin{equation}
F_h = 2\sum_{i, j=1}^NF(\phi^{(1)}_i-\phi^{(2)}_j),
\end{equation}
and $F_v$ has the form
\begin{equation}
F_v = -\sum_{i\ne j}F(\phi^{(1)}_i-\phi^{(1)}_j)-\sum_{i\ne j}F(\phi^{(2)}_i-\phi^{(2)}_j).
\end{equation}
The total one-loop contribution is the sum $F = F_h + F_v$.
As a consistency check of the above result,
consider taking the two nodes identical. This reduces the number of nodes from two to one, and hence must map the $\hat{A}_1$ gauge theory to the $\hat{A}_0$ one. The reduction sets $\Phi^{(1)}$ and $\Phi^{(2)}$ equal. Then, up to an irrelevant constant, $F_v$ is precisely minus $F_h$. We thus see that $F$ vanishes identically, reproducing the known result of the $\hat{A}_0$ gauge theory.
\vspace{5mm}
\subsection{Saddle-point equations} \label{saddlepoint}
\vspace{5mm}
We can now extract the saddle-point equations for the matrix model and determine the large 't Hooft coupling behavior of the Wilson loop from them. \hfill\break
\vskip0.3cm
\noindent
$\bullet$ {\sl $A_1$ gauge theory}: \hfill\break
In this theory, the saddle-point equation reads
\begin{equation}
\frac{8\pi^2}{\lambda}\phi_k+2F'(\phi_k)-\frac2N\sum_{i\ne k}F'(\phi_k-\phi_i) = \frac2N\sum_{i\ne k}\frac1{\phi_k-\phi_i}.
\label{a1saddle}
\end{equation}
As explained before, we assume that $\lambda^\gamma\rho(\lambda^\gamma \phi)$ for a {\sl positive} $\gamma$ has a sensible large $\lambda$ asymptote. By rescaling $\phi_k\to\lambda^\gamma\phi_k$, one obtains
\begin{equation}
8\pi^2\phi_k+2\lambda^{1-\gamma}F'(\lambda^\gamma\phi_k)-\frac2N\sum_{i\ne k}\lambda^{1-\gamma}F'(\lambda^\gamma(\phi_k-\phi_i))
= \frac2N\lambda^{1-2\gamma}\sum_{i\ne k}\frac1{\phi_k-\phi_i}.
\end{equation}
Recall that $F(x)\sim c_1x^2\log x^2$ for large $x$.
This shows that the leading-order equation for large $\lambda$ is given by
\begin{equation}
4c_1\phi_k\log \phi_k+2(c_1+c_2)\phi_k-\frac2N\sum_{i\ne k}\Bigl[ 2c_1(\phi_k-\phi_i)\log (\phi_k-\phi_i)+(c_1+c_2)
(\phi_k-\phi_i) \Bigr] = 0.
\end{equation}
Differentiating twice with respect to $\phi_k$, one obtains
\begin{equation}
\frac1{\phi_k} = \frac1N\sum_{i\ne k}\frac1{\phi_k-\phi_i}.
\end{equation}
Notice that $c_1$ and $c_2$ have dropped out. This equation has no sensible solution: the right-hand side is the principal-value resolvent of a continuous eigenvalue distribution, and it can equal $1/\phi_k$ throughout the support only if all eigenvalues collapse to the origin, for which the left-hand side diverges.
Therefore, we conclude that the scaling assumption we started with is invalid, implying that the Wilson loop in this theory cannot grow exponentially in the large 't Hooft coupling limit.
There is another way to check the finiteness of the Wilson loop.
Let us rewrite the saddle-point equation as follows:
\begin{equation}
\frac{8\pi^2}{\lambda}\phi_k+2F'(\phi_k) = \frac2N\sum_{i\ne k}F'(\phi_k-\phi_i) + \frac2N\sum_{i\ne k}\frac1{\phi_k-\phi_i}.
\end{equation}
The left-hand side represents the external force acting on the eigenvalues, while the right-hand side represents the interactions among the eigenvalues.
For a large $\phi_k$, the external force is dominated by $2F'(\phi_k)$, which is nonzero.
This implies that the large $\lambda$ limit must be smooth, and the Wilson loop expectation value approaches a finite value.
Recall that in the case of ${\cal N}=4$ super Yang-Mills theory, the large $\lambda$ limit renders the external force vanishing, resulting in an indefinite spread of the eigenvalues. This is reflected in the exponential growth of the Wilson loop expectation value.
Implications of this surprising conclusion are far reaching: the ${\cal N}=2$ supersymmetric gauge theory coupled to $2N$ fundamental hypermultiplets, although superconformal, must have a holographic dual whose geometry does not belong to the more familiar cases such as ${\cal N}=4$ super Yang-Mills theory. Central to this phenomenon is that there are two 't Hooft coupling parameters whose ratio can be tuned hierarchically large or small. In particular, we can tune one of them to be smaller than ${\cal O}(1)$, which also renders two widely separated length scales (in units of string scale) in the putative gravity dual background.
In the next section, we shall discuss how nonstandard the dual geometry ought to be by using the non-exponential behavior of the Wilson loop as a probe.
\vspace{3mm}
\noindent
$\bullet$ {\sl $\hat{A}_1$ gauge theory}: \hfill\break
In this theory,
there are two saddle-point equations corresponding to two matrices $\Phi^{(1)}$ and $\Phi^{(2)}$:
\begin{eqnarray}
\frac{8\pi^2}{\lambda_1}\phi^{(1)}_k+\frac2N\sum_{i=1}^NF'(\phi^{(1)}_k-\phi^{(2)}_i)
-\frac2N\sum_{i\ne k}F'(\phi^{(1)}_k-\phi^{(1)}_i) = \frac2N\sum_{i\ne k}\frac1{\phi^{(1)}_k-\phi^{(1)}_i}, \\
\frac{8\pi^2}{\lambda_2}\phi^{(2)}_k+\frac2N\sum_{i=1}^NF'(\phi^{(2)}_k-\phi^{(1)}_i)
-\frac2N\sum_{i\ne k}F'(\phi^{(2)}_k-\phi^{(2)}_i) = \frac2N\sum_{i\ne k}\frac1{\phi^{(2)}_k-\phi^{(2)}_i},
\label{discretesaddlepointeqn}
\end{eqnarray}
where $\lambda_1=g_1^2N$ and $\lambda_2=g_2^2N$ are the 't Hooft coupling constants of the two gauge groups.
Denote by $\rho^{(1)}(\phi)$, $\rho^{(2)}(\phi)$ the eigenvalue density functions for the matrices $\Phi^{(1)}$, $\Phi^{(2)}$, respectively. It is convenient to define
\begin{eqnarray}
\rho(\phi) &:=& \frac12(\rho^{(1)}(\phi)+\rho^{(2)}(\phi)), \\
\delta\rho(\phi) &:=& \frac12(\rho^{(1)}(\phi)-\rho^{(2)}(\phi)).
\end{eqnarray}
In terms of them, the above saddle-point equations are simplified as follows:
\begin{eqnarray}
\frac{4\pi^2}{\lambda}\phi &=& \int\hspace{-4mm}-\hspace{2.5mm}\rmd\phi'\frac{\rho(\phi')}{\phi-\phi'},
\label{A_2} \label{untwisted} \\
2\pi^2\Bigl[ \frac1{\lambda_1}-\frac1{\lambda_2} \Bigr]\phi
-2\int\hspace{-4mm}-\hspace{2.5mm}\rmd\phi'\delta\rho(\phi')F'(\phi-\phi')
&=& \int\hspace{-4mm}-\hspace{2.5mm}\rmd\phi'\frac{\delta\rho(\phi')}{\phi-\phi'}, \label{twisted}
\end{eqnarray}
where
\begin{equation}
\frac1{{\lambda}} := {1 \over |\Gamma|} \left( \frac1{\lambda_1}+\frac1{\lambda_2} \right) \qquad \mbox{and} \qquad |\Gamma| = 2.
\label{lambdaforA_2}
\end{equation}
For obvious reasons, we refer to these two as the untwisted and twisted saddle-point equations.
By the scaling argument, one can show that $\delta\rho(\phi)$ is negligible compared to $\rho(\phi)$
in the large $\lambda$ limit. In particular, when $\lambda_1 = \lambda_2$, it follows that $\delta \rho = 0$ is a solution, consistent with $\mathbb{Z}_2$ parity exchanging the two nodes. Therefore, the large $\lambda$ behavior of the Wilson loop is determined by (\ref{A_2}), which is exactly the same as (\ref{SYM}). Indeed, $\lambda$ defined by (\ref{lambdaforA_2}) is exactly what is related to $g_sN$ \cite{Klebanov:1999rd}.
The two Wilson loops are then obtainable from the one-matrix model with eigenvalue density $\rho \pm \delta \rho$:
\bea
&& W_{1} = \int_D \rmd x \, e^{a x} \rho^{(1)}(x)\ = \ \int_D \rmd x \, e^{a x} [\rho(x) + \delta \rho(x)] \nonumber \\
&& W_{2} = \int_D \rmd x \, e^{a x} \rho^{(2)}(x) \ = \ \int_D \rmd x \, e^{a x} [\rho(x) - \delta \rho(x)]. \label{twowilsonloops}
\eea
We see that the untwisted and the twisted Wilson loops are given by
\bea
&& W^{(0)} := {1 \over 2} (W_1 + W_2) = \int_D \rmd x \, e^{a x} \, \rho (x) \nonumber \\
&& W^{(1)}:= {1 \over 2} (W_1 - W_2) = \int_D \rmd x \, e^{a x} \, \delta \rho (x).
\eea
Inferring from the saddle-point equations (\ref{untwisted}, \ref{twisted}), we see that these Wilson loops are directly related to the
average and difference of the two gauge coupling constants. It also shows that the twisted Wilson loop will have nonzero expectation value once the two gauge couplings are set different. In the next section, we shall see that they descend from moduli parameters of six-dimensional twisted sectors at the orbifold singularity in the holographic dual description.
We have found the following result for the Wilson loop in the $\hat{A}_1$ quiver gauge theory. The two Wilson loops, corresponding to the two quiver gauge groups, grow exponentially in the large 't Hooft coupling limit. Their functional form is exactly the same as that exhibited by the Wilson loop in ${\cal N}=4$ super Yang-Mills theory.
\vskip0.5cm
\subsection{Interpolation among the quivers}
With the saddle-point equations at hand, we now discuss various interpolations among $\hat{A}_0, A_1, \hat{A}_1$ theories and learn about the gauge dynamics. Our starting point is the $\hat{A}_1$ theory, whose quiver diagram has two nodes. See figure 1.
\hfill\break
\vskip0.3cm
$\bullet$ Consider the symmetric quiver for which the two 't Hooft coupling constants take the ratio $\lambda_1 / \lambda_2 = 1$. Then the twisted saddle-point equation (\ref{twisted}) asserts that $\delta \rho = 0$ is the solution. It follows that $\langle W_1 \rangle - \langle W_2 \rangle = 0$, viz. the Wilson loop in the twisted sector vanishes identically. Intuitively, the two gauge interactions are of equal strength, so the two Wilson loops are indistinguishable.
Moreover, from the untwisted saddle-point equation (\ref{untwisted}), we see that the Wilson loop in the untwisted sector behaves exactly the same as the one in $\hat{A}_0$ theory and, in particular, ${\cal N}=4$ super Yang-Mills theory:
\bea
W^{(0)} = {1 \over 2} \Big(\langle W_1 \rangle + \langle W_2 \rangle \Big) = {1 \over \sqrt{2 \lambda}} I_1 (\sqrt{2 \lambda}). \label{untwistedWilson}
\eea
It follows that the Wilson loop grows exponentially in the large 't Hooft coupling limit, much the same way as the $\hat{A}_0$ theory does.
\hfill\break
\vskip0.3cm
$\bullet$ Consider the asymmetric quiver where the ratio $\lambda_1 / \lambda_2 \ne 1$ but finite. The twisted saddle-point equation (\ref{twisted}) can be recast as
\bea
{1 \over \lambda} \left( B - {1 \over 2} \right) \int\hspace{-4mm}-\hspace{2.5mm} \rmd \phi' {\rho(\phi') \over \phi - \phi'} = \int\hspace{-4mm}-\hspace{2.5mm} \rmd \phi' \, \delta \rho (\phi') \left[{1 \over 2} {1 \over \phi - \phi'}+ F'(\phi - \phi')\right]. \label{twisted2}
\eea
Here, we parametrized the difference of the two inverse 't Hooft couplings as
\bea
\left(B - {1 \over 2} \right) := {1 \over 2} \left({1 \over \lambda_1} - {1 \over \lambda_2} \right) \Big/ \left({1 \over \lambda_1} + {1 \over \lambda_2}\right).
\eea
Obviously, taking into account the $\mathbb{Z}_2$ exchange symmetry between the two quiver nodes, $B$ ranges over the interval $[0, +1]$. The symmetric quiver considered above corresponds to $B = {1 \over 2}$. Solving first $\rho$ from (\ref{untwisted}) and substituting the solution to (\ref{twisted2}), one solves $\delta \rho$ as a function of $B$. We see from (\ref{twisted2}) that $\delta \rho$ ought to be a {\sl linear} function of $B$ throughout the interval $[0, +1]$. Equivalently, extending the range of $B$ to $(-\infty, +\infty)$, we see that $\delta \rho$ is a sawtooth function, piecewise linear over each unit interval of $B$. In particular, it is discontinuous across $B=0$ (and across all other nonzero integer values). This is depicted in figure 3. Therefore, we conclude that the Wilson loops $W_1, W_2$ in the strong 't Hooft coupling limit are nonanalytic not only in $\lambda$ but also in $B$. In fact, as we shall recall in the next section, $B=0$ is a special point where the spacetime gauge symmetry is enhanced and the worldsheet conformal field theory becomes singular. Nevertheless, the Wilson loop in the untwisted sector behaves exactly the same as in the symmetric quiver, viz. (\ref{untwistedWilson}). We conclude that the untwisted Wilson loop is independent of the strength of the gauge interactions.
\begin{figure}[ht!]
\vskip1cm
\centering
\includegraphics[scale=0.68]{B.eps}
\caption{\small \sl
Dependence of twisted sector Wilson loops on the parameter $B$. It shows a discontinuity at $B=0$, resulting in non-analytic behavior of the Wilson loops with respect to both gauge couplings. }
\label{}
\vskip1cm
\end{figure}
\hfill\break
\vskip0.3cm
$\bullet$ Consider an extreme limit of the asymmetric quiver where the ratio $\lambda_1/\lambda_2 \rightarrow 0$ or, equivalently, $\lambda_2/\lambda_1 \rightarrow \infty$, viz. the two `t Hooft couplings are hierarchically separated. In this case, one gauge group is infinitely stronger than the other and the $\hat{A}_1$ quiver gauge theory ought to reduce to the $A_1$ gauge theory. This can be seen as follows. In the $\hat{A}_1$ saddle-point equations (\ref{discretesaddlepointeqn}), we see that $\phi^{(1)} \rightarrow 0$ solves the first equation.
Plugging this into the second equation, we see that it reduces to the $A_1$ saddle-point equation (\ref{a1saddle}). This reduction entails very interesting physics: from the above considerations, the Wilson loop expectation value interpolates between the exponential growth of the $\hat{A}_1$ quiver gauge theory and the non-exponential behavior of the $A_1$ gauge theory. In the next section, we shall argue that this is a clear demonstration (as probed by the Wilson loops) that the holographic dual of the $A_1$ gauge theory ought to have an internal geometry of {\sl string scale} size.
We can also understand the interpolation directly in terms of the Wilson loop. Consider, for example, $\lambda_2 / \lambda_1 \rightarrow \infty$. From the $\hat{A}_1$ Wilson
loops, using the fact that the densities $\rho^{(1)}(x), \rho^{(2)}(x)$ are strictly positive, we have
\bea
\langle W_{2} \rangle &=& \int \rmd \lambda \ \rho^{(2)}(\lambda) \ e^\lambda \nonumber \\
& \le & 2 \int \rmd \lambda \ {1 \over 2} [\rho^{(1)}(\lambda) + \rho^{(2)}(\lambda) ] e^\lambda \nonumber \\
&= & \frac4{\sqrt{2\lambda}}I_1(\sqrt{2\lambda}) \ .
\eea
Since $\lambda \sim \lambda_1 \to 0$, the Wilson loop is bounded from above by a constant.
Note that the limit $\lambda_1\to0$ can be safely taken: the saddle-point equation (\ref{untwisted}) is in fact exact in $\lambda$.
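As a numerical cross-check (a sketch, with our own function name and an ad hoc truncation order), the bound $\frac{4}{\sqrt{2\lambda}} I_1(\sqrt{2\lambda})$ can be evaluated from the power series of the modified Bessel function $I_1$, confirming that it tends to the constant $2$ as $\lambda \to 0$ and grows monotonically in $\lambda$:

```python
from math import factorial

def wilson_bound(lam, kmax=80):
    """(4 / sqrt(2*lam)) * I_1(sqrt(2*lam)) via the Bessel series
    I_1(x) = sum_k (x/2)^(2k+1) / (k! (k+1)!), which with x^2 = 2*lam
    gives 2 * sum_k (lam/2)^k / (k! (k+1)!)."""
    return 2.0 * sum((lam / 2.0) ** k / (factorial(k) * factorial(k + 1))
                     for k in range(kmax))

# the bound tends to the constant 2 as lam -> 0 ...
print(wilson_bound(1e-8))
# ... and is monotonically increasing in lam
print(wilson_bound(1.0), wilson_bound(10.0))
```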
\vskip0.3cm
\noindent
$\bullet$ Consider the limit $\lambda_1, \lambda_2 \rightarrow 0$. In this limit,
\bea
\lambda = 2{ \lambda_1 \lambda_2 \over \lambda_1 + \lambda_2} \ \rightarrow \ 0 \ , \qquad
\kappa := {\lambda_2 \over \lambda_1} = {\rm fixed} \label{changeofvariables}
\eea
and the exact result (\ref{untwistedWilson}) is expandable in a power series in $\lambda_1$ and $\kappa$:
\bea
W^{(0)} \Big\vert_{\rm exact} = {1 \over 2} \Big(\langle W_1 \rangle + \langle W_2 \rangle \Big)
= {1 \over 2} \sum_{\ell=0}^\infty \sum_{m=0}^\infty {(-)^m (\ell + m - 1)! \over (\ell - 1)! \ell ! (\ell + 1)!}
\lambda_1^\ell \kappa^{\ell + m} \ . \label{weakcoupling}
\eea
Here, the exact result (\ref{untwistedWilson}) is symmetric under $\lambda_1 \leftrightarrow \lambda_2$, so we assumed in (\ref{weakcoupling}) that $\kappa < 1$. On the other hand, from the standpoint of the quiver gauge theory, the Wilson loop in fixed-order perturbation theory is given by a power series in $\lambda_1$ and $\lambda_2$:
\bea
W^{(0)} \Big\vert_{\rm pert} = \sum_{\ell=0}^\infty \sum_{m=0}^\infty
W_{\ell, m} \lambda_1^\ell \lambda_2^m = \sum_{\ell = 1}^\infty \sum_{m=1}^\infty W_{\ell, m} \lambda_1^{\ell + m} \kappa^m. \label{weakcouplingexpansion}
\eea
We see that the exact result (\ref{weakcoupling}) and the perturbative result (\ref{weakcouplingexpansion}) do not agree with each other. Recall that both results are obtained in the planar limit $N \rightarrow \infty$ and ought to be absolutely convergent in $(\lambda, B)$ and in $(\lambda_1, \lambda_2)$, respectively. The reason may be
that the two sets of coupling constants are not related analytically on $\mathbb{C}^2$. In fact, from (\ref{changeofvariables}), we see that $\lambda(\lambda_1, \lambda_2)$ has a codimension-1 singularity at $\lambda_1 + \lambda_2 = 0$. An exceptional situation is when $\lambda_1 = \lambda_2$. In this case, the
singularity disappears and, with the same power series expansion, we expect the exact result (\ref{weakcoupling}) and the perturbative result (\ref{weakcouplingexpansion}) to coincide.
We should note that the change of variables is well-defined in the strong coupling regime. In this regime, power series expansions in $1/\lambda_1$ and $1 / \lambda_2$ are related unambiguously to power series expansions in $1/\lambda$ and $B$. In fact, the change of variables
\bea
\Big( \ {1 \over \lambda_1}, {1 \over \lambda_2} \ \Big) \quad \longrightarrow \quad
\Big(\ {1 \over \lambda}, B \ \Big)
\eea
is analytic and does not introduce any singularity around $\lambda_1, \lambda_2 = \infty$. As we
will recapitulate, these are the variables naturally introduced in the gravity dual description.
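Concretely, the definitions above give $B = \lambda_2/(\lambda_1+\lambda_2)$ together with $\lambda = 2\lambda_1\lambda_2/(\lambda_1+\lambda_2)$, so the inverse map reads $1/\lambda_1 = 2B/\lambda$ and $1/\lambda_2 = 2(1-B)/\lambda$. A sketch of the map and its inverse (function names are ours), exhibiting the codimension-one singular locus $\lambda_1 + \lambda_2 = 0$:

```python
def to_average(l1, l2):
    """Map (lambda_1, lambda_2) -> (lambda, B): the harmonic-mean coupling
    lambda = 2 l1 l2 / (l1 + l2) and B = l2 / (l1 + l2).
    Singular on the codimension-one locus l1 + l2 = 0."""
    s = l1 + l2
    if s == 0:
        raise ZeroDivisionError("singular locus lambda_1 + lambda_2 = 0")
    return 2.0 * l1 * l2 / s, l2 / s

def from_average(lam, B):
    """Inverse map: 1/lambda_1 = 2B/lambda, 1/lambda_2 = 2(1-B)/lambda."""
    return lam / (2.0 * B), lam / (2.0 * (1.0 - B))

lam, B = to_average(3.0, 6.0)
print(lam, B)                 # harmonic mean 4.0, B = 2/3
print(from_average(lam, B))   # recovers (3.0, 6.0)
```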
We remark that the analytic structure of the Wilson loops in quiver gauge theories is similar to that of the Ising model in a magnetic field on a planar random lattice \cite{kazakov}. The latter is defined by a matrix model involving two interacting Hermitian matrices and involves two coupling parameters: the average `t Hooft coupling and the magnetic field. Here again, by turning on the magnetic field, one can scale the two independent `t Hooft coupling parameters differently. In light of our results, it would be extremely interesting to study this
system in the limit where the magnetic field is sent to infinity.
\vskip0.3cm
\section{Intuitive Understanding of Non-Analyticity}
In the last section, the distinguishing feature of the $A_1$ theory compared to the $\hat{A}_0, \hat{A}_1$ theories was that the growth of the Wilson loop expectation value was less than exponential. Yet, these theories are connected to one another by continuously deforming the gauge coupling parameters. How, then, can such non-analytic behavior come about? \footnote{This question was raised to us by Juan Maldacena.} In this section, we offer an intuitive understanding in terms of the competition between screening and over-screening of color charges and also draw an analogy to the Kondo effect of a magnetic impurity in a metal.
\vskip0.5cm
\noindent $\bullet$ {\sl screening versus anti-screening}: \hfill\break
Consider first the weak coupling regime. The representation contents of these ${\cal N}=2$ quiver gauge theories are such that the $\hat{A}_0$ theory contains field contents in adjoint representations only, while the $\hat{A}_1$ and the $A_1$ theories contain additional field contents in bi-fundamental or fundamental representations, respectively. The $A_1$ theory contains additional massless multiplets in the fundamental representation, so we see immediately that the theory is capable of screening an external color charge sourced by the Wilson loop in any representation. Since the theory is conformal, the screening length ought to be infinite (zero is also compatible with conformal symmetry, but it would simply mean there is no screening), so nothing impedes creation of excitations above the ground state. Even more so, the `tension' of the color flux tube would go to zero. In other words, once a static color charge is introduced to the theory, massless hypermultiplets in the fundamental representation will immediately screen out the charge to arbitrarily long distances. Though this intuitive picture is based on weak coupling dynamics, due to conformal symmetry, it fits well with the non-exponential growth of the Wilson loop in the $A_1$ theory, which we derived in the previous section in the planar limit.
We stress that the screening has nothing to do with supersymmetry but is a consequence of elementary considerations of gauge dynamics with massless matter in complex representations. This is clearly illustrated by the well-known two-dimensional Schwinger model. Generalization of this Schwinger mechanism to nonabelian gauge theories showed that massless fermions in an {\sl arbitrary} complex representation screen the heavy probe charge in the fundamental representation \cite{Gross}. The screening and consequent string breaking by the dynamical massless matter was observed convincingly in both two-dimensional QED \cite{screen2} and three-dimensional QCD \cite{screen3}. In four-dimensional lattice QCD, the static quark potential $V(R)$ was computed in units of the lattice spacing $a$ for fermions in both quenched and dynamical simulations \cite{screen4}. In the quenched simulation, the potential scaled linearly with $R/a$, indicating confinement. In the dynamical simulation, the potential exhibited flattening over a wide range of the separation distance $R/a$.
\begin{figure}[ht!]
\vskip1cm
\centering
\includegraphics[scale=0.68]{cloud.eps}
\caption{\small \sl Response of gauge theories to external color charge source. (a) For $A_1$ theory, an external color charge in fundamental representation of the gauge group is screened by the $N_f = 2 N_c$ flavors of massless matter fields, which are in fundamental representation (blue arrow). (b) For $\hat{A}_1$ theory, an external color charge in fundamental representation of the first gauge group is screened by the massless matter fields. As the matter fields are in bi-fundamental representations (black and white arrows), color charge in the second gauge group is regenerated and anti-screened. The process repeats between the two gauge groups and leads the theory to exhibit Coulomb behavior.}
\label{}
\vskip1cm
\end{figure}
The case of the $\hat{A}_1$ theory is more interesting. Having two gauge groups associated with each node, consider introducing a static color charge in the representation $R$ of, say, the first gauge group in SU$(N) \times $SU$(N)$. The hypermultiplets transforming in $({\bf N}, \overline{\bf N})$ and $(\overline{\bf N}, {\bf N})$ are in defining representations with respect to the first gauge group, so they will rearrange their ground-state configuration to screen out the color charge. But then, as these hypermultiplets are in defining representations with respect to the second gauge group as well, a complete screening with respect to the first gauge group will reassemble the resulting configuration to be in the representation $\overline{R}$ of the second gauge group in SU$(N)\times$SU($N$). This configuration is essentially the same as the starting configuration except that the two gauge groups are interchanged (along with charge conjugation). The hypermultiplets may opt to rearrange their ground-state configuration to screen out the color charge of the second gauge group, but then the process will repeat itself and return to the original static color charge of the first gauge group --- in the $\hat{A}_1$ theory, perfect screening of the first gauge group is accompanied by perfect anti-screening of the second gauge group and vice versa. This is depicted in figure 4. Consequently, a complete screening never takes place for {\sl both} gauge groups simultaneously. Instead, the external color charge excites the ground state to a conformally invariant configuration with the Coulomb energy. Again, we formulated this intuitive picture in the weak coupling regime, but it fits well with the exponential growth of the Wilson loop expectation value of the $\hat{A}_1$ theory we derived in the previous section in the planar limit.
\vskip0.5cm
\noindent $\bullet$ {\sl Analogy to Kondo effect}: \hfill\break
It is interesting to observe that the screening vs. anti-screening process described above is reminiscent of the multi-channel Kondo effect in a metal \cite{affleck}. There, a static magnetic impurity carrying a spin $S$ interacts with conduction electrons and profoundly affects the electrical transport properties at long distances. Suppose in a metal there are $N_{\rm f}$ flavors of conduction band electrons. Thus, there are $N_{\rm f}$ channels and they are mutually non-interacting. The antiferromagnetic spin-spin interaction between the impurity and the conduction electrons leads at weak coupling to screening of the impurity spin $S$ to $S_{\rm ren} = (S - N_{\rm f}/2)$. We see that the system with $N_{\rm f} < 2 S$ is under-screened, leading to an asymptotic screening of the impurity spin, and that the system with $N_{\rm f} > 2S$ is over-screened, leading to an asymptotic anti-screening of the impurity spin. The marginally screened case, $N_{\rm f} = 2 S$, is at the border between the screening and the anti-screening: the spin $S$ of the magnetic impurity is intact under renormalization by the conduction electrons (modulo an overall flip of the spin orientation, which is a symmetry of the system). We thus observe that the Coulomb behavior of the external color source in the $\hat{A}_1$ theory is tantalizingly parallel to the marginally screened case of the multi-channel Kondo effect.
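The three Kondo regimes quoted above amount to simple bookkeeping of $S_{\rm ren} = S - N_{\rm f}/2$; a purely illustrative classifier (function name is ours):

```python
def kondo_regime(S, Nf):
    """Classify the multi-channel Kondo fixed point by comparing the
    number of channels Nf to twice the impurity spin S; the weak-coupling
    renormalized impurity spin is S_ren = S - Nf/2."""
    S_ren = S - Nf / 2.0
    if Nf < 2 * S:
        return "under-screened", S_ren
    if Nf > 2 * S:
        return "over-screened", S_ren
    return "marginally screened", S_ren

print(kondo_regime(1.0, 1))   # residual spin 1/2 survives
print(kondo_regime(0.5, 2))   # anti-screened
print(kondo_regime(0.5, 1))   # marginal case, S_ren = 0
```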
\vskip0.5cm
\noindent $\bullet$ {\sl Interpretation via brane configurations}: \hfill\break
We can also understand the screening-Coulomb transition from the brane configurations describing the $\hat{A}_1$ and $A_1$ theories \footnote{For a comprehensive review of brane configurations, see \cite{giveonkutasov}.}. Consider Type IIA string theory on $\mathbb{R}^{8,1} \times \mathbb{S}^1$, where the circle direction is along $x^9$ and has circumference $L$. We set up the brane configuration by introducing two NS5-branes stretched along the $(012345)$ directions and a stack of $N$ D4-branes stretched along the $(01239)$ directions on intervals between the two NS5-branes. Generically, the two NS5-branes are located at separate positions on $\mathbb{S}^1$ and this corresponds to the $\hat{A}_1$ theory. The gauge couplings $1/g_1^2$ and $1/g_2^2$ of the two quiver gauge groups are proportional to the lengths of the two $x^9$-intervals of the D4-branes. When the two NS5-branes are located at diametrically opposite points, say, at $x^9 = 0, L/2$, the two gauge couplings of the $\hat{A}_1$ theory are equal. This is depicted in figure 5(a). By bringing one
NS5-brane close to the other, say, at $x^9=0$, we obtain the configuration in figure 5(b). This corresponds to the $A_1$ theory since the gauge coupling of the D4-branes encircling the $\mathbb{S}^1$ becomes arbitrarily weak compared to that of the D4-branes stretched infinitesimally between the two overlapping NS5-branes.
\begin{figure}[ht!]
\vskip1cm
\centering
\includegraphics[scale=0.68]{screen.eps}
\caption{\small \sl
Semiclassical Wilson loop in the brane configuration of the ${\cal N}=2$ superconformal gauge theories under study: (a) $\hat{A}_1$ theory with $G=$SU($N) \times$ SU($N$) and $2N$ bifundamental hypermultiplets. $N$ D4-branes stretch between two widely separated NS5-branes on a circle. The F1's (fundamental strings) ending on or emanating from the D4-branes represent static charges. On the D4-branes, which have finite gauge coupling, conservation of the F1 flux is manifest. (b) $A_1$ theory with $G$=SU($N$) and $2N$ fundamental hypermultiplets. The $A_1$ theory is obtained from $\hat{A}_1$ in (a) by bringing the two NS5-branes together. The flux leaks into the coincident NS5-branes and runs along their worldvolumes. On the D4-branes, which have vanishing gauge coupling, conservation of the F1 flux is not manifest. }
\label{}
\vskip1cm
\end{figure}
We now introduce external color charges to the D4-branes and examine the fate of the color fluxes. The external color sources are provided by a macroscopic IIA fundamental string ending on the stacked D4-branes. Consider first the configuration of the $\hat{A}_1$ theory. The color charge is an endpoint of the fundamental string on one stack of the D4-branes, viz. one of the two quiver gauge groups. Along the D4-branes, the endpoint sources a color Coulomb field. The color field will sink at another external color charge located at a finite distance from the first external charge. See figure 5(a). We see that the color flux is conserved on the first stack of D4-branes. We also see that, in the weak coupling regime, effects of the NS5-branes are negligible.
Consider next the configuration of the $A_1$ theory. Based on the considerations of the previous section,
we consider an external color charge to the stack of D4-branes encircling the $\mathbb{S}^1$. In this configuration, the two NS5-branes are coincident and this opens up a new possible color flux configuration. To understand this, we recall the situation of stack of D1-D5 branes, which is related to the macroscopic IIA string and stack of NS5-branes. In the D1-D5 system, it is well known that there are threshold bound states of D1-branes on D5-branes {\sl provided} two or more D5-branes are stacked. For a single D5-brane, the D1-brane bound-state does not exist. This suggests in the brane configuration of the $A_1$ theory that
the color flux may now be pulled onto and smeared out along the two coincident NS5-branes. From the viewpoint
of the stack of D4-branes encircling $\mathbb{S}^1$, the color flux appears not to be conserved.
\section{Holographic Dual} \label{holography}
The exact results for the ${\cal N}=2$ Wilson loops in the strong `t Hooft coupling limit obtained in the previous section revealed many intriguing aspects. In particular, compared to the more familiar exponential growth behavior of the ${\cal N}=4$ Wilson loops, we found the following distinguishing features and consequences:
\begin{list}{$\bullet$}{}
\item In the $A_1$ gauge theory, the Wilson loop $\langle W \rangle$ does {\sl not} exhibit the exponential growth. Replacing the $2N$ fundamental representation hypermultiplets by a single adjoint representation hypermultiplet restores the exponential growth, since the latter theory is nothing but the ${\cal N}=4$ counterpart. This suggests that $\langle W \rangle$ in the $\hat{A}_1$ gauge theory has (possibly infinitely) many saddle points and the potential leading exponential growth is canceled upon summing over the saddle points. We stress that, in this case, the ratio of the two `t Hooft couplings goes to zero or, equivalently, infinity. The limit decouples the dynamics of the two quiver gauge groups and renders the global gauge symmetry a newly emergent flavor symmetry. The non-exponential behavior of the Wilson loop originates from this decoupling, as can be understood intuitively from the screening phenomenon.
\item In $\hat{A}_1$ quiver gauge theory, the two Wilson loops $\langle W_1 \rangle, \langle W_2 \rangle$ associated with the two quiver nodes exhibit the same exponential growth as the ${\cal N}=4$ counterpart. The exponents depend not only on the largest edge of the eigenvalue distribution but also on the two `t Hooft coupling constants, $\lambda_1, \lambda_2$, equivalently, $\lambda, B$.
\item In $\hat{A}_1$ quiver gauge theory, in case the two `t Hooft couplings are the same, so are the two Wilson loops. If the two `t Hooft couplings differ {\sl but} remain finite, the two Wilson loops will also differ. As such, $\langle W_1 \rangle - \langle W_2 \rangle$ is an order parameter of the $\mathbb{Z}_2$ parity exchanging the two quiver nodes. It scales linearly with $B$ and shows {\sl non-analyticity} over the fundamental domain $[-{1 \over 2}, + {1 \over 2}]$.
\end{list}
In this section, we examine these features from the holographic dual viewpoint and extract several new perspectives. Much of the success of the AdS/CFT correspondence was based on the observation that the holographic dual geometry is macroscopically large compared to the string scale. In this limit, string scale effects are suppressed and physical observables and correlators are computable in the saddle-point, supergravity approximation. For example, the AdS$_5 \times \mathbb{S}^5$ dual to the ${\cal N}=4$ super Yang-Mills theory has the size $R^2 = O(\sqrt{\lambda})$:
\bea
\rmd s^2 = R^2 \rmd s^2 (\mbox{AdS}_5) + R^2 \rmd \Omega_5^2 (\mathbb{S}^5),
\eea
growing arbitrarily large at strong `t Hooft coupling. Many other examples of the AdS/CFT correspondence share essentially the same behavior. In such a background, expectation value of the Wilson loop $\langle W \rangle$ is evaluated by the Polyakov path integral of a fundamental string in the holographic dual background:
\bea
\langle W \rangle := \int_C [{\cal D} X {\cal D} h]^\perp \, \exp ( i S_{\rm ws}[X^*g])
\label{polyakov}
\eea
with a prescribed boundary condition along the contour $C$ of the Wilson loop at timelike infinity. The worldsheet coupling parameter is set by the pull-back of the spacetime metric, and hence by $R^2$. As $R$ grows large at strong `t Hooft coupling, the path integral is dominated by a saddle point whose Euclidean geometry is the minimal surface, and $\langle W \rangle$ exhibits exponential growth set by the minimal surface area ${\cal A}_{\rm cl}$:
\bea
\langle W \rangle \ \ \simeq \ \ e^{{\cal A}_{\rm cl}} \qquad \mbox{where} \qquad
{\cal A}_{\rm cl} \simeq O(R^2) \ .
\eea
Note that the minimal surface of the Wilson loop sweeps out an AdS$_3$ foliation inside the AdS$_5$. This explains the $R^2$ growth of the area of the minimal surface at strong `t Hooft coupling.
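This growth can be checked against the well-known exact planar result for the ${\cal N}=4$ circular Wilson loop, $\langle W \rangle = \frac{2}{\sqrt{\lambda}}\, I_1(\sqrt{\lambda})$, whose logarithm indeed approaches $\sqrt{\lambda} = O(R^2)$ at strong coupling. A minimal numerical sketch (function name and truncation order are ours; terms are generated recursively to avoid factorial overflow):

```python
import math

def n4_wilson(lam, kmax=2000):
    """Planar N=4 circular Wilson loop (2/sqrt(lam)) I_1(sqrt(lam)),
    summed via sum_k (lam/4)^k / (k! (k+1)!) with the terms generated
    by the recursion t_{k+1} = t_k * (lam/4) / ((k+1)(k+2))."""
    term, total = 1.0, 1.0
    for k in range(kmax):
        term *= (lam / 4.0) / ((k + 1) * (k + 2))
        total += term
    return total

# log<W> / sqrt(lam) -> 1: the exponent is the O(R^2) minimal-surface area
for lam in (100.0, 2500.0, 160000.0):
    print(lam, math.log(n4_wilson(lam)) / math.sqrt(lam))
```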
Central to our discussion will be a re-examination of the global geometry of the gravity duals of ${\cal N}=2$ superconformal gauge theories in comparison to the ${\cal N}=4$ super Yang-Mills theory.
\subsection{Holographic dual of $A_1$ gauge theory}
At present, the gravity dual to the $A_1$ gauge theory is not known. Still, it is not difficult to guess what the
dual theory would be. In general, an ${\cal N}=2$ gauge theory is defined in perturbation theory by three
coupling parameters:
\bea
\lambda, \qquad g_{\rm c}^2 := {1 \over N^2}, \qquad g_{\rm o} := {N_{\rm f} \over N},
\eea
the `t Hooft coupling, the closed-surface coupling associated with the adjoint vector and hypermultiplets, and the open-puncture coupling associated with the fundamental hypermultiplets. For the $A_1$ gauge theory,
$g_{\rm o} = 2 \sim O(1)$, which indicates that the dual string theory is described by worldsheets with proliferating open boundaries. Moreover, as we studied in earlier sections, the $A_1$ gauge theory is related to the $\hat{A}_1$ quiver gauge theory as the limit where one of the two `t Hooft coupling constants is sent to zero while the other is held finite. Equivalently, in the large $N$ limit, one of the two `t Hooft coupling constants is dialed infinitely stronger than the other. This hierarchical scaling limit of the two `t Hooft coupling constants, along with the PSU$(2, 2 \vert 2)$ superconformal symmetry and the SU(2)$\times$U(1) R-symmetry, implies that the gravity dual is a noncritical superstring theory involving AdS$_5$ and $\mathbb{S}^2 \times \mathbb{S}^1$ spaces.
One thus expects that the gravity dual of $A_1$ gauge theory has the local geometry of the form:
\bea
(\mbox{AdS}_5 \times {\cal M}_2) \times [\mathbb{S}^1 \times \mathbb{S}^2] \ .
\label{a1dual}
\eea
By local geometry, we mean that the internal space consists of $\mathbb{S}^1$ and $\mathbb{S}^2$, possibly fibered or warped over an appropriate 2-dimensional base-space ${\cal M}_2$ \footnote{The expected gravity dual (\ref{a1dual}) may be anticipated from the Argyres-Seiberg S-duality \cite{ASduality}. At finite $N$, S-duality maps an infinite coupling ${\cal N}=2$ superconformal gauge theory to a weak coupling ${\cal N}=2$ gauge theory combined with strongly interacting, isolated conformal field theory. The presence of the strongly interacting, isolated conformal field theory suggests that putative holographic dual ought to involve a string geometry whose size is typically of order $O(1)$ in string unit.}. The curvature scales of AdS$_5$ and of ${\cal M}_2$ are equal and are set by $R \sim \lambda^{1/4}$, much as in the ${\cal N}=4$ super Yang-Mills theory. The remaining internal geometry $[\mathbb{S}^1 \times \mathbb{S}^2]$ involves geometry of string scale, and is describable in terms of a (singular) superconformal field theory. In particular, the internal space $[\mathbb{S}^1 \times \mathbb{S}^2]$ may have collapsed 2-cycles.
Therefore, the ten-dimensional geometry is schematically given by
\bea
\rmd s^2 = R^2 (\rmd s^2 (\mbox{AdS}_5) + \rmd s^2 ({\cal M}_2)) + r^2 \rmd s^2 ([\mathbb{S}^1 \times \mathbb{S}^2])
\eea
where $R, r$ are curvature radii that are hierarchically different, $r \ll R$ (measured in string units). In particular, $r$ can become smaller than ${\cal O}(1)$ in the regime where the two `t Hooft coupling constants are taken hierarchically disparate.
Consider now evaluating the Wilson loop $\langle W[C] \rangle$ in the gravity dual (\ref{a1dual}). As is well known, the Wilson loop is holographically computed by the free energy of a macroscopic string whose endpoint sweeps out the contour $C$. From the viewpoint of evaluating it in terms of a minimal area worldsheet, since the internal space has nontrivial 2-cycles, there will be not just one saddle point but infinitely many. These saddle-point configurations are approximately a combination of a minimal surface of area ${\cal A}_{\rm sw}$ inside the AdS$_5$ and surfaces of area $a_{\rm sw}^{(i)}$ wrapping 2-cycles inside the internal space multiple times.
Note that ${\cal A}_{\rm sw}$ is of order $O(R^2) \gg 1$ in string units, while $a_{\rm sw}^{(i)}$ is of order $O(1)$ since the 2-cycles are collapsed. Therefore, all these configurations have nearly degenerate total worldsheet areas and correspond to infinitely many nearby saddle points. In effect, the surfaces of area $a_{\rm sw}^{(i)}$
wrapping the collapsed 2-cycles multiple times produce sizable worldsheet instanton effects. We thus have
\bea
\langle W \rangle &=& \sum_{i = \rm saddles} c_a \, \exp \left( {\cal A}_{\rm sw} + a_{\rm sw}^{(i)} + \cdots \right) \nonumber \\
&\simeq& \Big[ \sum_{i = \rm saddles} c_a \, \exp ( a_{\rm sw}^{(i)} ) \Big] \cdot \exp \left( {\cal A}_{\rm sw} \right), \label{summingup}
\eea
where $c_a$ denotes calculable coefficients of each saddle point, including one-loop string worldsheet determinants and integrals over moduli parameters, if present. This is depicted in figure 6. Since we do not have the exact worldsheet result for each saddle-point configuration available, we can only guess what must happen in order for the final result to reproduce the exact result we derived on the gauge theory side. In the last expression of (\ref{summingup}), even though the contributions of individual saddle points are of the same order, summing up infinitely many of them could produce an exponentially small effect of order $O(\exp (-{\cal A}_{\rm sw}))$. What then happens is that summing up infinitely many worldsheet instantons over the internal space cancels the leading $O(\exp ( {\cal A}_{\rm sw}))$ contribution from the worldsheet inside the AdS$_5$. After the cancelation, the leading nonzero contribution is of the same order as the pre-exponential contribution. It scales as $R^\nu$ for some {\sl finite} value of the exponent $\nu$ at strong `t Hooft coupling.
\begin{figure}[ht!]
\vskip1cm
\centering
\includegraphics[scale=0.68]{Wsum.eps}
\caption{\small \sl
Schematic view of the holographic computation of the Wilson loop expectation value in the instanton expansion. Each hemisphere represents the minimal surface of a semiclassical string in AdS spacetime. Instantons are string worldsheets, $\mathbb{P}^1$'s stretched into the internal space $X_5$. Their sizes are of string scale, and hence of order ${\cal O}(1)$ for any number of instantons. The gauge theory computations indicate that these worldsheet instantons ought to proliferate and lead to delicate cancelations of the leading-order result (the first term) upon resummation.}
\label{}
\vskip1cm
\end{figure}
At the orbifold fixed point, there are in general torsion components of the NS-NS 2-form potential $B_2$, whose integral over a 2-cycle is denoted by $B$:
\bea
B_a := \oint_{C_a} {B_2 \over 2 \pi}, \qquad B_a \in [0, 1)
\eea
The $A_1$ theory has the global flavor symmetry $G_{\rm f} =$ U$(N_{\rm f}) =$ U$(2N)$.
For a well-defined conformal field theory of the internal geometry, $B_a$ must take the value $1/2$.
But then, the string worldsheet wrapping the 2-cycle $C_a$ $n_a$ times picks up the phase factor
\bea
\prod_{a=1}^\infty \exp (2 \pi i B_a n_a) = \prod_{a=1}^\infty (-)^{n_a},
\eea
giving rise to $\pm$ relative signs among various worldsheet instanton contributions to the minimal surface dual to the Wilson loop.
\subsection{Holographic dual of $\hat{A}_1$ quiver gauge theory}
Consider next holographic description of the $\hat{A}_1$ quiver gauge theory. It is known that the holographic dual is provided by the AdS$_5 \times \mathbb{S}^5/\mathbb{Z}_2$ orbifold, where the $\mathbb{Z}_2$ acts on $\mathbb{C}^2 \subset \mathbb{C}^3$ of the covering space of $\mathbb{S}^5$. Locally, the spacetime geometry is exactly the same as AdS$_5 \times \mathbb{S}^5$:
\bea
\rmd s^2 = R^2 \rmd s^2 (\mbox{AdS}_5) + R^2 \rmd \Omega_5^2 (\mathbb{S}^5).
\eea
The size of both the AdS$_5$ and the $\mathbb{S}^5/\mathbb{Z}_2$ is $R$, which grows as $\lambda^{1/4}$ in the large `t Hooft coupling limit.
Located at the orbifold fixed point is a twisted sector. The massless fields of the twisted sector consist of a tensor multiplet of $(5+1)$-dimensional $(2,0)$ chiral supersymmetry. The multiplet contains five massless scalars. Three of them are associated with the $\mathbb{S}^2$ replacing the orbifold fixed point, and the other two are associated with
\bea
B = \oint_{\mathbb{S}^2} {B_2 \over 2 \pi} \qquad \mbox{and} \qquad
C = \oint_{\mathbb{S}^2} {C_2 \over 2 \pi},
\eea
where $B_2, C_2$ are the NS-NS and R-R 2-form potentials. Both of them are periodic, ranging over $B, C \in [0, 1)$ \footnote{The periodicity can be seen from the T-dual brane configuration as well. Consider the modulus $B$. The quiver gauge theories are mapped to D4-branes connecting adjacent NS5-branes on a circle in two different directions. The sum of the inverse gauge couplings is then related to the circle size,
while the difference between adjacent inverse gauge couplings is given by the lengths of the individual intervals. Evidently,
an interval cannot be longer than the circumference.}.
These two massless moduli are well-defined even in the limit that the other three moduli vanish, viz. $\mathbb{S}^2$ shrinks back to the orbifold singularity.
Along with the type IIB dilaton and axion of the untwisted sector, these two twisted scalar
fields are related to the gauge theory parameters. In particular, we have
\bea
{1 \over g_s} = {1 \over g_1^2} + {1 \over g_2^2}; \qquad
{1 \over g_s} (B - {1 \over 2}) = {1 \over g_1^2} - {1 \over g_2^2}.
\label{relation}
\eea
The other modulus field $C$ is related to the theta angles. This can be seen by uplifting the brane configuration to M-theory. There, the theta angle is nothing but the M-theory circle. It varies as the $C$-potential on the two-cycle is turned on.
Consider now computation of the Wilson loop expectation value from the Polyakov path integral (\ref{polyakov}). Again, as the contour $C$ of the Wilson loop lies at the boundary of AdS$_3$ foliation inside AdS$_5$, the Type IIB string worldsheet would sweep a minimal surface in AdS$_3$. The area is of order $O(R^2)$. On the other hand,
the Type IIB string may sweep over the vanishing $\mathbb{S}^2$ at the orbifold fixed point. As the area of the cycle vanishes, the corresponding worldsheet instanton effect is of order $O(1)$ and unsuppressed. Thus, the situation is similar to the $A_1$ case. In the $\hat{A}_1$ case, however, we have a new direction of turning on the twisted modulus associated with $B$. From (\ref{relation}), we see that this amounts to turning on the two gauge couplings asymmetrically. Now, for the worldsheet instanton configuration, the Type IIB string worldsheet couples to the $B_2$ field. Therefore, the Wilson loop will get contributions of $\exp ( \pm 2 \pi i B)$ once the modulus $B$ is turned on.
There is another reason why infinitely many worldsheet instantons needs to be resummed. We proved that the
twisted sector Wilson loop is proportional to $| B |$. As $B$ ranges over the interval $[-{1 \over 2}, + {1 \over 2}]$, we see that the Wilson loop has nonanalytic behavior at $B = 0$. In gravity
dual, we argued that the Wilson loop depends on $B$ through the string worldsheet sweeping vanishing two-cycle at the orbifold fixed point. The $n$ instanton effect is proportional to $\exp (2 \pi i n B)$ for $n = \pm 1, \pm 2, \cdots$. It shows that $B$ has the periodicity over $[-{1 \over 2}, +{1 \over 2}]$ and effect of individual instanton is analytic over the period. Obviously, in order to exhibit non-analyticity
such as $|B|$, infinitely many instanton effects needs to be resummed.
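As an illustration of how a resummation of individually analytic instanton terms can produce $|B|$, consider a toy weighting $1/n^2$ for the $n$-th instanton (an illustrative choice, not the actual instanton measure of the string computation): the Fourier series $\sum_{n\geq 1}\cos(2\pi n B)/n^2$ resums to $\pi^2(B^2-|B|+1/6)$ on $[-1/2,1/2]$, which is non-analytic at $B=0$ even though every partial sum is analytic. A quick numerical check:

```python
import math

def instanton_partial_sum(B, N):
    """Partial sum of sum_{n>=1} cos(2*pi*n*B)/n**2:
    every term is analytic in B, but the resummed series is not."""
    return sum(math.cos(2 * math.pi * n * B) / n**2 for n in range(1, N + 1))

def resummed(B):
    """Closed form pi^2*(B^2 - |B| + 1/6) on [-1/2, 1/2]: non-analytic at B=0."""
    return math.pi**2 * (B * B - abs(B) + 1.0 / 6.0)

for B in (-0.4, -0.1, 0.0, 0.25, 0.5):
    print(B, instanton_partial_sum(B, 100000), resummed(B))
```

The partial sums converge to the closed form at rate $O(1/N)$; the kink at $B=0$ emerges only in the limit of infinitely many terms.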
\subsection{Comments on Wilson loops in Higgs phase}
Starting from the $\hat{A}_1$ quiver gauge theory, we have another limit we can take. Consider now the
D3-branes displaced away from the orbifold singularity. If all the branes are moved to a smooth point,
then the quiver gauge symmetry $G$ is broken to the diagonal subgroup $G_{\rm D}$:
\bea
G = {\rm U}(N) \times {\rm U}(N) \qquad \rightarrow \qquad G_{\rm D} = {\rm U}_{\rm D}(N)
\eea
modulo the center-of-mass U(1) group. Of the two bifundamental hypermultiplets, one is Higgsed away and the other forms a hypermultiplet transforming in the adjoint representation of the diagonal subgroup. Below the Higgs scale, this theory flows in the infrared to the ${\cal N}=4$ superconformal Yang-Mills theory, as expected since the $N$ D3-branes are now stacked at a smooth point.
We should be able to understand the two Wilson loops of the $\hat{A}_1$ quiver gauge theory in this limit. Obviously, the two Wilson loops $W_1, W_2$ are independent and distinguishable at energies above the Higgs scale, while they reduce to one and the same Wilson loop at energies below the Higgs scale. Noting that the Higgs scale is set by the distance of the D3-branes from the orbifold singularity, we see that the minimal surface of the macroscopic string worldsheet must exhibit a crossover. How this crossover takes place is a very interesting problem left for the future.
The above consideration is also generalizable to various partial breaking patterns such as
\bea
{\rm SU}(2N) \times {\rm SU}(2N) \rightarrow {\rm SU}(N) \times {\rm SU}(N) \times {\rm SU}_{\rm D}(N) \ .
\eea
Now, there are several types of strings. There are strings corresponding to Wilson loops of the three SU($N$)'s. There are also W-bosons that connect the diagonal SU($N$) to either of the two other SU($N$)'s. The fields now transform as $({\bf N}, \overline{\bf N}; {\bf 1}), (\overline{\bf N}, {\bf N}; {\bf 1})$ and $({\bf 1}, {\bf 1}; {\bf N}^2 - 1)$. As the theory is Higgsed, the localization method we relied on is no longer valid. Nevertheless, taking the holographic geometry of the conformal points of the quiver gauge theories as the starting point, the gravity dual is expected to be a certain class of multi-centered deformations. We expect that one can still learn a lot of (quiver) gauge theory dynamics by taking suitable approximate gravity duals, computing Wilson loop expectation values, and comparing them with weak `t Hooft coupling perturbative results.
\vspace{1cm}
\section{Generalization to $\hat{A}_{k-1}$ Quiver Gauge Theories}
So far, we were mainly concerned with the $A_1$ and $\hat{A}_1$ ${\cal N}=2$ (quiver) gauge theories.
These are the two simplest cases within the series of $\hat{A}_{k-1}$ type.
These quiver gauge theories are obtainable from D3-branes sitting at the orbifold singularity $\mathbb{C} \times (\mathbb{C}^2/\mathbb{Z}_k)$. The blow-up of the orbifold singularity contains $(k-1)$ two-cycles $\mathbb{S}^2_i$ $(i=1, \cdots, k-1)$.
The twisted sector of the Type IIB string theory includes $(k-1)$ tensor multiplets of $(5+1)$-dimensional (2,0) chiral supersymmetry.
Two sets of $(k-1)$ scalar fields are associated with
\bea
B_i = \oint_{\mathbb{S}^2_i} {B_2 \over 2 \pi} \qquad \mbox{and} \qquad
C_i = \oint_{\mathbb{S}^2_i} {C_2 \over 2 \pi} \qquad (i = 1, \cdots, k-1).
\label{2moduli}
\eea
Again, after T-duality to Type IIA string theory, we obtain the $\hat{A}_{k-1}$ brane configuration. As for $k=2$, we first partially compactify the orbifold to $\mathbb{S}^1$ of a fixed asymptotic radius and resolve the $\hat{A}_{k-1}$ singularities. This results in a hyperk\"ahler space where the $\mathbb{S}^1$ is fibered over the base space $\mathbb{R}^3$. The manifold is known as $k$-centered Taub-NUT space. There are $3(k-1)$ geometric moduli associated with $(k-1)$ degeneration centers (where the $\mathbb{S}^1$ fiber degenerates) which, along with the $2(k-1)$ moduli in (\ref{2moduli}), constitute 5 scalar fields of the aforementioned $(k-1)$ tensor multiplets. Now, T-dualizing along the $\mathbb{S}^1$ fiber, we obtain Type IIA background involving $k$ NS5-branes, which source nontrivial dilaton and NS-NS $H_3$ field strength, sitting at the degeneration centers on the base space $\mathbb{R}^3$ and at various positions on the T-dual circle $\widetilde{\mathbb{S}}^1$ set by the $B_i$'s in (\ref{2moduli}).
In the Type IIA brane configuration, there are various limits where global symmetries are enhanced. At a generic distribution of the $k$ NS5-branes on the dual circle $\widetilde{\mathbb{S}}^1$, the global symmetry is SU$(2) \times $U(1), associated with the base space $\mathbb{R}^3$ and the dual circle $\widetilde{\mathbb{S}}^1$. When (a fraction of) the NS5-branes coalesce, the space transverse to the NS5-branes approaches $\mathbb{C}^2$ very close to them and the U$(1)$ symmetry is enhanced to SU(2). In this limit, (a subset of) the gauge couplings of the D4-branes become zero and we have global symmetry enhancement. It is well known that a $k$-stack of NS5-branes, which sources the dilaton and the NS-NS $H_3$ field strength, generates the near-horizon geometry of linear dilaton \cite{lineardilaton}.
In string frame, the geometry is the exact conformal field theory \cite{exactCFT}
\bea
\mathbb{R}^{5,1} \times \Big(\mathbb{R}_{\phi, Q} \times {\rm SU}(2)_{k} \Big) \qquad \mbox{where} \qquad Q = \sqrt{2 \over k} \ .
\label{liouville}
\eea
Modulo the center of mass part, the worldvolume dynamics on D4-branes stretched between various NS5-branes can be described in terms of various boundary states \cite{kutasov}, representing localized and extended states in the bulk.
The string theory in this background breaks down at the location of NS5-branes, as the string coupling becomes infinitely strong. To regularize the geometry and define the string theory, we may take $\mathbb{C}$ inside the aforementioned near-horizon $\mathbb{C}^2$, split the coincident $k$ NS5-branes at the center and array them on a concentric circle of a nonzero radius. The string coupling is then cut off at a value set by the radius. The resulting worldsheet theory is the ${\cal N}=2$ supersymmetric Liouville theory.
In the regime we are interested in, $k$ takes values larger than $2$, $k =3, 4, \cdots$. In this regime, the ${\cal N}=2$ Liouville theory (\ref{liouville}) is strongly coupled. By the supersymmetric extension of the Fateev-Zamolodchikov-Zamolodchikov (FZZ) duality, we can turn the ${\cal N}=2$ supersymmetric Liouville theory into the Kazama-Suzuki coset theory. To do so, we T-dualize along the angular direction of the arrayed NS5-branes. Conserved winding modes around the angular direction are mapped to
conserved momentum modes, and the resulting Type IIB background is given by another
exact conformal field theory
\bea
\mathbb{R}^{5,1} \times \Big({{\rm SL}(2; \mathbb{R})_k \over {\rm U}(1)} \times {{\rm SU}(2)_k \over {\rm U}(1)} \Big)
\label{FZZdual}
\eea
modulo $\mathbb{Z}_k$ orbifolding. For large $k$, the conformal field theory is weakly coupled and describes the well-known cigar geometry \cite{GK2}.
At large (finite or infinite) $k$, what do we expect for the Wilson loop expectation value and, from the expectation values, what information can we extract about the holographic geometry of the gravity dual? Here, we
shall remark on several essential points that extend straightforwardly from the results of $\hat{A}_1$
and relegate further aspects to a separate work. For $\hat{A}_{k-1}$ quiver gauge theories,
there are $k$ nodes of gauge groups U($N$). Associated with them are $k$ independent Wilson loops:
\bea
W^{(i)}[C] := \mbox{Tr}_{(i)}P_s \exp\Bigl[ ig \int_C \rmd s \left(\dot{x}^mA^{(i)}_m (x) +\theta^a A_a^{(i)}(x) \right) \Bigr] \qquad (i=1, \cdots, k) \ .
\eea
From these, we can construct the Wilson loop in untwisted and twisted sectors. Explicitly, they are
\bea
W_0 = {1 \over k} \Big( W^{(1)}+ W^{(2)}+ \cdots + W^{(k-1)} + W^{(k)} \Big)
\eea
for the untwisted sector Wilson loop and
\bea
&& W_1 = W^{(1)} + \omega W^{(2)} + \cdots + \omega^{k-1} W^{(k)} \nonumber \\
&& W_2 = W^{(1)} + \omega^2 W^{(2)} + \cdots + \omega^{2(k-1)} W^{(k)} \nonumber \\
&& \cdots \nonumber \\
&& W_{k-1} = W^{(1)} + \omega^{k-1} W^{(2)} + \cdots + \omega^{(k-1)^2} W^{(k)}
\eea
for the $(k-1)$ independent twisted sector Wilson loops. They are simply the $k$ normal modes of the Wilson loops, constructed from the $\{ \omega^n \vert n=0, \cdots, k-1\}$ Fourier series of $\mathbb{Z}_k$ over the $k$ quiver nodes.
Consider now the planar limit $N \rightarrow \infty$. The Wilson loops $W^{(i)}$ are then all the same; equivalently, all the twisted Wilson loops vanish. Furthermore, as in the $\hat{A}_1$ quiver gauge theory, the untwisted Wilson loop will show exponential growth at large `t Hooft coupling.
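The normal-mode construction above is an ordinary discrete Fourier transform over the quiver nodes. A small numerical sketch (with hypothetical values standing in for the per-node expectation values $W^{(i)}$; the $1/k$ normalization of the untwisted mode follows the convention above) makes the vanishing of all twisted modes at the symmetric point explicit:

```python
import cmath

def wilson_normal_modes(W):
    """Z_k Fourier (normal) modes of per-node Wilson loops W = [W^(1),...,W^(k)].
    Mode 0 is the untwisted loop (with the 1/k average convention used above);
    modes a = 1..k-1 are the twisted loops."""
    k = len(W)
    omega = cmath.exp(2j * cmath.pi / k)
    modes = []
    for a in range(k):
        norm = 1.0 / k if a == 0 else 1.0  # W_0 carries 1/k, the W_a do not
        modes.append(norm * sum(omega**(a * i) * W[i] for i in range(k)))
    return modes

# When every node carries the same expectation value (the symmetric point),
# each twisted mode is a vanishing sum of k-th roots of unity:
modes = wilson_normal_modes([2.7, 2.7, 2.7, 2.7])
print([abs(m) for m in modes])
```

With unequal per-node values, the twisted modes are generically non-vanishing, mirroring the discussion of unequal gauge couplings.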
It is not difficult to extend the gauge theory results to $\hat{A}_{k-1}$ case. After taking large $N$ limit, the saddle point equations now read
\begin{eqnarray}
\frac{4\pi^2}{\lambda}\phi &=& \int\hspace{-4mm}-\hspace{2.5mm}\rmd\phi'\frac{\rho(\phi')}{\phi-\phi'},
\label{A_2} \label{akuntwisted} \\
\frac{2\pi^2}{\lambda_a} \phi
-(1-\overline{\omega})\int\hspace{-4mm}-\hspace{2.5mm}\rmd\phi'\delta_a\rho(\phi')F'(\phi-\phi')
&=& \int\hspace{-4mm}-\hspace{2.5mm}\rmd\phi'\frac{\delta_a \rho(\phi')}{\phi-\phi'},
\qquad (a=1, \cdots, k-1) \label{aktwisted} \nonumber \\
\end{eqnarray}
where
\bea
\rho &:=& {1 \over k} \Big( \rho^{(1)} + \cdots + \rho^{(k)} \Big) \nonumber \\
\delta_a \rho &:=& {1 \over k} \sum_{i=1}^k \omega^{a(i-1)} \rho^{(i)} \qquad
(a=1,2, \cdots, k-1),
\eea
and
\bea
&& {1 \over \lambda} := {1 \over k} \Big( {1 \over \lambda^{(1)}} + \cdots +{1 \over \lambda^{(k)}} \Big) \nonumber \\
&& {1 \over \lambda_a} := {1 \over k} \sum_{i=1}^{k} \omega^{a(i-1)} {1 \over \lambda^{(i)}} \qquad
(a = 1, 2, \cdots, k-1).
\eea
It is evident that $\delta_a \rho$ is linearly proportional to $1/\lambda_a$ and hence exhibits {\sl non-analytic} behavior.
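The untwisted equation \eqref{akuntwisted} is the Gaussian matrix-model saddle equation, solved by the Wigner semicircle density. The underlying principal-value identity, $\mathrm{p.v.}\int_{-1}^{1}\sqrt{1-t^2}\,(x-t)^{-1}\rmd t=\pi x$ for $|x|<1$ (written in unit-normalized variables; the endpoint of the actual eigenvalue support depends on $\lambda$), can be checked numerically by splitting off the singular part:

```python
import math

def pv_semicircle(x, N=200000):
    """PV integral of sqrt(1-t^2)/(x-t) over [-1,1], via the subtraction
    (sqrt(1-t^2)-sqrt(1-x^2))/(x-t)  [regular integrand]
    + sqrt(1-x^2)/(x-t)              [explicit PV = sqrt(1-x^2)*log((1+x)/(1-x))]."""
    h = 2.0 / N
    c = math.sqrt(1.0 - x * x)
    s = 0.0
    for i in range(N):
        t = -1.0 + (i + 0.5) * h   # midpoint rule; nodes avoid t = x generically
        s += (math.sqrt(1.0 - t * t) - c) / (x - t)
    return s * h + c * math.log((1.0 + x) / (1.0 - x))

for x in (0.3, -0.6, 0.85):
    print(x, pv_semicircle(x), math.pi * x)   # identity: PV integral = pi * x
```

This is the standard finite Hilbert transform (airfoil) identity behind the one-cut solution of the matrix model.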
By the AdS/CFT correspondence, the Wilson loops are mapped to macroscopic fundamental Type IIB string in
the geometry AdS$_5 \times \mathbb{S}^5/\mathbb{Z}_k$. There are $(k-1)$ 2-cycles of vanishing volume.
As in the $\hat{A}_1$ case, the $n$-th worldsheet instanton picks up a phase factor $\exp(2 \pi i B n)$. Again, since $B=1/2$ for the exact conformal field theory, the phase factor is given by $(-)^n$. As (a fraction of) the gauge couplings are tuned to zero, we again see from (\ref{aktwisted}) that the twisted Wilson loops are
suppressed by the worldsheet instanton effects. This is the screening effect we explained in the previous section, now extended to the $\hat{A}_{k-1}$ quiver theories. The suppression, however, is less significant as $k$ becomes large, since the one-loop contribution in (\ref{aktwisted}) is hierarchically small compared to the classical contribution. We see this as a manifestation of the fact, recalled above, that at $k \rightarrow \infty$ the worldsheet conformal field theory is weakly coupled in the Type IIB setup and the holographic dual geometry, the cigar geometry, becomes weakly curved.
It is also illuminating to understand the above Wilson loops from the viewpoint of the brane configuration.
For the brane configuration,
we start from the Type IIA theory on a compact spatial circle of circumference $L$.
We place $k$ NS5-branes on the circle at intervals $L_a$ $(a=1, 2, \cdots, k)$ such that $L_1 + L_2 + \cdots + L_k = L$ and then stretch $N$ D4-branes on each interval. The low-energy dynamics of these D4-branes is then described by the ${\cal N}=2$ quiver gauge theory of $\hat{A}_{k-1}$ type. In this setup, the $W^{(a)}$ Wilson loop is represented by a semi-infinite, macroscopic string emanating from the $a$-th D4-brane to infinity. Since there are $k$ different states for identical macroscopic strings, we can also form linear combinations of them. There are $k$ different normal modes: the untwisted Wilson loop $W_0$ is the lowest normal mode, obtained by the algebraic average of the $k$ strings; $W_1$ is the next lowest normal mode, obtained by the discrete lattice translation $\omega$ for adjacent strings; $\cdots$; and $W_{k-1}$ is the highest normal mode, obtained by the discrete lattice translation $\omega^{k-1}$ (which is the same as the configuration with lattice momentum $\bar{\omega}$ by the Umklapp process) for adjacent strings.
If the intervals are all equal, $L_1 = L_2 = \cdots = L_k = (L/k)$, then the brane configuration has cyclic permutation symmetry. This symmetry then ensures that all twisted Wilson loops vanish. If the intervals are different, (some of) the twisted Wilson loops are non-vanishing. If (a fraction of) the NS5-branes coalesce, the geometry and the worldvolume global symmetries get enhanced. We see that fundamental strings ending on the weakly coupled D4-branes will be pulled to the coalescing NS5-branes. The difference from the $A_1$ theory is that the effect of the NS5-branes away from the coalescing ones becomes larger as $k$ gets larger. This is the brane configuration counterpart of the suppression of the twisted Wilson loop expectation values, which was attributed earlier to the weak curvature of the holographic geometry (\ref{FZZdual}) in this limit.
\section{Discussion} \label{discuss}
\vspace{5mm}
In this paper, we investigated aspects of four-dimensional ${\cal N}=2$ superconformal gauge theories.
Utilizing the localization technique, we showed that the path integral of these theories is reduced to a finite-dimensional matrix integral, much as for the ${\cal N}=4$ super Yang-Mills theory. The resulting matrix model is, however, non-Gaussian. Expectation values of half-BPS Wilson loops in these theories can also be evaluated using matrix model techniques. We studied two theories in detail:
$A_1$ gauge theory with gauge group U$(N)$ and $2N$ fundamental hypermultiplets and $\hat{A}_1$ quiver gauge theory with gauge group U$(N)\times$U$(N)$ and two bi-fundamental hypermultiplets.
In the planar limit, $N \rightarrow \infty$, we determined exactly the leading asymptotics of the circular Wilson loops as the `t Hooft coupling becomes strong, $\lambda \rightarrow \infty$, and compared them to the exponential growth $\sim \exp(\sqrt{\lambda})$ seen in the ${\cal N}=4$ super Yang-Mills theory.
In the $A_1$ theory, we found the Wilson loop exhibits {\sl non-exponential} growth:
it is bounded from above in the large $\lambda$ limit.
In the $\hat{A}_1$ theory, there are two Wilson loops, corresponding to the two U$(N)$ gauge groups. We found that the untwisted Wilson loop exhibits exponential growth, with exactly the same leading behavior as the Wilson loop in ${\cal N}=4$ super Yang-Mills theory, while the twisted Wilson loop exhibits a new {\sl non-analytic} behavior in the difference of the two gauge coupling constants.
We also studied the holographic duals of these ${\cal N}=2$ theories and the macroscopic string configurations representing the Wilson loops. We argued that both the {\sl non-exponential} behavior of the $A_1$ Wilson loop and the {\sl non-analytic} behavior of the $\hat{A}_1$ Wilson loops are indicative of string scale geometries of the gravity dual. For the gravity dual of the $A_1$ theory, there are infinitely many vanishing 2-cycles around which the macroscopic string wraps and produces worldsheet instantons. These different saddle points interfere among themselves, canceling out the would-be leading exponential growth. What remains thereafter yields a non-exponential behavior, matching the exact gauge theory results. For the gravity dual of the $\hat{A}_1$ theory, there is again a vanishing 2-cycle at the $\mathbb{Z}_2$ orbifold singularity. On this 2-cycle, the NS-NS 2-form potential can be turned on, and it is set by the asymmetry between the two gauge coupling constants. The macroscopic string wraps around it, and each worldsheet instanton is weighted by $\exp (2 \pi i B)$. Again, since the 2-cycle has vanishing area, an infinite number of worldsheet instantons needs to be resummed. The resummation can then yield a non-analytic dependence on $B$, and this fits well with the exact gauge theory result.
A key lesson drawn from the present work is that the holographic duals of these ${\cal N}=2$ superconformal gauge theories must involve geometry of string scale. For the $A_1$ theory, the suppression of the exponential growth of the Wilson loop expectation value hints that the holographic dual must be a noncritical string theory. From the brane construction viewpoint, this arose because the two coinciding NS5-branes generate the well-known linear dilaton background near the horizon and the macroscopic string is pulled to the NS5-branes. From the holographic dual gravity viewpoint, this arose because the worldsheet of the macroscopic string representing the Wilson loop is not peaked at a semiclassical saddle point but is affected by proliferating worldsheet instantons. We argued that a delicate cancellation among the instanton sums leads to the non-exponential behavior
of the Wilson loop.
It should be possible to extend the analysis in this paper to general ${\cal N}=2$ superconformal gauge theories. Recently, various quiver constructions were put forward \cite{gaiotto} and some of their gravity duals were studied \cite{gaiottomaldacena}. The main focus of this line of research was on the quiver generalization of the Argyres-Seiberg S-duality, which does not commute with the large $N$ limit. The aim of the present work was to characterize the behavior of the Wilson loop in the large $N$ limit in terms of the representation contents of the matter fields and, from the results, to infer the holographic geometry of the gravity duals. We also remarked that our approach is complementary to research based on various worldsheet formulations \cite{Kawai:2007ek}\cite{Kawai:2007eg}\cite{Berkovits:2007rj}\cite{Azeyanagi:2008mi}.
Recently, localization in the ${\cal N}=6$ superconformal Chern-Simons theory was obtained and Wilson loops therein were studied in detail \cite{ABJMlocalization}. It should also be possible to extend the analysis to the superconformal (quiver) Chern-Simons theories. In particular, given that these two types of theories are related, roughly speaking, by partial compactification on $\mathbb{S}^1$ and flow to the infrared, understanding the similarities and differences between quiver gauge theories in (3+1) dimensions and in (2+1) dimensions would be extremely useful for elucidating further relations in gauge and string dynamics.
Finally, it should be possible to extend the analysis in this work to ${\cal N}=1$ superconformal quiver gauge theories and study implications to the Seiberg duality. Candidate non-critical string duals of these gauge theories were proposed by \cite{israel}.
We are currently investigating these issues but will relegate reporting our findings to follow-up publications.
\vspace{2cm}
\section*{Acknowledgments}
We are grateful to Zoltan Bajnok, Dongsu Bak, David Gross and Juan Maldacena for useful discussions on topics related to this work and comments. SJR thanks Kavli Institute for Theoretical Physics for hospitality during this work. TS thanks KEK Theory Group, Institute for Physics and Mathematics of the Universe and Asia-Pacific Center for Theoretical Physics for hospitality during this work. This work was supported in part by the National Science Foundation of Korea Grants 2005-084-C00003, 2009-008-0372, 2010-220-C00003, EU-FP Marie Curie Research \& Training Networks HPRN-CT-2006-035863 (2009-06318) and U.S. Department of Energy Grant DE-FG02-90ER40542.
\section{Introduction}
\subsection{Background and main objectives}
One of the fundamental results in probability theory is the law of large numbers.
Large deviation theory describes the rate of convergence in the law of large numbers.
The most important results in this direction are the Bahadur-Rao and Petrov precise large deviation asymptotics, which we recall below
for independent and identically distributed (i.i.d.)\ real-valued random variables $(X_{i})_{i\geqslant 1}$.
Let $S_{n}=\sum_{i=1}^{n}X_{i}.$
Denote by $I_{\Lambda}$ the set of real numbers $s\geqslant 0$
such that $\Lambda(s):=\log\mathbb{E}[e^{s X_{1}}]< +\infty$
and by $I_{\Lambda}^\circ$ the interior of $I_{\Lambda}$.
Let $\Lambda^{*}$ be the Fenchel-Legendre transform of $\Lambda$.
Assume that $s\in I_{\Lambda}^\circ$ and $q$ are related by $q=\Lambda'(s)$.
Set $\sigma^2_s = \Lambda''(s).$
From the results of Bahadur and Rao \cite{BR1960} and Petrov \cite{Petrov65} it follows that
if the law of $X_{1}$ is non-lattice, then the following large deviation asymptotic holds true:
\begin{align}
\label{Petrov theorem 001}
\mathbb{P}(S_{n}\geqslant n(q+l)) \sim
\frac{\exp( -n \Lambda^*(q+l))}{s \sigma_s \sqrt{2\pi n}},
\ n\to\infty,
\end{align}
where
$
\Lambda^*(q+l)
= \Lambda^{*}(q) + sl + \frac{l^2}{2 \sigma_s^2} + O(l^3)
$
and $l$ is a vanishing perturbation as $n\to\infty.$
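As a concrete check of \eqref{Petrov theorem 001}, take $X_1$ standard Gaussian: then $\Lambda(s)=s^2/2$, $\Lambda^{*}(q)=q^2/2$, $s=q$, $\sigma_s=1$, and $\mathbb{P}(S_n\geqslant nq)$ equals the Gaussian tail $1-\Phi(q\sqrt{n})$ exactly, so the asymptotic reduces to the classical Mills-ratio expansion. A short numerical verification (with $l=0$):

```python
import math

def tail_exact(n, q):
    """P(S_n >= n q) for i.i.d. standard normals: the Gaussian tail at q*sqrt(n)."""
    return 0.5 * math.erfc(q * math.sqrt(n) / math.sqrt(2.0))

def tail_bahadur_rao(n, q):
    """Right-hand side of the Bahadur-Rao asymptotic with l = 0:
    here Lambda*(q) = q^2/2, s = q, sigma_s = 1."""
    return math.exp(-n * q * q / 2.0) / (q * math.sqrt(2.0 * math.pi * n))

for n in (10, 100, 1000, 10000):
    print(n, tail_exact(n, 0.2) / tail_bahadur_rao(n, 0.2))  # ratio tends to 1
```

The ratio approaches $1$ at rate $O(1/(nq^2))$, consistent with the $1/x^2$ correction in the Mills ratio.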
Bahadur and Rao \cite{BR1960}
have established the equivalence \eqref{Petrov theorem 001} with $l=0$.
Petrov
improved it
by showing that \eqref{Petrov theorem 001} holds uniformly in
$| l | \leqslant l_n \to 0$ as $n\to \infty.$
Actually, Petrov's result is also uniform in $q$
and is therefore stronger than Bahadur-Rao's theorem
even with $l=0.$
The relation \eqref{Petrov theorem 001} with $l=0$ and
its extension to $|l|\leqslant l_n\to 0$ have multiple implications
in various domains of probability and statistics.
The main goal of the present paper is to establish an equivalence
similar to \eqref{Petrov theorem 001}
for products of i.i.d.\ random matrices.
Let $(g_{n})_{n\geqslant 1}$ be a sequence of i.i.d. $d\times d$ real random matrices
defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$
with common law $\mu$.
Denote by $\|\cdot\|$ the operator norm of a matrix and by $| \cdot |$ the Euclidean norm in $\mathbb R^d$.
Set for brevity
$G_{n}:= g_{n}\ldots g_{1},$ $n\geqslant 1$.
The study of the asymptotic behavior of the product $G_n$ has attracted much attention since the fundamental work of
Furstenberg and Kesten \cite{FursKesten1960}, where
the strong law of large numbers for $\log \|G_n\|$ was established.
Under additional assumptions, Furstenberg \cite{Furstenberg1963} extended it to $\log |G_n x|$,
for any starting point $x$ on the unit sphere $\mathbb{S}^{d-1} = \{ x \in \mathbb{R}^d: |x| = 1 \}.$
A number of noteworthy results in this area can be found in Kesten \cite{Kesten1973},
Kingman \cite{Kingman1973},
Le Page \cite{LePage1982}, Guivarc'h and Raugi \cite{GR1985}, Bougerol and Lacroix \cite{Bougerol1985},
Goldsheid and Guivarc'h \cite{GG1996},
Hennion \cite{Hennion1997},
Furman \cite{Furman2002},
Hennion and Herv\'e \cite{Hen-Herve2004},
Guivarc'h \cite{Guivarch2015},
Guivarc'h and Le Page \cite{GE2016},
Benoist and Quint \cite{BQ2016, BQ2017}
to name only a few.
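The Furstenberg law of large numbers for $\log|G_n x|$ is easy to probe numerically. The following sketch estimates the top Lyapunov exponent $\lim_{n\to\infty}\frac{1}{n}\log|G_n x|$ for an illustrative ensemble of $2\times 2$ positive matrices (the ensemble is our choice for illustration, not one considered in the paper), renormalizing the direction vector at each step to avoid overflow:

```python
import math, random

def top_lyapunov(sample_matrix, n=100000, seed=0):
    """Monte Carlo estimate of lim (1/n) log |G_n x| for a product of i.i.d.
    2x2 random matrices, renormalizing the vector at each step."""
    rng = random.Random(seed)
    x = (1.0, 0.0)
    log_norm = 0.0
    for _ in range(n):
        (a, b), (c, d) = sample_matrix(rng)
        y = (a * x[0] + b * x[1], c * x[0] + d * x[1])
        r = math.hypot(y[0], y[1])
        log_norm += math.log(r)
        x = (y[0] / r, y[1] / r)
    return log_norm / n

# Illustrative ensemble (an assumption): i.i.d. entries uniform on [0, 1],
# which gives allowable positive matrices almost surely.
def uniform_positive(rng):
    return ((rng.random(), rng.random()), (rng.random(), rng.random()))

print(top_lyapunov(uniform_positive))  # stabilizes as n grows
```

For a deterministic diagonal matrix $\mathrm{diag}(2, 1/2)$ and starting vector $(1,0)$, the estimate equals $\log 2$ exactly, which serves as a sanity check.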
In this paper we are interested in the asymptotic behavior of large deviation probabilities for $\log | G_n x |$,
where $x \in \mathbb{S}^{d-1}$.
Set $I_{\mu}=\{s\geqslant 0: \mathbb{E}(\|g_1\|^{s})<+\infty\}.$
For $s\in I_\mu$,
let $ \kappa(s)=\lim_{n\to\infty}\left(\mathbb{E}\| G_n \|^{s}\right)^{\frac{1}{n}}.$
Define the convex function $\Lambda(s)=\log\kappa(s)$, $s\in I_\mu$, and consider its Fenchel-Legendre transform
$
\Lambda^{\ast}(q)=\sup_{s\in I_{\mu}}\{sq-\Lambda(s)\},
$
$q\in\Lambda'(I_{\mu}).$
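As a scalar illustration of the Fenchel-Legendre transform (computing $\kappa(s)$ for matrix products is of course much harder), take the Gaussian case $\Lambda(s)=s^2/2$, for which $\Lambda^{*}(q)=q^2/2$; a grid-based supremum reproduces this:

```python
def legendre_transform(Lambda, q, s_grid):
    """Fenchel-Legendre transform Lambda*(q) = sup_s [ s*q - Lambda(s) ],
    approximated by a maximum over a grid of s values."""
    return max(s * q - Lambda(s) for s in s_grid)

Lambda = lambda s: 0.5 * s * s          # Gaussian cumulant generating function
s_grid = [i / 1000.0 for i in range(-5000, 5001)]  # s in [-5, 5], step 1e-3

for q in (0.0, 0.5, 1.0, 2.0):
    print(q, legendre_transform(Lambda, q, s_grid), 0.5 * q * q)
```

The supremum is attained at the grid point $s=q$ (the solution of $\Lambda'(s)=q$), matching the closed form $q^2/2$.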
Our first objective is to
establish the following
Bahadur-Rao type precise large deviation asymptotic:
\begin{align}
\label{le page result1}
\mathbb{P}(\log |G_{n} x |\geqslant nq) \sim
\bar r_{s}(x)
\frac{ \exp \left( -n\Lambda^{*}(q) \right) } {s\sigma_s\sqrt{2\pi n}}, \ n\to\infty,
\end{align}
where $\sigma_s>0,$
$\bar r_s = \frac{r_s}{ \nu_s( r_s) } >0,$
$r_s$ and $\nu_{s}$ are, respectively, the unique up to a constant
eigenfunction and unique probability eigenmeasure of the transfer operator
$P_s$ corresponding to the eigenvalue $\kappa(s)$
(see Section \ref{sec:resnorms} for precise statements).
In fact, to enlarge the range of applications of \eqref{le page result1}, it is useful to add
a vanishing perturbation to $q$.
In this vein we obtain the following Petrov type large deviation expansion:
under appropriate conditions, uniformly in $| l |\leqslant l_n \to 0 $ as $n\to\infty,$
\begin{align} \label{intro entries 01}
\mathbb{P}(\log | G_n x | \geqslant n(q+l))
\sim \bar r_{s}(x)
\frac{ \exp\left( -n\Lambda^{*}(q+l) \right)} {s\sigma_{s}\sqrt{2\pi n}},
\ \ n\to\infty.
\end{align}
As a consequence of
\eqref{intro entries 01}
we are able to infer new results,
such as large deviation principles for $\log \| G_n \|$;
see Theorem \ref{Thm-LDP-Norm}.
From \eqref{intro entries 01} we also deduce a local large deviation asymptotic:
there exists
a sequence $\Delta_n > 0$ converging to $0$ such that,
uniformly in $\Delta \in [\Delta_n, o(n)]$,
\begin{align} \label{local the001}
\mathbb{P}(\log |G_{n} x |\in [nq, nq + \Delta ) ) \sim
\Delta \frac{\bar r_{s}(x)}
{ s \sigma_s \sqrt{2\pi n} } e^{-n\Lambda^{*}(q)},\ n\to\infty.
\end{align}
Our results are established for both invertible matrices and positive matrices.
For invertible matrices,
Le Page \cite{LePage1982}
has obtained \eqref{le page result1}
for $s>0$ small enough under more restrictive conditions,
such as the existence of exponential moments of $\|g_1\|$ and $\|g_1^{-1}\|$.
The asymptotic \eqref{le page result1} clearly implies a large deviation result due to Buraczewski and Mentemeier
\cite{BS2016}
which holds for invertible matrices and positive matrices:
for $q=\Lambda'(s)$ and $s\in I_\mu^{\circ}$,
there exist two constants
$0<c_s<C_s<+\infty$
such that
\begin{align} \label{LDbounds001}
c_s\leqslant \liminf_{n\to\infty}\frac{\mathbb{P}(\log |G_{n} x|\geqslant nq)}{\frac{1}{\sqrt{n}}~e^{-n\Lambda^{\ast}(q)}}
\leqslant \limsup_{n\to\infty}\frac{\mathbb{P}(\log |G_{n} x|\geqslant nq)}{\frac{1}{\sqrt{n}}~e^{-n\Lambda^{\ast}(q)}}
\leqslant C_s.
\end{align}
Consider the Markov chain $X_{n}^{x}:=G_n x / | G_n x|$.
Our second objective is to
give precise large deviations for the couple $(X_{n}^{x}, \log | G_n x |)$
with target functions.
We prove that for any H\"{o}lder continuous target
function $\varphi$ on
$X_{n}^{x}$, and
any target function $\psi$ on
$\log |G_{n} x|$
such that $y\mapsto e^{-sy}\psi(y)$ is directly Riemann integrable,
it holds that
\begin{align} \label{Introdution result2}
&\mathbb{E} \Big[ \varphi(X_{n}^{x}) \psi(\log |G_{n} x| -n(q+l)) \Big] \nonumber \\
&\qquad \sim
\bar r_{s}(x)\nu_{s}(\varphi)
\int_{\mathbb{R}}e^{-sy}\psi(y)dy
\ \frac{ \exp\left( -n\Lambda^{*}(q+l) \right)} {\sigma_{s}\sqrt{2\pi n}},
\ \ n\to\infty.
\end{align}
As a special case
of \eqref{Introdution result2} with $l=0$ and $\psi$ compactly supported we obtain
Theorem 3.3 of Guivarc'h \cite{Guivarch2015}.
With $l=0$, $\psi$ the indicator function of the interval $[0,\infty)$ and $\varphi=r_s$, we get
the main result in \cite{BS2016}.
Our third objective is to establish asymptotics for lower large deviation probabilities:
we prove that
for $q=\Lambda'(s)$ with $s<0$ sufficiently close to $0$,
it holds,
uniformly in $|l|\leqslant l_n$,
\begin{align} \label{intro s negative001}
\mathbb{P} \big( \log|G_n x| \leqslant n(q+l) \big) =
\bar r_{s}(x) \frac{ \exp \left( -n \Lambda^*(q+l) \right) }{ - s \sigma_s \sqrt{ 2 \pi n} } ( 1 + o(1)).
\end{align}
This sharpens the large deviation principle established in \cite[Theorem 6.1]{Bougerol1985} for
invertible matrices.
Moreover, we extend the large deviation asymptotic \eqref{intro s negative001}
to the couple $(X_n^x, \log |G_n x|)$ with target functions.
\subsection{Proof outline}
Our proof
is different from the standard approach of Dembo and Zeitouni \cite{Dembo2009} based on the Edgeworth expansion,
which has been employed for instance in \cite{BS2016}.
In contrast to \cite{BS2016},
we start with the identity
\begin{align} \label{intro001}
\frac{e^{ n \Lambda^{*}(q + l) } }{r_{s}(x)}
& \mathbb{P} \big( \log |G_{n} x| \geqslant n (q+l) \big) \nonumber \\
& = e^{ n h_s(l) } \mathbb{E}_{\mathbb{Q}_{s}^{x}}
\Big( \frac{ \psi_s ( \log |G_{n} x| - n (q+l) ) }{ r_{s}(X_{n}^x) }\Big),
\end{align}
where $\mathbb{Q}_{s}^{x}$ is the change of measure defined in Section \ref{sec:spec gap norm}
for the norm cocycle $\log |G_n x|$,
$\psi_s(y)=e^{-sy} \mathbbm{1}_{\{y\geqslant 0\}}$ and $h_s(l) = \Lambda^{*}(q + l) - \Lambda^{*}(q) -sl$.
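The change of measure $\mathbb{Q}_s^x$ is the matrix analogue of classical exponential tilting. In the scalar Gaussian case, the same tilting yields an efficient importance-sampling estimator of the tail probability; the sketch below (a standard construction, shown only to illustrate the role of the weight $e^{n\Lambda(s)-sS_n}$) samples under the tilted measure and reweights:

```python
import math, random

def tilted_tail_estimate(n, q, samples=20000, seed=0):
    """Importance-sampling estimate of P(S_n >= n q) for i.i.d. N(0,1) summands,
    using the exponentially tilted measure: under Q_s each X_i is N(s,1) with
    s = q (so that Lambda'(s) = q), and the unbiased weight on the event is
    exp(n*Lambda(s) - s*S_n) with Lambda(s) = s^2/2."""
    rng = random.Random(seed)
    s = q
    log_weight_const = n * 0.5 * s * s    # n * Lambda(s)
    acc = 0.0
    for _ in range(samples):
        S = sum(rng.gauss(s, 1.0) for _ in range(n))
        if S >= n * q:
            acc += math.exp(log_weight_const - s * S)
    return acc / samples

n, q = 20, 0.5
exact = 0.5 * math.erfc(q * math.sqrt(n) / math.sqrt(2.0))  # Gaussian tail
print(tilted_tail_estimate(n, q), exact)
```

Under the tilted measure the rare event becomes typical, so the relative error stays small even though the target probability is exponentially small in $n$.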
Usually the expectation in the right-hand side of \eqref{intro001} is handled via the
Edgeworth expansion for the distribution function ${\mathbb{Q}_{s}^{x}}\big( \frac { \log |G_n x| - nq}{\sqrt{n} \sigma_s } \leqslant t \big)$;
however, the presence of the multiplier $r_s(X^x_n)^{-1}$ makes this impossible.
Our idea is to replace the function $\psi_s$ by suitable
upper and lower smoothed bounds,
using a technique
from
Grama, Lauvergnat and Le Page \cite{GLE2017}.
For simplicity we deal only with the upper bound
$\psi_s \leqslant {\psi}_{s,\varepsilon}^+ * {\rho}_{\varepsilon^2}$,
where $\psi^+_{s,\varepsilon}(y) = \sup_{y': |y'-y| \leqslant \varepsilon} \psi_s(y')$, for some $\varepsilon >0$,
and $\rho_{\varepsilon^2}$ is a density function on the real line
satisfying the following properties:
the Fourier transform $\widehat{\rho}_{\varepsilon^2}$ is supported on $[-\varepsilon^{-2}, \varepsilon^{-2}]$,
has a continuous extension in the complex plane and is analytic in the domain
$\{ z \in \mathbb{C}: |z| < \varepsilon^{-2}, \Im z \neq 0 \}$, see Lemma \ref{LemAnalyExten}.
Let $R_{s,it}$ be the perturbed operator defined by
$R_{s,it}(\varphi)(x)
=\mathbb{E}_{\mathbb{Q}_{s}^{x}} [\varphi(X_1) e^{it ( \log |g_1 x| - q )}],$
for any H\"{o}lder continuous function $\varphi$ on the unit sphere $\mathbb{S}^{d-1}.$
Using the inversion formula we obtain the following upper bound:
\begin{align} \label{intro002}
& \mathbb{E}_{\mathbb{Q}_{s}^{x}} \Big(\frac{ \psi_s( \log |G_n x| - n (q+l) ) }{r_{s}(X_{n}^x)} \Big) \nonumber \\
&\qquad \qquad\leqslant \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itln} R^{n}_{s,it}(r_{s}^{-1})(x)
\widehat{\psi}_{s,\varepsilon}^+(t) \widehat{\rho}_{\varepsilon^2} (t)
dt,
\end{align}
where
$R^{n}_{s,it}$ is the $n$-th iteration of $R_{s,it}$.
The integral in the right-hand side of \eqref{intro002} is decomposed into two parts:
\begin{align}\label{Intro-Integ}
e^{ n h_s(l) } \Big\{ \int_{ |t| < \delta } + \int_{ |t| \geqslant \delta} \Big\}
e^{-itln} R^{n}_{s,it}(r_{s}^{-1})(x) \widehat{\psi}_{s,\varepsilon}^+(t) \widehat{\rho}_{\varepsilon^2} (t) dt.
\end{align}
Since $\widehat{\rho}_{\varepsilon^2}$ is compactly supported on $\mathbb R$ and
$\mu$ is non-arithmetic, the second integral in \eqref{Intro-Integ} decays exponentially fast to $0$.
To deal with the first integral in \eqref{Intro-Integ}, we make use of
the spectral gap decomposition of the perturbed operator $R_{s,it}$:
$R^{n}_{s,it} = \lambda^{n}_{s,it} \Pi_{s,it} + N^{n}_{s,it}.$
Taking into account the fact that the remainder term $N^{n}_{s,it}$ decays exponentially fast to $0$,
the main difficulty is to investigate the integral:
\begin{align*}
e^{ n h_s(l) } \int_{ -\delta }^{\delta} e^{-itln}
\lambda^{n}_{s,it} \Pi_{s,it} (r_{s}^{-1})(x) \widehat{\psi}_{s,\varepsilon}^+(t) \widehat{\rho}_{\varepsilon^2} (t) dt.
\end{align*}
To find the exact asymptotic of this integral, we can apply the saddle point method
(see Fedoryuk \cite{Fedoryuk1987}).
This is possible, since by the analyticity of the functions $\widehat{\psi}_{s,\varepsilon}^+$ and $\widehat{\rho}_{\varepsilon^2}$,
one can apply Cauchy's integral theorem to change the integration path so that it passes through the saddle point
$z_0 = z_0(l)$, which is the unique solution of the saddle point equation $\log \lambda_{s,z} = zl$.
The lower bound of the integral in \eqref{intro001} is a little more delicate, but can be treated in a similar way.
The passage to the targeted version is done by using approximation techniques.
We end this section by fixing some notation,
which will be used throughout the paper.
We denote by $c$, $C$, eventually supplied with indices, absolute constants whose values may change from line to line.
By $c_\alpha$, $C_{\alpha}$ we mean constants depending only on the index $\alpha.$
The interior of a set $A$ is denoted by $A^\circ$.
Let $\mathbb N = \{1,2,\ldots\}$.
For any integrable function $\psi: \mathbb{R} \to \mathbb{C}$, define its Fourier transform by
$\widehat{\psi} (t) = \int_{\mathbb{R}} e^{-ity} \psi(y) dy$, $t \in \mathbb{R}$.
For a matrix $g$, its transpose is denoted by $g^{\mathrm{T}}.$
For a measure $\nu$ and a function $\varphi$ we write $\nu(\varphi)=\int \varphi d\nu.$
\section{Main results}\label{sec.prelim}
\subsection{Notation and conditions}\label{subsec.notations}
The space $\mathbb{R}^d$ is equipped with the standard scalar product $\langle \cdot, \cdot\rangle$
and the Euclidean norm $|\cdot|$.
For $d\geqslant 1$, let $M(d,\mathbb{R})$ be the set of $d\times d$ matrices with entries in $\mathbb R$
equipped with the operator norm $\|g\|=\sup_{x\in \mathbb{S}^{d-1}}|g x|$, for $g \in M(d,\mathbb{R})$,
where $\mathbb{S}^{d-1}=\{x\in \mathbb{R}^{d}, |x|=1\}$ is the unit sphere.
We shall work with products of invertible or positive matrices (throughout the paper we use the term positive in the wide sense, i.e.\ each entry is non-negative).
Denote by $\mathscr G=GL(d,\mathbb R)$ the general linear group of invertible matrices of $M(d,\mathbb R).$
A positive matrix $g\in M(d,\mathbb R)$ is said to be \emph{allowable},
if every row and every column of $g$ has a strictly positive entry.
Denote by $\mathscr G_+$ the multiplicative semigroup of allowable positive matrices of $M(d,\mathbb R)$.
We write $\mathscr G_+^\circ $ for the subsemigroup of $\mathscr G_+$ with strictly positive entries.
Denote by $\mathbb{S}^{d-1}_{+}=\{x\geqslant 0 : |x|=1\}$ the intersection of the unit sphere
with the positive quadrant.
To unify the exposition,
we use the symbol $\mathcal{S}$ to denote
$\mathbb{S}^{d-1}$ in the case of invertible matrices,
and $\mathbb{S}^{d-1}_{+}$ in the case of positive matrices.
The space $\mathcal S$ is equipped with the metric $\mathbf d$ which we proceed to introduce.
For invertible matrices, the distance $\mathbf{d}$ is defined as the angular distance (see \cite{GE2016}), i.e.,
for any $x, y \in \mathbb{S}^{d-1}$, $\mathbf{d}(x,y)= |\sin \theta(x,y)|$,
where $\theta(x,y)$ is the angle between $x$ and $y$.
For positive matrices, the distance $\mathbf{d}$ is the Hilbert cross-ratio metric
(see \cite{Hennion1997}) defined by
$\mathbf{d}(x,y) = \frac{1- m(x,y)m(y,x)}{1 + m(x,y)m(y,x)}$, where
$m(x,y)=\sup\{\lambda>0 : \ \lambda y_i\leqslant x_i,\ \forall i=1,\ldots, d \}$,
for any two vectors $x=(x_1, \ldots, x_d)$ and $y=(y_1, \ldots, y_d)$ in $\mathbb{S}_{+}^{d-1}$.
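For illustration, consider the unit vectors $x=\frac{1}{\sqrt{5}}(1,2)$ and $y=\frac{1}{\sqrt{5}}(2,1)$ in $\mathbb{S}^{1}_{+}$. For strictly positive vectors one has $m(x,y)=\min_{i} x_i/y_i$, so that $m(x,y)=m(y,x)=\frac{1}{2}$, whence
\begin{align*}
\mathbf{d}(x,y) = \frac{1-\frac{1}{4}}{1+\frac{1}{4}} = \frac{3}{5}.
\end{align*}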
Let $\mathcal{C}(\mathcal{S})$ be the space of continuous functions on $\mathcal{S}$.
We write $\mathbf{1}$ for the constant function $\mathbf{1}(x)=1$, $x \in \mathcal{S}$.
Throughout this paper, let $\gamma>0$ be a fixed small constant.
For any $\varphi\in \mathcal{C(S)}$, set
\begin{align}
\|\varphi\|_{\infty}:= \sup_{x\in \mathcal{S}}|\varphi(x)| \quad \mbox{and} \quad
\|\varphi\|_{\gamma}:= \|\varphi\|_{\infty}
+ \sup_{x,y\in \mathcal{S}}\frac{|\varphi(x)-\varphi(y)|}{\mathbf{d}(x,y)^{\gamma}}, \nonumber
\end{align}
and introduce the Banach space
$
\mathcal{B}_{\gamma}:=\{\varphi\in \mathcal{C(S)}: \|\varphi\|_{\gamma}<+\infty\}.
$
For $g\in M(d,\mathbb R)$ and $x \in \mathcal{S}$,
write $g \cdot x= \frac{gx}{|gx|}$ for the projective action of $g$ on $\mathcal{S}$.
For any $g \in M(d,\mathbb R)$, set $\iota(g):=\inf_{x\in \mathcal S}|gx|.$
For both invertible matrices and allowable positive matrices, it holds that $\iota(g)>0.$
Note that for any invertible matrix $g$, we have $\iota(g) = \| g^{-1} \|^{-1}$.
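Indeed, as $x$ runs over $\mathbb{S}^{d-1}$, the vector $y=\frac{gx}{|gx|}$ also runs over $\mathbb{S}^{d-1}$ and $|g^{-1}y|=|gx|^{-1}$, so that
\begin{align*}
\iota(g)=\inf_{x\in \mathbb{S}^{d-1}}|gx|
= \Big( \sup_{y\in \mathbb{S}^{d-1}}|g^{-1}y| \Big)^{-1}
= \|g^{-1}\|^{-1}.
\end{align*}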
Let $(g_{n})_{n\geqslant 1}$
be a sequence of i.i.d.\ random matrices of the same probability law $\mu$ on $M(d,\mathbb{R})$.
Set $G_n=g_{n}\ldots g_{1},$ for $n\geqslant 1.$
Our goal is to establish, under suitable conditions,
a large deviation equivalence similar to \eqref{Petrov theorem 001}
for the norm cocycle
$\log |G_n x|$
for invertible matrices and positive matrices.
In both cases,
we denote by $\Gamma_{\mu}:=[\supp\mu]$
the smallest closed semigroup of $M(d,\mathbb{R})$ generated by $\supp \mu$ (the support of $\mu$),
that is, $\Gamma_{\mu} = \overline{\cup_{n=1}^\infty \{ \supp \mu\}^n}$.
Set
$$
I_{\mu}
=\{s\geqslant 0: \mathbb{E}(\|g_1\|^{s})<+\infty\}.
$$
Applying H\"{o}lder's inequality to $\mathbb{E}(\|g_1\|^{s})$, it is easily seen that $I_{\mu}$ is an interval.
We make use of the following exponential moment condition:
\begin{conditionA}\label{Aexp}
There exist $s\in I_\mu^\circ$ and $\alpha\in(0,1)$ such that
$\mathbb{E} \| g_1 \|^{s+\alpha} \iota(g_{1})^{-\alpha} < + \infty.$
\end{conditionA}
For invertible matrices, we introduce the following strong irreducibility and proximality conditions,
where we recall that a matrix $g$ is said to be \emph{proximal} if it has an algebraically simple dominant eigenvalue.
\begin{conditionA}\label{A1}
{\rm (i) (Strong irreducibility)}
No finite union of proper subspaces of $\mathbb{R}^d$ is $\Gamma_{\mu}$-invariant.
{\rm (ii) (Proximality)} $\Gamma_{\mu}$ contains at least one proximal matrix.
\end{conditionA}
The conditions of strong irreducibility and proximality are always satisfied for $d=1$.
If $g$ is proximal, denote by $\lambda_{g}$ its dominant eigenvalue
and by $v_{g}$ the associated normalized eigenvector ($|v_g|=1$).
In fact, $g$ is proximal if and only if the space $\mathbb{R}^{d}$ can be decomposed as $\mathbb{R}^{d}=\mathbb{R}v_{g}\oplus V'$
such that $gV'\subset V'$
and the spectral radius of $g$ on the invariant subspace $V'$
is strictly less than $|\lambda_{g}|$.
For invertible matrices, condition \ref{A1}
implies that the Markov chain $X_{n}^{x}:=G_n\!\cdot\!x$ has a unique $\mu$-stationary measure, which is supported on
$$
V(\Gamma_{\mu})=\overline{\{ \pm v_{g}\in
\mathbb S^{d-1}: g\in\Gamma_{\mu}, \ g \mbox{ is proximal} \}}.
$$
For positive matrices, introduce the following condition:
\begin{conditionA}\label{A2}
{\rm (i) (Allowability)}
Every $g\in\Gamma_{\mu}$ is allowable.
{\rm (ii) (Positivity)}
$\Gamma_{\mu}$ contains at least one matrix belonging to $\mathscr G_+^\circ$.
\end{conditionA}
It can be shown (see \cite[Lemma 4.3]{BDGM2014}) that for positive matrices, condition \ref{A2} ensures the existence and uniqueness of the invariant measure for the Markov chain $X_{n}^{x}$
supported on
$$
V(\Gamma_{\mu})=\overline{\{v_{g}\in
\mathbb S^{d-1}_+
: g\in \Gamma_{\mu}, \ g \in \mathscr G_+^{\circ} \}}.
$$
In addition, $V(\Gamma_{\mu})$ is the unique minimal $\Gamma_{\mu}$-invariant subset (see \cite[Lemma 4.2]{BDGM2014}).
According to the Perron-Frobenius theorem, a strictly positive matrix always has an algebraically simple dominant eigenvalue,
so condition \ref{A2}(ii) implies condition \ref{A1}(ii) for $d>1$.
For any $s\in I_{\mu}$, for invertible matrices and for positive matrices,
the following limit exists (see \cite{GE2016} and \cite{BS2016}):
\begin{align}
\kappa(s)=\lim_{n\to\infty} \left(\mathbb{E} \| G_n \|^{s}\right)^{\frac{1}{n}}. \nonumber
\end{align}
The function
$\Lambda=\log\kappa: I_{\mu} \to \mathbb R$ is convex and
analytic on $I_{\mu}^{\circ}$
(it plays the same role as the $\log$-Laplace transform of $X_1$ in the real i.i.d.\ case).
Introduce the
Fenchel-Legendre transform of $\Lambda$ by
$\Lambda^{\ast}(q)=\sup_{s\in I_{\mu}}\{sq-\Lambda(s)\},$
$q\in\Lambda'(I_{\mu}).$
We have that $\Lambda^*(q)=s q - \Lambda(s)$
if $q=\Lambda'(s)$ for some $s\in I_{\mu}$,
which implies $\Lambda^*(q)\geqslant 0$ on $\Lambda'(I_{\mu})$, since $\Lambda(0)=0$ and $\Lambda$ is convex on $I_{\mu}$.
We say that the measure $\mu$
is \emph{arithmetic}, if there exist $t>0$, $\beta \in[0,2\pi)$ and a function
$\vartheta: \mathcal{S}
\to \mathbb{R}$
such that for any $g\in \Gamma_{\mu}$ and any $x\in V(\Gamma_{\mu})$, we have
$
\exp[it\log|gx|-i\beta + i\vartheta(g\!\cdot\!x)-i \vartheta(x)]=1.
$
For positive matrices, we need the following condition:
\begin{conditionA}\label{A3}
{\rm (Non-arithmeticity)} The measure $\mu$ is non-arithmetic.
\end{conditionA}
A simple sufficient condition established in \cite{Kesten1973} for
the measure $\mu$
to be non-arithmetic
is that the additive subgroup of $\mathbb{R}$ generated by the set
$\{ \log \lambda_{g} : g\in \Gamma_{\mu}, \ g \in \mathscr G_+^\circ \}$
is dense in $\mathbb{R}$ (see \cite[Lemma 2.7]{BS2016}).
Note that for positive matrices,
condition \ref{A3} is used to ensure that
$\sigma_s^2=\Lambda''(s) >0$.
For invertible matrices,
condition \ref{A1}
implies the non-arithmeticity of the measure $\mu$,
hence, $\sigma_s$ is also strictly positive
(for a proof see Guivarc'h and Urban \cite[Proposition 4.6]{GU2005}).
For any $s\in I_{\mu}$, the transfer operator $P_s$ and the conjugate transfer operator $P_{s}^{*}$
are defined, for any $\varphi \in \mathcal{C(S)}$ and $x\in \mathcal S$, by
\begin{align}\label{transfoper001}
\! P_{s}\varphi(x) \!=\! \int_{\Gamma_{\mu}} \! |g_1 x |^{s} \varphi( g_1\!\cdot\!x ) \mu(dg_1), \
P_{s}^{*}\varphi(x) \!=\! \int_{\Gamma_{\mu}} \! |g_1^{\mathrm{T}}x|^{s} \varphi(g_1^{\mathrm{T}}\!\cdot\!x) \mu(dg_1),
\end{align}
which are bounded linear operators on $\mathcal{C(S)}$.
Under condition \ref{A1} for invertible matrices,
or condition \ref{A2} for positive matrices,
the operator $P_s$
has a unique probability eigenmeasure $\nu_s$ on $\mathcal{S}$
corresponding to the eigenvalue $\kappa(s)$:
$P_s \nu_s = \kappa(s)\nu_s.$
Similarly, the operator $P_{s}^{*}$
has a unique probability eigenmeasure $\nu^*_s$
corresponding to the eigenvalue $\kappa(s)$:
$P_{s}^{*} \nu^*_s = \kappa(s)\nu^*_s.$
Set, for $x\in \mathcal{S}$,
$$
r_{s}(x)= \int_{\mathcal{S}} |\langle x, y\rangle|^{s}\nu^*_{s}(dy), \ \
r_{s}^*(x)= \int_{\mathcal{S}} |\langle x, y\rangle|^{s}\nu_{s}(dy).
$$
Then, $r_s$ is the unique, up to a scaling constant,
strictly positive eigenfunction of $P_s$:
$P_s r_s = \kappa(s)r_s$;
similarly
$r^*_s$ is the unique, up to a scaling constant,
strictly positive eigenfunction of $P_{s}^{*}$: $P_{s}^{*} r^*_s = \kappa(s)r^*_s$.
We refer for details to Section \ref{sec:spec gap norm}.
Below we shall also make use of the normalized eigenfunction $\bar r_s$ defined by
$\bar r_s(x)= \frac{r_s(x)}{ \nu_s(r_s) }$, $x \in \mathcal{S}$, which is
strictly positive and H\"{o}lder continuous on the projective space $\mathcal{S}$,
see Proposition \ref{transfer operator}.
\subsection{Large deviations for the norm cocycle} \label{sec:resnorms}
The following theorem gives the exact asymptotic behavior
of the large deviation probabilities for the norm cocycle.
\begin{theorem} \label{main theorem1}
Assume that $\mu$ satisfies either conditions \ref{Aexp}, \ref{A1} for invertible matrices,
or conditions \ref{Aexp}, \ref{A2}, \ref{A3} for positive matrices.
Let $q=\Lambda'(s)$, where $s\in I_{\mu}^{\circ}$.
Then for any positive sequence $(l_n)_{n \geqslant 1}$ satisfying $\lim_{n\to \infty}l_n = 0$,
we have, as $n \to \infty$,
uniformly in $x\in \mathcal{S}$ and $|l|\leqslant l_n$,
\begin{align}\label{theorem-main001}
\mathbb{P} \big( \log|G_n x| \geqslant n(q+l) \big) =
\bar r_{s}(x) \frac{ \exp \left( -n \Lambda^*(q+l) \right) }{s\sigma_s\sqrt{2\pi n}} ( 1 + o(1)).
\end{align}
In particular, with $l=0$, as $n \to \infty$, uniformly in $x\in \mathcal{S}$,
\begin{align}\label{devrez001}
\mathbb{P} \big( \log|G_n x| \geqslant nq \big) =
\bar r_{s}(x) \frac{ \exp \left( -n \Lambda^*(q) \right) }{ s \sigma_s \sqrt{2 \pi n} } ( 1+ o(1)).
\end{align}
\end{theorem}
The rate function $\Lambda^*(q+l)$ admits the following expansion: for $q=\Lambda'(s)$ and $l$ in a small neighborhood
of $0$, we have
\begin{align} \label{Def Jsl}
\Lambda^*(q+l)
= \Lambda^{*}(q) + sl + \frac{l^2}{2 \sigma_s^2} - \frac{l^3}{\sigma_s^3} \zeta_s\Big(\frac{l}{\sigma_s}\Big),
\end{align}
where $\zeta_s(t)$ is the
Cram\'{e}r series,
$\zeta_s(t) = \sum_{k=3}^{\infty} c_{s,k} t^{k-3}= \frac{ \Lambda'''(s) }{6 \sigma_s^3} + O(t),$
with $\Lambda'''(s)$ and $\sigma_s$ defined in Proposition \ref{perturbation thm}.
We refer for details to Lemma \ref{lemmaCR001},
where the coefficients $c_{s,k}$ are given in terms of the cumulant generating function $\Lambda=\log \kappa$.
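The first terms of \eqref{Def Jsl} can be recovered by a direct Taylor expansion. Differentiating the identity $\Lambda^*(\Lambda'(s))=s\Lambda'(s)-\Lambda(s)$ in $s$ gives $(\Lambda^*)'(q)=s$, hence $(\Lambda^*)''(q)=\frac{1}{\Lambda''(s)}=\frac{1}{\sigma_s^2}$ and $(\Lambda^*)'''(q)=-\frac{\Lambda'''(s)}{\sigma_s^6}$, so that, expanding at $q$,
\begin{align*}
\Lambda^*(q+l)= \Lambda^{*}(q) + sl + \frac{l^2}{2\sigma_s^2} - \frac{\Lambda'''(s)}{6\sigma_s^6}\, l^3 + O(l^4),
\end{align*}
in accordance with \eqref{Def Jsl} and $c_{s,3}=\frac{\Lambda'''(s)}{6\sigma_s^3}$.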
For invertible matrices, a point-wise version of \eqref{devrez001},
without $\sup_{x\in \mathcal{S}}$ and with $l=0$, namely the asymptotic \eqref{le page result1},
was first established by Le Page \cite[Theorem 8]{LePage1982}
for small enough $s>0$
under a stronger exponential moment condition.
For positive matrices, the asymptotic \eqref{devrez001} is new and implies
the large deviation bounds \eqref{LDbounds001}
established in
Buraczewski and Mentemeier \cite[Corollary 3.2]{BS2016}.
We note that there is a misprint in \cite{BS2016}, where $e^{nsq}$ should be replaced by $e^{\Lambda^*(q)}$.
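Note also that in the scalar case $d=1$ we have $\log|G_nx|=\sum_{k=1}^{n}\log|g_k|$, $\kappa(s)=\mathbb{E}|g_1|^{s}$ and $\bar r_s=\mathbf{1}$, so that \eqref{devrez001} reduces to the classical Bahadur-Rao type precise large deviation asymptotic for sums of i.i.d.\ real-valued random variables:
\begin{align*}
\mathbb{P}\Big( \sum_{k=1}^{n}\log|g_k| \geqslant nq \Big)
= \frac{\exp(-n\Lambda^*(q))}{s\sigma_s\sqrt{2\pi n}}(1+o(1)).
\end{align*}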
Now we consider the precise large deviations for the couple $(X_n^x, \log |G_{n} x|)$
with target functions $\varphi$ and $\psi$
on $X_{n}^{x}:=G_n \!\cdot\! x$ and $\log |G_{n} x|$, respectively.
\begin{theorem} \label{main theorem3}
Assume the conditions of Theorem \ref{main theorem1} and
let $q=\Lambda'(s)$ for $s\in I_\mu^\circ$.
Then, for any $\varphi \in \mathcal{B}_{\gamma}$,
any measurable function $\psi$ on $\mathbb{R}$
such that $y\mapsto e^{-sy}\psi(y)$ is
directly Riemann integrable,
and any positive sequence $(l_n)_{n \geqslant 1}$ satisfying $\lim_{n\to \infty}l_n = 0$,
we have, as $n \to \infty$, uniformly in $x \in \mathcal{S}$ and $|l| \leqslant l_n$,
\begin{align} \label{Petrov-Target01}
& \mathbb{E} \Big[ \varphi(X_{n}^{x})\psi( \log|G_n x|-n(q+l)) \Big] \nonumber\\
& \qquad\qquad
= \bar r_{s}(x)
\frac{ \exp \left( -n \Lambda^*(q+l) \right) }{\sigma_s \sqrt{2\pi n} }
\Big[ \nu_s(\varphi) \int_{\mathbb{R}} e^{-sy} \psi(y) dy + o(1) \Big].
\end{align}
\end{theorem}
With
$\varphi = \mathbf{1}$ and $\psi(y)=\mathbbm{1}_{\{y\geqslant 0\}}$ for $y\in \mathbb R,$
we obtain Theorem \ref{main theorem1}.
For invertible matrices and with $l=0$,
Theorem \ref{main theorem3}
strengthens the point-wise large deviation result stated
in Theorem 3.3 of Guivarc'h \cite{Guivarch2015}, since
we do not assume the function $\psi$ to be compactly supported and
our result is uniform in $x\in \mathcal{S}$.
In passing, we remark that
in Theorem 3.3 of \cite{Guivarch2015} $\kappa^n(s)$ should be replaced by $\kappa^{-n}(s)$,
and $\nu_s(\varphi r_s^{-1})$ should be replaced by $\frac{\nu_s(\varphi)}{\nu_s(r_s)}$.
For positive matrices, Theorem \ref{main theorem3} is new.
Since $r_s$ is a strictly positive and H\"{o}lder continuous function on $\mathcal{S}$
(see Proposition \ref{transfer operator}),
taking $\varphi=r_s$ and $\psi(y)=\mathbbm{1}_{\{y\geqslant 0\}}$, $y\in \mathbb R$, in Theorem \ref{main theorem3},
we get the main result of \cite{BS2016} (Theorem 3.1).
Unlike the case of i.i.d.\ real-valued random variables,
Theorems \ref{main theorem1} and \ref{main theorem3}
do not imply a similar asymptotic for the lower large deviation probabilities $\mathbb{P}( \log|G_n x| \leqslant n(q+l))$, where $q <\Lambda'(0)$.
To formulate our results, we need an exponential moment condition, as in
Le Page \cite{LePage1982}.
For $g \in \Gamma_{\mu}$, set $N(g) = \max\{ \|g\|, \iota(g)^{-1} \}$, which reduces to $N(g) = \max\{ \|g\|, \|g^{-1}\| \}$
for invertible matrices.
\begin{conditionA}\label{CondiMoment}
There exists a constant $\eta \in (0,1)$ such that $\mathbb{E} [N(g_1 )^{\eta}] < +\infty$.
\end{conditionA}
Under condition \ref{CondiMoment},
the functions $s \mapsto \kappa(s)$ and $s \mapsto \Lambda(s) = \log \kappa(s)$
can be extended analytically in a small neighborhood of $0$ of the complex plane;
in this case the expansion \eqref{Def Jsl} still holds and we have $\sigma_s^2 = \Lambda''(s)>0$ for $s<0$ small enough.
We also need to extend the function $\bar r_s$ to small $s<0$; it remains strictly positive and H\"older continuous on the projective space $\mathcal S$,
as in the case $s>0$: we refer to Proposition \ref{transfer operator s negative}
for details.
\begin{theorem} \label{Thm-Neg-s}
Assume that $\mu$ satisfies either conditions \ref{A1}, \ref{CondiMoment} for invertible matrices
or conditions \ref{A2}, \ref{A3}, \ref{CondiMoment} for positive matrices.
Then, there exists $\eta_0 < \eta$ such that for any $s \in (-\eta_0, 0)$ and $q=\Lambda'(s)$,
for any positive sequence $(l_n)_{n \geqslant 1}$ satisfying $\lim_{n\to \infty}l_n = 0$,
we have, as $n \to \infty$,
uniformly in $x\in \mathcal{S}$ and $|l|\leqslant l_n$,
\begin{align*}
\mathbb{P} \big( \log|G_n x| \leqslant n(q+l) \big) =
\bar r_{s}(x) \frac{ \exp \left( -n \Lambda^*(q+l) \right) }{ - s \sigma_s \sqrt{ 2 \pi n} } ( 1 + o(1)).
\end{align*}
In particular, with $l=0$, as $n \to \infty$, uniformly in $x\in \mathcal{S}$,
\begin{align*}
\mathbb{P} \big( \log|G_n x| \leqslant nq \big) =
\bar r_{s}(x) \frac{ \exp \left( -n \Lambda^*(q) \right) }{ - s \sigma_s \sqrt{2 \pi n} } ( 1+ o(1)).
\end{align*}
\end{theorem}
For invertible matrices, this result sharpens the large deviation principle established in \cite{Bougerol1985}.
For positive matrices, our result is new, even for the large deviation principle.
More generally, we also have the precise large deviations result for the couple $(X_n^x, \log |G_n x|)$
with target functions.
\begin{theorem} \label{Thm-Neg-s-Target}
Assume the conditions of Theorem \ref{Thm-Neg-s}.
Then, there exists $\eta_0 < \eta$ such that for any $s \in (-\eta_0, 0)$ and $q=\Lambda'(s)$,
for any $\varphi \in \mathcal{B}_{\gamma}$,
any measurable function $\psi$ on $\mathbb{R}$
such that $y\mapsto e^{-sy}\psi(y)$ is
directly Riemann integrable,
and any positive sequence $(l_n)_{n \geqslant 1}$ satisfying $\lim_{n\to \infty}l_n = 0$,
we have, as $n \to \infty$,
uniformly in $x \in \mathcal{S}$ and $|l| \leqslant l_n$,
\begin{align*}
& \mathbb{E} \Big[ \varphi(X_{n}^{x})\psi( \log|G_n x|-n(q+l)) \Big] \nonumber\\
& \qquad\qquad\quad
= \bar r_{s}(x)
\frac{ \exp \left( -n \Lambda^*(q+l) \right) }{\sigma_s \sqrt{2\pi n} }
\Big[ \nu_s(\varphi) \int_{\mathbb{R}} e^{-sy} \psi(y) dy + o(1) \Big].
\end{align*}
\end{theorem}
With $\varphi = \mathbf{1}$ and $\psi(y)=\mathbbm{1}_{\{y \leqslant 0 \}}$ for $y\in \mathbb R,$
we obtain Theorem \ref{Thm-Neg-s}.
\subsection{Applications to large deviation principle for the matrix norm}
We use Theorems \ref{main theorem1} and \ref{Thm-Neg-s}
to deduce large deviation principles for the matrix norm $\|G_n\|$.
Our first result concerns the upper tail and the second one deals with the lower tail.
\begin{theorem}\label{Thm-LDP-Norm}
Assume the conditions of Theorem \ref{main theorem1}.
Let $q=\Lambda'(s)$, where $s\in I_\mu^\circ$.
Then,
for any positive sequence $(l_n)_{n \geqslant 1}$ with $l_n \to 0$ as $n \to \infty$,
we have, uniformly in $|l|\leqslant l_n$,
\begin{align*}
\lim_{n\to \infty}
\frac{1}{n}
\log \mathbb{P} \big(\log \| G_n \| \geqslant n(q+l) \big) = -\Lambda^*(q).
\end{align*}
\end{theorem}
For invertible matrices, with $l=0$, Theorem \ref{Thm-LDP-Norm} improves
the large deviation bounds in Benoist and Quint \cite[Theorem 14.19]{BQ2017},
where the authors consider general groups, but without giving the rate function.
For positive matrices, the result is new, both for $l=0$ and for varying $|l|\leqslant l_n$.
\begin{theorem}\label{Thm-LDP-Norm-Negs}
Assume the conditions of Theorem \ref{Thm-Neg-s}.
Then, there exists $\eta_0 < \eta$ such that for any $s \in (-\eta_0, 0)$ and $q=\Lambda'(s)$,
for any positive sequence $(l_n)_{n \geqslant 1}$ with $l_n \to 0$ as $n \to \infty$,
we have, uniformly in $|l|\leqslant l_n$,
\begin{align*}
\lim_{n\to \infty}
\frac{1}{n}
\log \mathbb{P} \big(\log \| G_n \| \leqslant n(q+l) \big) = -\Lambda^*(q).
\end{align*}
\end{theorem}
This result is new for both invertible matrices and positive matrices.
\subsection{Local limit theorems with large deviations} \label{Applic to LocalLD}
Local limit theorems and
large and moderate deviations
for sums of i.i.d.\ random variables have been studied by
Gnedenko \cite{gnedenko1948},
Shepp \cite{Sheep1964},
Stone \cite{Stone1965},
Breuillard \cite{Bre2005},
Borovkov and Borovkov \cite{Borovkov2008}.
Moderate deviation results in the local limit theorem for products of invertible random matrices
have been obtained in \cite[Theorems 17.9 and 17.10]{BQ2017}.
Taking $\varphi = \mathbf{1}$ and $\psi = \mathbbm{1}_{[a,a+\Delta]},$ where $a \in \mathbb{R}$ and $\Delta >0$
do not depend on $n$,
it is easy to see that Theorem \ref{main theorem3}
becomes, in fact, a statement on large deviations in the local limit theorem.
It turns out that with the Petrov type extension \eqref{Petrov-Target01}
we can derive the following more general statement where $\Delta$ can increase with $n.$
\begin{theorem}\label{Theorem local LD001}
Assume conditions of Theorem \ref{main theorem1} and
let $q=\Lambda'(s)$.
Then there exists a sequence $\Delta_n > 0$
converging to $0$
as $n\to\infty$
such that, for any $\varphi \in \mathcal{B}_{\gamma}$,
for any positive sequence $(l_n)_{n \geqslant 1}$ with $l_n \to 0$ as $n \to \infty$ and any fixed $a\in \mathbb{R}$,
we have, as $n\to\infty,$
uniformly in $\Delta \in [\Delta_n, o(n)]$, $x \in \mathcal{S}$ and
$|l| \leqslant l_n$,
\begin{align*}
& \mathbb{E} \Big[ \varphi(X_n^x) \mathbbm{1}_{ \{ \log |G_{n} x | \in n(q+l) + [ a, a + \Delta) \} } \Big]
\nonumber\\
& \qquad\qquad
= \bar r_{s}(x) e^{-sa} \big( 1 - e^{ -s\Delta } \big)
\frac{ \exp ( - n \Lambda^{*}(q+l) ) }{ s \sigma_s \sqrt{2\pi n} }
\Big[ \nu_s(\varphi) + o(1) \Big].
\end{align*}
Taking $\varphi = \mathbf{1}$, as $n\to\infty,$
uniformly in $\Delta \in [\Delta_n, o(n)]$, $x \in \mathcal{S}$ and $|l| \leqslant l_n$,
\begin{align*}
& \mathbb{P} \big(\log |G_{n} x | \in n(q+l) +[a,a+\Delta) \big) \\
& \qquad\qquad = \bar r_{s}(x) e^{-sa} \big( 1 - e^{ -s\Delta } \big)
\frac{ \exp ( - n \Lambda^{*}(q+l) ) }{ s \sigma_s \sqrt{2\pi n} } \Big[ 1 + o(1) \Big].
\end{align*}
\end{theorem}
We can compare this result with Theorem 3.3 in \cite{Guivarch2015},
from which the above equivalence can be deduced for $l=0$ and $\Delta$ fixed.
It is easy to see that, under the additional condition \ref{CondiMoment}, the assertion of Theorem \ref{Theorem local LD001}
remains true for $s<0$ small enough.
This can be deduced from Theorem \ref{Thm-Neg-s-Target}: the details are left to the reader.
\section{Spectral gap theory for the norm} \label{sec:spec gap norm}
\subsection{Properties of the transfer operator}\label{subsec a change of measure}
Recall that the transfer operator $P_{s}$ and the conjugate operator
$P_{s}^{*}$ are defined by
\eqref{transfoper001}.
Below $P_s\nu_{s}$ stands for the measure on $\mathcal S$ such that $P_s\nu_{s}(\varphi)=\nu_{s}(P_s \varphi),$
for continuous functions $\varphi$ on $\mathcal S$, and $P^*_s\nu^*_{s}$ is defined similarly.
The following result was proved in \cite{BDGM2014, BS2016} for positive matrices,
and in \cite{GE2016} for invertible matrices.
\begin{proposition} \label{transfer operator}
Assume that $\mu$ satisfies
either conditions \ref{Aexp}, \ref{A1} for invertible matrices,
or conditions \ref{Aexp}, \ref{A2} for positive matrices.
Let $s\in I_{\mu}$.
Then
the spectral radii $\varrho(P_{s})$ and $\varrho(P_{s}^{*})$ are both equal to $\kappa(s)$,
and there exist a unique, up to a scaling constant,
strictly positive
H\"{o}lder continuous
function $r_{s}$
and a unique probability measure $\nu_{s}$ on $\mathcal S$ such that
\begin{align*}
P_s r_s=\kappa(s)r_s, \quad P_s\nu_{s}=\kappa(s)\nu_{s}.
\end{align*}
Similarly, there exist a unique strictly positive
H\"{o}lder continuous function
$r_{s}^{\ast}$ and
a unique probability measure $\nu_{s}^{*}$ on $\mathcal S$ such that
$$ P_{s}^{*}r_{s}^{*}=\kappa(s)r_{s}^{*},
\quad
P_{s}^{*}\nu_{s}^{*}=\kappa(s)\nu_{s}^{\ast}.$$
Moreover,
the functions
$r_s$ and $r_s^*$ are given by
\begin{align*}
r_{s}(x)= \int_{\mathcal{S}} |\langle x, y\rangle|^{s}\nu^*_{s}(dy),
\quad
r_{s}^*(x)= \int_{\mathcal{S}} |\langle x, y\rangle|^{s}\nu_{s}(dy),
\quad x\in \mathcal{S}.
\end{align*}
\end{proposition}
It is easy to see that the family of kernels
$q_{n}^{s}(x,g)=\frac{|gx|^{s}}{\kappa^{n}(s)}\frac{r_{s}(g \cdot x)}{r_{s}(x)},$
$n\geqslant 1$
satisfies the following cocycle property:
\begin{align} \label{cocycle01}
q_{n}^{s}(x,g_1)q_{m}^{s}(g_1\!\cdot\!x, g_2)=q_{n+m}^{s}(x,g_2g_1).
\end{align}
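Indeed, since $|g_2g_1x|=|g_2(g_1\!\cdot\!x)|\,|g_1x|$ and $g_2g_1\!\cdot\!x = g_2\!\cdot\!(g_1\!\cdot\!x)$, the intermediate factors cancel:
\begin{align*}
q_{n}^{s}(x,g_1)q_{m}^{s}(g_1\!\cdot\!x, g_2)
= \frac{|g_1x|^{s}\,|g_2(g_1\!\cdot\!x)|^{s}}{\kappa^{n+m}(s)}\,
\frac{r_{s}(g_1\!\cdot\!x)}{r_{s}(x)}\,
\frac{r_{s}(g_2\!\cdot\!(g_1\!\cdot\!x))}{r_{s}(g_1\!\cdot\!x)}
= q_{n+m}^{s}(x,g_2g_1).
\end{align*}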
The equation $P_sr_s=\kappa(s)r_s$
implies that, for any $x\in\mathcal{S}$ and $s\in I_{\mu}$,
the probability measures
$\mathbb Q_{s,n}^x(dg_1,\ldots,dg_n)=q_{n}^{s}(x,g_{n}\dots g_{1})\mu(dg_1)\dots\mu(dg_n),$ $n\geqslant 1,$
form a projective system
on $M(d,\mathbb{R})^{\mathbb{N}}$.
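The normalization of $\mathbb Q_{s,n}^{x}$ can be checked directly: iterating the eigenvalue equation $P_sr_s=\kappa(s)r_s$ gives
\begin{align*}
\int q_{n}^{s}(x,g_{n}\dots g_{1})\,\mu(dg_1)\dots\mu(dg_n)
= \frac{\mathbb{E}\big[ |G_nx|^{s}\, r_{s}(G_n\!\cdot\!x) \big]}{\kappa^{n}(s)r_{s}(x)}
= \frac{P_{s}^{n}r_{s}(x)}{\kappa^{n}(s)r_{s}(x)} = 1.
\end{align*}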
By the Kolmogorov extension theorem,
there is a unique probability measure $\mathbb Q_s^x$ on $M(d,\mathbb{R})^{\mathbb{N}}$,
with marginals $\mathbb Q_{s,n}^x$;
denote by $\mathbb{E}_{\mathbb Q_s^x}$ the corresponding expectation.
If $(g_n)_{n\in \mathbb N}$ denotes
the coordinate process on the space of trajectories
$M(d,\mathbb{R})^{\mathbb{N}}$, then
the sequence $(g_n)_{n \geqslant 1}$
is i.i.d.\
with the common law $\mu$ under $\mathbb{Q}_{0}^{x}.$
However, for any $s\in I_{\mu}^{\circ}$ and $x\in \mathcal{S}$,
the sequence $(g_n)_{n \geqslant 1}$ is Markov-dependent under the measure $\mathbb Q_s^x$.
Let
$$X_0^x=x, \ \ X_{n}^x= G_n \!\cdot\! x, \ \ n\geqslant 1.$$
By the definition of $\mathbb Q_s^x$,
for any bounded measurable function $f$ on $(\mathcal S \times \mathbb R)^{n}$,
it holds that
\begin{align}\label{basic equ1}
\frac{1}{ \kappa^{n}(s) r_{s}(x) }
\mathbb{E} \Big[ r_{s}(X_{n}^{x}) & |G_nx|^{s} f\big( X_{1}^{x}, \log |G_1 x|,\dots, X_{n}^{x}, \log |G_n x|
\big) \Big] \nonumber\\
&\quad
=\mathbb{E}_{\mathbb{Q}_{s}^{x}} \Big[ f \big( X_{1}^{x}, \log |G_1 x|,\dots, X_{n}^{x}, \log |G_n x| \big) \Big].
\end{align}
Under the measure $\mathbb Q_s^x$,
the process $(X_{n}^x)_{n\in \mathbb{N}}$ is a Markov chain with the transition operator given by
\begin{align}
Q_{s}\varphi(x)=\frac{1}{\kappa(s)r_{s}(x)}P_s(\varphi r_{s})(x)
=\frac{1}{\kappa(s)r_{s}(x)}\int_{\Gamma_{\mu}} |gx|^s \varphi(g\!\cdot\! x)r_s(g\!\cdot\!x)\mu(dg). \nonumber
\end{align}
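In particular, the eigenvalue equation $P_sr_s=\kappa(s)r_s$ shows that $Q_s$ is indeed a Markov transition operator:
\begin{align*}
Q_{s}\mathbf{1}(x)=\frac{P_{s}r_{s}(x)}{\kappa(s)r_{s}(x)}=1, \quad x\in \mathcal{S}.
\end{align*}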
It has been proved in \cite{BDGM2014} for positive matrices,
and in \cite{GE2016} for invertible matrices, that
$Q_{s}$ has a unique invariant probability measure $\pi_{s}$ supported on $V(\Gamma_{\mu})$
and that,
for any $\varphi\in \mathcal{C(S)}$,
\begin{align} \label{equcontin Q s limit}
\lim_{n\to\infty}Q_{s}^{n}\varphi
=\pi_{s}(\varphi), \quad \mbox{where} \
\pi_{s}(\varphi)=\frac{\nu_{s}(\varphi r_{s})}{\nu_{s}(r_{s})}.
\end{align}
Moreover, letting
$\mathbb{Q}_{s}=\int\mathbb{Q}_{s}^{x}\pi_{s}(dx),$
from the results of \cite{BDGM2014, GE2016}, it follows that,
under the assumptions of Theorem \ref{main theorem1}, for any $s\in I_{\mu}$,
we have
$ \lim_{n\to\infty} \frac{ \log |G_nx| }{n} =\Lambda'(s),$
$\mathbb{Q}_{s}$-a.s.\ and
$\mathbb{Q}_{s}^{x}$-a.s.,
where
$\Lambda'(s)=\frac{\kappa'(s)}{\kappa(s)}$.
When $s \in (-\eta_0, 0)$ for small enough $\eta_0>0$,
define the transfer operator $P_s$ as follows: for any $\varphi \in \mathcal{C(S)}$,
\begin{align*}
P_s \varphi (x) = \int_{\Gamma_{\mu}} \! |g_1 x |^{s} \varphi( g_1\!\cdot\!x ) \mu(dg_1),
\quad x \in \mathcal{S},
\end{align*}
which is well-defined under condition \ref{CondiMoment}.
The following proposition is proved in \cite{XGL19}.
\begin{proposition}
\label{transfer operator s negative}
Assume that $\mu$ satisfies
either conditions \ref{A1}, \ref{CondiMoment} for invertible matrices,
or conditions \ref{A2}, \ref{CondiMoment} for positive matrices.
Then there exists $\eta_0 < \eta$ such that for any $s \in (-\eta_0, 0)$, the spectral radius $\varrho(P_{s})$ of the operator $P_s$
is equal to $\kappa(s)$.
Moreover there exist a unique, up to a scaling constant,
strictly positive
H\"{o}lder continuous
function $r_{s}$
and a unique probability measure $\nu_{s}$ on $\mathcal S$ such that
\begin{align*}
P_s r_s=\kappa(s)r_s, \quad P_s\nu_{s}=\kappa(s)\nu_{s}.
\end{align*}
\end{proposition}
Based on Proposition \ref{transfer operator s negative},
in the same way as for $s>0$,
one can define the measure $\mathbb Q_s^x$
for negative values $s<0$ sufficiently close to $0$,
and one can extend the change of measure formula \eqref{basic equ1} to $s<0$.
Under the measure $\mathbb Q_s^x$,
the process $(X_{n}^x)_{n\in \mathbb{N}}$ is a Markov chain with the transition operator $ Q_s$
and the assertion \eqref{equcontin Q s limit} holds true. We refer to \cite{XGL19} for details.
\subsection{Spectral gap of the perturbed operator} \label{sec-spgappert}
Recall that the Banach space $\mathcal{B}_{\gamma}$ consists of all $\gamma$-H\"{o}lder continuous functions on $\mathcal{S}$,
where $\gamma>0$ is a fixed small constant.
Denote by $\mathcal{L(B_{\gamma},B_{\gamma})}$
the set of all bounded linear operators from $\mathcal{B}_{\gamma}$ to $\mathcal{B}_{\gamma}$
equipped with the operator norm
$\left\| \cdot \right\|_{\mathcal B_{\gamma} \to \mathcal B_{\gamma}}$.
For $s\in I_\mu^\circ$ and $z \in \mathbb{C}$ with $s+ \Re z \in I_{\mu}$,
define a family of perturbed operators $R_{s,z}$ as follows: for any $\varphi \in \mathcal{B}_{\gamma}$,
\begin{align} \label{operator Rsz}
R_{s,z}\varphi(x)
= \mathbb{E}_{\mathbb{Q}_{s}^{x}} \left[ e^{ z( \log| g_1x | - q ) }\varphi(X_{1}^x) \right],
\quad x \in \mathcal{S}.
\end{align}
It follows from the cocycle property \eqref{cocycle01} that
\begin{align*}
R^{n}_{s,z}\varphi(x)
= \mathbb{E}_{\mathbb{Q}_{s}^{x}} \left[e^{ z( \log |G_n x| - nq) } \varphi(X_{n}^x) \right],
\quad x \in \mathcal{S}.
\end{align*}
The following proposition collects useful assertions that we will use in the proofs of our results.
Denote $B_\delta(0): = \{ z \in \mathbb{C}: |z| \leqslant \delta \}$.
\begin{proposition} \label{perturbation thm}
Assume that $\mu$ satisfies either conditions \ref{Aexp}, \ref{A1} for invertible matrices,
or conditions \ref{Aexp}, \ref{A2} for positive matrices.
Then, there exists $\delta>0$ such that for any $z \in B_\delta(0)$,
\begin{align}
\label{perturb001}
R^{n}_{s,z}=\lambda^{n}_{s,z}\Pi_{s,z}+N^{n}_{s,z}, \ n\geqslant 1.
\end{align}
Moreover, for any $s \in I_{\mu}^{\circ}$,
the following assertions hold:
\begin{itemize}
\item[{\rm(i)}]
$\Pi_{s,z}$ is a rank-one projection for $|z| \leqslant \delta$, with
$\Pi_{s,0}(\varphi)(x)=\pi_{s}(\varphi)$ for any $\varphi \in \mathcal{B}_{\gamma}$ and $x\in \mathcal{S}$,
$\Pi_{s,z}N_{s,z}=N_{s,z}\Pi_{s,z}=0$
and
\begin{equation}\label{relationlamkappa001}
\lambda_{s,z} = e^{-qz} \frac{\kappa(s+z)}{\kappa(s)}, \quad \mbox{for} \ z \in B_\delta(0).
\end{equation}
For any fixed $k \geqslant 1$, there exist $\varkappa_s \in(0,1)$ and $c_s>0$ such that
$$
\sup_{|z| < \delta}
\Big\|\frac{d^{k}}{dz^{k}}N^{n}_{s,z} \Big\|_{\mathcal{B}_{\gamma}\rightarrow\mathcal{B}_{\gamma}}
\leqslant c_s \varkappa_s^{n}, \quad n\geqslant 1.
$$
In addition,
the mappings $z \mapsto \Pi_{s,z}: B_\delta(0) \to \mathcal{L(B_{\gamma},B_{\gamma})}$
and $z \mapsto N_{s,z}: B_\delta(0) \to \mathcal{L(B_{\gamma},B_{\gamma})}$ are analytic
in the strong operator sense.
\item[{\rm(ii)}]
For any compact set $K\subseteq\mathbb{R}\backslash\{0\}$,
there exists a constant $C_{K}>0$ such that
for any $n\geqslant 1$ and $\varphi\in \mathcal{B}_{\gamma}$, we have
\begin{align*}
\sup_{t\in K}\sup_{x\in \mathcal{S}}|R^{n}_{s,it}\varphi(x)|\leqslant e^{-nC_{K}}\sup_{x\in \mathcal{S}}|\varphi(x)|.
\end{align*}
\item[{\rm(iii)}]
The mapping $z \mapsto \lambda_{s,z}: B_\delta(0) \to \mathbb{C}$ is analytic, and
\begin{align*}
\lambda_{s,z}=1+\frac{\sigma_s^{2}}{2}z^{2} + \frac{ \Lambda'''(s) }{6} z^3 + o(z^{3}) \quad \mbox{as} \ z \to 0,
\end{align*}
where
$$
\sigma_s^{2}= \Lambda''(s)= \lim_{n\to\infty}
\frac{1}{n} \mathbb{E}_{\mathbb{Q}_{s}}( \log |G_n x| - nq )^{2}
$$
and
$$
\Lambda'''(s) = \lim_{n\to\infty}\frac{1}{n}\mathbb{E}_{\mathbb{Q}_{s}}( \log |G_n x| - nq )^{3}.
$$
In addition, if the measure $\mu$
is non-arithmetic,
then the asymptotic variance $\sigma_s^{2}$ is strictly positive.
\end{itemize}
\end{proposition}
The assertions (i), (ii), (iii) of Proposition \ref{perturbation thm}, except \eqref{relationlamkappa001},
have been proved in \cite{BS2016} for purely imaginary $z \in (-i\delta, i\delta)$,
based on the perturbation theory (see \cite{HH01}).
The assertions (i), (iii)
extend to complex-valued $z \in B_\delta(0)$ with no change to the proof given in \cite{BS2016}.
The identity \eqref{relationlamkappa001} is not proved in \cite{BS2016}, but
can be obtained by using the arguments from \cite{XGL19}.
By the perturbation theory, the operator $P_{s}$ and its spectral radius $\kappa(s)$ can be extended to $P_{s+z}$
and the eigenvalue $\kappa(s+z)$, respectively, for $z$ in a small neighborhood of $0$,
see \cite{GE2016}.
By the definitions of $R_{s,z}$ and $P_{z}$ using the change of measure \eqref{basic equ1}, we obtain
for any $\varphi \in \mathcal{B}_{\gamma}$, $n \geqslant 1$,
$s \in I_{\mu}^{\circ}$ and $z \in B_\delta(0)$,
\begin{align}\label{PfRsw01}
R_{s,z}^n (\varphi)
= e^{ -n z \Lambda'(s)} \frac{ P_{s+z}^n (\varphi r_s) }{ \kappa^n(s) r_s }.
\end{align}
Since $r_s$ is uniformly bounded,
using \eqref{PfRsw01} and
the fact that $\kappa(s+z)$ is the unique eigenvalue of $P_{s+z}$,
we deduce \eqref{relationlamkappa001}.
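Let us also record that \eqref{relationlamkappa001} yields the expansion of $\lambda_{s,z}$ in assertion (iii):
since $q = \Lambda'(s)$ and $\Lambda(s) = \log \kappa(s)$, taking the branch of the logarithm with $\log \lambda_{s,0} = 0$,
\begin{align*}
\log \lambda_{s,z} = \Lambda(s+z) - \Lambda(s) - \Lambda'(s) z
= \frac{\sigma_s^2}{2} z^2 + \frac{\Lambda'''(s)}{6} z^3 + O(z^4),
\end{align*}
and exponentiating gives
$\lambda_{s,z} = 1 + \frac{\sigma_s^{2}}{2} z^{2} + \frac{\Lambda'''(s)}{6} z^3 + o(z^{3})$ as $z \to 0$,
since the square of the right-hand side is $O(z^{4})$.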
For negative values $s<0$ sufficiently close to $0$, we can define the perturbed operator $R_{s,z}$
as in \eqref{operator Rsz}.
The following spectral gap property of $R_{s,z}$
is established in \cite{XGL19}.
\begin{proposition} \label{perturbation thm nrgztive s}
Assume that $\mu$ satisfies
conditions \ref{A1}, \ref{CondiMoment} for invertible matrices,
or conditions \ref{A2}, \ref{CondiMoment} for positive matrices.
Then, there exist $\eta_0 < \eta$ and $\delta>0$
such that for any $s \in (-\eta_0, 0)$
and $z \in B_\delta(0)$,
\begin{align*}
R^{n}_{s,z}=\lambda^{n}_{s,z}\Pi_{s,z}+N^{n}_{s,z}, \ n\geqslant 1.
\end{align*}
Moreover, for any $s \in (-\eta_0, 0)$, the assertions (i), (ii), (iii) of Proposition \ref{perturbation thm} hold true.
\end{proposition}
\section{Proof of Theorems \ref{main theorem1} and \ref{Thm-Neg-s}} \label{sec proof of main theroem1}
\subsection{Auxiliary results} \label{secAuxres001}
We need some preliminary statements.
Following Petrov \cite{Petrov75book},
under the changed measure $\mathbb Q_s^x$,
define the Cram\'{e}r series $\zeta_s$ by
\begin{align*}
\zeta_s (t) = \frac{\gamma_{s,3} }{ 6 \gamma_{s,2}^{3/2} }
+ \frac{ \gamma_{s,4} \gamma_{s,2} - 3 \gamma_{s,3}^2 }{ 24 \gamma_{s,2}^3 } t
+ \frac{\gamma_{s,5} \gamma_{s,2}^2 - 10 \gamma_{s,4} \gamma_{s,3} \gamma_{s,2} + 15 \gamma_{s,3}^3 }{ 120 \gamma_{s,2}^{9/2} } t^2
+ \ldots,
\end{align*}
where $\gamma_{s,k} = \Lambda^{(k)} (s)$ and $\Lambda(s) = \log \kappa(s)$.
The following lemma gives a full expansion of $\Lambda^*(q+l)$
as a power series in $l$ in a neighborhood of $0$,
for $q=\Lambda'(s)$ and $s \in I_{\mu}^\circ \cup (-\eta_0,0)$, where $\eta_0$
is from Proposition \ref{perturbation thm nrgztive s}.
\begin{lemma} \label{lemmaCR001}
Assume conditions of Theorem \ref{main theorem1} or Theorem \ref{Thm-Neg-s}.
Let $q=\Lambda'(s)$.
Then, there exists $\delta>0$ such that, for any $|l|\leqslant \delta,$
\begin{align*}
\Lambda^*(q+l) = \Lambda^{*}(q) + sl + h_s(l),
\end{align*}
where
$h_s$ is linked to the Cram\'{e}r series $\zeta_s$
by the identity
\begin{align}\label{expan hs 01}
h_s(l) = \frac{ l^2}{2 \sigma_s^2} - \frac{l^3}{\sigma_s^3} \zeta_s( \frac{l}{\sigma_s} ).
\end{align}
\end{lemma}
\begin{proof}
Let $(\Lambda')^{-1}$ be the inverse function of $\Lambda'.$
With the notation
$l_s= (\Lambda')^{-1}(q+l) -s $,
we have $\Lambda'(s+l_s) = q+l$.
By the definition of $\Lambda^*$, it follows that
$\Lambda^*(q+l) = (s+l_s)(q+l) - \Lambda(s+l_s)$.
This, together with $\Lambda^*(q) = sq - \Lambda(s)$ and Taylor's formula, gives
\begin{align} \label{expan Lambda 01}
h_s(l):= \Lambda^*(q+l) - \Lambda^{*}(q) - sl
= l_sl - \sum_{k=2}^{\infty} \frac{ \Lambda^{(k)}(s) }{k!} l_s^k.
\end{align}
From $\Lambda'( s + l_s ) = q+l$ and $\Lambda'(s)=q$, we deduce that $l= \Lambda'( s + l_s ) - \Lambda'(s) $,
so that, by Taylor's formula,
\begin{align} \label{expan Lambda 02}
l = \sum_{k=1}^{\infty} \frac{\Lambda^{(k+1)}(s)}{k!} l_s^k.
\end{align}
The rest of the proof is similar to that in Petrov \cite{Petrov75book} (chapter VIII, section 2).
For $|l|$ small enough, the equation \eqref{expan Lambda 02} has a unique solution $l_s$ given by
\begin{align*}
l_s = \frac{l}{ \sigma_s^2 } - \frac{ \Lambda^{(3)}(s) }{ 2 \sigma_s^6 } l^2 -
\frac{\Lambda^{(4)}(s) \sigma_s^2 - 3(\Lambda^{(3)}(s))^2}{6 \sigma_s^{10}} l^3 + \cdots.
\end{align*}
Together with \eqref{expan Lambda 01} and \eqref{expan Lambda 02}, this implies
\begin{align*}
h_s(l) = \sum_{k=2}^{\infty}
\Lambda^{(k)}(s) \frac{k-1}{k!} l_s^{k}
=\frac{ l^2}{2 \sigma_s^2} - \frac{l^3}{\sigma_s^3} \zeta_s( \frac{l}{\sigma_s} ).
\end{align*}
\end{proof}
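Keeping only the first terms of the series above, we note for later use the local behavior
\begin{align*}
h_s(l) = \frac{ l^2}{2 \sigma_s^2} - \frac{\gamma_{s,3}}{6 \sigma_s^6}\, l^3 + O(l^4), \quad \mbox{as} \ l \to 0,
\end{align*}
where the coefficient of $l^3$ comes from the constant term $\frac{\gamma_{s,3}}{6 \gamma_{s,2}^{3/2}}$
of the Cram\'{e}r series $\zeta_s$ in \eqref{expan hs 01}.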
Let us fix a non-negative Schwartz function $\rho$ on $\mathbb{R}$ with $\int_{\mathbb{R}} \rho(y) dy=1$,
whose Fourier transform $\widehat{\rho}$ is supported on $[-1,1]$
and has a continuous extension in the complex plane.
Moreover, $\widehat{\rho}$ is analytic in the domain
$D : = \{ z \in \mathbb{C}: |z| < 1, \Im z \neq 0 \}$.
Such a function can be constructed as follows.
On the real line define $\widehat{\varsigma}(t)= e^{- \frac{1}{1-t^2}}$ for $t \in (-1,1)$,
and $\widehat{\varsigma} =0$ elsewhere.
The function $\widehat{\varsigma}$ is compactly supported and has finite derivatives of all orders.
Its inverse Fourier transform $\varsigma$, however, is not non-negative.
Let $\widehat{\rho}_0= \widehat{\varsigma} \ast \widehat{\varsigma}$ be the convolution of
$\widehat{\varsigma}$ with itself.
It is supported on $[-2,2]$ and its inverse Fourier transform $\rho_0$ satisfies $\rho_0 = 2\pi \varsigma^2 \geqslant 0$.
We show below that $\widehat{\rho}_0$ has a continuous extension in the complex plane,
and $\widehat{\rho}_0$ is analytic in the domain $D$.
Finally we rescale and renormalize $\rho_0$ by setting
$\rho(y)= \rho_0(y/2)/ [ 2\widehat\rho_0(0)]$ for $y \in \mathbb{R}$.
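With the Fourier convention $\widehat{f}(t) = \int_{\mathbb{R}} f(y) e^{-ity} dy$ used above,
one checks directly that $\rho$ has the announced properties:
\begin{align*}
\int_{\mathbb{R}} \rho(y) dy
= \frac{1}{2 \widehat{\rho}_0(0)} \int_{\mathbb{R}} \rho_0(y/2)\, dy
= \frac{1}{\widehat{\rho}_0(0)} \int_{\mathbb{R}} \rho_0(u)\, du = 1,
\end{align*}
and $\widehat{\rho}(t) = \widehat{\rho}_0(2t)/\widehat{\rho}_0(0)$,
so that $\widehat{\rho}$ is supported on $[-1,1]$ since $\widehat{\rho}_0$ is supported on $[-2,2]$.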
\begin{lemma}\label{LemAnalyExten}
$\widehat{\rho}_0$ has a continuous extension in the complex plane,
and $\widehat{\rho}_0$ is analytic in the domain $D$.
\end{lemma}
\begin{proof}
The function $\widehat{\varsigma}$ can be extended to the complex plane as follows:
\begin{equation*}
\widehat{\varsigma}(z) =
\begin{cases}
e^{-\frac{1}{1 - z^2}} & \text{if } |z| < 1, \\
0 & \text{if } |z| \geqslant 1.
\end{cases}
\end{equation*}
It is easily verified that $\widehat{\varsigma}$ is continuous in the interior of the unit disc and outside it,
but is not continuous at any point on the unit circle $| z | =1$.
Note also that $\widehat{\varsigma}$ is uniformly bounded on $\mathbb{C}$.
Recall that the function $\widehat{\rho}_0 = \widehat{\varsigma} \ast \widehat{\varsigma}$ is defined on the real line.
We extend it to the complex plane by setting
$\widehat{\rho}_0(z)
= \int_{-1}^{1} \widehat{\varsigma}(t) \widehat{\varsigma}(z-t) \mathbbm{1}_{ \{ |z-t| < 1 \} } dt.$
The latter integral is well defined for any $z \in \mathbb{C}$,
since $\widehat{\varsigma}$ is bounded.
We are going to show that $\widehat{\rho}_0$ is continuous in $\mathbb C$.
For any fixed $z \in \mathbb{C}$ and $h \in \mathbb{C}$ with $|h|$ small, we write
\begin{align}\label{EquaLemContZ}
| \widehat{\rho}_0 (z + h) - \widehat{\rho}_0(z) |
\leqslant \int_{-1}^{1} \widehat{\varsigma}(t)
| \widehat{\varsigma}(z-t + h) - \widehat{\varsigma}(z-t) | dt.
\end{align}
The set $T_{z}=\{t: |z-t| =1\}$ of points of discontinuity of the function $t\mapsto \widehat{\varsigma}(z-t)$
consists of at most two points.
For any $t \in [-1, 1]$, $ t\not\in T_{z} $,
by the definition of $\widehat{\varsigma}$, we have that
$| \widehat{\varsigma}(z-t + h) - \widehat{\varsigma}(z-t) | \to 0$ as $|h| \to 0$.
Since the Lebesgue measure of $T_{z}$ is $0$,
applying the Lebesgue dominated convergence theorem and taking into account the boundedness of
the function $\widehat{\varsigma}$ on $\mathbb{C}$,
we see that $\widehat{\rho}_0$ is continuous in the complex plane.
We next show that $\widehat{\rho}_0$ is analytic in the domain $D = \{ z' \in \mathbb{C}: |z'| < 1, \Im z' \neq 0 \}$.
Fix $z\in D$. Let $\varepsilon=|\Im z| /2 \in (0,\frac{1}{2})$.
Denote $D (\varepsilon) := \{ z' \in D: |\Im z'| > \varepsilon \}$.
One can verify that the derivative $\widehat{\varsigma}'(z)$ exists and is uniformly bounded by $\frac{c}{\varepsilon^4}$
on the domain $D (\varepsilon)$.
For any
$h \in \mathbb{C}$ with $|h|$ small enough, we have
\begin{align*}
\frac{\widehat{\rho}_0 (z + h) - \widehat{\rho}_0(z)}{h}
= & \ \int_{[-1,1] \setminus T_{z}} \widehat{\varsigma}(t)
\frac{\widehat{\varsigma}(z-t + h)
- \widehat{\varsigma}(z-t)
}{h} dt \nonumber\\
= & \ \int_{[-1,1] \setminus T_{z}} \widehat{\varsigma}(t) \left( \int_0^1 \widehat{\varsigma}'(z-t + \theta h)
\mathbbm{1}_{ \{ |z-t + \theta h | < 1 \} } d\theta \right) dt.
\end{align*}
For any $t \in [-1, 1]$ and $\theta \in [0,1]$,
we have $|\Im (z - t + \theta h)| \geqslant \varepsilon$ uniformly in $|h|< \varepsilon$.
This implies that $z - t + \theta h \in D(\varepsilon)$ whenever $|z-t + \theta h | < 1$,
and thus $\widehat{\varsigma}'(z-t + \theta h)$ is bounded, uniformly in $|h|< \varepsilon$ and $t\in [-1,1]$.
Applying twice the Lebesgue dominated convergence theorem, we obtain
that $\widehat{\rho}_0'(z)$ exists and is given by
$\widehat{\rho}_0'(z) =
\int_{[-1,1] \setminus T_{z}} \widehat{\varsigma}(t) \widehat{\varsigma}'( z- t)
dt$.
Hence $\widehat{\rho}_0$ is analytic in the domain $D$.
\end{proof}
For any $\varepsilon>0$, define the density
$\rho_{\varepsilon}(y)=\frac{1}{\varepsilon}\rho(\frac{y}{\varepsilon})$, $y\in\mathbb R,$
whose Fourier transform has a compact support in $[-\varepsilon^{-1},\varepsilon^{-1}]$, extends continuously to the complex plane and is analytic in $\{ z \in \mathbb{C}: |z| < \varepsilon^{-1}, \Im z \neq 0 \}$.
For any non-negative integrable function $\psi$,
following the paper \cite{GE2017}, we introduce two modified functions related to $\psi$ and establish some two-sided bounds.
For any $\varepsilon>0$ and $y\in \mathbb{R}$,
set $\mathbb{B}_{\varepsilon}(y)=\{y' \in\mathbb{R}: |y'-y|\leqslant\varepsilon\}$
and
\begin{align}\label{smoo001}
{\psi}^+_{\varepsilon}(y)=\sup_{y'\in\mathbb{B}_{\varepsilon}(y)}\psi(y')
\quad \text{and} \quad
{\psi}^-_{\varepsilon}(y)=\inf_{y'\in\mathbb{B}_{\varepsilon}(y)}\psi(y').
\end{align}
\begin{lemma} \label{estimate u convo}
Suppose that $\psi$ is a non-negative integrable function and that
${\psi}^+_{\varepsilon}$ and ${\psi}^-_{\varepsilon}$ are measurable for any $\varepsilon>0$.
Then, for sufficiently small $\varepsilon$,
there exists a positive constant $C_{\rho}(\varepsilon)$, with $C_{\rho}(\varepsilon) \to 0$ as $\varepsilon \to 0$,
such that, for any $x\in \mathbb{R}$,
\begin{align}
{\psi}^-_{\varepsilon}\!\ast\!\rho_{\varepsilon^2}(x) -
\int_{|y|\geqslant \varepsilon} {\psi}^-_{\varepsilon}(x-y) \rho_{\varepsilon^2}(y)dy
\leqslant \psi(x) \leqslant (1+ C_{\rho}(\varepsilon))
{\psi}^+_{\varepsilon}\!\ast\!\rho_{\varepsilon^2}(x). \nonumber
\end{align}
\end{lemma}
The proof of the above lemma, being similar to that of Lemma 5.2 in \cite{GLE2017}, will not be detailed here.
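Let us only sketch the elementary mechanism behind these bounds.
For $|y| \leqslant \varepsilon$ we have
${\psi}^-_{\varepsilon}(x-y) \leqslant \psi(x) \leqslant {\psi}^+_{\varepsilon}(x-y)$,
by the definitions \eqref{smoo001};
multiplying by $\rho_{\varepsilon^2}(y)$ and integrating over $\{|y| \leqslant \varepsilon\}$ yields
\begin{align*}
\int_{|y|\leqslant \varepsilon} {\psi}^-_{\varepsilon}(x-y) \rho_{\varepsilon^2}(y)\, dy
\leqslant \psi(x)
\leqslant \frac{1}{m_\varepsilon}\, {\psi}^+_{\varepsilon}\!\ast\!\rho_{\varepsilon^2}(x),
\quad \mbox{with} \ m_\varepsilon = \int_{|u|\leqslant \varepsilon^{-1}} \rho(u)\, du,
\end{align*}
and it remains to note that $m_\varepsilon \to 1$ as $\varepsilon \to 0$,
so that one can take $1+ C_{\rho}(\varepsilon) = m_\varepsilon^{-1}$.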
The next assertion is the key point in establishing Theorem \ref{main theorem1}.
Its proof is based on
the spectral gap properties of the perturbed operator $R_{s,z}$
(see Proposition \ref{perturbation thm})
and on the saddle point method,
see Daniels \cite{Daniels1954}, Richter \cite{Richter1957},
Ibragimov and Linnik \cite{IbrLinnik65} and Fedoryuk \cite{Fedoryuk1987}.
Let us introduce the necessary notation.
In the following,
let $\varphi$ be a $\gamma$-H\"{o}lder continuous function on $\mathcal S$.
Assume that
$\psi: \mathbb R \to \mathbb C$
is a continuous function with compact support in $\mathbb{R}$, and moreover,
$\psi$ has a continuous extension
in some neighborhood of $0$ in the complex plane
and can be extended analytically to the domain
$D_{\delta} : = \{ z \in \mathbb{C}: |z| < \delta, \Im z \neq 0 \}$ for some small $\delta >0$.
Recall that $\pi_s$ is the invariant measure of the Markov chain $X_n^x$ under the changed measure
$\mathbb{Q}_s^x$, see \eqref{equcontin Q s limit}.
\begin{proposition} \label{Prop Rn limit1}
Assume conditions of Theorem \ref{main theorem1}.
Let $q=\Lambda'(s)$, where $s\in I_\mu^\circ.$
Then, for any
positive sequence $(l_n)_{n \geqslant 1}$ satisfying $l_n \to 0$ as $n \to \infty$,
we have,
uniformly in $x\in \mathcal{S}$, $|l|\leqslant l_n $ and $\varphi \in \mathcal{B}_{\gamma}$,
\begin{align*}
& \Big| \sqrt{n} \ \sigma_s e^{n h_s(l)}
\int_{\mathbb R} e^{-it l n} R^{n}_{s,it}(\varphi)(x) \psi (t) dt
- \sqrt{2\pi} \psi(0)\pi_{s}(\varphi)
\Big| \nonumber\\
\leqslant &\ C \| \varphi \|_\gamma \Big( \frac{ \log n }{ \sqrt{n} } + l_n \Big).
\end{align*}
\end{proposition}
\begin{proof}
Denote $c_s(\psi)= \frac{\sqrt{2\pi}}{\sigma_s} \psi(0)\pi_{s}(\varphi)$.
Taking sufficiently small $\delta >0$,
we write
\begin{align} \label{Thm1 integral1 J}
&
\Big| \sqrt{n} \ e^{n h_s(l)}
\int_{\mathbb R} e^{-it l n} R^{n}_{s,it}(\varphi)(x) \psi (t) dt
- c_s(\psi) \Big| \nonumber\\
&\leqslant
\Big| \sqrt{n} ~ e^{nh_s(l)}
\int_{|t|\geqslant\delta}
e^{-itln}R^{n}_{s,it}(\varphi)(x) \psi(t) dt \Big|
\nonumber\\
& \ \ +
\Big| \sqrt{n} \ e^{n h_s(l)}\int_{|t|<\delta}
e^{-itl n}R^{n}_{s,it}(\varphi)(x)
\psi(t)dt - c_s(\psi)
\Big|
\nonumber\\
& = I(n) + J(n).
\end{align}
For $I(n)$,
since $\psi$ is bounded and compactly supported on the real line,
taking into account Proposition \ref{perturbation thm} (ii), the fact $|e^{-it l n}| = 1$
and equality \eqref{expan hs 01},
we get
\begin{align} \label{Thm1 integral1 J1}
\sup_{x\in \mathcal{S}} \sup_{|l|\leqslant l_n }
|I(n)| \leqslant C_{\delta} e^{- c_{\delta} n} \| \varphi \|_{\gamma}.
\end{align}
For $J(n)$, by Proposition \ref{perturbation thm} (i), we have
$$
R^{n}_{s,it}(\varphi)(x)
= \lambda^{n}_{s,it}\Pi_{s,it}(\varphi)(x)
+ N^{n}_{s,it}(\varphi)(x).
$$
Set for brevity
$\psi_{s,x}(t) = \Pi_{s,it}(\varphi)(x) \psi(t)$.
It follows that
\begin{eqnarray} \label{Prop Rn1}
J(n)
\leqslant \!\!\!\!\!\!\!\!&&
\Big| \sqrt{n} \ e^{n h_s(l) } \int_{|t|<\delta} e^{-i t l n}
\lambda^{n}_{s,it}
\psi_{s,x}(t) dt
- c_s(\psi)
\Big| \nonumber\\
\!\!\!\!\!\!\!\!&&
+
\Big| \sqrt{n} \ e^{n h_s(l) } \int_{|t|< \delta } e^{-i t l n}
N^{n}_{s,it}(\varphi)(x) \psi(t) dt
\Big| \nonumber\\
=\!\!\!\!\!\!\!\!&& J_1(n)+J_2(n).
\end{eqnarray}
For the second term $J_2(n)$,
applying Proposition \ref{perturbation thm} (i),
we get that there exist constants $c_{\delta}>0$ and $\varkappa \in (0,1)$
such that
\begin{align*}
\sup_{x\in \mathcal{S}} \sup_{|t| < \delta}
|N^{n}_{s,it}(\varphi)(x)|
\leqslant
\sup_{|t| < \delta}
\|N^{n}_{s,it}\|_{\mathcal B_{\gamma} \to \mathcal B_{\gamma}} \| \varphi \|_{\gamma}
\leqslant c_{\delta} \varkappa^n \| \varphi \|_{\gamma}.
\end{align*}
Combining this with the continuity of the function $\psi$ at the point $0$ and the fact
$|e^{-i t l n}| = 1$, we obtain that,
uniformly in $|l| \leqslant l_n$, $x \in \mathcal{S}$ and $\varphi \in \mathcal{B}_{\gamma}$,
\begin{align} \label{SaddleIntegral 2}
J_2(n)
\leqslant C_{\delta} e^{- c_{\delta} n} \| \varphi \|_{\gamma}.
\end{align}
For the first term $J_1(n)$,
we shall use the method of steepest descent to derive a precise asymptotic expansion.
We make a change of variable $z = i t$ to rewrite $J_1(n)$ as an integral over the
complex interval $L_0=(-i\delta,i\delta):$
\begin{align} \label{SaddleInte J21}
J_{1}(n)
=
\Big| -i \sqrt{n}
\ e^{nh_s(l) }
\int_{- i \delta}^{i \delta} e^{n (K_s(z) - zl)} \psi_{s,x}(-iz) dz - c_s(\psi)
\Big|,
\end{align}
where $K_s(z)=\log \lambda_{s,z}$ (we choose the branch where $K_s(0)=0$),
which is an analytic function for $|z| \leqslant \delta$
by Proposition \ref{perturbation thm} (iii).
Since the function $z \mapsto e^{n (K_s(z) - zl)}$ is analytic in a neighborhood of $0$,
and the function $z \mapsto \psi_{s,x}(-iz)$ has an analytic extension in the domain
$D_{\delta} : = \{ z \in \mathbb{C}: |z| < \delta, \Im z \neq 0 \}$ and a continuous extension in the domain
$\overline{D}_{\delta} : = \{ z \in \mathbb{C}: |z| \leqslant \delta \}$,
by Cauchy's integral theorem
we can deform the integration path so that it
passes through the saddle point of the function $K_s(z) - zl$.
From \eqref{relationlamkappa001},
we have
$$K_s(z) = -qz + \log \kappa(s+z) - \log \kappa(s),$$
which implies that for $|z| < \delta$,
\begin{align} \label{Expan Ks 01}
K_s(z) = \sum_{k=2}^{\infty} \gamma_{s,k} \frac{z^k}{k!},
\end{align}
where $\gamma_{s,k} = \Lambda^{(k)}(s)$ and $\Lambda(s) = \log \kappa(s)$.
From this Taylor expansion
and the fact that $\Lambda^{(2)}(s)= \sigma_s^2>0$,
it follows that the function $K_s(z) - zl$ is convex in the neighborhood of $0$.
Consider the saddle point equation
\begin{align} \label{SaddleEqu}
K_s'(z)-l =0.
\end{align}
An equivalent formulation of \eqref{SaddleEqu} is
$l = \sum_{k=2}^{\infty} \gamma_{s,k} \frac{z^{k-1}}{(k-1)!}$, which
by simple series inversion techniques gives the following solution:
\begin{align} \label{SoluSaddleEqu}
z_0=z_0(l) := \frac{l}{ \gamma_{s,2} } - \frac{ \gamma_{s,3} }{ 2 \gamma_{s,2}^3 } l^2
- \frac{ \gamma_{s,4} \gamma_{s,2} - 3 \gamma_{s,3}^2 }{6 \gamma_{s,2}^5 } l^3 + \cdots.
\end{align}
From \eqref{SoluSaddleEqu},
it follows that the solution $z_0=z_0(l)$ is real for sufficiently small $l$
and that $z_0=z_0(l)\to 0$ as $l\to 0.$
Moreover, $z_0>0$ for sufficiently small $l>0$, and $z_0< 0$ for sufficiently small $l<0$.
By Cauchy's integral theorem, $J_{1}(n)$ can be rewritten as
\begin{align*}
J_{1}(n) =
\Big| -i \sqrt{n}
\ e^{n h_s(l) }
\Big\{\int_{L_1} + \int_{L_2} + \int_{L_3} \Big\} e^{n (K_s(z) - zl)} \psi_{s,x}(-iz) dz - c_s(\psi)
\Big|,
\end{align*}
where $L_1 = (-i \delta, z_0 - i \delta)$,
$L_2 = (z_0 - i \delta, z_0 + i \delta)$ and $L_3 = (z_0 + i \delta, i \delta)$.
By \eqref{Expan Ks 01}, we get $K_s(it) = -\frac{1}{2} \sigma_s^2 t^2 + O(t^3)$,
which implies that $|e^{n K_s(it)}| \leqslant e^{-\frac{n}{3} \sigma_s^2 t^2 }$, when $t$ is sufficiently small.
Combining this with \eqref{SoluSaddleEqu} and the continuity of $K_s(z)$ in the neighborhood of $0$
yields that, for sufficiently small $l$, $|e^{n K_s(z)}| \leqslant e^{-\frac{n}{4} \sigma_s^2 \delta^2 }$,
for any $z \in L_1 \cup L_3$.
Since, for sufficiently small $l$, we have $l \, \Re z \geqslant 0$ for any
$z \in L_1 \cup L_3$, it follows that
$|e^{- nzl}| = e^{-nl \Re z} \leqslant 1$.
Moreover, using the continuity of the function $z \mapsto \psi_{s,x}(-iz)$
in a small neighborhood of $0$ in the complex plane,
there exists a constant $C_s>0$ such that,
on $L_1$ and $L_3$, we have $\sup_{x \in \mathcal{S}}|\psi_{s,x}(-iz)| \leqslant C_s \| \varphi \|_{\gamma}$.
Therefore, we obtain,
for $n$ sufficiently large,
uniformly in $|l|\leqslant l_n$ and $x\in \mathcal{S}$,
\begin{align*}
\Big| -i \sqrt{n} ~
e^{n h_s(l)}
\Big\{\int_{L_1} + \int_{L_3} \Big\} e^{n (K_s(z) - zl)} \psi_{s,x}(-iz) dz
\Big|
\leqslant O(e^{-\frac{n}{5} \sigma_s^2 \delta^2 }) \| \varphi \|_{\gamma}.
\end{align*}
It follows that
\begin{align*}
J_{1}(n) \leqslant & \
\Big| -i \sqrt{n} ~
e^{n h_s(l) }
\int_{z_0 - i \delta}^{z_0 + i \delta} e^{n (K_s(z) - zl)} \psi_{s,x}(-iz) dz - c_s(\psi)
\Big| \nonumber\\
& \ + O(e^{-\frac{n}{5} \sigma_s^2 \delta^2 }) \| \varphi \|_{\gamma}.
\end{align*}
Without loss of generality, assume that $n\geqslant 3$.
Making a change of variable $z= z_0 + it$ gives
\begin{align} \label{SaddleIntegral}
&J_{1}(n) \nonumber
\leqslant
\Big| \sqrt{n} ~
e^{n h_s(l)}
\int_{-\delta}^{\delta} e^{n [K_s(z_0 + it)-(z_0 + it)l ]} \psi_{s,x}(t-iz_0) dt - c_s(\psi)
\Big| \nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+ O(e^{-\frac{n}{5} \sigma_s^2 \delta^2 }) \| \varphi \|_{\gamma} \nonumber\\
&\leqslant
\Big| \sqrt{n} \,
e^{n h_s(l)}
\int_{n^{-\frac{1}{2}} \log n \leqslant |t|< \delta}
e^{n [K_s(z_0 + it)-(z_0 + it)l ]} \psi_{s,x}(t-iz_0) dt \Big| \nonumber\\
& \ \ \ +
\Big| \sqrt{n}
e^{n h_s(l)} \int_{|t|< n^{-\frac{1}{2}} \log n }
e^{n [K_s(z_0 + it)-(z_0 + it)l ]} \psi_{s,x}(t-iz_0) dt - c_s(\psi)
\Big| \nonumber\\
& \ \ \ + O(e^{-\frac{n}{5} \sigma_s^2 \delta^2 }) \| \varphi \|_{\gamma}.
\end{align}
From \eqref{SaddleEqu} and \eqref{SoluSaddleEqu}, we have $K_s'(z_0)=l$.
By Taylor's formula, we get that for $|t|< \delta$,
\begin{align*}
K_s(z_0 + it)-(z_0 + it)l = K_s(z_0) - z_0l + \sum_{k=2}^{\infty} \frac{K_s^{(k)}(z_0) (it)^k }{k!}.
\end{align*}
Using $K_s'(z_0)=l$ and \eqref{Expan Ks 01}, it follows that
$$K_s(z_0) - z_0 l = K_s(z_0) - z_0 K_s'(z_0) = -\sum_{k=2}^{\infty}\frac{k-1}{k!} \gamma_{s,k} z_0^k. $$
Combining this with \eqref{SoluSaddleEqu} and Lemma \ref{lemmaCR001} gives
$K_s(z_0) - z_0 l = -h_s(l)$.
Thus
\begin{align}\label{Rela Ks hs}
K_s(z_0 + it)-(z_0 + it)l = -h_s(l) + \sum_{k=2}^{\infty} \frac{K_s^{(k)}(z_0) (it)^k }{k!}.
\end{align}
Since $K_s''(z_0) = \sigma_s^2 + O(z_0) > \frac{1}{2} \sigma_s^2 $, for small enough $z_0$, $\delta$ and $l$,
we obtain that
$\Re (\sum_{k=2}^{\infty} \frac{K_s^{(k)}(z_0) (it)^k }{k!}) < -\frac{1}{8} \sigma_s^2 t^2 $.
Therefore, using \eqref{Rela Ks hs} and the fact that uniformly in $x \in \mathcal{S}$,
the function $z \mapsto \psi_{s,x}(z)$ is continuous in a neighborhood of $0$ in the complex plane,
we obtain that, uniformly in $x\in \mathcal{S}$ and $|l|\leqslant l_n$,
\begin{align*}
& \
\Big| \sqrt{n} ~
e^{n h_s(l)}
\int_{n^{-\frac{1}{2}} \log n \leqslant |t|< \delta}
e^{n [K_s(z_0 + it)-(z_0 + it)l ]} \psi_{s,x}(t-iz_0) dt
\Big| \\
\leqslant & \ c_1 \sqrt{n}
\int_{n^{-\frac{1}{2}} \log n \leqslant |t|< \delta} e^{-\frac{1}{8} n \sigma_s^2 t^2} dt \| \varphi \|_{\gamma}
= O(e^{-c \log^2 n}) \| \varphi \|_{\gamma}.
\end{align*}
This, together with \eqref{SaddleIntegral}-\eqref{Rela Ks hs}, implies
\begin{align*}
J_{1}(n) &\leqslant \sup_{x\in \mathcal{S}}
\Big| \sqrt{n}
\int_{|t|< n^{-\frac{1}{2}} \log n }
e^{n \sum_{k=2}^{\infty} \frac{K_s^{(k)}(z_0) (it)^k }{k!} } \psi_{s,x}(t-iz_0) dt - c_s(\psi)
\Big| \nonumber\\
&\quad
+ O(e^{-c \log^2 n}) \| \varphi \|_{\gamma}.
\end{align*}
Noting that $\Pi_{s,0}(\varphi)(x)=\pi_{s}(\varphi)$ and $\psi_{s,x}(0) = \psi(0) \pi_s(\varphi)$,
we write
\begin{align} \label{SaddleIntegral J2}
J_1(n) \leqslant & \ \sup_{x\in \mathcal{S}}
\Big| \sqrt{n}
\int_{|t|< n^{-\frac{1}{2}} \log n }
\Big( e^{n \sum_{k=2}^{\infty} \frac{K_s^{(k)}(z_0) (it)^k }{k!} } - e^{-\frac{n\sigma_s^2 t^2}{2} } \Big)
\psi_{s,x}(t-iz_0) dt
\Big| \nonumber\\
&\ +
\sup_{x\in \mathcal{S}} \Big| \sqrt{n} \int_{|t|< n^{-\frac{1}{2}} \log n } e^{-\frac{n\sigma_s^2 t^2}{2}}
\big[ \psi_{s,x}(t-iz_0) - \psi_{s,x}(0) \big] dt
\Big| \nonumber\\
&\ + \sqrt{n} \psi(0) \pi_s(\varphi) \int_{|t| \geqslant n^{-\frac{1}{2}} \log n } e^{-\frac{n\sigma_s^2 t^2}{2}} dt
+ O(e^{-c \log^2 n}) \| \varphi \|_{\gamma} \nonumber\\
= & \ J_{11}(n) + J_{12}(n) + J_{13}(n) + O(e^{-c \log^2 n}) \| \varphi \|_{\gamma}.
\end{align}
We first control $J_{11}(n)$.
Note that $| \psi_{s,x}(t-iz_0) |$ is bounded by $C_s \| \varphi \|_{\gamma}$,
uniformly in $|t|< n^{-\frac{1}{2}} \log n$.
Note also that for $|t|< n^{-\frac{1}{2}} \log n$ and for large enough $n$, we have
$ e^{ n \Re \sum_{k=3}^{\infty} \frac{K_s^{(k)}(z_0) (it)^k }{k!} } \leqslant e^{ c n t^4} \leqslant C$.
Hence using the inequality $|e^{z} - 1| \leqslant |z| e^{|z|}$ yields
\begin{align}\label{SaddleIntegral J22}
J_{11}(n) \leqslant C_s \| \varphi \|_{\gamma} \sqrt{n} \int_{|t|< n^{-\frac{1}{2}} \log n }
e^{-\frac{n\sigma_s^2 t^2}{2} } n |t|^3 dt
\leqslant \frac{C_s}{\sqrt{n}} \| \varphi \|_{\gamma}.
\end{align}
Now we control $J_{12}(n)$.
Recalling that $|z_0|=|z_0(l)| \leqslant c_s l_n$,
using the fact that uniformly with respect to $x \in \mathcal{S}$,
the map $z\mapsto\psi_{s,x}(z)$ is continuous in the neighborhood of $0$ in the complex plane,
we get that for $|t| \leqslant n^{-\frac{1}{2}} \log n$,
$$\sup_{x \in \mathcal{S}} | \psi_{s,x}(t-iz_0) - \psi_{s,x}(0) |
< c_s (n^{-\frac{1}{2}} \log n + l_n) \| \varphi \|_{\gamma}.$$
We then obtain
\begin{align*}
J_{12}(n) \leqslant c_s (n^{-\frac{1}{2}} \log n + l_n) \| \varphi \|_{\gamma}.
\end{align*}
It is easy to see that $J_{13}(n) \leqslant C \| \varphi \|_{\gamma} e^{ - c_s \log^2 n}$.
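Indeed, the change of variables $u=\sqrt{n}\, t$ and the standard Gaussian tail bound give
\begin{align*}
\sqrt{n} \int_{|t| \geqslant n^{-\frac{1}{2}} \log n } e^{-\frac{n\sigma_s^2 t^2}{2}}\, dt
= \int_{|u| \geqslant \log n } e^{-\frac{\sigma_s^2 u^2}{2}}\, du
\leqslant C e^{- c_s \log^2 n},
\end{align*}
together with the bound $|\psi(0) \pi_s(\varphi)| \leqslant C \| \varphi \|_{\gamma}$.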
This, together with \eqref{SaddleIntegral J2}-\eqref{SaddleIntegral J22},
proves that $J_1(n) \leqslant c_s (n^{-\frac{1}{2}} \log n + l_n) \| \varphi \|_{\gamma}.$
The desired result follows by combining this with \eqref{Thm1 integral1 J}-\eqref{SaddleIntegral 2}.
\end{proof}
Assume that the functions $\varphi$ and $\psi$ satisfy the same properties as in Proposition \ref{Prop Rn limit1}.
The following result, for $s<0$ small enough, will be used to prove Theorem \ref{Thm-Neg-s}.
\begin{proposition} \label{Prop Rn limit2}
Assume conditions of Theorem \ref{Thm-Neg-s}.
Then, there exists $\eta_0 < \eta$ such that for any $s \in (-\eta_0, 0)$, $q=\Lambda'(s)$ and for any
positive sequence $(l_n)_{n \geqslant 1}$ satisfying $l_n \to 0$ as $n \to \infty$,
we have, uniformly in $x\in \mathcal{S}$, $|l|\leqslant l_n $ and $\varphi \in \mathcal{B}_{\gamma}$,
\begin{align*}
& \Big| \sqrt{n} \ \sigma_s e^{n h_s(l)}
\int_{\mathbb R} e^{-it l n} R^{n}_{s,it}(\varphi)(x) \psi (t) dt
- \sqrt{2\pi} \psi(0)\pi_{s}(\varphi)
\Big| \nonumber\\
\leqslant &\ C \| \varphi \|_\gamma \Big( \frac{ \log n }{ \sqrt{n} } + l_n \Big).
\end{align*}
\end{proposition}
\begin{proof}
Using Propositions \ref{transfer operator s negative} and \ref{perturbation thm nrgztive s},
the proof of Proposition \ref{Prop Rn limit2} can be carried out as
the proof of Proposition \ref{Prop Rn limit1}. We omit the details.
\end{proof}
\subsection{Proof of Theorem \ref{main theorem1} } \label{proof T21}
Recall that
$q=\Lambda'(s)$, $\Lambda^*(q+l)=\Lambda^{*}(q) + sl + h_s(l)$,
$x\in \mathcal{S},$ and $|l|\leqslant l_n \to 0$,
as $n \to \infty$.
Taking into account that $e^{n\Lambda^{*}(q)}=e^{sqn}/\kappa^{n}(s)$
and using the change of measure \eqref{basic equ1},
we write
\begin{align} \label{Prop change measure1}
&A_n(x,l) := \sqrt{2\pi n}~s\sigma_s
e^{n \Lambda^*(q+l)}
\frac{1}{r_s(x)}\mathbb{P}(\log |G_n x|\geqslant n(q+l)) \nonumber\\
= &
\sqrt{2\pi n} \, s\sigma_s e^{nsl}e^{n h_s(l)} e^{sqn}\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\Big( \frac{1}{r_{s}(X^x_{n})}e^{-s \log |G_n x| } \mathbbm{1}_{\{ \log |G_nx| \geqslant n(q+l) \} } \Big).
\end{align}
Setting $T_{n}^x= \log |G_n x|
-nq$
and $\psi_s(y)=e^{-sy}\mathbbm{1}_{\{y\geqslant 0\}}$,
from \eqref{Prop change measure1} we get
\begin{eqnarray} \label{change measure equ1}
A_n(x,l)=\sqrt{2\pi n}~s\sigma_s e^{n h_s(l)}
\mathbb{E}_{\mathbb{Q}_{s}^{x}} \left(\frac{1}{r_{s}(X_{n}^x)}\psi_s(T_{n}^x-nl)\right).
\end{eqnarray}
\textit{Upper bound.}
Let $\varepsilon\in(0,1)$
and
${\psi}^+_{s,\varepsilon}(y) = \sup_{y'\in\mathbb{B}_{\varepsilon}(y)} \psi_s(y')$
be defined as in \eqref{smoo001} but with $\psi_s$ instead of $\psi$.
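Since $s>0$ here and $y' \mapsto \psi_s(y')$ is non-increasing on $[0, \infty)$,
the function ${\psi}^+_{s,\varepsilon}$ admits the explicit form
\begin{align*}
{\psi}^+_{s,\varepsilon}(y)=
\begin{cases}
0 & \text{if } y < -\varepsilon, \\
1 & \text{if } -\varepsilon \leqslant y \leqslant \varepsilon, \\
e^{-s(y-\varepsilon)} & \text{if } y > \varepsilon,
\end{cases}
\end{align*}
which explains the two integrals appearing in the computation of $\widehat{\psi}^+_{s,\varepsilon}(0)$ below.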
Using Lemma \ref{estimate u convo} leads to
\begin{align}\label{Thm1 upper1}
A_n(x,l)
&\leqslant (1+ C_{\rho}(\varepsilon))
\sqrt{2\pi n}~s\sigma_s
e^{n h_s(l) }
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[\frac{1}{r_{s}(X_{n}^x)}
({\psi}^+_{s,\varepsilon}\!\ast\!\rho_{\varepsilon^2})
(T_{n}^x-nl)\right]
\nonumber \\
&=: B_n^+(x,l).
\end{align}
Denote by
$\widehat{{\psi}}^+_{s,\varepsilon}$ the Fourier transform of ${\psi}^+_{s,\varepsilon}$.
Elementary calculations give
\begin{align} \label{Thm1 estamite1 u01}
\sup_{t\in \mathbb R} |\widehat \psi^+_{s,\varepsilon}(t)|
\leqslant
\widehat \psi^+_{s,\varepsilon}(0)
=
\int_{-\varepsilon}^{\varepsilon} dy
+
\int_{\varepsilon}^{+\infty} e^{-s(y-\varepsilon)} dy
=\frac{1+2s\varepsilon}{s}.
\end{align}
By the inversion formula, for any $y\in \mathbb R,$
$$
{\psi}^+_{s,\varepsilon}\!\ast\!\rho_{\varepsilon^{2}}(y)
=\frac{1}{2\pi}\int_{\mathbb{R}}e^{ity}
\widehat {\psi}^+_{s,\varepsilon}(t) \widehat\rho_{\varepsilon^{2}}(t)dt.
$$
Substituting $y=T_{n}^x-nl$, taking expectation with respect to $\mathbb{E}_{\mathbb{Q}_{s}^{x}}$,
and using Fubini's theorem, we get
\begin{equation} \label{Fubini1}
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\Big[ \frac{1}{ r_{s}(X_{n}^x) }
( {\psi}^+_{s,\varepsilon} \!\ast\! \rho_{\varepsilon^{2}} ) (T_{n}^x-nl) \Big]
= \frac{1}{2\pi}
\int_{\mathbb{R}} e^{-itln} R^{n}_{s,it}(r_{s}^{-1})(x)
\widehat {\psi}^+_{s,\varepsilon}(t) \widehat\rho_{\varepsilon^{2}}(t) dt,
\end{equation}
where
\begin{equation}
R^{n}_{s,it}(r_{s}^{-1})(x)
=\mathbb{E}_{\mathbb{Q}_{s}^{x}}\left[e^{it T_{n}^x}\frac{1}{r_{s}(X_{n}^x)}\right]. \nonumber
\end{equation}
Note that $\widehat {\psi}^+_{s,\varepsilon} \widehat\rho_{\varepsilon^{2}}$
is compactly supported in $\mathbb{R}$
since $\widehat\rho_{\varepsilon^{2}}$ has a compact support.
One can verify that $\widehat {\psi}^+_{s,\varepsilon}$ has an analytic extension
in a neighborhood of $0$. By Lemma \ref{LemAnalyExten}, we see that
the function $\widehat\rho_{\varepsilon^{2}}$ has a continuous extension in the complex plane,
and is analytic in the domain
$D_{\varepsilon^2} : = \{ z \in \mathbb{C}: |z| < \varepsilon^2, \Im z \neq 0 \}$.
Using Proposition \ref{Prop Rn limit1}
with $\varphi=r_s^{-1}$ and $\psi=\widehat {\psi}^+_{s,\varepsilon} \widehat\rho_{\varepsilon^{2}}$,
it follows that
\begin{align} \label{Thm1 identity upper 1}
\lim_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
\left|
B_n^+(x,l)
-(1+ C_{\rho}(\varepsilon)) \pi_{s}(r_{s}^{-1}) s \widehat{\psi}^+_{s,\varepsilon}(0)
\widehat{\rho}_{\varepsilon^2}(0)
\right| =0.
\end{align}
Since $\widehat{\rho}_{\varepsilon^2}(0)=1$,
from \eqref{change measure equ1}-\eqref{Thm1 identity upper 1},
we have that
for sufficiently small $\varepsilon\in (0,1)$,
\begin{align*}
\limsup_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
A_n(x,l)
&\leqslant (1+ C_{\rho}(\varepsilon))
s\pi_{s}(r_{s}^{-1})\widehat{\psi}^+_{s,\varepsilon}(0)\widehat{\rho}_{\varepsilon^2}(0)
\\
&\leqslant (1+ C_{\rho}(\varepsilon)) (1+ 2s\varepsilon)\pi_{s}(r_{s}^{-1}).
\end{align*}
Letting $\varepsilon\to 0$ and noting that $C_{\rho}(\varepsilon) \to 0$,
we obtain the upper bound:
\begin{align} \label{Thm1 upper bound}
\limsup_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
A_n(x,l)
\leqslant \pi_{s}(r_{s}^{-1}) = \frac{1}{ \nu_s(r_s) }.
\end{align}
\textit{Lower bound.}
For $\varepsilon\in(0,1)$,
let
${\psi}^-_{s,\varepsilon}(y) = \inf_{y'\in\mathbb{B}_{\varepsilon}(y)} \psi_s(y')$
be defined as in \eqref{smoo001} with $\psi_s$ instead of $\psi$.
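A direct computation, analogous to the one for ${\psi}^+_{s,\varepsilon}$, gives the explicit form
\begin{align*}
{\psi}^-_{s,\varepsilon}(y)=
\begin{cases}
0 & \text{if } y < \varepsilon, \\
e^{-s(y+\varepsilon)} & \text{if } y \geqslant \varepsilon,
\end{cases}
\end{align*}
so that
$\widehat \psi^-_{s,\varepsilon}(0) = \int_{\varepsilon}^{\infty} e^{-s(y+\varepsilon)}\, dy = \frac{e^{-2s\varepsilon}}{s}$.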
From \eqref{change measure equ1} and Lemma \ref{estimate u convo}, we get
\begin{align} \label{Lowerbound An 1}
A_n(x,l) \geqslant & \
\sqrt{2\pi n}~s\sigma_s e^{n h_s(l) }
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[\frac{1}{r_{s}(X_{n}^x)}({\psi}^-_{s,\varepsilon}\!\ast\!\rho_{\varepsilon^2})(T_{n}^x-nl)\right] \nonumber\\
& \ -
\sqrt{2\pi n}~s\sigma_s e^{n h_s(l) }
\int_{|y|\geqslant \varepsilon}
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[\frac{1}{r_{s}(X_{n}^x)} {\psi}^-_{s,\varepsilon} (T_{n}^x-nl-y)\right]
\rho_{\varepsilon^2}(y)dy \nonumber\\
: = & \ B_n^-(x,l)- C_n^-(x,l).
\end{align}
For the first term $B_n^-(x,l)$,
applying \eqref{Fubini1} with ${\psi}^+_{s,\varepsilon}$
replaced by ${\psi}^-_{s,\varepsilon}$, we get
\begin{align*}
B_n^-(x,l) =
\sqrt{\frac{n}{2\pi} }~s\sigma_s e^{n h_s(l) }
\int_{\mathbb{R}} e^{-itln} R^{n}_{s,it}(r_{s}^{-1})(x)
\widehat {\psi}^-_{s,\varepsilon}(t) \widehat\rho_{\varepsilon^{2}}(t) dt.
\end{align*}
In the same way as for the upper bound, using
$
\widehat \psi^-_{s,\varepsilon}(0)
= \frac{e^{-2s\varepsilon}}{s}
$
and Proposition \ref{Prop Rn limit1}
with $\varphi=r_s^{-1}$ and $\psi=\widehat {\psi}^-_{s,\varepsilon} \widehat\rho_{\varepsilon^{2}}$
(one can check that the functions $\varphi$ and $\psi$
satisfy the required conditions in Proposition \ref{Prop Rn limit1}),
we obtain:
\begin{align} \label{Thm1 lower bound}
\liminf_{n\to\infty}\inf_{x\in \mathcal{S}}\inf_{|l|\leqslant l_n }
B_n^-(x,l)
\geqslant e^{-2s\varepsilon}\, \pi_{s}(r_{s}^{-1}).
\end{align}
For the second term $C_n^-(x,l)$, noting that $\psi^-_{s,\varepsilon} \leqslant \psi_s$
and applying Lemma \ref{estimate u convo} to $\psi_s$, we get
$\psi^-_{s,\varepsilon} \leqslant \psi_s
\leqslant (1+ C_{\rho}(\varepsilon)){\psi}_{s,\varepsilon}^+ \! \ast \! \rho_{\varepsilon^2}$.
We use the same argument as in \eqref{Fubini1} to obtain
\begin{align*}
C_n^-(x,l)
& \leqslant (1+ C_{\rho}(\varepsilon))
\sqrt{2\pi n}~s\sigma_s e^{n h_s(l) }\\
&\ \quad \times \int_{|y|\geqslant \varepsilon}
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[\frac{1}{r_{s}(X_{n}^x)}
({\psi}_{s,\varepsilon}^+ \ast \rho_{\varepsilon^2})
(T_{n}^x-nl-y)\right]
\rho_{\varepsilon^2}(y)
dy \nonumber\\
& = (1+ C_{\rho}(\varepsilon))
\sqrt{\frac{n}{2 \pi} }~s\sigma_s e^{n h_s(l) }
\\
&\ \quad \times
\int_{|y|\geqslant \varepsilon}
\left( \int_{\mathbb{R}} e^{-it(ln+y)} R^{n}_{s,it}(r_{s}^{-1})(x)
\widehat {\psi}^+_{s,\varepsilon}(t)
\widehat\rho_{\varepsilon^{2}}(t)
dt \right)\rho_{\varepsilon^2}(y)
dy.
\end{align*}
Notice that, from Lemma \ref{lemmaCR001},
for any fixed $y \in \mathbb{R}$,
it holds, uniformly in $l$ satisfying $|l| \leqslant l_n$, that $e^{nh_s(l)-nh_s(l+\frac{y}{n})} \to 1$ as $n \to \infty$.
Applying Proposition \ref{Prop Rn limit1} again
with $\varphi=r_s^{-1}$, $\psi=\widehat \psi^+_{s,\varepsilon}\widehat\rho_{\varepsilon^{2}}$,
and using the Lebesgue dominated convergence theorem,
we obtain
\begin{align*}
&\limsup_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
C_n^-(x,l) \leqslant (1+ C_{\rho}(\varepsilon))
s \pi_{s}(r_{s}^{-1}) \widehat \psi^+_{s,\varepsilon}(0) \widehat\rho_{\varepsilon^{2}}(0)
\int_{|y| \geqslant \varepsilon} \rho_{\varepsilon^2}(y)dy \nonumber\\
&\qquad\qquad = (1+ C_{\rho}(\varepsilon)) \pi_{s}(r_{s}^{-1}) (1+2s \varepsilon) \int_{|y|\geqslant \frac{1}{\varepsilon}} \rho(y)dy
\to 0, \quad \mbox{as} \ \varepsilon \to 0,
\end{align*}
since $\rho$ is integrable on $\mathbb{R}$.
This, together with \eqref{Lowerbound An 1}-\eqref{Thm1 lower bound}, implies the lower bound:
\begin{align} \label{lower bound An 001}
\liminf_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
A_n(x,l)
\geqslant \pi_{s}(r_{s}^{-1}) = \frac{1}{ \nu_s(r_s) },
\end{align}
as required.
We conclude the proof of Theorem \ref{main theorem1}
by combining \eqref{Thm1 upper bound} and \eqref{lower bound An 001}.
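In the scalar i.i.d. case (where one may take $r_s \equiv 1$, so that $\nu_s(r_s)=1$), the asymptotic of Theorem \ref{main theorem1} reduces to the classical Bahadur--Rao expansion $\mathbb{P}(S_n \geqslant n(q+l)) \sim e^{-n\Lambda^*(q+l)}/(s\sigma_s\sqrt{2\pi n})$. The following Python sketch is a purely illustrative numerical sanity check of this scalar analogue (it is not part of the proof): for standard normal increments one has $\Lambda(s)=s^2/2$, $q=\Lambda'(s)=s$, $\sigma_s=1$ and $\Lambda^*(q)=q^2/2$, and the exact tail is available through the complementary error function.

```python
import math

def exact_tail(n: int, q: float) -> float:
    # For i.i.d. standard normal increments, S_n ~ N(0, n), so
    # P(S_n >= n*q) = P(Z >= q*sqrt(n)) for a standard normal Z.
    return 0.5 * math.erfc(q * math.sqrt(n) / math.sqrt(2.0))

def bahadur_rao(n: int, q: float) -> float:
    # Scalar analogue of the theorem: e^{-n Lambda*(q)} / (s sigma_s sqrt(2 pi n)),
    # with s = q, sigma_s = 1 and Lambda*(q) = q^2/2 in the standard normal case.
    s, sigma_s = q, 1.0
    return math.exp(-n * q * q / 2.0) / (s * sigma_s * math.sqrt(2.0 * math.pi * n))

q = 0.5
ratios = [exact_tail(n, q) / bahadur_rao(n, q) for n in (10, 100, 1000)]
print(ratios)  # increases towards 1 as n grows
```

The ratio approaches $1$ from below at rate $O(1/n)$, consistent with the leading term being exact.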
\subsection{Proof of Theorem \ref{Thm-Neg-s}}
Since the change of measure formula can be extended for small $s<0$,
under the conditions of Theorem \ref{Thm-Neg-s},
we have, similar to \eqref{Prop change measure1},
\begin{align*}
& -s \sigma_s \sqrt{2\pi n} \,
e^{n \Lambda^*(q+l)}
\frac{1}{r_s(x)}\mathbb{P}(\log |G_n x|\leqslant n(q+l)) \nonumber\\
= & -s \sigma_s
\sqrt{2\pi n} \, e^{nsl}e^{n h_s(l)} e^{sqn}\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\Big( \frac{1}{r_{s}(X^x_{n})}e^{-s \log |G_n x| } \mathbbm{1}_{\{ \log |G_nx| \leqslant n(q+l) \} } \Big).
\end{align*}
Applying Proposition \ref{Prop Rn limit2},
we can follow the proof of Theorem \ref{main theorem1}
to show Theorem \ref{Thm-Neg-s}. We omit the details.
\section{Proof of Theorems \ref{main theorem3} and \ref{Thm-Neg-s-Target}}\label{sec proof of main theroem3}
We first establish the following assertion which will be used to prove Theorem \ref{main theorem3},
but which is of independent interest.
Let $\psi$ be a measurable function on $\mathbb{R}$ and $\varepsilon>0.$
Denote, for brevity,
$\psi_s(y)=e^{-sy}\psi(y)$ and
$$\psi^+_{s,\varepsilon} (y)= \sup_{y'\in\mathbb{B}_{\varepsilon}(y)} \psi_s(y'),
\quad
\psi^-_{s,\varepsilon}(y)= \inf_{y'\in\mathbb{B}_{\varepsilon}(y)} \psi_s(y').$$
Introduce the following condition:
for any $s\in I_\mu^\circ$ and $\varepsilon>0,$ the functions
$y\mapsto \psi^+_{s,\varepsilon}(y)$
and $y\mapsto \psi^-_{s,\varepsilon}(y)$
are measurable and
\begin{align}\label{condition g}
\lim_{\varepsilon\to0^{+}}\int_{\mathbb{R}} \psi^+_{s,\varepsilon}(y)
dy
=\lim_{\varepsilon\to0^{+}}\int_{\mathbb{R}} \psi^-_{s,\varepsilon}(y)
dy
=\int_{\mathbb{R}}e^{-sy}\psi(y)dy<+\infty.
\end{align}
\begin{theorem} \label{main theorem2}
Suppose the assumptions of Theorem \ref{main theorem1} hold true.
Let $q=\Lambda'(s)$, where $s\in I_\mu^\circ$.
Assume that $\varphi$ is a H\"{o}lder continuous function on $\mathcal{S}$
and $\psi$ is a measurable function on $\mathbb{R}$
satisfying condition \eqref{condition g}.
Then, for any positive sequence $(l_n)_{n \geqslant 1}$ satisfying $\lim_{n\to \infty}l_n = 0$,
we have
\begin{align}\label{Asy-s-Posi}
&\lim_{n\to\infty} \sup_{x\in \mathcal{S}} \sup_{|l|\leqslant l_n }
\Bigg|
\sqrt{2\pi n} \, \sigma_s e^{n \Lambda^*(q+l)}
\mathbb{E} \Big[ \varphi(X_{n}^{x}) \psi( \log |G_n x|-n(q+l) ) \Big] \nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}} e^{-sy} \psi(y) dy
\Bigg|
=0.
\end{align}
\end{theorem}
Before proceeding with the proof of this theorem, let us give some
examples of functions satisfying condition \eqref{condition g}.
It is easy to see that
\eqref{condition g} holds for increasing non-negative functions $\psi$ satisfying
$\int_{\mathbb{R}}e^{-sy}\psi(y)dy<+\infty,$
in particular,
for the indicator function
$\psi(y)=\mathbbm{1}_{\{y \geqslant c\}}$, $y \in \mathbb R$, where $c\in \mathbb R$ is a fixed constant.
Another example for which \eqref{condition g} holds true is
when $\psi$ is non-negative,
continuous and there exists $\varepsilon>0$ such that
\begin{align} \label{conditiong002}
\int_{\mathbb{R}}e^{-sy}
\psi_\varepsilon^+(y)
dy<+\infty,
\end{align}
where the function $\psi_\varepsilon^+(y) = \sup_{y'\in\mathbb{B}_{\varepsilon}(y)} \psi(y')$ is assumed to be measurable.
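Condition \eqref{condition g} for the indicator example can also be checked numerically. The Python sketch below (purely illustrative) approximates $\int_{\mathbb{R}} \psi^{\pm}_{s,\varepsilon}(y)\,dy$ on a grid for $\psi(y)=\mathbbm{1}_{\{y \geqslant c\}}$ and verifies that both integrals approach $\int_{\mathbb{R}} e^{-sy}\psi(y)\,dy = e^{-sc}/s$ as $\varepsilon \to 0^+$.

```python
import math

S, C = 1.0, 0.0  # psi(y) = 1_{y >= C}, with exponent s = S

def psi_plus_eps(y, eps):
    # sup over the ball B_eps(y) of e^{-S*y'} * 1_{y' >= C}
    if y + eps < C:
        return 0.0
    return math.exp(-S * max(C, y - eps))

def psi_minus_eps(y, eps):
    # inf over the ball B_eps(y) of e^{-S*y'} * 1_{y' >= C};
    # it vanishes as soon as the ball meets the region {y' < C}
    if y - eps < C:
        return 0.0
    return math.exp(-S * (y + eps))

def integrate(f, a, b, n=100_000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

target = math.exp(-S * C) / S  # = int e^{-Sy} psi(y) dy
gaps = []
for eps in (0.5, 0.1, 0.02):
    up = integrate(lambda y: psi_plus_eps(y, eps), C - 1.0, 40.0)
    lo = integrate(lambda y: psi_minus_eps(y, eps), C - 1.0, 40.0)
    gaps.append(max(abs(up - target), abs(lo - target)))
print(gaps)  # both envelopes converge to the target as eps -> 0
```

In fact, for this $\psi$ one has $\int \psi^+_{s,\varepsilon} = 2\varepsilon e^{-sc} + e^{-sc}/s$ and $\int \psi^-_{s,\varepsilon} = e^{-s(c+2\varepsilon)}/s$, so both gaps are of order $\varepsilon$.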
\begin{proof}[\textit{Proof of Theorem \ref{main theorem2}}]
Without loss of generality, we assume that both
$\varphi$ and $\psi$ are non-negative (otherwise, we decompose the functions $\varphi=\varphi^+ - \varphi^-$
and $\psi = \psi^+ - \psi^-$).
Let $T_{n}^x=\log|G_n x|-nq$.
Since $e^{n\Lambda^{*}(q)}=e^{sqn}/\kappa^{n}(s)$,
using the change of measure \eqref{basic equ1}, we have
\begin{align}
A_n(x,l) := \ &\sqrt{2\pi n}~\sigma_s e^{n \Lambda^*(q+l)}
\frac{1}{r_{s}(x)}
\mathbb{E}\Big[\varphi(X_{n}^{x})\psi(\log |G_nx|-n(q + l))\Big] \nonumber\\
= \ &
\sqrt{2\pi n}~\sigma_s e^{n s l}e^{n h_s(l)}
e^{sqn}\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[ (\varphi r_{s}^{-1})(X_{n}^x) e^{ -s \log |G_nx| } \psi(T_{n}^x- n l) \right] \nonumber\\
= \ &
\sqrt{2\pi n}~\sigma_se^{n h_s(l) }
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[(\varphi r_{s}^{-1})(X_{n}^x)e^{-s( T_{n}^x- n l )}\psi(T_{n}^x- n l)\right]. \nonumber
\end{align}
For brevity, set $\Phi_s(x)=\left(\varphi r_{s}^{-1}\right)(x),$ $x\in \mathcal{S}$, and
$\Psi_s(y)=e^{-sy}\psi(y),$ $y\in \mathbb R$.
Then,
\begin{align} \label{Target An}
A_n(x,l) =
\sqrt{2\pi n}~\sigma_s e^{n h_s(l)}
\mathbb{E}_{\mathbb{Q}_{s}^{x}}\left[\Phi_s(X_{n}^x) \Psi_s(T_{n}^x-nl)\right].
\end{align}
\textit{Upper bound.}
We wish to write the expectation in \eqref{Target An} as
an integral of the Fourier transform of $\Psi_s,$
which, however, may not belong to the space $L^{1}(\mathbb{R})$.
As in the proof of Theorem \ref{main theorem1} (see Section \ref{proof T21}),
we make use of the convolution technique to overcome this difficulty.
Applying Lemma \ref{estimate u convo} to $\Psi_s$,
one has, for sufficiently small $\varepsilon > 0$,
\begin{align} \label{Thm2 upper01}
A_n(x,l)
\leqslant & \ (1+ C_{\rho}(\varepsilon))
\sqrt{2\pi n}~\sigma_s e^{n h_s(l)}
\mathbb{E}_{\mathbb{Q}_{s}^{x}}\left[\Phi_s(X_{n}^x)
({\Psi}^+_{s,\varepsilon}\!\ast\!\rho_{\varepsilon^2})(T_{n}^x-nl)\right] \nonumber\\
:= & \ B_n(x,l),
\end{align}
where ${\Psi}^+_{s,\varepsilon}(y) = \sup_{y'\in\mathbb{B}_{\varepsilon}(y)} \Psi_s(y')$, $y \in \mathbb{R}$.
Using the same arguments as for deducing \eqref{Fubini1}, we have
\begin{align} \label{target upper Bn}
B_n(x,l)
= (1+ C_{\rho}(\varepsilon)) \frac{\sigma_s}{\sqrt{2\pi} } \sqrt{n} \ e^{n h_s(l)} \int_{\mathbb{R}}
e^{-itln}
R^{n}_{s,it} \Phi_s(x)\widehat{\Psi}^+_{s,\varepsilon}(t) \widehat{\rho}_{\varepsilon^2}(t)dt,
\end{align}
where
$R^{n}_{s,it}\Phi_s(x)=\mathbb{E}_{\mathbb{Q}_{s}^{x}}\left[e^{it T_{n}^x}\Phi_s(X_{n}^x)\right]$
and $\widehat{\Psi}^+_{s,\varepsilon}$ is the Fourier transform of $\Psi^+_{s,\varepsilon}$.
Note that $\Phi_s$ is a strictly positive and $\gamma$-H\"{o}lder continuous function on $\mathcal S$,
and $\widehat{\Psi}^+_{s,\varepsilon} \widehat{\rho}_{\varepsilon^2}$
has a compact support in $\mathbb{R}$.
Applying Proposition \ref{Prop Rn limit1}
with $\varphi = \Phi_s$ and $\psi=\widehat{\Psi}^+_{s,\varepsilon} \widehat{\rho}_{\varepsilon^2}$
(one can verify that the functions $\varphi$ and $\psi$
satisfy the required conditions in Proposition \ref{Prop Rn limit1}), we obtain
\begin{align*}
\lim_{n\to\infty} \sup_{x\in \mathcal{S}} \sup_{|l|\leqslant l_n } B_n(x,l)
= (1+ C_{\rho}(\varepsilon)) \pi_{s}(\Phi_s) \widehat{\Psi}^+_{s,\varepsilon}(0) \widehat{\rho}_{\varepsilon^2}(0).
\end{align*}
Since
$\widehat{\Psi}^+_{s,\varepsilon}(0)
=\int_{\mathbb{R}}\sup_{y'\in\mathbb{B}_{\varepsilon}(y)}e^{-sy'}\psi(y')dy$ and
$\widehat{\rho}_{\varepsilon^2}(0) =1$,
letting $\varepsilon$ go to $0$,
using the condition \eqref{condition g}
and the fact that $C_{\rho}(\varepsilon) \to 0$ as $\varepsilon \to 0$,
we get the upper bound:
\begin{align} \label{Thm2 upper result}
\limsup_{n\to\infty} \sup_{x\in \mathcal{S}} \sup_{|l|\leqslant l_n}
A_n(x,l)
\leqslant \pi_{s}(\Phi_s) \int_{\mathbb{R}} e^{-sy} \psi(y) dy.
\end{align}
\textit{Lower bound.}
Denote ${\Psi}^-_{s,\varepsilon}(y) = \inf_{y'\in\mathbb{B}_{\varepsilon}(y)} \Psi_s(y')$.
From \eqref{Target An}, using Lemma \ref{estimate u convo}, we get
\begin{align} \label{Target Lowerbound An 1}
A_n(x,l)
\geqslant & \
\sqrt{2\pi n} \ \sigma_s e^{n h_s(l) }
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[ \Phi_s(X_{n}^x)
({\Psi}^-_{s,\varepsilon}\! \ast \! \rho_{\varepsilon^2})
(T_{n}^x-nl)\right] \nonumber\\
& -
\sqrt{2\pi n} \ \sigma_s e^{n h_s(l) }
\int_{|y|\geqslant \varepsilon}
\mathbb{E}_{\mathbb{Q}_{s}^{x}}
\left[ \Phi_s(X_{n}^x)
{\Psi}^-_{s,\varepsilon}
(T_{n}^x-nl-y)\right]\rho_{\varepsilon^2}(y)dy \nonumber\\
:= & \
B_n^-(x,l)- C_n^-(x,l).
\end{align}
For $B_n^-(x,l)$,
we proceed as for \eqref{Thm2 upper01} and \eqref{target upper Bn}, with ${\Psi}^+_{s,\varepsilon}$
replaced by ${\Psi}^-_{s,\varepsilon}$.
Using
Proposition \ref{Prop Rn limit1},
with $\varphi=\Phi_s$ and $\psi=\widehat \Psi^-_{s,\varepsilon} \widehat \rho_{\varepsilon^2},$
and the fact that $\widehat \rho_{\varepsilon^2}(0)=1$ and $\widehat{\Psi}^-_{s,\varepsilon}(0)
=\int_{\mathbb{R}}\inf_{y'\in\mathbb{B}_{\varepsilon}(y)}e^{-sy'}\psi(y')dy$,
in an analogous way as in \eqref{Thm2 upper result},
we obtain that
\begin{align} \label{Target lower bound Bn}
& \ \lim_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n } B_n^-(x,l)
\nonumber \\
= & \ \pi_{s}(\Phi_s) \int_{\mathbb{R}}\inf_{y'\in\mathbb{B}_{\varepsilon}(y)}e^{-sy'}\psi(y')dy
\to \pi_{s}(\Phi_s) \int_{\mathbb{R}}e^{-sy}\psi(y)dy, \ \mbox{as} \ \varepsilon \to 0,
\end{align}
where the last convergence is due to the condition \eqref{condition g}.
For $C_n^-(x,l)$,
noting that $\Psi^-_{s,\varepsilon} \leqslant \Psi_s$,
applying Lemma \ref{estimate u convo} to $\Psi_s$ we get
$\Psi^-_{s,\varepsilon} \leqslant \Psi_s
\leqslant (1+ C_{\rho}(\varepsilon)){\Psi}^+_{s,\varepsilon} \!\ast\! \rho_{\varepsilon^2}$.
Similarly to \eqref{target upper Bn}, we show that
\begin{align*}
C_n^-(x,l)
&\leqslant (1+ C_{\rho}(\varepsilon))
\sqrt{\frac{n}{2 \pi} }~ \sigma_s e^{n h_s(l) }
\\
& \times\int_{|y|\geqslant \varepsilon}
\left( \int_{\mathbb{R}} e^{-it(ln+y)} R^{n}_{s,it}(\Phi_s)(x)
\widehat {\Psi}^+_{s,\varepsilon}(t) \widehat\rho_{\varepsilon^{2}}(t) dt \right)
\rho_{\varepsilon^2}(y)dy.
\end{align*}
From Lemma \ref{lemmaCR001},
for any fixed $y \in \mathbb{R}$,
it holds that $e^{nh_s(l)-nh_s(l+\frac{y}{n})} \to 1$, uniformly in $|l| \leqslant l_n$ as $n \to \infty$.
Applying Proposition \ref{Prop Rn limit1}
with $\varphi=\Phi_s$ and $\psi=\widehat \Psi^+_{s,\varepsilon}\widehat\rho_{\varepsilon^{2}}$,
it follows, from the Lebesgue dominated convergence theorem, that
\begin{align*}
&\limsup_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n } C_n^-(x,l) \\
&\quad \leqslant (1+ C_{\rho}(\varepsilon))
\pi_{s}(\Phi_s) \widehat \Psi^+_{s,\varepsilon}(0) \widehat\rho_{\varepsilon^{2}}(0)
\int_{|y| \geqslant \varepsilon} \rho_{\varepsilon^2}(y)dy
\to 0
\end{align*}
as $\varepsilon \to 0$.
Combining this with \eqref{Target Lowerbound An 1}-\eqref{Target lower bound Bn}, we get the lower bound
\begin{align} \label{Target lower bound An 02}
\liminf_{n\to\infty}\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
A_n(x,l)
\geqslant \pi_{s}(\Phi_s) \int_{\mathbb{R}}e^{-sy}\psi(y)dy.
\end{align}
Putting together \eqref{Thm2 upper result} and \eqref{Target lower bound An 02},
and noting that $\pi_{s}(\Phi_s) = \pi_{s}(\varphi r_s^{-1}) = \frac{\nu_s(\varphi)}{\nu_s(r_s)}$,
the result follows.
\end{proof}
In the sequel, we deduce Theorem \ref{main theorem3} from Theorem \ref{main theorem2}
using approximation techniques.
\begin{proof}[\textit{Proof of Theorem \ref{main theorem3}}]
Without loss of generality, we assume that $\varphi\geqslant 0$ and $\psi \geqslant 0$.
Let $\Psi_s(y)=e^{-sy}\psi(y)$, $y \in \mathbb{R}$.
We construct two step functions as follows:
for any $\eta \in(0,1)$, $m\in\mathbb{Z}$ and $y\in [m\eta, (m+1)\eta)$, set
\begin{align*}
\Psi^+_{s,\eta} (y)=\sup_{y'\in [m\eta, (m+1)\eta)} \Psi_s(y') \quad \mbox{and} \quad
\Psi^-_{s,\eta} (y)=\inf_{y'\in [m\eta, (m+1)\eta)} \Psi_s(y').
\end{align*}
By the definition of the direct Riemann integrability, the following two limits exist and are equal:
\begin{align} \label{DiretRiemStep 01}
\lim_{\eta\to 0^+}\int_{\mathbb{R}} \Psi^+_{s,\eta}(y)dy
=\lim_{\eta\to 0^+}\int_{\mathbb{R}} \Psi^-_{s,\eta}(y)dy.
\end{align}
Since $\Psi_s$ is directly Riemann integrable,
we have $M:= \sup_{y \in \mathbb{R}} \Psi_s(y)<+\infty$.
Let $\varepsilon \in (0, M\eta)$ be fixed.
Denote $I_{m}=[(m-1) \eta, m\eta)$,
$I_{m}^-=\big(m\eta-\frac{\varepsilon}{M 4^{|m|}}, m\eta \big)$,
and $I_{m}^+ = \big[m\eta, m\eta + \frac{\varepsilon}{M 4^{|m|}} \big)$, $m \in \mathbb{Z}$.
Set $k_m^+:= M 4^{|m|} \frac{ \Psi_{s,\eta}^{+}(m\eta)
-\Psi_{s,\eta}^{+}((m-1)\eta)} { \varepsilon }$, $m \in \mathbb{Z}$.
For the step function $\Psi_{s,\eta}^{+}$,
we modify it in a neighborhood of every possible discontinuity point $m \eta$, $m \in \mathbb{Z}$, as follows.
If $\Psi_{s,\eta}^{+}(m\eta) \geqslant \Psi_{s,\eta}^{+}((m-1)\eta)$,
then for any $y \in I_m\cup I_{m+1}$ we define
\begin{equation*}
\Psi_{s,\eta,\varepsilon}^{+}(y)=
\begin{cases}
\Psi_{s,\eta}^{+}((m-1)\eta),
& y \in I_m \setminus I_m^- \\
\Psi_{s,\eta}^{+}((m-1)\eta)
+ k_m^+ \left(y - m\eta + \frac{\varepsilon}{M4^{|m|}} \right),
& y \in I_{m}^- \\
\Psi_{s,\eta}^{+}(m\eta),
& y \in I_{m+1}.
\end{cases}
\end{equation*}
If $\Psi_{s,\eta}^{+}(m\eta) < \Psi_{s,\eta}^{+}((m-1)\eta)$,
then we define
\begin{equation*}
\Psi_{s,\eta,\varepsilon}^{+}(y)=
\begin{cases}
\Psi_{s,\eta}^{+}((m-1)\eta),
& y \in I_{m} \\
\Psi_{s,\eta}^{+}((m-1)\eta)
+ k_m^+ (y - m\eta ),
& y \in I_{m}^+ \\
\Psi_{s,\eta}^{+}(m\eta),
& y \in I_{m+1}\setminus I_{m}^+.
\end{cases}
\end{equation*}
From this construction, the non-negative continuous function $\Psi_{s,\eta,\varepsilon}^{+}$
satisfies $\Psi^+_{s,\eta} \leqslant \Psi_{s,\eta,\varepsilon}^{+}$
and $\int_{\mathbb{R}} [\Psi_{s,\eta,\varepsilon}^{+}(y) - \Psi^+_{s,\eta}(y)] dy < \varepsilon$.
Similarly, for the step function $\Psi_{s,\eta}^{-}$,
one can construct a non-negative continuous function $\Psi_{s,\eta,\varepsilon}^{-}$
which satisfies $ \Psi_{s,\eta,\varepsilon}^{-} \leqslant \Psi^-_{s,\eta}$ and
$\int_{\mathbb{R}} [\Psi^-_{s,\eta}(y) - \Psi_{s,\eta,\varepsilon}^{-}(y)] dy < \varepsilon$.
Consequently, in view of \eqref{DiretRiemStep 01}, we obtain that, for $\eta$ small enough,
\begin{align}\label{estimate approximation}
\int_{\mathbb{R}}|\Psi_{s,\eta,\varepsilon}^{+}(y)- \Psi_{s,\eta,\varepsilon}^{-}(y)|dy< 3\varepsilon.
\end{align}
For brevity, set $c_{s,l,n}=\sqrt{2\pi n}~\sigma_s e^{n \Lambda^*(q+l)}$
and $T_{n,l}^x = \log |G_n x|-n(q+l)$.
Recalling that $\Psi_s(y)=e^{-sy}\psi(y)$, we write
\begin{align} \label{mainresult3 estimate}
&\ \left|
c_{s,l,n} \mathbb{E}\left[\varphi(X_{n}^{x}) \psi(T_{n,l}^x ) \right]
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_s(y)dy\right| \nonumber\\
\leqslant & \
\left|
c_{s,l,n} \mathbb{E}\left\{\varphi(X_{n}^{x})e^{s T_{n,l}^x }
\left[\Psi_s(T_{n,l}^x )-\Psi_{s,\eta,\varepsilon}^{+}(T_{n,l}^x )\right]\right\}
\right| \nonumber\\
& \ +
\left| c_{s,l,n}
\mathbb{E}
\left[\varphi(X_{n}^{x})e^{s T_{n,l}^x }\Psi_{s,\eta,\varepsilon}^{+}(T_{n,l}^x )\right]
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_{s,\eta,\varepsilon}^{+}(y)dy\right| \nonumber\\
&\ +
\left|{r_{s}(x)}\pi_{s}(\varphi r_{s}^{-1})\int_{\mathbb{R}}\Psi_{s,\eta,\varepsilon}^{+}(y)dy
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_s(y)dy\right| \nonumber\\
= & \ J_1 + J_2 + J_3.
\end{align}
To control $J_2$, we shall verify the conditions of Theorem \ref{main theorem2}.
Noting that the function $y\mapsto e^{sy}\Psi_{s,\eta,\varepsilon}^{+}(y)$ is non-negative and continuous,
it remains to check the condition \eqref{conditiong002}.
By the construction of $\Psi_{s,\eta,\varepsilon}^{+}$
one can verify that there exists a constant
$\varepsilon_1 \in (0, \min\{M \eta, \eta/3\})$ such that
\begin{align} \label{Upper Bound Direct Riem}
\int_{\mathbb{R}}\sup_{y'\in\mathbb{B}_{\varepsilon_1}(y)} \Psi_{s,\eta,\varepsilon}^{+}(y') dy
\leqslant & \ 2 \eta \sum_{m\in \mathbb{Z}} \sup_{y\in [m\eta, (m+1)\eta)}\Psi_{s,\eta}^{+}(y) \nonumber\\
= & \ 2 \eta \sum_{m\in \mathbb{Z}} \sup_{y\in [m\eta, (m+1)\eta)}\Psi_{s}(y) <+\infty,
\end{align}
where the series is finite since the function $\Psi_s$ is directly Riemann integrable.
Hence, applying Theorem \ref{main theorem2} to
$y\mapsto e^{sy}\Psi_{s,\eta,\varepsilon}^{+}(y)$,
we get
\begin{align} \label{mainresult3 estimate I2}
\lim_{n\to \infty}
\sup_{x\in \mathcal{S}}\sup_{|l|\leqslant l_n }
J_2 = 0.
\end{align}
For $J_3$, recall that $\Psi_{s,\eta,\varepsilon}^{-}
\leqslant \Psi_s \leqslant \Psi_{s,\eta,\varepsilon}^{+}$.
Using \eqref{estimate approximation} and the fact that $r_s$ is uniformly bounded on $\mathcal{S}$,
we get that there exists a constant $C_s>0$ such that
\begin{align} \label{mainresult3 estimate I3}
\sup_{x \in \mathcal{S}} J_3 \leqslant C_s \varepsilon.
\end{align}
For $J_1$,
note that
$e^{sy}\Psi_{s,\eta,\varepsilon}^{-}(y)
\leqslant e^{sy}\Psi_s(y)\leqslant e^{sy}\Psi_{s,\eta,\varepsilon}^{+}(y)$, $y\in\mathbb{R}$.
Combining this with the positivity of $\varphi$, it holds that
\begin{align}
|J_1 &|
\leqslant
\left| c_{s,l,n}
\mathbb{E}
\left\{\varphi(X_{n}^{x})e^{s T_{n,l}^x}
\left[\Psi_{s,\eta,\varepsilon}^{+}(T_{n,l}^x)
-\Psi_{s,\eta,\varepsilon}^{-}(T_{n,l}^x)\right]\right\}\right| \nonumber\\
\leqslant & \
\left| c_{s,l,n}
\mathbb{E}
\left[\varphi(X_{n}^{x})e^{s T_{n,l}^x}\Psi_{s,\eta,\varepsilon}^{+}(T_{n,l}^x)\right]
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_{s,\eta,\varepsilon}^{+}(y)dy\right| \nonumber\\
& \ +
\left| c_{s,l,n}
\mathbb{E}
\left[\varphi(X_{n}^{x})e^{s T_{n,l}^x}\Psi_{s,\eta,\varepsilon}^{-}(T_{n,l}^x)\right]
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_{s,\eta,\varepsilon}^{-}(y)dy\right| \nonumber\\
& \ +
\left| \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_{s,\eta,\varepsilon}^{+}(y)dy
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_{s,\eta,\varepsilon}^{-}(y)dy\right| \nonumber\\
= & \ J_{11} + J_{12} + J_{13}. \nonumber
\end{align}
Using \eqref{mainresult3 estimate I2}, it holds that, as $n \to \infty$,
$J_{11} \to 0$, uniformly in $x \in \mathcal{S}$ and $|l| \leqslant l_n$.
For $J_{12}$,
note that the function $y\mapsto e^{sy}\Psi_{s,\eta,\varepsilon}^{-}(y)$ is non-negative and continuous.
By the construction of $\Psi_{s,\eta,\varepsilon}^{-}$,
similarly to \eqref{Upper Bound Direct Riem}, one can verify that
there exists $\varepsilon_2>0$ such that
$
\int_{\mathbb{R}}\sup_{y'\in\mathbb{B}_{\varepsilon_2}(y)} \Psi_{s,\eta,\varepsilon}^{-}(y') dy<+\infty.
$
We deduce from Theorem \ref{main theorem2} that
$J_{12} \to 0$ as $n \to \infty$,
uniformly in $x \in \mathcal{S}$ and $|l| \leqslant l_n$.
For $J_{13}$, we use \eqref{estimate approximation} to get that
$J_{13} \leqslant C_s \varepsilon$.
Consequently, we obtain that
$\limsup_{n \to \infty} \sup_{x \in \mathcal{S}} \sup_{|l| \leqslant l_n} J_1 \leqslant C_s \varepsilon$.
This, together with \eqref{mainresult3 estimate},
\eqref{mainresult3 estimate I2}-\eqref{mainresult3 estimate I3}, implies that
\begin{align*}
\limsup_{n\to \infty} \sup_{x\in \mathcal{S}} \sup_{|l|\leqslant l_n }
\Big|
c_{s,l,n} \mathbb{E}\left[\varphi(X_{n}^{x}) \psi(T_{n,l}^x ) \right]
- \bar r_{s}(x) \nu_s(\varphi) \int_{\mathbb{R}}\Psi_s(y)dy
\Big|
\leqslant C_s \varepsilon.
\end{align*}
Since $\varepsilon>0$ is arbitrary, we conclude the proof of Theorem \ref{main theorem3}.
\end{proof}
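The step-function approximation \eqref{DiretRiemStep 01} underlying the above proof can be illustrated numerically. The Python sketch below (illustrative only, with the sup/inf on each cell approximated on a finite grid) computes the upper and lower step sums for the directly Riemann integrable function $\Psi_s(y)=e^{-y^2}$ and checks that their gap shrinks linearly in $\eta$, both sums converging to $\int_{\mathbb{R}} e^{-y^2}\,dy = \sqrt{\pi}$.

```python
import math

def step_sums(psi, eta, m_range, samples=64):
    # Upper/lower step sums of psi over the cells [m*eta, (m+1)*eta);
    # the sup/inf on each cell is approximated on a grid of `samples` points.
    upper = lower = 0.0
    for m in range(-m_range, m_range):
        vals = [psi(m * eta + j * eta / samples) for j in range(samples + 1)]
        upper += eta * max(vals)
        lower += eta * min(vals)
    return upper, lower

psi = lambda y: math.exp(-y * y)   # directly Riemann integrable
target = math.sqrt(math.pi)        # = int e^{-y^2} dy
gaps = []
for eta in (1.0, 0.25, 0.05):
    up, lo = step_sums(psi, eta, m_range=int(12 / eta))
    gaps.append(up - lo)
print(gaps)  # the upper-minus-lower gap shrinks as eta -> 0
```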
\begin{proof}[\textit{Proof of Theorem \ref{Thm-Neg-s-Target}}]
Following the proof of Theorem \ref{main theorem2}, one can verify that the asymptotic \eqref{Asy-s-Posi}
holds true for $s<0$ small enough and for $\psi$ satisfying condition \eqref{condition g}.
The passage to a directly Riemann integrable function $\psi$ can be done by using the same approximation
techniques as in the proof of Theorem \ref{main theorem3}.
\end{proof}
\section{Proof of Theorems \ref{Thm-LDP-Norm}, \ref{Thm-LDP-Norm-Negs} and \ref{Theorem local LD001}}
\begin{proof}[\textit{Proof of Theorems \ref{Thm-LDP-Norm} and \ref{Thm-LDP-Norm-Negs}}]
We first give a proof of Theorem \ref{Thm-LDP-Norm}.
Since $\log |G_nx|\leqslant\log \|G_n\|$
and the function $\bar r_s$ is strictly positive and uniformly bounded on $\mathcal{S}$,
applying Theorem \ref{main theorem1} we get the lower bound:
\begin{align} \label{NormLow a}
\liminf_{n\to\infty}
\inf_{ |l| \leqslant l_n} \frac{1}{n} \log \mathbb{P}(\log\|G_n\| \geqslant n(q+l) )
\geqslant - \Lambda^*(q).
\end{align}
For the upper bound, since all matrix norms are equivalent,
there exists a positive constant $C$ which does not depend on the product $G_n$ such that
$\log \| G_n \| \leqslant \max_{ 1\leqslant i \leqslant d} \log |G_n e_i| + C,$
where $(e_i)_{1 \leqslant i \leqslant d}$ is the canonical orthonormal basis in $\mathbb{R}^d$.
From this inequality, we deduce that
\begin{align*}
\mathbb{P}(\log\|G_n\| \geqslant n(q+l) )
\leqslant \sum_{i=1}^d \mathbb{P} \Big( \log |G_n e_i| \geqslant n \big( q+l - C/n \big) \Big).
\end{align*}
Using Lemma \ref{lemmaCR001}, we see that
there exists a constant $C_s>0$ such that
$e^{n [\Lambda^*(q+l-C/n) - \Lambda^*(q+l)]} \leqslant C_s$, uniformly in $|l| \leqslant l_n$ and $n \geqslant 1$.
Again by Theorem \ref{main theorem1},
we obtain the upper bound:
\begin{align*}
\limsup_{n\to\infty}
\sup_{|l|\leqslant l_n}
\frac{1}{n} \log \mathbb{P}(\log\|G_n\| \geqslant n(q+l) )
\leqslant - \Lambda^*(q).
\end{align*}
This, together with \eqref{NormLow a}, proves Theorem \ref{Thm-LDP-Norm}.
Using Theorem \ref{Thm-Neg-s},
the proof of Theorem \ref{Thm-LDP-Norm-Negs} can be carried out in the same way.
\end{proof}
\begin{proof}[\textit{Proof of Theorem \ref{Theorem local LD001}}]
Without loss of generality, we assume that the function $\varphi$ is non-negative.
From Theorem \ref{main theorem3},
we deduce that there exists a
sequence $(r_n)_{n\geqslant 1}$, determined by the matrix law $\mu$,
such that $r_n\to 0$ as $n \to \infty$ and,
uniformly in $x \in \mathcal{S}$, $|l| \leqslant l_n$ and
$0 \leqslant \Delta \leqslant o(n)$,
it holds that
\begin{align}\label{LDDelta001}
&\mathbb{E} \Big[ \varphi(X_{n}^{x}) \mathbbm{1}_{ \{ \log|G_n x| \geqslant n(q+l) + a + \Delta \} } \Big] \nonumber \\
&\qquad \qquad \qquad = \frac{ \bar r_{s}(x) }{s\sigma_s\sqrt{2\pi n}}
e^{ -n \Lambda^*(q + l + \frac{a + \Delta}{n}) } \Big[ \nu_s(\varphi) + r_n \Big].
\end{align}
Taking the difference of \eqref{LDDelta001} with $\Delta=0$ and with $\Delta>0$, we get, as $n \to \infty$,
\begin{align*}
&\mathbb{E} \Big[ \varphi(X_{n}^{x})
\mathbbm{1}_{ \{ \log|G_n x| \in n(q+l) + [a,a+\Delta) \} } \Big] \nonumber \\
&\qquad \qquad \qquad = I_{\Delta}(n) \frac{ \bar r_{s}(x) }{s\sigma_s\sqrt{2\pi n}} e^{ -n \Lambda^*(q + l) }
\Big[ \nu_s(\varphi) + r_n \Big],
\end{align*}
where
\begin{align*}
I_{\Delta}(n): =
e^{n \Lambda^*(q + l) - n \Lambda^*(q + l + \frac{a}{n}) }
- e^{n \Lambda^*(q + l) - n \Lambda^*(q + l+ \frac{ a + \Delta }{n} )}.
\end{align*}
An elementary analysis using Lemma \ref{lemmaCR001}, together with the identity $(\Lambda^*)'(q) = s$
(which gives $n [\Lambda^*(q + l + \frac{a}{n}) - \Lambda^*(q + l)] \to sa$ uniformly in $|l| \leqslant l_n$), shows that
$$I_{\Delta}(n) \sim e^{-sa}(1 - e^{-s\Delta}),$$
uniformly in $|l| \leqslant l_n$ and $\Delta_n \leqslant \Delta \leqslant o(n)$,
for any $(\Delta_n)_{n\geqslant 1}$ converging to $0$
slowly enough ($\Delta_n^{-1} = o( r_n^{-1} )$).
This concludes the proof of Theorem \ref{Theorem local LD001}.
\end{proof}
\section{Introduction}
\label{sec:intro}
The recent discovery of the Higgs boson~\cite{Aad:2012tfa,
Chatrchyan:2012xdj} was the last missing piece to establish the
Standard Model of particle physics as an effective theory describing
interactions at $\mathcal{O}(1)$~TeV, thereby confirming the paradigm that nature can be described to high precision with perturbative quantum field theory in this energy range. However, many UV completions of the Standard Model predict fundamental modifications to that paradigm. In particular, they predict that the theory transitions from a weakly coupled into a strongly coupled regime not too far beyond the electroweak scale, e.g. in the range $10$--$100$~TeV. Examples of such theories\footnote{See also Ref.~\cite{Arkani-Hamed:2015vfh} for selected resonance cross sections and simplified models with mediators to strongly coupled sectors \cite{Englert:2016knz,Becciolini:2014lya} at 100 TeV proton-proton collisions.} are composite Higgs models \cite{Kaplan:1983sm,Caracciolo:2012je, Barnard:2013zea,Ferretti:2014qta}, little string theories \cite{Antoniadis:2011qw}, Higgsplosion \cite{Khoze:2017tjt,Khoze:2017lft,Khoze:2017uga} and classicalization \cite{Dvali:2010jz, Dvali:2012mx}.
While the first two result in the production of strongly coupled resonances (such as $Z^\prime$ or heavy scalar particles, which are usually short-lived and decay into a small number of Standard Model particles), the latter two examples result in the production of a multi-particle final state where the energy of the phenomenon is subsequently distributed over a plethora of particles, not unlike the $(B+L)$-violating sphaleron process of the Standard Model. If such processes can be realised with appreciable probabilities, separating signals with a small number of final-state objects from large QCD-induced Standard Model backgrounds is a significantly harder task in a collider environment than for final states with $\mathcal{O}(100)$ particles.
To access energies of $\mathcal{O}(10)$ TeV in fundamental interactions, protons have to be collided at $\mathcal{O}(100)$ TeV center-of-mass energies to account for the fact that the individual quarks and gluons in the proton only carry a fraction of the proton's energy. In the absence of a proton-proton collider that can access such energies, we instead focus on ultra-high-energy cosmic rays to study whether strongly coupled new physics can be probed in their interactions with the atmosphere. When a highly energetic proton hits the atmosphere, large momentum transfers occur which eventually give rise to an extended air shower of photons, hadrons and leptons. As a whole, this air shower is a highly complex object which can arguably obfuscate the hard process that initiates the shower.
In recent years, however, for high-energy events at the LHC, novel analysis techniques have been devised to study jets (complex collimated sprays of hadrons) and their substructure \cite{Marzani:2019hun}. The remarkable success of these techniques, e.g. in discriminating electroweak scale resonances from QCD-induced backgrounds, makes it plausible that one can apply similar techniques to the study of cosmic-ray air showers in separating Standard Model processes from decays of heavy resonances or multi-particle phenomena \cite{Brooijmans:2016lfv, Jho:2018dvt}. Previous work aimed at setting limits on new physics using cosmic-ray
interactions has predominantly focused on exploiting
primary and secondary neutrinos~\cite{Morris:1993wg, Ringwald:2001vk, Fodor:2003bn,Illana:2004qc}, hadronic shower particles~\cite{Illana:2006xg}, or very light resonances \cite{Yin:2009yt}. Here, instead, we study whether the detailed interactions of the hard process involving very heavy particles could leave an imprint strong enough to discriminate new physics from Standard Model QCD-induced backgrounds as measured at the Pierre Auger Observatory.
\begin{figure*}[tbh]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=0.92\textwidth]{figures/corsika_vs_herwig_1e7.pdf}
\label{fig:corsika_vs_herwig_1e7}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=0.92\textwidth]{figures/corsika_vs_herwig_1e11.pdf}
\label{fig:corsika_vs_herwig_1e11}
\end{subfigure}
\hfill
\caption{Comparison of the two approaches of using Herwig or CORSIKA for simulating the hard process for a cosmic-ray proton at energies of (left) $10^7$ GeV and (right) $10^{11}$ GeV through their effect on the observables $X_\text{max}$ and $\rho_\mu$.}
\label{fig:corsika_vs_herwig}
\end{figure*}
Thus, we use machine learning techniques to analyse the structure of air showers and to discriminate the kinematic imprints that heavy resonances would leave from those of QCD-induced processes.
First, we describe the simulation setup, where we use Herwig and HERBVI to generate the hard processes, followed by the simulation of the air shower using CORSIKA. Then, we show the effects of the new physics models on two air shower observables compared to the background QCD process. Finally, we train machine learning algorithms to classify the events and use this to derive simple estimates of the limits on the cross sections of these processes.
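The final classification step can be previewed with a deliberately simplified toy model. In the Python sketch below (standard library only), the event samples are synthetic Gaussian pseudo-events in the $(X_\text{max}, \rho_\mu)$ plane with hypothetical, hand-picked means and spreads (they are not taken from our simulation), and a logistic-regression classifier is trained on them by plain gradient descent.

```python
import math, random

random.seed(1)

def make_events(n, mean, spread):
    # Synthetic (X_max [g/cm^2], muon-density proxy) pseudo-events.
    return [(random.gauss(mean[0], spread[0]),
             random.gauss(mean[1], spread[1])) for _ in range(n)]

# Hypothetical, hand-picked parameters (illustrative only, not from simulation).
qcd    = make_events(1000, mean=(780.0, 1.30), spread=(60.0, 0.10))
signal = make_events(1000, mean=(700.0, 1.55), spread=(55.0, 0.10))

data = [(ev, 0) for ev in qcd] + [(ev, 1) for ev in signal]
random.shuffle(data)
train_set, test_set = data[:1500], data[1500:]

def features(ev):
    # Roughly standardise both observables (plus a bias term).
    return (1.0, (ev[0] - 740.0) / 60.0, (ev[1] - 1.4) / 0.15)

def predict(w, ev):
    z = sum(wi * fi for wi, fi in zip(w, features(ev)))
    return 1.0 / (1.0 + math.exp(-z))

w, lr = [0.0, 0.0, 0.0], 0.1
for _ in range(200):  # full-batch gradient descent on the logistic loss
    grad = [0.0, 0.0, 0.0]
    for ev, label in train_set:
        err = predict(w, ev) - label
        for i, fi in enumerate(features(ev)):
            grad[i] += err * fi
    for i in range(3):
        w[i] -= lr * grad[i] / len(train_set)

accuracy = sum((predict(w, ev) > 0.5) == (label == 1)
               for ev, label in test_set) / len(test_set)
print(f"toy test accuracy: {accuracy:.3f}")
```

Any classifier working on such well-separated Gaussian hypotheses approaches the Bayes-optimal accuracy; the realistic analysis uses the full simulation chain instead of these toy inputs.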
\section{Simulation Setup}
\label{sec:simulation}
In this section we describe all the steps in our simulation of cosmic-ray air showers from models of new physics.
\subsection{New Physics Processes}
To represent possible processes that can arise in non-perturbative solutions to, and UV completions of, the Standard Model, we consider a $(B+L)$-violating sphaleron process, a heavy gauge boson $Z^\prime$ decaying to two Standard Model photons, and a heavy scalar boson $h^\prime$ decaying to two Standard Model leptons. The masses of the $Z^\prime$ and $h^\prime$ resonances are 10 TeV, with widths of 100 GeV.
The sphaleron process we study includes a change in baryon and lepton numbers of $\Delta B = \Delta L = -3$ and is of the form $qq \to 7 \bar{q} + 3 \bar{l} + n_V W/Z + n_h h$, where $n_V$ and $n_h$ are the numbers of electroweak gauge bosons and Higgs bosons, respectively. Since it was suggested in Refs.~\cite{Ringwald:1989ee,Khoze:1990bm,Khoze:1991mx,Tye:2015tva} that the production cross section for sphalerons is enhanced if produced in association with many gauge bosons, in our simulation we select $n_V = 24$ and $n_h=0$. Such sphalerons could also be searched for at IceCube \cite{Ellis:2016dgb} or at high-energy proton-proton colliders \cite{Ellis:2016ast,Ringwald:2018gpv}, and if observable, they could improve our understanding of the underlying mechanism of electroweak symmetry breaking \cite{Spannowsky:2016ile}.
At the level of observability of a high-energy collision on the surface of our atmosphere, such a multi-particle production process mimics the kinematic features induced by processes from Higgsplosion or classicalization. Thus, we will take the sphaleron as representative of models with enhanced production mechanisms for elementary $2 \to n$ scatterings, where $n \gg 1$.
\subsection{Hard Interaction Simulation}
To simulate the hard interaction for the background QCD and heavy $Z^\prime$/$h^\prime$ processes, we use the Herwig 7 \cite{Bellm:2015jjp} Monte Carlo event generator. Herwig collides the two protons, computes the partonic interaction, and simulates the parton shower as well as the hadronic phase transition.
To generate the sphaleron processes, we use the HERBVI \cite{Gibbs:1994cw,Gibbs:1995bt} tool which is implemented in Herwig. The final-state particles after hadronisation are then passed to the air shower simulation.
\subsection{Air Shower Simulation}
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=0.92\textwidth]{figures/fragment_comparison_1e7.pdf}
\label{fig:fragment_comparison_1e7}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=0.92\textwidth]{figures/fragment_comparison_1e11.pdf}
\label{fig:fragment_comparison_1e11}
\end{subfigure}
\hfill
\caption{Comparison of the two nucleonic fragmentation models for (left) a cosmic-ray iron at $10^7$ GeV and (right) a cosmic-ray carbon at $10^{11}$ GeV through their effect on the observables $X_\text{max}$ and $\rho_\mu$.}
\label{fig:fragment_comparison}
\end{figure*}
A cosmic-ray air shower is the phenomenon of observable secondary particles produced by a high-energy cosmic ray colliding with the upper atmosphere. In the following we briefly describe the different stages of such a shower.
The process starts with a cosmic ray heading towards the Earth, which we call the primary particle. In principle any particle could be the primary particle in the collision. However, in this work we focus on nuclear matter, and as representatives of the table of elements we choose a proton, carbon and iron.
Usually ordinary high-energy QCD describes the hard interaction when a primary particle hits an air nucleus in the upper atmosphere. However, the probability for the particular process is determined by its cross section, and in this study we also consider the other processes described above for the hard interaction. Regardless of the physics guiding the hard interaction, there will be a QCD parton shower as well as a hadronic phase transition.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{c|ccc}
$E_\text{lab}^\text{primary}$ [GeV] & $E_\text{CM}^\text{p}$ [TeV] & $E_\text{CM}^\text{C}$ [TeV] & $E_\text{CM}^\text{Fe}$ [TeV] \\
\hline \hline
$10^7$ & 4.3 & 1.3 & 0.6 \\
$10^8$ & 13.7 & 4.0 & 1.8 \\
$10^9$ & 43.3 & 12.5 & 5.8 \\
$10^{10}$ & 137.0 & 39.5 & 18.3 \\
$10^{11}$ & 433.1 & 125.0 & 57.8 \\
\hline
\end{tabular}
\caption{Centre-of-mass collision energies corresponding to the primary particle energies considered.}
\label{tab:energies}
\end{table}
As the interaction in the upper atmosphere is directed downwards, a cascade of secondary interactions follows: this is the air shower. The secondary particles radiate bremsstrahlung and collide with other air molecules, feeding the cascade until the total energy is diluted and the shower dies away.
In experiments like the Pierre Auger Observatory, several detector types are used to capture the signal from an air shower. First, there are 1660 water-Cherenkov counters on the ground, which measure both the muons and the electromagnetic shower component. These detectors count high-energy muons and provide an estimate of the muon density distribution at ground level. In addition, there are fluorescence detectors \cite{Abraham:2009pm}, which measure the fluorescence emission of air molecules in the ultraviolet range as a function of atmospheric depth.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{figures/New_physics_model_comparison_rotated.pdf}
\caption{The $X_\text{max}$ and $\rho_\mu$ distributions of the new physics models vs the QCD background for each primary particle considered. Only the new physics processes which are kinematically allowed are shown. The axis ranges are held fixed in each row of plots to show the effect of increasing the energy of each primary.}
\label{fig:New_physics_model_comparison}
\end{figure*}
To analyse new physics in cosmic-ray air showers, we need to simulate the whole interaction chain described above. To do so, we process the particles generated from Herwig and HERBVI with the CORSIKA \cite{Heck:1998vt} air shower simulator. We use the GHEISHA \cite{Fesefeldt:1985yw} interaction model to treat the low-energy hadronic interactions, and the QGSJET \cite{Kalmykov:1997te} interaction model to treat high-energy hadronic interactions. A thinning procedure is applied to the shower simulation, which restricts the number of particles tracked at each shower stage in order to keep the computational cost manageable.
The incoming primaries that we simulate have zero inclination and interact at a height of 18~km, with energies ranging from $E_{\mathrm{lab}}=10^7$ GeV to $E_{\mathrm{lab}}=10^{11}$ GeV. The corresponding centre-of-mass (CM) collision energies for the hard interaction, which consists of a proton in the cosmic-ray nucleus interacting with a proton in the air nucleus, are given by $\sqrt{s}\simeq\sqrt{2m_{\mathrm{p}}E_{\mathrm{lab}}/A_{\mathrm{N}}}$. Here, $A_{\mathrm{N}}$ is the atomic weight of the primary nucleus: $A_{\mathrm{N}}=1$ for a proton, $A_{\mathrm{N}}=12$ for carbon and $A_{\mathrm{N}}=56$ for iron. For the carbon and iron nuclei, the energy is assumed to be evenly distributed amongst its nucleons. Table \ref{tab:energies} shows the values of the collision energies corresponding to the primary particles that we consider.
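As a cross-check of Table~\ref{tab:energies}, the centre-of-mass energies can be evaluated directly from this formula; a short Python sketch with $m_{\mathrm{p}}\simeq 0.938$~GeV:

```python
import math

M_P = 0.938  # proton mass in GeV

def e_cm_tev(e_lab_gev, a_nucleus):
    """CM energy (TeV) of the nucleon-nucleon collision for a primary of
    lab energy e_lab_gev shared evenly among a_nucleus nucleons."""
    return math.sqrt(2.0 * M_P * e_lab_gev / a_nucleus) / 1000.0

for e_lab in (1e7, 1e8, 1e9, 1e10, 1e11):
    p, c, fe = (e_cm_tev(e_lab, a) for a in (1, 12, 56))
    print(f"{e_lab:.0e} GeV: p {p:6.1f}  C {c:6.1f}  Fe {fe:5.1f} TeV")
# e.g. 1e9 GeV gives 43.3 TeV (p), 12.5 TeV (C), 5.8 TeV (Fe)
```

The values agree with the table to the quoted precision (small differences can arise from the value of $m_{\mathrm{p}}$ used).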
From the simulation results, we extract the number of muons $\rho_\mu$ observed at ground level, having survived through the thinning procedure. We do not apply a dethinning procedure to this observable \cite{2012APh....35..759S}. In addition, from the distribution $N(X)$ of charged particles as a function of the shower depth $X$, we can deduce the shower maximum $X_\text{max}$ by performing a $\chi^2$-fit of a Gaisser-Hillas function \cite{1977ICRC....8..353G} to the data. This function is given by,
\begin{equation}
\label{gaisser}
N(X) = N_{\mathrm{max}}\left(\frac{X-X_0}{X_{\mathrm{max}}-X_0}\right)^{\frac{X_{\mathrm{max}}-X_0}{\lambda}}e^{\frac{X_{\mathrm{max}}-X}{\lambda}}~,
\end{equation}
where $N_{\mathrm{max}}$, $X_{\mathrm{max}}$, $X_0$ and $\lambda$ are to be determined from the fit. In principle there is no reason why one should not include more observables usually studied in air shower experiments, such as the risetime. However, for the purposes of this study we limit it to just these two observables to determine whether these are sufficient for a meaningful discrimination between signal and background, and leave a more complicated analysis with more observables to future studies.
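For concreteness, the sketch below evaluates Eq.~\eqref{gaisser} on a synthetic, noiseless profile and recovers $X_{\mathrm{max}}$ with a one-parameter $\chi^2$ grid scan; the profile parameters are illustrative only, and in the real analysis all four parameters are fitted simultaneously:

```python
import math

def gaisser_hillas(x, n_max, x_max, x0, lam):
    # Longitudinal profile N(X) of Eq. (1); valid for x > x0
    return (n_max * ((x - x0) / (x_max - x0)) ** ((x_max - x0) / lam)
            * math.exp((x_max - x) / lam))

# Synthetic "data": a shower with its maximum at 700 g/cm^2 (illustrative)
TRUE = dict(n_max=1e6, x_max=700.0, x0=0.0, lam=70.0)
xs = [100.0 + 25.0 * i for i in range(45)]        # 100 .. 1200 g/cm^2
data = [gaisser_hillas(x, **TRUE) for x in xs]

# Chi^2 grid scan over x_max only (the other parameters held at truth)
def chi2(x_max):
    return sum((gaisser_hillas(x, TRUE["n_max"], x_max,
                               TRUE["x0"], TRUE["lam"]) - d) ** 2
               for x, d in zip(xs, data))

best = min((chi2(xm), xm) for xm in range(600, 801))[1]
print("fitted X_max =", best)   # -> 700
```

Note that the profile's maximum indeed sits at $X=X_{\mathrm{max}}$, as the derivative of Eq.~\eqref{gaisser} vanishes there.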
As a test of the reliability of using Herwig, with its capability for generating new physics processes, to generate the hard interaction and then processing the events with CORSIKA, we can also generate the full primary-to-air-shower chain for the QCD events with CORSIKA alone by using its own hard process simulation. We find that there is good agreement between them, and in Fig.~\ref{fig:corsika_vs_herwig} we show a comparison of the $\rho_\mu$ and $X_\text{max}$ distributions for the two approaches for a primary proton at both $10^7$ GeV and $10^{11}$ GeV, which spans the energy range we consider. The distributions are not identical, but the differences are small, and for the purposes of this study we neglect the small systematic uncertainties that may arise from using Herwig as the hard-process generator.
Since we are not only interested in ordinary proton-proton interactions, but actually study nucleus-air collisions as well, we need to model the additional nucleonic complexity. As the air is at rest and its binding energy is low compared to the energies we are interested in, we regard it as a stationary proton. However, we cannot use such a simple ansatz for the high-energy primary particle. In principle, we might view the interaction of a nucleus with a proton in the air as a proton-proton interaction. However, we have to take the nucleonic remainder of the now-destroyed primary into account. There are two extremes we can study. We could assume that the impact was so fast that the nucleus stays untouched except with one fewer proton. On the other hand, we could assume that the nucleus is destroyed and completely fragments into its proton and neutron components. A comparison of both approaches is shown in Fig.~\ref{fig:fragment_comparison} for a cosmic-ray iron at $10^7$ GeV and a cosmic-ray carbon at $10^{11}$ GeV. We find the differences between the two extremes are small, and so for the rest of this study we consider a completely fragmented remainder nucleus.
\section{Results and Limits}
\label{sec:results}
In this section we show the effect of the new physics models on the two air shower observables, and train machine learning algorithms to classify the events into signal and background classes. From this, we derive possible limits on the cross sections of the new physics processes.
\subsection{Classification of new physics events}
The $X_\text{max}$ and $\rho_\mu$ distributions for each new physics model in each energy and primary bin are presented in Fig.~\ref{fig:New_physics_model_comparison}, along with the background QCD distributions. The distributions shown have each been calculated from 1000 simulated points using a Gaussian kernel density estimate, with the cross showing the maximum of the distribution and the two contours enclosing 68\% and 95\% of the data. We also show the effect on the average $X_\text{max}$ and $\rho_\mu$ values as a function of the mass of the $Z^\prime$ in Fig.~\ref{fig:Mass_dependence} for a primary proton at $10^9$~GeV, and we expect this behaviour to be representative of variations in the mass for other bins and new physics models.
\begin{figure}[!t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/Mass_dependence.pdf}
\caption{Effect of varying the mass of the $Z^\prime$ on the average values of (lower panel) $X_\text{max}$ and (upper panel) $\rho_\mu$ for a primary proton at $10^9$~GeV.}
\label{fig:Mass_dependence}
\end{figure}
It is clear from the plots for carbon and iron in Fig.~\ref{fig:New_physics_model_comparison} that the new physics effects are washed out by the interactions of the remainder nucleus, and thus the parameter distributions are almost identical. Therefore, we only consider the four proton bins in the energy range $10^8-10^{11}$~GeV where the processes are kinematically possible, with the assumption that the energy and primary compositions can be determined independently of these parameters\footnote{We note that there is a relationship between $X_\text{max}$ and the composition of the primary. Indeed, one could be tempted to interpret variations of the primary composition as being potential signs of new physics. However, for the sake of this analysis we ignore these effects and their systematics, and assume that the primary compositions and energies are well-determined.}, so that these parameters can be used for the new physics classification.
\begin{figure}[!t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/ROC_curves.pdf}
\caption{ROC curves for the four machine learning algorithms trained to classify $Z^\prime$ vs QCD background events for a primary proton at $10^9$~GeV. The dotted line shows the chosen signal efficiency of $\epsilon_{\mathrm{S}}=0.8$.}
\label{fig:ROC_curves}
\end{figure}
In each of these energy and primary bins, we train a machine learning algorithm to independently classify the three new physics models vs the QCD background in the two-dimensional parameter space of $X_\text{max}$ and $\rho_\mu$. The machine learning algorithms that we use are a linear discriminant analysis (LDA), a quadratic discriminant analysis (QDA), a support vector machine (SVM) and a multilayer perceptron (MLP), and we use Scikit-learn \cite{Pedregosa:2012toh} for their implementation.
For each new physics model and bin combination, the 2000 data points (1000 signal and 1000 background) are split into training, validation and test sets. We perform hyperparameter scans over the important hyperparameters of each algorithm, and the algorithm which has the highest accuracy on the validation set is used in order to prevent overfitting the model to the training set. We then calculate the ROC curve for each new physics model on the test set, which allows one to easily obtain the background efficiency $\epsilon_{\mathrm{B}}$ for any chosen signal efficiency $\epsilon_{\mathrm{S}}$. Fig.~\ref{fig:ROC_curves} shows the ROC curves for the $Z^\prime$ vs QCD background classification for a primary proton at $10^9$~GeV for the four machine learning algorithms considered, along with the area-under-curve (AUC) scores for each algorithm. For this particular energy and primary bin, we find that the MLP has the highest AUC score. In Fig.~\ref{fig:MLP_output}, we show the output of the MLP on the signal and background test sets, where a larger output corresponds to a higher confidence from the MLP model that the particular event is a signal event. We see that the MLP is able to discriminate most events correctly.
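While the full analysis uses Scikit-learn, the logic of the classification step can be illustrated without dependencies. The sketch below trains a Fisher linear discriminant (the textbook form of LDA) on synthetic two-dimensional Gaussian stand-ins for the $(X_\text{max},\rho_\mu)$ distributions, then computes the AUC and reads off $\epsilon_{\mathrm{B}}$ at $\epsilon_{\mathrm{S}}\approx 0.8$; all numbers here are illustrative placeholders, not our simulation output:

```python
import random

random.seed(1)
N = 1000
# Illustrative 2-D Gaussians standing in for (X_max, rho_mu/1e6) of bkg vs signal
bkg = [(random.gauss(850.0, 40.0), random.gauss(8.0, 1.5)) for _ in range(N)]
sig = [(random.gauss(780.0, 40.0), random.gauss(12.0, 1.5)) for _ in range(N)]

def mean(pts):
    return [sum(p[i] for p in pts) / len(pts) for i in (0, 1)]

def cov(pts, m):
    c = [[0.0, 0.0], [0.0, 0.0]]
    for p in pts:
        d = (p[0] - m[0], p[1] - m[1])
        for i in (0, 1):
            for j in (0, 1):
                c[i][j] += d[i] * d[j] / len(pts)
    return c

mb, ms = mean(bkg), mean(sig)
cb, cs = cov(bkg, mb), cov(sig, ms)
sw = [[(cb[i][j] + cs[i][j]) / 2.0 for j in (0, 1)] for i in (0, 1)]  # pooled
det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
dm = (ms[0] - mb[0], ms[1] - mb[1])
# Fisher direction w = Sw^-1 (ms - mb), 2x2 inverse written out by hand
w = ((sw[1][1] * dm[0] - sw[0][1] * dm[1]) / det,
     (-sw[1][0] * dm[0] + sw[0][0] * dm[1]) / det)

score = lambda p: w[0] * p[0] + w[1] * p[1]
s_sig = sorted(score(p) for p in sig)
s_bkg = [score(p) for p in bkg]

# AUC = P(signal score > background score), by direct counting
auc = sum(s > b for s in s_sig for b in s_bkg) / (N * N)

# Background efficiency at a chosen signal efficiency of ~0.8
thr = s_sig[int(0.2 * N)]          # ~80% of signal scores lie above thr
eps_b = sum(b > thr for b in s_bkg) / N
print(f"AUC = {auc:.3f}, eps_B(eps_S=0.8) = {eps_b:.3f}")
```

The same read-off of $\epsilon_{\mathrm{B}}$ at fixed $\epsilon_{\mathrm{S}}$ is what enters the limit-setting procedure below.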
\begin{figure}[!t]
\centering
\includegraphics[width=0.85\columnwidth]{figures/MLP_output.pdf}
\caption{Output of the MLP trained to classify $Z^\prime$ vs QCD background events for a primary proton at $10^9$~GeV. Larger values of the output indicate a higher confidence from the MLP model that the particular event is a signal event.}
\label{fig:MLP_output}
\end{figure}
\subsection{Limits}
Following the analysis in Ref.~\cite{Brooijmans:2016lfv}, we can use a simple counting procedure in each proton bin to set a limit for the cross section of each new physics process in terms of the proton-air cross section. The probability for a new physics process to occur in the collision of a proton with the air can be expressed as,
\begin{equation}
\mathcal{P}_{\mathrm{new}} = A\frac{\sigma_{\mathrm{new}}}{\sigma_T(E_{\mathrm{lab}})}~,
\end{equation}
where $\sigma_T(E_{\mathrm{lab}})$ is the energy-dependent proton-air cross section, and $A=14.6$ is the average atomic mass of air. For a measured number of $N$ events, with a signal efficiency of $\epsilon_{\mathrm{S}}$ and a background efficiency of $\epsilon_{\mathrm{B}}$, we can set a 95\% confidence limit by requiring that $S/\sqrt{S+B}\gtrsim 2$, where $B=\epsilon_{\mathrm{B}}N$ and $S=\epsilon_{\mathrm{S}}N A \sigma_{\mathrm{new}}/\sigma_T$. Assuming that the number of background events is far greater than the number of signal events, this gives the limit,
\begin{equation}
\sigma_{\mathrm{new}} \lesssim \sqrt{\frac{4\epsilon_{\mathrm{B}}}{\epsilon_{\mathrm{S}}^2NA^2}}\sigma_T \equiv f \sigma_T~.
\end{equation}
The efficiencies $\epsilon_{\mathrm{S}}$ and $\epsilon_{\mathrm{B}}$ can be read off from the ROC curves. Choosing a signal efficiency of $\epsilon_{\mathrm{S}}=0.8$, the corresponding background efficiencies are shown in Table~\ref{tab:limits}, with the associated limit factor $f$ for a representative number of events $N$ in each bin, which reflects the suppression of the cosmic-ray flux as a function of energy \cite{Fenu:2017hlc,Gora:2018xty}. In cases where very strong separation is possible, the background efficiency is set to a minimum value of $\epsilon_{\mathrm{B}}=0.05$ to ensure that the limits are conservative estimates. We also show in Table~\ref{tab:limits} the derivable limits for the case where a systematic uncertainty of 5\% on the background is assumed. In this case, the 95\% confidence limit is obtained from requiring $S/\sqrt{S+B+\delta^2B^2}\gtrsim 2$, where $\delta=0.05$ is the systematic uncertainty.
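The limit fraction $f$ is straightforward to evaluate; a short sketch reproducing, for example, the sphaleron entries of Table~\ref{tab:limits} without systematic uncertainties:

```python
import math

def limit_fraction(eps_s, eps_b, n_events, a_air=14.6):
    """f such that sigma_new < f * sigma_T at ~95% CL,
    from S/sqrt(B) > 2 in the B >> S limit."""
    return math.sqrt(4.0 * eps_b / (eps_s ** 2 * n_events * a_air ** 2))

# Sphaleron row at E_lab = 1e8 GeV: eps_B = 0.05, N = 50000
print(f"{limit_fraction(0.8, 0.05, 50000):.2e}")   # -> 1.71e-04
```

This matches the corresponding table entry of $f = 0.00017$; the other no-systematics sphaleron entries follow by changing $N$.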
\begin{table}[!t]
\renewcommand{\arraystretch}{1.8}
\centering
\begin{adjustbox}{width=\columnwidth}
\begin{tabular}{cc|cc|cc|cc}
& & \multicolumn{2}{c|}{~Sphaleron} & \multicolumn{2}{c|}{$Z^\prime$} & \multicolumn{2}{c}{$h^\prime$} \\
\hline \hline
$E_\text{lab}^\text{P}$ [GeV] & $N$ & $\epsilon_{\mathrm{B}}$ & $f$ & $\epsilon_{\mathrm{B}}$ & $f$ & $\epsilon_{\mathrm{B}}$ & $f$\\
\hline
$10^8$ & $50000$~ & 0.05 & {\large $^{0.00017}_{0.0046}$} & 0.28 & {\large $^{0.00041}_{0.0024}$} & 0.14 & {\large $^{0.00029}_{0.0012}$}\\
$10^9$ & $10000$~ & 0.05 & {\large $^{0.00038}_{0.00057}$} & 0.26 & {\large $^{0.00087}_{0.0023}$} & 0.12 & {\large $^{0.00059}_{0.0012}$}\\
$10^{10}$ & $1000$~ & 0.05 & {\large $^{0.0012}_{0.0013}$} & 0.60 & {\large $^{0.0042}_{0.0066}$} & 0.05 & {\large $^{0.0012}_{0.0013}$}\\
$10^{11}$ & $50$~ & 0.05 & {\large $^{0.0054}_{0.0054}$} & 0.31 & {\large $^{0.013}_{0.014}$} & 0.09 & {\large $^{0.0073}_{0.0073}$}\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Background efficiencies $\epsilon_{\mathrm{B}}$ and derived limit fractions $f$ for the new physics cross sections for a selected signal efficiency $\epsilon_{\mathrm{S}}=0.8$, and representative numbers of events $N$. The limit fractions $f$ are shown for (upper number) no systematic uncertainty in the background and (lower number) a 5\% systematic uncertainty in the background.}
\label{tab:limits}
\end{table}
For proton energies in the range $10^8-10^{11}$~GeV, the proton-air cross section ranges from $\sim 450$~mb to $600$~mb \cite{Collaboration:2012wt}. Thus the limits on the new physics processes in Table~\ref{tab:limits} range from $\sim 80$~$\mu$b to $8$~mb. In Fig.~\ref{fig:cross_section_limits} we show the $95\%$ confidence limits on the new physics cross sections as a function of the number of events for a proton at $10^9$~GeV where no systematic uncertainty on the background is assumed.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/cross_section_limits.pdf}
\caption{95\% confidence limits on the cross sections of the new physics processes obtainable for a cosmic-ray proton at $10^9$~GeV, as a function of the number of events $N$ observed.}
\label{fig:cross_section_limits}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
Ultra-high-energy cosmic rays interacting with the atoms in the atmosphere are natural high-energy hadron colliders. In comparison with the LHC, the event rate recorded by the fluorescence detectors at Auger is much smaller. However, the collision energies recorded reach beyond $\mathcal{O}(100)$ TeV. Thus, Auger might become more sensitive than the LHC to new physics scenarios that are realised at energies outside the kinematic reach of the LHC, and with cross sections that are comparable to QCD interactions. Examples of such scenarios would be potentially unsuppressed sphaleron production or a strongly coupled dark sector.
We find that it is possible to set a model-independent limit on the cross sections of such new physics processes by considering their effects on cosmic-ray air showers via the observables $\rho_\mu$ and $X_{\mathrm{max}}$. Using multi-variate data analysis techniques, a strong separation between signal and QCD background interactions can be achieved. However, based on our classification approach, this is only possible for proton primary particles as the effect is washed out for heavier primaries.
\acknowledgments
We would like to thank Mikael Chala and Christoph Englert for interesting discussions, and Stephen Webster for help with Herwig.
\bibliographystyle{eplbib}
Spin electronics has attracted growing interest in recent years, because it is envisioned that exploiting both the spin and charge properties of the electron would open new perspectives for semiconductor device tech\-nology~\cite{Wolf:2001,Prinz:1998,Johnson:1993}. One essential requirement for spintronic devices is the efficient injection of spin-polarized carriers. Spin injection from a ferromagnetic metal into a semiconductor is attractive, because ferromagnetic metals such as Fe and Co have a relatively high Curie temperature. Spin LED structures provide a way to study the spin injection~\cite{Zhu:2001}. The spin-polarized carriers injected from the ferromagnetic metals radiatively recombine in the semiconductor, emitting circularly polarized light. It was found that Schottky or tunneling contacts between a metallic ferromagnet and a semiconductor can overcome the conductance mismatch obstacle and show carrier polarizations up to thirty percent~\cite{Erve:2004,Hanbicki:2002}.
However, from a device point of view, a major breakthrough still would be to have all-electrical devices. Recently, magnetic $p$-$n$ junction diodes have been proposed and theoretically analyzed~\cite{Zutic:2002,fabian:2002}. These devices, whose electronic properties depend on the spin polarization of the carriers, can offer opportunities to study the effective spin injection. In this paper, we demonstrate the fabrication of a novel magnetic $p$-$n$ junction diode, in which the spin injection between ferromagnetic metals and semiconductors is measured all-electrically.
A sketch of the band diagram for such a magnetic/nonmagnetic semiconductor $p$-$n$ junction in contact with a ferromagnetic metal is presented in Fig.~\ref{fig:structure}(a). The device operates as follows: a positive bias is applied between the $p$-GaMnAs and the ferromagnetic Fe layers. This places the magnetic $p$-$n$ junction in forward bias and the Fe/GaAs Schottky barrier in reverse bias. Consequently, spin-polarized electrons are injected from Fe into the bulk $n$-GaAs via the Schottky contact. Afterwards, the spin-polarized electrons are extracted from the $n$-GaAs across the depletion layer into the $p$-GaMnAs. If the relative magnetizations of the two magnetic electrodes are changed from parallel to antiparallel, the magnetic $p$-$n$ junction diode should display a GMR-like effect.
\begin{figure}[htbp]
\includegraphics[width=7cm]{Injstructure.eps}
\caption{
(a) Band-energy schemes for the magnetic $p$-$n$ junction with a Fe/GaAs Schottky barrier. The $p$ region (left) is magnetic GaMnAs layer, indicated by the spin splitting of the conduction band. Under applied forward bias, the spin-polarized electrons (solid circles) are injected from Fe (right) to the $n$-GaAs region (middle) and extracted across the depletion layer into the $p$ region. Up and down arrows indicate the magnetizations of the two electrodes. (b) The cross section of the device geometry with four-probe measurements.
}
\label{fig:structure}
\end{figure}
The preparation of the hybrid structure starts from the semiconductor heterostructure, which was grown on a semi-insulating GaAs substrate by molecular beam epitaxy at a growth temperature of $630\,^{\circ}\mathrm{C}$. It has the following layer sequence: 300nm GaAs buffer layer/ 300nm AlAs-GaAs super\-lattice/ 100nm GaAs/ 50nm Al$_{0.72}$Ga$_{0.28}$As/ 15nm $n^+$-GaAs($3\times10^{18}$cm$^{-3}$)/ 50nm $n$-GaAs($1\times10^{16}$cm$^{-3}$)/ 10nm $n^+$-GaAs($3\times10^{18}$cm$^{-3}$)/ 60nm $p$-Ga$_{0.94}$Mn$_{0.06}$As. In the sample structure, the 50nm $n$-GaAs layer is used as the transport region for spin-polarized electrons. The 10nm $n^+$-GaAs layer with a Si doping density of $3\times10^{18}$cm$^{-3}$ leads to a smaller depletion region between the $p$-GaMnAs and the bulk $n$-GaAs layer. The other 15nm $n^+$-GaAs layer is used to control the Schottky barrier interface resistivity to overcome the conductance mismatch between Fe and GaAs~\cite{Schmidt:2000,Rashba:2000,Fert:2001}. The GaMnAs layer was grown at a growth temperature of $250\,^{\circ}\mathrm{C}$ and shows a Curie temperature of 65K.
In order to realize the device, we use the epoxy bonding and stop-etching technique (EBASE)~\cite{Kreuzer:2002,Zenger:2004,Weck:1996}, which relies on the highly selective etching of GaAs and Al$_x$Ga$_{1-x}$As by suitable wet chemical etchants. The fabrication steps of the sample involve conventional optical lithography and lift-off procedures. A 100nm Au film deposited on the GaMnAs layer is used as contact to the soft magnetic electrode. Before depositing the second contact, the original sample is inverted, epoxy bonded onto a new semi-insulating host substrate, and cured at $80\,^{\circ}\mathrm{C}$ for 4 hours. The original substrate and the 300nm-thick superlattice are then removed. After the 100nm GaAs and 50nm AlGaAs layers are selectively etched by citric acid and 1\% HF respectively, the sample is transferred to the sputtering system immediately. The GaAs semiconductor surface is treated with H$^+$ plasma to remove the oxide layer, then a 12nm Fe layer and a 50nm Co magnetic pinning layer~\cite{Vincent:2003} are deposited on the $n$-GaAs layer as a hard magnetic electrode. Finally, photolithographic definition of a mesa structure provides access to the bottom voltage probes, and a thick SiO$_2$ film is deposited for electrical isolation. The cross section of the whole device for four-probe measurements is shown in Fig.~\ref{fig:structure}(b). Due to the extreme selectivity of HF ($\geq10^7$), the transport length of bulk $n$-GaAs was precisely defined by MBE growth only~\cite{Yabl:1987}. The electric and magnetotransport characterizations of the ferromagnet-based magnetic/nonmagnetic $p$-$n$ junction were carried out at room temperature and at 4.2K employing an HP Semiconductor Analyzer 4155A. The sample was mounted in a $^4$He cryostat with a superconducting coil, and the magnetic field was aligned in the plane of the hybrid structure.
\begin{figure}[htbp]
\includegraphics[width=7cm]{IVcurve.eps}
\caption{
Logarithmic plot of forward I-V characteristic of the magnetic $p$-$n$ junction diode at room temperature. The dashed lines represent the theoretical slopes of the curve, indicating the I-V characteristic dominated by the $p$-$n$ junction and Schottky diode respectively. The inset shows the I-V curves of the device at 4.2K (dashed line) and room temperature (solid line).
}
\label{fig:IV}
\end{figure}
The I-V curves of the magnetic $p$-$n$ junction diode measured at room temperature and 4.2K are shown in the inset in Fig.~\ref{fig:IV}. If we look closer at the logarithmic plot of the current vs applied voltage, the different slopes of the curve can be found, see Fig.~\ref{fig:IV}. The device studied here can be treated as a stack of a $p$-$n$ junction and a Schottky diode. For the $p$-$n$ junction, the current can be expressed as $J=J_{s1}[\exp(qV_1/k_BT)-1]$, where $J_{s1}$ is the saturation current density, $V_1$ is the bias for the $p$-$n$ junction, $q$ is the magnitude of electronic charge, $k_B$ is Boltzmann's constant and $T$ is the absolute temperature.
\begin{figure}[htbp]
\includegraphics[width=8.5cm]{hysMR.eps}
\caption{
(a) SQUID magnetic measurements of the GaMnAs, Fe and Fe/Co films at 10K. (b) Magnetoresistance of the device ($p$-GaMnAs/$n$-GaAs/Fe/Co) plotted as a function of magnetic field in the plane. The in-plane magnetic field is along [110] (b1), perpendicular to [110] (b2) and along [1\={1}0] (b3). The alignment of the electrodes is indicated by arrows in figure (b1). (c) Magnetoresistance curve of the reference sample ($p$-GaMnAs/$n$-GaAs/Fe). Constant resistance is observed.
}
\label{fig:HysMR}
\end{figure}
Since a heavily doped $n^+$-GaAs with a doping density of $N_d=3\times10^{18}$cm$^{-3}$ was used for the Schottky contact, the tunneling of electrons through the barrier plays an important role in the transport process. For the Schottky diode under reverse bias, the current density can be expressed as $J=J_{s2}\exp(qV_2/\varepsilon')$, where $J_{s2}$ is the saturation current density and $V_2$ is the bias for the Schottky diode~\cite{Pado:1966}. From the definitions $\varepsilon'=E_{00}\left[E_{00}/k_BT-\tanh\left(E_{00}/k_BT\right)\right]^{-1}$ and $E_{00}=(qh/4\pi)\left[N_d/(m^*\epsilon)\right]^{1/2}$, where $m^*$ is the effective mass and $\epsilon$ is the dielectric constant, we obtain $\varepsilon'=78.5$meV. Since $\varepsilon'$ is three times as large as $k_BT$ at room temperature, the voltage drop over the Schottky barrier increases faster than the voltage drop over the $p$-$n$ junction for increasing current in the device. At low voltage, the resistance of the Schottky barrier is much lower than that of the $p$-$n$ junction. Consequently, the I-V characteristic is dominated by the $p$-$n$ junction with a slope equal to $q/k_BT$. When the voltage is increased, the resistance of the Schottky barrier becomes comparable to that of the $p$-$n$ junction and cannot be neglected, and at high voltage the slope equals $q/\varepsilon'$, as shown in Fig.~\ref{fig:IV}.
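As a numerical cross-check of this tunneling scale, the sketch below evaluates the Padovani--Stratton expressions $E_{00}=(q\hbar/2)\sqrt{N_d/(m^*\epsilon)}$ and $\varepsilon'=E_{00}\left[E_{00}/k_BT-\tanh(E_{00}/k_BT)\right]^{-1}$, taking textbook GaAs parameters ($m^*=0.067\,m_e$, $\epsilon=12.9\,\epsilon_0$; these values are our assumption). It gives $\varepsilon'\approx 74$~meV, consistent with the quoted 78.5~meV up to the choice of material constants:

```python
import math

HBAR = 1.0546e-34      # J s
Q = 1.602e-19          # C
ME = 9.109e-31         # kg
EPS0 = 8.854e-12       # F/m
KT = 0.02585           # eV, at 300 K

n_d = 3e24             # donor density in m^-3 (= 3e18 cm^-3)
m_eff = 0.067 * ME     # GaAs effective electron mass (assumed)
eps_s = 12.9 * EPS0    # GaAs static permittivity (assumed)

# E00 = (q*hbar/2) * sqrt(Nd / (m* eps)), converted to eV
e00 = (Q * HBAR / 2.0) * math.sqrt(n_d / (m_eff * eps_s)) / Q
x = e00 / KT
eps_prime = e00 / (x - math.tanh(x))
print(f"E00 = {e00 * 1e3:.1f} meV, eps' = {eps_prime * 1e3:.1f} meV")
# -> E00 = 34.6 meV, eps' = 74.1 meV
```

In particular $\varepsilon'/k_BT\approx 3$, which is the ratio used in the argument above.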
Fig.~\ref{fig:HysMR}(a) shows the magnetic hysteresis loops at 10K for the GaMnAs (60nm), Fe (22nm) and Fe (12nm)/Co (50nm) layers. Using a Co layer to magnetically bias the Fe film, the coercivity of the 12nm Fe layer is 30mT, while the GaMnAs layer shows a coercivity of 3mT. In the magnetic field range between these coercivities, the ferromagnets' magnetization can be switched to the anti\-parallel state. Fig.~\ref{fig:HysMR}(b1) shows the magnetoresistance curve of the device measured at 4.2K, the applied magnetic field is in the plane and parallel to the [110] direction of GaMnAs. The negative magnetoresistance curve coincides reasonably well with the distinct coercive fields of the magnetization curves of GaMnAs and Fe/Co layers. With forward applied bias of 1450mV, a negative magnetoresistance of 1.02\% is found. The detailed mechanism of the negative magnetoresistance is still not clear. It is probably due to the antiferromagnetic $s$-$d$ exchange in GaMnAs~\cite{Myers:2005}. For parallel magnetic configuration, the barrier for crossing the depletion layer is larger for the spin up electrons (majority). Therefore, we find the negative magnetoresistance in our experiments.
In order to exclude the tunneling anisotropic magnetoresistance (TAMR) effect from our results~\cite{Ruester:2005}, the angle dependence of the magnetoresistance was studied. In the measurements, the angle of the in-plane magnetic field was changed with respect to the [110] crystallographic direction. Two magnetoresistance curves, with the magnetic field perpendicular to [110] and along [1\={1}0], are shown in Fig.~\ref{fig:HysMR}(b2) and (b3). Since the magnetoresistance does not change for different in-plane directions, we assume that TAMR plays no important role here. As another test of our results, measurements were made on a reference sample with a 22nm Fe layer in place of the 12nm Fe/50nm Co magnetic electrode. As shown in Fig.~\ref{fig:HysMR}(a), the Fe layer without Co pinning has a coercivity similar to that of the GaMnAs layer. Thus, the magnetic configurations of these two electrodes cannot switch from parallel to antiparallel. The magnetoresistance is shown in Fig.~\ref{fig:HysMR}(c), and the value of the resistance remains constant. This measurement of the reference sample gives further validation of our results.
\begin{figure}[htbp]
\includegraphics[width=6cm]{MRVcurve.eps}
\caption{
Magnetoresistance signal versus voltage. The solid line is a guide to the eye.
}
\label{fig:MRV}
\end{figure}
The bias voltage dependence of the magnetoresistance was also studied, and the results are shown in Fig.~\ref{fig:MRV}. We find that the magnetoresistance appears only at high forward bias on the device. Theoretical analysis of the magnetic $p$-$n$ junction shows that there is no spin injection at small biases, because the density of injected polarized carriers is still smaller than the equilibrium carrier density. Typically the bias should be above 1V so that spin-polarized carriers can be injected across the depletion layer~\cite{fabian:2002}. Our experimental results agree very well with the theory. Furthermore, we also find a peak of the negative magnetoresistance signal of 1.27\% at 1400mV bias. This effect can be explained by the interface resistance, which was analyzed by Fert~\cite{Fert:2001}. In his model, the highest magnetoresistance is obtained in the limit $r_N(t_N/l^N_{sf})\ll r_b^*\ll r_N(l^N_{sf}/t_N)$, where $r_N$ is the product of the semiconductor resistivity and the spin diffusion length ($l^N_{sf}$), $t_N$ is the semiconductor transport length and $r_b^*$ is the interface resistance. Since the resistivities of GaMnAs and GaAs are almost the same in our device, the interface resistivity is dominated by the Schottky barrier between Fe and GaAs. When the voltage applied to the Fe/GaAs Schottky barrier varies, the interface resistivity increases or decreases accordingly; hence the magnetoresistance reaches a maximum value in between.
In conclusion, we have fabricated a novel magnetic $p$-$n$ junction diode, in which the spin injection from ferromagnetic metals to semiconductors can be measured all-electrically. Our study shows the spin-polarized electrons can only be injected when a high forward bias is applied on the $p$-$n$ junction, which agrees well with the theoretical prediction. Furthermore, the bias dependence of the magnetoresistance has also been discussed.
\begin{acknowledgments}
The authors wish to thank Matthias Sperl for the SQUID measurements. One of the authors (Peifeng Chen) would like to thank J.~Fabian and Shidong Wang for fruitful discussions.
\end{acknowledgments}
\section{Privacy Analysis for Counters}
\label{counter-details}
\citet{chan-counter} show that ${{\bf Counter}}\xspace(\varepsilon, T)$ is
$\varepsilon$-differentially private with respect to single changes in the input
stream, when the stream is generated non-adaptively. For our application, we
require privacy to hold for a large number of streams whose joint-sensitivity
can nevertheless be bounded, and whose entries can be chosen adaptively. To
show that {{\bf Counter}}\xspace is also private in this setting (when $\varepsilon$ is set
appropriately), we first introduce some differential privacy notions.
We will make use of a basic differentially private mechanism originally due to
\cite{DMNS06}.
\begin{theorem}[\cite{DMNS06}]
For a function $f:\mathcal{D}\rightarrow \mathbb{R}$, let
\[
\Delta_1 = \max_{D, D' \in \mathcal{D}}\frac{|f(D)-f(D')|}{|\{i : D_i \neq
D'_i\}|}
\]
denote the $\ell_1$ sensitivity of $f$. Then the \emph{Laplace Mechanism}
which on input $D$ outputs $f(D) + \textrm{Lap}(\Delta_1/\varepsilon)$ is
$\varepsilon$-differentially private. Here, $\textrm{Lap}(x)$ denotes a random
variable drawn from the Laplace distribution with variance $2x^2$.
\end{theorem}
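For concreteness, the Laplace mechanism can be sketched in a few lines of Python (ours, for illustration only, not the implementation of \cite{DMNS06}; \texttt{l1\_sensitivity} brute-forces $\Delta_1$ over a small explicit domain, which is feasible only for toy examples):

```python
import math
import random

def l1_sensitivity(f, domain):
    """Brute-force the Delta_1 of the theorem over an explicit domain:
    the largest change in f per changed coordinate of the database."""
    delta = 0.0
    for D in domain:
        for Dp in domain:
            hamming = sum(1 for a, b in zip(D, Dp) if a != b)
            if hamming > 0:
                delta = max(delta, abs(f(D) - f(Dp)) / hamming)
    return delta

def laplace_noise(scale, rng=random):
    """Sample Lap(scale) by inverting the CDF (variance 2*scale**2)."""
    u = rng.random()
    while u == 0.0:          # avoid log(0) on the boundary
        u = rng.random()
    u -= 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(f, D, delta1, eps, rng=random):
    """Release f(D) + Lap(delta1 / eps), an eps-DP answer."""
    return f(D) + laplace_noise(delta1 / eps, rng)
```

For the counting query $f = \sum_i D_i$ on $\{0,1\}^n$, the brute-force sensitivity is $1$, matching the usual calibration of the noise scale to $1/\varepsilon$.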
\subsection{Composition}
An important property of differential privacy is that it degrades gracefully
when private mechanisms are composed together, even adaptively. We recall the
definition of an adaptive composition experiment due to
\citet{dwork-composition}.
\begin{definition}[Adaptive composition experiment]
\label{comp-exp}
\leavevmode
\begin{itemize}
\item Fix a bit $b \in \{0, 1\}$ and a class of mechanisms $\cM$.
\item For $t = 1 \dots T$:
\begin{itemize}
\item The adversary selects two databases $D^{t, 0}, D^{t, 1}$ and a
mechanism $\cM_t \in \cM$.
\item The adversary receives $y_t = \cM_t(D^{t, b})$
\end{itemize}
\end{itemize}
\end{definition}
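The experiment above is easy to phrase as code. The following sketch (ours, an illustration rather than part of the formal definition) treats mechanisms as arbitrary callables; a private instantiation would plug in a noise-adding mechanism, while the noiseless \texttt{sum} in the usage below merely makes the adaptive bookkeeping visible:

```python
def composition_experiment(adversary, b, T):
    """Run the adaptive composition experiment of the definition above.
    `adversary(t, view)` returns two databases and a mechanism; the
    experiment feeds y_t = M_t(D^{t,b}) back into the growing view."""
    view = []
    for t in range(1, T + 1):
        d0, d1, mech = adversary(t, view)
        view.append(mech(d1 if b else d0))
    return view
```

Note that the adversary's choices at step $t$ may depend on all earlier outputs $y_1, \ldots, y_{t-1}$, which is exactly what distinguishes adaptive from non-adaptive composition.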
The ``output'' of an adaptive composition experiment is the view of the
adversary over the course of the experiment. The experiment is said to be
$\varepsilon$-differentially private if
\[
\max_{S \subseteq \cR}\frac{\Pr[V^0 \in S]}{\Pr[V^1 \in S]} \leq
\exp(\varepsilon),
\]
where $V^0$ is the view of the adversary with $b = 0$, $V^1$ is the view of the
adversary with $b = 1$, and $\cR$ is the range of outputs.
Any algorithm that can be described as an instance of this adaptive composition
experiment (for an appropriately defined adversary) is said to be an instance of
the class of mechanisms $\cM$ under \emph{adaptive $T$-fold composition}. We now
state a straightforward consequence of a composition theorem of
\citet{dwork-composition}.
\begin{lemma}[\citet{dwork-composition}]
\label{lem:l1-comp}
Let $\Delta_1 \geq 0$. The class of $\frac{\varepsilon}{\Delta_1}$-private
mechanisms satisfies $\varepsilon$-differential privacy under adaptive
composition, if the adversary always selects databases satisfying
\[
\sum_{t = 1}^T \left|D^{t, 0} - D^{t, 1}\right| \leq \Delta_1.
\]
\end{lemma}
In other words, the privacy parameter of each mechanism should be calibrated for
the total distance between the databases, over the whole composition (the {\em
$\ell_1$ sensitivity}).
\subsection{Binary Mechanism}
We reproduce Binary mechanism here in order to refer to its internal workings in
our privacy proof.
First, it is worth explaining the intuition of the {{\bf Counter}}\xspace. Given a bit stream
$\sigma \colon [T] \rightarrow \{0,1\}$, the algorithm releases the counts
$\sum_{i=1}^t \sigma(i)$ for each $t$ by maintaining a set of partial sums
$\Sigma[i, j] \coloneqq \sum_{t=i}^j \sigma(t)$. More precisely, each partial sum
covers a dyadic range of the form $\Sigma[k \cdot 2^i + 1, (k+1) \cdot 2^i]$, whose length is a power of $2$.
In this way, we can calculate the count $\sum_{i=1}^t \sigma(i)$ by summing at
most $\log{t}$ partial sums: let $i_1 < i_2 \ldots < i_m$ be the indices of
non-zero bits in the binary representation of $t$, so that
\begin{equation*}
\label{eq:binary}
\sum_{i=1}^t \sigma(i) = \Sigma[1, 2^{i_m}] + \Sigma[2^{i_m}+1, 2^{i_m}
+ 2^{i_{m-1}}] + \ldots + \Sigma[t-2^{i_1} + 1, t].
\end{equation*}
Therefore, we can view the algorithm as releasing partial sums of
different ranges at each time step $t$ and computing the counts is
simply a post-processing of the partial sums. The core algorithm is
presented in \Cref{alg:binary-mechanism}.
\begin{algorithm}[h!]
\caption{${{\bf Counter}}\xspace(\varepsilon, T)$}
\begin{algorithmic}\label{alg:binary-mechanism}
\STATE{\textbf{Input:} A stream $\sigma\in \{0,1\}^T$}
\STATE{\textbf{Output: } $B(t)$ as estimate for $\sum_{i=1}^t
\sigma(i)$ for each time $t\in [T]$}
\FORALL{$t\in [T]$}
\STATE Express $\displaystyle t = \sum_{j=0}^{\log{t}} 2^j\text{Bin}_j(t)$.
\STATE Let $i \leftarrow \min_j\{\text{Bin}_j(t) \neq 0\}$
\STATE $a_i\leftarrow \sum_{j < i} a_j + \sigma(t) $,
$(a_i = \Sigma[t-2^i + 1, t])$
\FOR{$0\leq j \leq i - 1$}
\STATE Let $a_j \leftarrow 0$ and $\hat{a_j} \leftarrow 0$
\ENDFOR
\STATE Let $\hat{a_i} = a_i + \Lap(\log(T) /\varepsilon)$
\STATE Let $\displaystyle B(t) = \sum_{i: \text{Bin}_i(t)\neq 0} \hat{a_i}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
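A self-contained sketch of the mechanism (ours; the names \texttt{binary\_counter} and \texttt{noise} are illustrative and not from \citet{chan-counter}) makes the bookkeeping explicit. Injecting the noise through a callable lets one check that, with the noise zeroed out, the released counts are exactly the prefix sums:

```python
import math
import random

def _lap(scale, rng):
    """Sample Lap(scale) via the inverse CDF."""
    u = rng.random()
    while u == 0.0:
        u = rng.random()
    u -= 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def binary_counter(stream, eps, noise=None, rng=random):
    """Sketch of Counter(eps, T): maintain dyadic partial sums a_i and
    release each running count B(t) as a sum of at most log T noisy
    partial sums hat_a_i.  Override `noise` (e.g. with lambda: 0.0)
    to check the exact, noiseless bookkeeping."""
    T = len(stream)
    if noise is None:
        scale = math.log(max(T, 2)) / eps
        noise = lambda: _lap(scale, rng)
    a, a_hat, out = {}, {}, []
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1          # lowest set bit of t
        a[i] = sum(a.get(j, 0) for j in range(i)) + stream[t - 1]
        for j in range(i):                      # levels folded into a_i
            a.pop(j, None)
            a_hat.pop(j, None)
        a_hat[i] = a[i] + noise()
        out.append(sum(a_hat[j]                 # B(t) over set bits of t
                       for j in range(t.bit_length()) if (t >> j) & 1))
    return out
```

With zero noise the output coincides with the running counts, confirming that the noisy release is simply the exact dyadic decomposition plus independent Laplace noise at each level.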
\subsection{Counter Privacy Under Adaptive Composition}
We are now ready to prove that the prices released by our mechanism
satisfy $\varepsilon$-differential privacy.
\counterpriv*
\begin{proof}
\citet{chan-counter} show this for a single sensitivity 1 counter for a
non-adaptively chosen stream. We here show the generalization to multiple
counters run on adaptively chosen streams with bounded $\ell_1$ sensitivity,
and bound the $\ell_1$ sensitivity of the set of streams produced by our
algorithm. We will actually show that the sequence of noisy partial sums released by
{{\bf Counter}}\xspace satisfies $\varepsilon$-differential privacy. This claim is only stronger: the
running counts are computed by post-processing these noisy partial sums.
To do so, we first define an adversary for the adaptive composition
experiment (\Cref{comp-exp}), and then show that the view of this adversary is
precisely the sequence of noisy partial sums. The composition theorem
(\Cref{lem:l1-comp}) will then show that the sequence of noisy partial sums
are differentially private with respect to a change in a bidder's valuation.
Let the two runs $b = 0, 1$ correspond to any two neighboring valuations
$(v_i, v_{-i})$ and $(v'_i, v_{-i})$ that differ only in bidder $i$'s
valuation. We first analyze the view on all of the
$\text{counter}(j)$ for $j = 1,\ldots, k$.
The adversary will operate in phases. There are two kinds of phases, which we
label $P_t$ and $P'_t$: one phase per step of the good counters, and one phase
per step of the halting condition counter. Both counters run from time $1$ to
$nT$, so there are $2nT$ phases in total.
At each point in time, the adversary maintains histories $ \{b_i\}, \{b'_i\}$
of all the bids prior to the current phase, and histories $\{e_i\}, \{e'_i\}$
of all prior reports to special counter $\text{counter}(0)$, when bidder $i$
has valuation $v_i, v'_i$ respectively. These histories are initially empty.
Let us consider the first kind of phase. One bidder bids per step of the
counter, so one bidder bids in each of these phases. At each step of the
experiment, the adversary observes a partial sum. Suppose the adversary is
in phase $P_t$. Having observed the previous partial sums, the adversary can
simulate the action of the current bidder $q$ from the histories of previous
bids by first computing the prices indicated by the previous partial sums. The
adversary will compute $q$'s bid when the valuations are $(v_i, v_{-i})$, and
when the valuations are $(v_i', v_{-i})$. Call these two bids $b_t, b_t'$
(which may be $\perp$ if $q$ is already matched in one or both of the
histories).
Note that for bidders $q \neq i$, it is always the case that $b_t = b_t'$.
This holds by induction: it is clearly true when no one has bid, and bidder
$q$'s decision depends only on her past bids, the prices, and her valuation.
Since these are all independent of bidder $i$'s valuation, bidder $q$ behaves
identically.
After the adversary calculates $b_t, b_t'$, the adversary simulates update and
release of the counters. More precisely, the adversary spends phase $P_t$
requesting a set of partial sums
\[
\Sigma = \{ \sigma^j_{I} \mid j \in [k], I \in S_t \},
\]
where $S_t \subseteq [1, nT]$ is a set of intervals ending at $t$,
corresponding to partial sums that {{\bf Counter}}\xspace releases at step $t$.
For each $\sigma^j_I \in \Sigma$, $D^0, D^1 \in \{0, 1\}^{I}$ are defined by
\[
D^0_k = \left\{
\begin{array}{ll}
1 &: \text{if } b_k = j \\
0 &: \text{otherwise}
\end{array}
\right.
\]
and similarly for $D^1$, with bid history $\{b'_i\}$. Informally, a database
$D$ for $\sigma^j_I$ encodes whether a bidder bid on good $j$ at every
timestep in $I$. The adversary will pick $\cM$ to sum the database and add
noise $\Lap(1/\varepsilon_0)$, an $\varepsilon_0$-differentially private operation.
Once the partial sums for $P_t$ are released, the adversary will advance to
the next phase.
Now, suppose the adversary is in the second kind of phase, say $P'_t$. This
corresponds to a step of the halting condition counter. We use exactly the
same construction as above: the adversary will request the partial sums
corresponding to each timestep. The adversary will simulate each bidder's
action by examining the history of bids and prices. Now suppose the two runs
differ in bidder $i$'s valuation. Following the same analysis, the reports to
this halting condition counter differ only in bidder $i$'s reports.
With this definition, the view of the adversary on database $\{D^0\}$ and
$\{D^1\}$ is precisely the noisy partial sums when the valuations are $(v_i,
v_{-i})$ and $(v_i', v_{-i})$, respectively. So, it suffices to show that
these views have almost the same probability.
We apply \Cref{lem:l1-comp} by bounding the distance between the databases for
counter$(1)$ to counter$(k)$. Note that the sequence of databases $ \{D^0 \},
\{D^1\}$ chosen correspond to streams of bids that differ only in bidder $i$'s
bid, or streams of reports to $\text{counter}(0)$ that differ only in bidder
$i$'s report. This is because the bid histories $\{b_t\}, \{b_t'\}$ and report
histories $\{e_t\}, \{e_t'\}$ differ only on timesteps where $i$ acts. Thus,
it suffices to focus on bidder $i$ when bounding the distance between these
databases.
Consider a single good $j$, and suppose $c_j$ of $i$'s bids on good $j$ differ
between the histories. Each of bidder $i$'s bids on $j$ show up in $\log (nT)$
databases, so
\[
\sum |D^0_j - D^1_j| \leq c_j \log nT,
\]
where the sum is taken over all databases corresponding to good
$j$. The same is true for the halting condition counter: if there are $c_0$
reports that differ between the histories, then
\[
\sum |D^0_0 - D^1_0| \leq c_0 \log nT.
\]
Since we know that a bidder can bid at most $T$ times over $T$ proposing
rounds, and will report at most $T$ times, we have $\ell_1$ sensitivity
bounded by
\[
\Delta_1 \leq c_0 \log n T + \sum_j c_j \log nT \leq 2T \log nT.
\]
By \Cref{lem:l1-comp}, setting
\[
\varepsilon_0 = \frac{\varepsilon}{2T\log nT}
\]
suffices for $\varepsilon$-differential privacy, and this is precisely running
each {{\bf Counter}}\xspace with privacy level $\varepsilon' = \varepsilon/2T$.
\end{proof}
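The $\log nT$ factor in the sensitivity bound reflects the fact that each time step is contained in at most $\lfloor \log_2 T \rfloor + 1$ of the dyadic partial sums that {{\bf Counter}}\xspace materializes. A small sketch (a hypothetical helper of ours, for illustration) enumerates those partial sums:

```python
def dyadic_intervals(t, T):
    """All partial sums Sigma[k*2^i + 1, (k+1)*2^i] completed within T
    time steps that contain time step t (1 <= t <= T); one per level."""
    out = []
    i = 0
    while (1 << i) <= T:
        lo = ((t - 1) >> i) << i          # block start minus one, level i
        hi = lo + (1 << i)
        if hi <= T:                        # only blocks the counter completes
            out.append((lo + 1, hi))
        i += 1
    return out
```

Since a changed bid perturbs one entry of one stream, it perturbs at most one partial-sum database per level, which is the $\log nT$ multiplier in the $\ell_1$ bound above.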
\section{Reconstruction Lower Bound} \label{recons-details}
Here, we detail a basic lower bound about differential privacy. Intuitively, it
is impossible for an adversary to recover a database substantially better than random guessing
from observing the output of a private mechanism. The theorem is folklore.
\reconstructionbeta*
\begin{proof}
Fix a database $D\in \{0,1\}^n$ and sample an index $i$ uniformly at
random from $[n]$. Let $D'$ be a neighboring database of $D$ that
differs at the $i$-th bit. By assumption, we have that with
probability at least $1-\beta$
\[
\| \cM(D) - D \|_1 \leq \alpha n,
\qquad
\| \cM(D') - D' \|_1 \leq \alpha n.
\]
Since $i$ is chosen uniformly, we then have
\[
\Pr[ \cM(D)_i = D_i ] \geq (1 - \alpha)(1 - \beta),
\qquad
\Pr[ \cM(D')_i = D'_i ] \geq (1 - \alpha)(1 - \beta).
\]
It follows that $\Pr[ \cM(D')_i = D_i ] \leq 1 - (1 - \alpha)(1- \beta)$
because $D_i \neq D_i'$. By definition of $(\varepsilon, \delta)$-differential
privacy, we get
\[
(1 - \alpha)(1 - \beta) \leq \Pr[\cM(D)_i = D_i ] \leq
e^\varepsilon \Pr[\cM(D')_i = D_i] + \delta \leq e^\varepsilon (1 - (1
- \alpha)(1 - \beta)) + \delta.
\]
Then we have
\[
1 - \alpha \leq \frac{e^\varepsilon + \delta}{(1+ e^\varepsilon)(1 - \beta)},
\]
as desired.
\end{proof}
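The final display admits a simple sanity check: at $\varepsilon = \delta = \beta = 0$ it forces $1 - \alpha \leq 1/2$, i.e., even against a perfectly private mechanism the adversary can still achieve the trivial accuracy of random bit-guessing, but no more. A sketch of the arithmetic (ours, for illustration):

```python
import math

def recovery_upper_bound(eps, delta, beta):
    """The bound of the final display, rearranged: any (eps, delta)-DP
    mechanism that is alpha-accurate with probability 1 - beta must have
    1 - alpha <= (e^eps + delta) / ((1 + e^eps)(1 - beta))."""
    return (math.exp(eps) + delta) / ((1.0 + math.exp(eps)) * (1.0 - beta))
```

As expected, the recoverable fraction grows monotonically as the privacy guarantee is relaxed (larger $\varepsilon$ or $\delta$).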
\section{Extensions}
\label{sec:extensions} In this section, we extend our algorithm in two ways.
First, we show how to compute approximately max-welfare allocations
under general gross substitutes valuations. We also show how to modify and
analyze the algorithm for computing max-weight matchings in the
\emph{unweighted} case when $v_{ij} \in \{0,1\}$ to get \emph{multiplicative}
rather than additive approximation, which can be substantially better in the case when
$\OPT$ is small. (More generally, the approximation depends on the minimum
nonzero valuation.)
\subsection{Gross Substitute Valuations}
Let us first introduce some notation. Let $\Omega = 2^G$ denote the space of
bundles (i.e., subsets of goods). Like previous sections, let $k$
be number of types of goods, and let $s$ be the supply of each type of good.
Let $d$ denote the {\em market size}---the total number of goods, including
identical goods, so $d = ks$.
(We remark that we assume each good has the same supply $s$ only for
convenience. In general, goods may have different supplies, in which case $s$
denotes the \emph{minimum} supply of any good; $d$ then need not equal $ks$.)
We assume each bidder has a valuation function on bundles, $v_i : \Omega
\rightarrow [0,1]$, and that this valuation satisfies the gross substitutes
condition (\Cref{def-gs}).
Like before, we simulate $k$ ascending price auctions in rounds. Bidders now
maintain a bundle of goods that they are currently allocated to, and bid on one
new good each round. For each good in a bidder's bundle, the bidder keeps track
of the count of bids on that good when it was added to the bundle. When the current
count ticks past the supply, the bidder knows that he has been outbid for that
good.
The main subtlety is in how bidders decide which goods to bid on. Namely, each
bidder considers goods in his bundle to be fixed in price (i.e., bidders ignore
the price increment of at most $\alpha$ that might have occurred after winning
the item). Goods outside of his bundle (even if identical to goods in his
bundle) are evaluated at the true price. We call these prices the bidder's {\em
effective} prices, so each bidder bids on an arbitrary good in his
most-preferred bundle at the effective prices. The full algorithm is given in
\Cref{gs-auction}.
\begin{algorithm}[ht!]
\caption{${{\bf PAlloc}}\xspace(\alpha, \rho, \varepsilon)$ (with Gross Substitute Valuations)}
\begin{algorithmic}\label{gs-auction}
\STATE{\textbf{Input:} Bidders' gross substitute valuations on the
bundles $\{ v_i : \Omega \rightarrow [0, 1] \}$}
\STATE{\textbf{Initialize: for bidder $i$ and good $j$,}
\begin{mathpar}
T = \frac{10}{\alpha \rho},
\and
\varepsilon' = \frac{\varepsilon}{2T},
\and
E = \frac{2\sqrt{2}}{\varepsilon'} (\log nT)^{5/2} \log \left(
\frac{4k}{\gamma} \right) + 1,
\and
m = 2E + 1,
\and
\text{counter}(0) = {{\bf Counter}}\xspace(\varepsilon', nT),
\and
\text{counter}(j) = {{\bf Counter}}\xspace(\varepsilon', nT),
\and p_j = c_j = 0,
\and
d_g = 0,
\and
g(i) = \emptyset \and \text{for every bidder\ } i
\end{mathpar}
}
\STATE{$\mathbf{Propose}$ $T$ times; \textbf{Output:} prices $p$ and allocation $g$.}
\vspace{1ex}
\hrule
\begin{minipage}{0.49\textwidth}
\vspace{1ex}
\STATE{\textbf{Propose:}}
\FORALL{bidders $i$}
\FORALL{goods $g \in g(i)$}
\IF{$c_{type(g)} - d_g \geq s - m$}
\STATE{Remove $g(i) := g(i) \setminus g$}
\ENDIF
\ENDFOR
\STATE{Let $p_0$ be the original cost of $g(i)$.}
\STATE{Let $\omega^* \in \displaystyle\argmax_{\omega \supsetneq g(i)} {v_{i}(\omega) -
p(\omega \setminus g(i)) - p_0}$ arbitrary.}
\IF{$v_{i}(\omega^*) - p(\omega^* \setminus g(i)) - p_0 \geq v_i(g(i)) -
p_0$}
\STATE{Let $j \in \omega^* \setminus g(i)$ arbitrary.}
\STATE{Save $d_j := c_{type(j)}$}
\STATE{Add $g(i) := g(i) \cup j$ and $\textbf{Bid}(\mathbf{e_j})$}
\ENDIF
\STATE{\textbf{else} $\textbf{Bid}(\mathbf{0})$}
\ENDFOR
\STATE{\textbf{CountUnsatisfied}}
\end{minipage}
\hspace{1ex}
\vrule
\hspace{1ex}
\begin{minipage}{0.49\textwidth}
\vspace{1ex}
\STATE{\textbf{Bid:} On input bid vector $\mathbf{b}$}
\FORALL{goods $j$}
\STATE{Feed $\mathbf{b}_j$ to $\text{counter}(j)$.}
\STATE{Update count $c_j := \text{counter}(j)$.}
\IF{$c_j$ is a multiple of $s - m$}
\STATE{Update $p_j := p_j + \alpha$.}
\ENDIF
\ENDFOR
\STATE{}
\STATE{\textbf{CountUnsatisfied:}}
\FORALL{bidders $i$}
\IF{$i$ wants to continue bidding}
\STATE{Feed 1 to counter$(0)$}
\ENDIF
\STATE{\textbf{else} Feed 0 to counter$(0)$}
\ENDFOR
\STATE{Halt if counter$(0)$ increases by less than $\rho d - 2E$}
\end{minipage}
\end{algorithmic}
\end{algorithm}
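The effective-prices bidding rule can be sketched as follows (an illustrative brute-force demand computation of ours, exponential in the number of goods; in ${{\bf PAlloc}}\xspace$ the bidder would then bid on an arbitrary good in the set difference between the returned bundle and the current one):

```python
from itertools import combinations

def best_bundle(value, goods, prices, bundle, frozen_cost):
    """Sketch of the bidding rule above (illustrative brute force).
    Goods already in `bundle` are priced at `frozen_cost`, their total
    price when acquired; goods outside the bundle are evaluated at the
    current `prices`.  Returns a most-preferred bundle extending
    `bundle`, together with its utility at the effective prices."""
    others = [j for j in goods if j not in bundle]
    best, best_u = bundle, value(bundle) - frozen_cost
    for r in range(1, len(others) + 1):
        for extra in combinations(others, r):
            omega = bundle | frozenset(extra)
            u = value(omega) - sum(prices[j] for j in extra) - frozen_cost
            if u > best_u:
                best, best_u = omega, u
    return best, best_u
```

Freezing the prices of held goods is what makes a bidder's bundle monotone under the gross substitutes condition: price increases on goods outside the bundle never make a held good less desirable at its frozen price.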
Privacy is very similar to the case for matchings.
\begin{theorem} \label{gs-priv}
${{\bf PAlloc}}\xspace(\alpha, \rho, \varepsilon)$ satisfies $\varepsilon$-joint differential privacy.
\end{theorem}
\iffull
\begin{proof}
Essentially the same proof as \Cref{matching-privacy}.
\end{proof}
\fi
\begin{theorem} \label{gs-welfare}
Let $0<\alpha< n/d$, and $g$ be the allocation computed by ${{\bf PAlloc}}\xspace(\alpha/3,
\alpha/3, \varepsilon)$, and let $\OPT$ be the optimum max welfare. Then, if $d \geq
n$ and
\[
s \geq \frac{12E' + 3}{\alpha} = O \left( \frac{1}{\alpha^3 \varepsilon}
\cdot \polylog\left( n, k, \frac{1}{\alpha}, \frac{1}{\gamma} \right) \right),
\]
the allocation $g$ has social welfare at least
\[
\sum_{i=1}^n v_i(g(i)) \geq \OPT - \alpha d,
\]
with probability at least $1 - \gamma$, where
\[
E' = \frac{360 \sqrt{2}}{\alpha^2
\varepsilon}\left(\log\left(\frac{90n}{\alpha^2} \right)
\right)^{5/2}\log\left(\frac{4k}{\gamma}\right) + 1.
\]
\end{theorem}
\begin{remark}
In comparison with \Cref{matching-welfare}, \Cref{gs-welfare} requires a similar
constraint on supply, but promises welfare only $\OPT - \alpha d$ rather than
$\OPT - \alpha n$. Since $\OPT \leq n$, this guarantee is only non-trivial for
$\alpha \leq n/d$, and so the supply has a polynomial dependence on the total
size of the market, $d$. In contrast, \Cref{matching-welfare} guarantees good
welfare when the supply has a logarithmic dependence on the total number of
goods in the market.
However, we note that if bidders demand bundles of size at most $b$, then we can
improve the above welfare bound to $\OPT - \alpha n b$. Note that this is
independent of the market size $d$, and strictly generalizes the matching case
(where $b = 1$).
\end{remark}
Similar to \Cref{matching-eq}, we define an \emph{approximate allocation
equilibrium} as a prerequisite for showing our welfare guarantee.
\begin{definition} \label{alloc-eq}
A price vector $p\in [0,1]^k$ and an assignment $g\colon [n] \rightarrow
\Omega$ of bidders to goods is an {\em $(\alpha, \beta, \rho)$-approximate
allocation equilibrium} if
\begin{enumerate}
\item for all but $\rho d$ bidders, $v_i(g(i)) - p(g(i)) \geq
\max_{\omega \in \Omega} v_i(\omega) - p(\omega) - \alpha |g(i)|$;
\item the number of bidders assigned to any good is at most $s$; and
\item each overdemanded good clears except for at most $\beta$ supply.
\end{enumerate}
\end{definition}
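A brute-force checker for this definition can be sketched as follows (ours, for small instances only; we read ``overdemanded'' as $p_j > 0$, an assumption consistent with how the condition is used in the proofs of this section):

```python
from itertools import combinations

def all_bundles(goods):
    """All subsets of the distinct goods, as frozensets."""
    out = [frozenset()]
    for r in range(1, len(goods) + 1):
        out.extend(frozenset(c) for c in combinations(goods, r))
    return out

def is_approx_allocation_eq(values, g, p, s, alpha, beta, rho, d):
    """Brute-force check of the definition above (illustration only,
    exponential in the number of goods).  values[i] maps bundles to
    [0,1], g[i] is bidder i's bundle, p[j] a price, s the supply."""
    goods = sorted(p)
    price = lambda omega: sum(p[j] for j in omega)
    bundles = all_bundles(goods)
    # condition 1: all but rho*d bidders hold an alpha*|g(i)|-preferred bundle
    unhappy = 0
    for i, v in values.items():
        best = max(v(w) - price(w) for w in bundles)
        if v(g[i]) - price(g[i]) < best - alpha * len(g[i]):
            unhappy += 1
    if unhappy > rho * d:
        return False
    # conditions 2 and 3: no good over-assigned; priced goods nearly clear
    for j in goods:
        n_j = sum(1 for i in g if j in g[i])
        if n_j > s or (p[j] > 0 and n_j < s - beta):
            return False
    return True
```

The usage below checks a one-good market: at a clearing price the allocation passes, while an inflated price leaves every bidder envious and violates condition 1.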
The following lemmas show that our algorithm finds an approximate allocation
equilibrium.
\iffull
We prove the last two requirements first.
\else
(We defer proofs to the full version.)
\fi
\begin{lemma} \label{gs-supply}
Assume all counters have error at most $E$ throughout the run of
${{\bf PAlloc}}\xspace(\alpha,\rho,\varepsilon)$. Then, the number of bidders assigned to any good is at most
$s$, and each overdemanded good clears except for at most $\beta$ supply,
where
\[
\beta = 4E + 1 = O \left( \frac{1}{\alpha \rho \varepsilon } \cdot \polylog
\left( n, k, \frac{1}{\alpha}, \frac{1}{\rho}, \frac{1}{\gamma} \right)
\right).
\]
\end{lemma}
\iffull
\begin{proof}
Consider any good $j$. If it is underdemanded, the counter corresponding to
$j$ never rises above $s - m$. Hence, by our conditioning, at most $s - m + E <
s$ bidders are assigned to $j$. If $j$ is overdemanded, the same reasoning as
in \Cref{matching-acc} shows that the number of bidders matched to $j$ lies in
the range $[s - m - 2E, s - m + 2E + 1]$. By the choice of $m$, the upper
bound is at most $s$. Likewise, at least $s - m - 2E = s - (4E + 1)$ bidders
are assigned to $j$. Setting $\beta = 4E + 1$ gives the desired bound.
\end{proof}
\fi
\begin{lemma} \label{gs-alpha}
We call a bidder who wants to bid more {\em unsatisfied}; otherwise, a bidder
is {\em satisfied}. At termination of ${{\bf PAlloc}}\xspace(\alpha, \rho, \varepsilon)$, all satisfied
bidders are matched to a bundle $g(i)$ that is an $\alpha \cdot |g(i)|$-most
preferred bundle.
\end{lemma}
\iffull
\begin{proof}
We first claim that a bidder's bundle $g(i)$ remains a subset of their most
preferred bundle at the effective prices, i.e., with prices of goods in $g(i)$
set to their price at time of assignment, and all other goods taking current
prices.
The claim follows by induction on the number of timesteps (ranging from $1$ to
$nT$). The base case is clear. Now, assume the claim holds up to time $t$.
There are three possible cases:
\begin{enumerate}
\item If the price of a good outside $g(i)$ is increased, $g(i)$ remains
part of a most-preferred bundle by the gross substitutes condition.
\item If the price of a good in $g(i)$ is increased, some goods may be
removed from the bundle leading to a new bundle $g'(i)$. The only goods
that experience an effective price increase lie outside of $g'(i)$, so
$g'(i)$ remains a subset of a most-preferred bundle at the effective
prices.
\item If a bidder adds to their bundle, $g(i)$ is a subset of the
most-preferred bundle by definition.
\end{enumerate}
Hence, a bidder becomes satisfied precisely when $g(i)$ is equal to the
most-preferred bundle at the effective prices. The true price is at most
$\alpha$ more than the effective price, so the bidder must have an $\alpha
|g(i)|$-most preferred bundle at the true prices.
\end{proof}
\fi
\begin{lemma} \label{gs-unsatisfied}
Suppose all counters have error at most $E$ throughout the run of
${{\bf PAlloc}}\xspace(\alpha, \rho, \varepsilon)$. Then at termination, all but $\rho d$ bidders
are satisfied, so long as
\[
n \leq d
\quad \text{and} \quad
d \geq \frac{8E}{\rho} = \Omega \left( \frac{1}{\alpha \rho^2 \varepsilon} \cdot
\polylog \left( n, k, \frac{1}{\alpha}, \frac{1}{\rho},
\frac{1}{\gamma} \right) \right).
\]
\end{lemma}
\iffull
\begin{proof}
Note that if the unsatisfied-bidders counter increases by less than $\rho d -
2E$, then at most $\rho d$ bidders are actually unsatisfied. So, it remains
to handle the case where the counter increases by at least $\rho d - 2E$
bidders each round.
In this case, at least $\rho d - 4E$ bidders are unsatisfied at the beginning
of the round. They may not actually bid when their turn comes, because the
prices may have changed. Let the number of bids among all bidders be $B$, and
suppose we run for $R$ rounds. We expect at least $\rho d - 4E$ bids per
round, so $R(\rho d - 4E) - B$ is a lower bound on the number of times a
bidder is unsatisfied, but fails to bid.
In the matching case, if a bidder is unsatisfied at the beginning of the round
but fails to bid during their turn, this must be because the prices have risen
too high. Since prices are monotonic increasing, such a bidder will never be
unsatisfied again.
In contrast, the gross substitutes case is slightly more complex. Bidders who
are unsatisfied at the beginning of a round and don't bid on their turn may
later become unsatisfied again. Clearly, this happens only when the bidder
loses at least one good after they decline to bid: if they don't lose any
goods, then the prices can only increase after they decline to bid. Thus, they
will have no inclination to bid in the future.
There are at most $n$ events of a bidder dropping out entirely, since each bidder drops out at most once. Thus, the
number of times bidders report wanting to reenter the bidding is at least $R
(\rho d - 4E) - n - B$. Since a bidder loses at least one good each time they
reenter, the number of reentries is at most the number of bids $B$. Hence,
the number of bids in $R$ rounds is at least
\begin{equation} \label{eq:gs-bid}
B \geq \frac{ R (\rho d - 4E) - n }{2}.
\end{equation}
Now, let $s' = s - m = s - (2E + 1)$ be the effective supply and consider how
many bids are possible. Each of the $k$ types of goods will accept at most
$s' + 2E + 2 = s + 1$ bids at each of $1/\alpha$ price levels, so there are at
most $k(s + 1)/\alpha = (d + k)/\alpha$ possible bids.
Combining \Cref{eq:gs-bid} with the upper bound $B \leq (d + k)/\alpha$, we find
\[
R \leq \frac{1}{\alpha} \left( \frac{2(d + k) + \alpha n}{\rho d - 4E}
\right) := T_0,
\]
so taking $T \geq T_0$ suffices to ensure that the algorithm halts with no
more than $\rho d$ bidders unsatisfied. Assuming $\rho d \geq 8E$ and $d \geq
n$,
\begin{mathdisplayfull}
T_0 \leq \fullfrac{10d}{\alpha \rho d} = \fullfrac{10}{\alpha \rho} = T.
\end{mathdisplayfull}%
The requirement on $n$ and $d$ is then
\begin{equation*}
d \geq \frac{8E}{\rho} = \Omega \left( \frac{1}{\alpha \rho^2 \varepsilon} \cdot
\polylog \left( n, k, \frac{1}{\alpha}, \frac{1}{\rho},
\frac{1}{\gamma} \right) \right)
\quad \text{and} \quad
n \leq d,
\end{equation*}
as desired.
\end{proof}
\fi
\begin{lemma} \label{gs-acc}
With probability at least $1 - \gamma$, ${{\bf PAlloc}}\xspace(\alpha, \rho, \varepsilon)$ computes an
$(\alpha, \beta, \rho)$-approximate allocation equilibrium, where
\[
\beta = O \left( \frac{1}{\alpha \rho \varepsilon } \cdot \polylog
\left(n, k,
\frac{1}{\alpha}, \frac{1}{\rho}, \frac{1}{\gamma} \right) \right),
\]
so long as
\[
d \geq \frac{8E}{\rho} = \Omega \left( \frac{1}{\alpha \rho^2 \varepsilon} \cdot
\polylog \left( n, k, \frac{1}{\alpha}, \frac{1}{\rho},
\frac{1}{\gamma} \right) \right) \text{and }
n \leq d.
\]
\end{lemma}
\iffull
\begin{proof}
Condition on the error for each counter being at most $E$ throughout the run
of the algorithm. By \Cref{counter-error}, this holds for any single counter
with probability at least $1 - \gamma/2k$. By a union bound, this holds for
all counters with probability at least $1 - \gamma$. The theorem follows by
\Cref{gs-supply,gs-alpha,gs-unsatisfied}.
\end{proof}
\fi
Now, it is straightforward to prove the welfare theorem (\Cref{gs-welfare}).
\iffull
\begin{proof}
The proof follows \Cref{matching-welfare} closely. By
\Cref{gs-acc}, $(g,p)$ is a $(\alpha/3, \beta,
\alpha/3)$-approximate allocation equilibrium, where $\beta = 4E' + 1$. Then all but
$\alpha d/3$ bidders are satisfied and get a bundle $g(i)$ that is $(\alpha/3)
|g(i)|$-optimal; let this set of bidders be $B$. Note that $\sum_i |g(i)| \leq
d$. Let $g^*$ be any other allocation. Then,
\begin{align*}
\sum_{i \in B} v_i (g(i)) - p(g(i)) &\geq \sum_{i \in B} v_i(g^*(i)) -
p(g^*(i)) - \frac{\alpha }{3} |g(i)| \\
\sum_{i \in B} v_i (g^*(i)) - v_i(g(i)) &\leq \sum_{i\in B} p(g^*(i)) -
p(g(i)) + \alpha d/3
= \sum_{j \in G} p_j (N^*_j - N_j) + \alpha d/3
\end{align*}
where $N_j$ is the number of units of good $j$ sold in $g$ and $N_j^*$ is the
number sold in $g^*$. If $p_j > 0$, we know $N_j \geq s - \beta$,
hence $N_j^* - N_j \leq \beta \leq \alpha s/3$. Since also $p_j\leq 1$ for
each good $j$, we have
\[
\sum_{j\in G} p_j(N_j^* - N_j) \leq \sum_{j : p_j > 0} \beta \leq
\frac{\alpha}{3} \sum_j s = \alpha d/3.
\]
Furthermore, at most $\alpha d/3$ bidders are left unsatisfied in
the end; these bidders contribute at most $\alpha d /3$ welfare to the optimal
matching since valuations are bounded by $1$. Putting it all together,
\[
\sum_{i} v_i (g^*(i)) - v_i(g(i)) \leq \alpha d/3 + \alpha d/3 +
\alpha d/3 = \alpha d.
\]
The stated supply bound $s$ follows directly from \Cref{gs-acc}.
\end{proof}
\else
The proof follows \Cref{matching-welfare} quite closely; we defer the proof to
the full version.
\fi
\subsection{Multiplicative Approximation to Welfare}
In certain situations, a close variant of {{\bf PMatch}}\xspace (\Cref{alg:matching}) can give a
multiplicative welfare guarantee. In this section, we will work with matchings
and we will assume that the value of the maximum weight matching $\OPT$ is known.
(It is possible to privately estimate this quantity to high accuracy.) Our
algorithm is exactly the same as {{\bf PMatch}}\xspace, except with a different halting
condition: rather than count the number of unmatched bidders each round, count
the number of bids per round. Once this count drops below a certain threshold,
halt the algorithm.
More precisely, we use a function $\mathbf{CountBids}$ (\Cref{alg:count-bids})
in place of $\mathbf{CountUnsatisfied}$ in \Cref{alg:matching}.
\begin{algorithm}[h!]
\caption{Modified Halting Condition $\mathbf{CountBids}$}
\begin{algorithmic}\label{alg:count-bids}
\STATE{\textbf{CountBids:}}
\FORALL{bidders $i$}
\IF{$\mu(i) \neq \perp$ \text{ and } $c_{\mu(i)} - d_i \geq s - m$}
\STATE{Let $\mu(i) := \emptyset$}
\ENDIF
\IF{$i$ bid this round}
\STATE{Feed $1$ to counter$(0)$.}
\ENDIF
\STATE{\textbf{else} Feed 0 to counter$(0)$.}
\ENDFOR
\IF{\text{counter}$(0)$ increases by less than $\frac{\alpha
\OPT}{2\lambda} - 2E$}
\STATE{Halt; For each $i$ with $\mu(i) = \emptyset$, let $\mu(i) =
\perp$}
\ENDIF
\end{algorithmic}
\end{algorithm}
\begin{theorem} \label{mult-acc}
Suppose bidders have valuations $\{ v_{ij} \}$ over goods such that
\begin{mathdisplayfull}
\min_{ v_{ij} > 0 } v_{ij} \geq \lambda.
\end{mathdisplayfull}%
Then \Cref{alg:matching}, with
\begin{mathdisplayfull}
T = \fullfrac{24}{\alpha^2}
\end{mathdisplayfull}%
rounds, using stopping condition $\mathbf{CountBids}$ (\Cref{alg:count-bids})
in place of $\mathbf{CountUnsatisfied}$, and stopped once the total bid
counter increases by less than
\begin{mathdisplayfull}
\fullfrac{\alpha \OPT}{2 \lambda} - 2E
\end{mathdisplayfull}%
bids in a round, satisfies $\varepsilon$-joint differential privacy
and outputs a matching that has welfare at least $(1 -
O(\alpha/\lambda))\OPT$, so long as
\[
s = \Omega \left( \frac{1}{\alpha^3 \varepsilon} \cdot \polylog \left( n, k,
\frac{1}{\alpha}, \frac{1}{\gamma} \right) \right)
\]
\[
\text{and} \qquad
\OPT = \Omega \left( \frac{\lambda}{ \alpha^3 \varepsilon} \cdot \polylog
\left( n, k, \frac{1}{\alpha}, \frac{1}{\gamma} \right) \right).
\]
\end{theorem}
\iffull
\begin{proof}
Privacy follows exactly as in \Cref{matching-privacy}. We first show that at
termination, all but $\alpha \OPT /\lambda$ bidders are matched to an
$\alpha$-approximate favorite item. The analysis is very similar to
\Cref{matching-acc}. Note that every matched bidder is matched to an
$\alpha$-approximate favorite good, since it was an exactly favorite good at
the time of matching, and the price increases by at most $\alpha$. Thus, it
remains to bound the number of unsatisfied bidders at termination.
Condition on all counters having error bounded by $E$ at all time steps; by
\Cref{counter-error} and a union bound over counters, this happens with
probability at least $1 - \gamma$. Like above, we write $s' = s - m$ for the
effective supply of each good. Let us first consider the case where the
algorithm stops early. If the total bid counter changes by less than
$\frac{\alpha \OPT}{2\lambda} - 2E$, the true number of bids that round is at
most
\[
Q = \frac{\alpha \OPT}{2\lambda}.
\]
We will upper bound the number of unsatisfied bidders at the end of the round.
Note that the number of unsatisfied bidders at the end of the round is the
number of bidders who have been rejected in the current round. Suppose there
are $N$ goods that reject bidders during this round. The total count on these
goods must be at least
\[
(s' - 2E) \cdot N - Q
\]
at the start of the round, since each counter will increase by at most $2E$
due to error, and there were at most $Q$ bids this round. By our conditioning,
there were at least
\[
(s' - 2E) \cdot N - Q - 2EN
\]
bidders matched at the beginning of the round. Since bidders are only matched
when their valuation is at least $\lambda$, and the optimal weight matching is
$\OPT$, at most $\frac{\OPT}{\lambda}$ bidders can be matched at any time.
Hence,
\[
N \leq \left( \frac{\OPT}{\lambda} + Q \right) \cdot \frac{1}{s' - 4E}.
\]
Then, the total number of bidders rejected this round is at most $2EN + Q$.
Simplifying,
\begin{align*}
2EN + Q & \leq \frac{2E}{s' - 4E} \cdot \left( \frac{\OPT}{\lambda} + Q \right) + Q \\
& \leq \left( \frac{6E}{s' - 4E}\right) \left(\frac{\OPT}{\lambda} \right)
+ \frac{\alpha \OPT}{2\lambda}.
\end{align*}
To make the first term at most $\frac{\alpha \OPT}{2\lambda}$, it suffices to
take
\begin{align*}
\frac{6E}{s' - 4E} & \leq \frac{\alpha}{2} \\
s' & \geq \frac{12 E}{\alpha} + 4E \\
s & \geq \frac{12 E}{\alpha} + 6E + 1,
\end{align*}
or $s \geq 18E/\alpha$. In this case, the algorithm terminates with at most
$\frac{\alpha \OPT}{\lambda}$ unsatisfied bidders, as desired.
On the other hand, suppose the algorithm does not terminate early, the bid
count increasing by at least $Q - 2E$ every round. By our conditioning, this
means there are at least $Q - 4E$ bids each round; let us bound the number of
possible bids.
Since bidders only bid if they have valuation greater than $\lambda$ for a
good, and since the maximum weight matching has total valuation $\OPT$, at
most $\OPT/\lambda$ bidders can be matched. Like before, we say goods are
underdemanded or overdemanded: they either have final price $0$, or positive
final price.
There are at most $\OPT/\lambda$ true bids on the goods of the first type;
this is because bidders are never rejected from these goods. Like before,
write $s' = s - m$. Each counter of an overdemanded good shows $s'$ people
matched, so at least $s' - 2E$ bidders end up matched. Thus, there are at
most
\begin{mathdisplayfull}
\fullfrac{\OPT}{\lambda (s' - 2E)}
\end{mathdisplayfull}%
overdemanded goods. Each such good takes at most $s' + 2E$ bids at each of
$1/\alpha$ price levels. Putting these two estimates together, the total
number of bids $B$ is upper bounded by
\[
B \leq \frac{\OPT}{\lambda} \cdot \left( 1 + \frac{s' + 2E}{\alpha (s' - 2E)} \right)
\leq \frac{6 \OPT}{\lambda \alpha}
\]
\]
if $s' \geq 4E$, which holds since we are already assuming $s' \geq 4E +
\frac{12E}{\alpha}$. Hence, we know the number of bids is at most
\begin{align*}
T \cdot (Q - 4E) &\leq B \leq \frac{6 \OPT}{\lambda \alpha} \\
T &\leq \frac{6 \OPT}{\lambda \alpha} \cdot \left( \frac{2 \lambda}{\alpha \OPT - 8
\lambda E} \right).
\end{align*}
Assuming $\alpha \OPT \geq 16 \lambda E$, we find $T \leq 24/\alpha^2$.
With this choice of $T$, the supply requirement is
\begin{equation} \label{eq-mult-supply}
s \geq \frac{18E}{\alpha} = \Omega \left( \frac{1}{\alpha^3 \varepsilon} \cdot
\polylog \left( n, k, \frac{1}{\alpha}, \frac{1}{\gamma} \right) \right).
\end{equation}
Likewise, the requirement on $\OPT$ is
\[
\OPT \geq \frac{16\lambda E}{\alpha} = \Omega \left( \frac{\lambda}{
\alpha^3 \varepsilon} \cdot \polylog \left( n, k, \frac{1}{\alpha},
\frac{1}{\gamma} \right) \right).
\]
Now, we can follow the analysis from \Cref{matching-welfare} to bound the
welfare. Suppose the algorithm produces a matching $\mu$, and consider any
other matching $\mu^*$. For each bidder who is matched to an
$\alpha$-approximate favorite good,
\begin{mathdisplayfull}
v_{i \mu(i)} - p_{\mu(i)} \geq v_{i \mu^*(i)} - p_{\mu^*(i)} - \alpha.
\end{mathdisplayfull}%
Each such bidder is matched to a good with value at least $\lambda$, so there
are at most $\OPT/\lambda$ such bidders. Summing over these bidders (call them
$S$),
\[
\sum_{i \in S} v_{i \mu(i)} - p_{\mu(i)} \geq \sum_{i \in S} v_{i \mu^*(i)}
- p_{\mu^*(i)} - \frac{\alpha \OPT}{\lambda}.
\]
Letting $N_j, N_j^*$ be the number of goods of type $j$ matched in $\mu,
\mu^*$ and rearranging,
\[
\sum_{i \in S} v_{i \mu^*(i)} - v_{i\mu(i)} \leq \sum_{j} p_j(N_j^* -
N_j) + \frac{\alpha \OPT}{\lambda}.
\]
Exactly the same as \Cref{matching-welfare}, each overdemanded good $(p_j >
0)$ clears except for at most $\beta = 4E + 1$ supply. Since at most
$\frac{\OPT}{\lambda}$ bidders can be matched, the number of goods with $p_j >
0$ is at most
\[
\frac{\OPT}{\lambda (s - \beta)}.
\]
Like before, $N_j^* - N_j \leq \beta$. Since there are at most $\alpha \OPT /
\lambda$ bidders not in $S$ and each has valuation in $[0, 1]$, when summing
over all bidders,
\[
\sum_{i} v_{i \mu^*(i)} - v_{i\mu(i)} \leq \frac{\OPT \beta}{\lambda (s -
\beta)} + \frac{\alpha \OPT}{\lambda} + \frac{\alpha \OPT}{\lambda}.
\]
The first term is at most $\alpha \OPT / \lambda$ for $s \geq \beta (1 +
1/\alpha)$, when the algorithm calculates a matching with weight $O( (1 -
\alpha/\lambda) \OPT)$. Since $\beta = 4E + 1$, this reduces to the supply
constraint \Cref{eq-mult-supply}.
\end{proof}
\else
Privacy follows like \Cref{matching-privacy}. Utility follows a similar
analysis as for the matching case, with one main twist: in the unweighted
case, there can be at most $\OPT/\lambda$ bidders matched to a preferred good,
since each matched bidder contributes weight at least $\lambda$. Thus, we can
halt the algorithm sooner when $\OPT$ is small. Details can be found in the
full version.
\fi
\begin{remark}
For a comparison with \Cref{matching-welfare} and {{\bf PMatch}}\xspace, consider the
``unweighted'' case where bidders have valuations in $\{0, 1\}$ (i.e.,
$\lambda = 1$). Note that both {{\bf PMatch}}\xspace and the multiplicative version require
the same lower bound on supply. Ignoring log factors, {{\bf PMatch}}\xspace requires $n =
\tilde{\Omega}(1/\alpha^3 \varepsilon)$ for an additive $\alpha n$ approximation,
while \Cref{mult-acc} shows $\OPT = \tilde{\Omega}(1/\alpha^3 \varepsilon)$ is
necessary for a multiplicative $\alpha$, hence additive $\alpha \OPT$,
approximation. Hence, \Cref{mult-acc} gives a stronger guarantee if $\OPT =
\tilde{o}(n)$ in the unweighted case, ignoring log factors.
\end{remark}
\section{Introduction}
The classic maximum-weight matching problem in bipartite graphs can be viewed as
follows: there are $k$ goods $j \in \{1,\ldots, k\}$ and $n$ buyers $i \in
\{1,\ldots,n\}$. Each buyer $i$ has a value $v_{ij} \in [0,1]$ for each good
$j$, and the goal is to find a matching $\mu$ between goods and buyers which
maximizes the social welfare: $\mathrm{SW} = \sum_{i=1}^n v_{i,\mu(i)}$. When
the goods are sensitive,\footnote{%
For instance, the goods might be related to the treatment of disease, or might
be indicative of a particular business strategy, or might be embarrassing in
nature.}
it is natural to ask for a matching that hides the reported values of each of
the players.
It is not hard to see that this is impossible under the standard notion of
differential privacy, which insists that the allocation must be insensitive to
the reported valuations of each player. We formalize this in
\Cref{sec:lowerbounds}, but the intuition is simple: consider the case with two
types of goods with $n$ identical copies each, and
suppose that each buyer has a private preference for one of the two types: value
$1$ for the good that he likes, and value $0$ for the other good. There is no
contention since the supply of each good is larger than the total number of
buyers, so any allocation achieving social welfare $\OPT - \alpha n$ can be used
to reconstruct a $(1-\alpha)$ fraction of the preferences; this is impossible
for non-trivial values of $\alpha$ under differential privacy.
In light of this observation, is there any hope for privately solving maximum-weight
matching problems? In this paper, we show that the answer is \emph{yes}: it is
possible to solve matching problems (and more general allocation problems) to
high accuracy assuming at least a small number of identical copies of each good, while
still satisfying an extremely strong variant of differential privacy. We observe
that the matching problem has the following two features:
\begin{enumerate}
\item Both the input and solution are naturally partitioned amongst the same
$n$ people: in our case, each buyer $i$ receives the item $\mu(i)$ she is
matched to in the solution.
\item The problem is not solvable privately because the item given to a buyer
must reflect her private data, but this need not (necessarily) be the case
for items given to other buyers.
\end{enumerate}
By utilizing these two features, we show that the matching problem can be
accurately solved under the constraint of \emph{joint
differential privacy} \citep{kearns-largegame}. Informally speaking, this
requires that for every buyer $i$, the joint distribution on items $\mu(j)$ for
$j \neq i$ must be differentially private in the reported valuation of buyer
$i$. As a consequence, buyer $i$'s privacy is protected even if {\em all} other
buyers collude against him, potentially sharing the identities of the items they
receive. As long as buyer $i$ does not reveal her own item, her privacy is
protected.
We then show that our techniques generalize well beyond the max-matching
problem, to the more general \emph{allocation} problem---in this setting, each
buyer $i$ has a valuation function defined over subsets of goods
$v_i:2^{[k]}\rightarrow [0,1]$ from some class of valuations, and the goal
is to find a partition of the goods $S_1,\ldots,S_n$ maximizing social welfare.
(Note that the maximum-weight matching problem is the special case when agents are
\emph{unit demand}, i.e., only want bundles of size 1.) We generalize our
algorithm to solve the allocation problem when bidders' valuations
satisfy the \emph{gross substitutes} condition. This is an economically
meaningful class of valuation functions that is a strict subclass of
submodular functions, and (as we will explain) are the most general class of
valuation functions for which our techniques could possibly apply.
\subsection{Our Techniques and Results}
Our approach makes a novel connection between \emph{market clearing prices} and
differential privacy. Prices have long been considered as a low information way
to coordinate markets; conceptually, our paper formalizes this intuition in the
context of differentially private allocation. Our algorithm is a differentially
private implementation of $k$ simultaneous ascending price auctions, one for
each type of good. Following the classic analysis of \citet{job-matching}, the
prices in these auctions converge to \emph{Walrasian equilibrium prices}: prices
under which each buyer is simultaneously able to buy his most preferred bundle
of goods. We show that although the allocation itself cannot be computed under
standard differential privacy, the Walrasian equilibrium prices can be, and that
the computation of these prices can be used to coordinate a high welfare
allocation while satisfying joint differential privacy.
The classical ascending price auction works as follows. Each good begins with a
price of $0$, and each agent is initially unmatched to any good. Unmatched
agents $i$ take turns bidding on the good $j^*$ that maximizes their utility at
the current prices: i.e., $j^* \in \arg\max(v_{ij} - p_j)$. When a bidder bids
on a good $j^*$, he becomes the new high bidder and the price of $j^*$ is
incremented. Bidders are tentatively matched to a good as long as
they are the high bidder. The auction continues until there are no unmatched
bidders who would prefer to be matched to any of the goods at the current
prices. The algorithm necessarily converges because each bid increases the sum
of the prices of the goods, and prices are bounded by some finite
value.\footnote{%
Bidders do not bid on goods for which they have negative utility; in our case,
$v_{ij} \in [0,1]$.}
Moreover, by construction, every bidder ends up matched to their most preferred
good given the prices. Finally, by the ``First Welfare Theorem'' of Walrasian
equilibria, any matching that corresponds to these equilibrium prices maximizes
social welfare. We emphasize that it is this final implication that
is the key: ``prices'' play no role in our problem description, nor do we ever
actually charge ``prices'' to the agents---the prices are purely a
device to coordinate the matching.
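To make the dynamics concrete, here is a minimal, non-private Python sketch of the ascending price auction just described, for unit supply and unit-demand bidders. The function name, the tie-breaking order, and the increment parameter `alpha` are our illustrative choices, not the paper's pseudocode.

```python
def ascending_auction(valuations, num_goods, alpha=0.1):
    """valuations[i][j] in [0, 1]: value of bidder i for good j."""
    prices = [0.0] * num_goods
    high_bidder = {}                     # good j -> current high bidder
    unmatched = set(range(len(valuations)))
    while True:
        # find an unmatched bidder with positive utility for some good
        bidder = next((i for i in sorted(unmatched)
                       if max(valuations[i][j] - prices[j]
                              for j in range(num_goods)) > 0), None)
        if bidder is None:
            break                        # no one wants to bid: auction ends
        # bid on the utility-maximizing good at the current prices
        j_star = max(range(num_goods),
                     key=lambda j: valuations[bidder][j] - prices[j])
        if j_star in high_bidder:        # displace the previous high bidder
            unmatched.add(high_bidder[j_star])
        high_bidder[j_star] = bidder
        unmatched.discard(bidder)
        prices[j_star] += alpha          # every bid raises the price
    return high_bidder, prices
```

On two bidders with opposite preferences over two goods, each ends up matched to her favorite good after one bid apiece, and the auction halts because no unmatched bidder has positive utility left.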
We give an approximate, private version of this algorithm based on several
observations. First, in order to implement this algorithm, it is sufficient to
maintain the sequence of prices of the goods privately: given a record of the
price trajectory, each agent can figure out for himself what good he is
matched to. Second, in order to privately maintain the prices, it suffices to
maintain a private count of the number of bids each good has received over the
course of the auction. Finally, it turns out that it is
possible to halt the algorithm early without significantly harming the quality
of the final matching. This guarantees that no bidder ever makes more than
a small number (independent of both $n$ and $k$) of total bids, which allows us
to bound the sensitivity of the bid-counters. Together, these observations allow
us to implement the auction privately using work by \citet{DNPR10} and
\citet{chan-counter}, who introduce counters with the privacy properties
we need.
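As a rough illustration of the kind of counter we rely on, the following sketch implements the binary-tree idea behind the counters of \citet{DNPR10} and \citet{chan-counter}: each dyadic interval of the stream carries one cached Laplace noise value, so any prefix count is a sum of $O(\log T)$ noisy partial sums. The class name and the exact noise calibration below are our simplifications, not the constructions from those papers.

```python
import math
import random

class TreeCounter:
    """Noisy running counter over a bit stream of length at most T."""
    def __init__(self, T, epsilon):
        self.T = T
        self.levels = max(1, math.ceil(math.log2(T)))
        # each stream bit touches at most levels + 1 dyadic nodes
        self.scale = (self.levels + 1) / epsilon
        self.noise = {}     # dyadic node -> cached Laplace noise
        self.sums = {}      # dyadic node -> exact partial sum
        self.t = 0          # number of bits fed so far

    def _laplace(self, node):
        # sample Laplace noise once per node and cache it
        if node not in self.noise:
            u = random.random() - 0.5
            self.noise[node] = -self.scale * math.copysign(
                math.log(1 - 2 * abs(u)), u)
        return self.noise[node]

    def feed(self, bit):
        # update every dyadic interval containing position self.t
        size = 1
        while size <= self.T:
            node = (self.t // size, size)
            self.sums[node] = self.sums.get(node, 0) + bit
            size *= 2
        self.t += 1

    def count(self):
        # decompose [0, t) into dyadic intervals, one per set bit of t
        total, offset = 0.0, 0
        for level in range(self.levels, -1, -1):
            size = 1 << level
            if self.t & size:
                node = (offset // size, size)
                total += self.sums.get(node, 0) + self._laplace(node)
                offset += size
        return total
```

With a generous privacy budget the noise is negligible and the counter tracks the true running count; the point of the tree structure is that the error grows only polylogarithmically in $T$ rather than linearly.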
The result is an algorithm that converges to a matching together with prices
that form an approximate Walrasian equilibrium. We complete our analysis by
proving an approximate version of the first welfare theorem, which shows that
the matching has high weight.
Our algorithm actually works in a stronger privacy model, which we call the {\em
billboard model}. The algorithm posts the prices publicly on a {\em billboard}
as a differentially private signal such that every player can deduce what object
she should be matched to just from her own private information and the contents
of the billboard. As we show, algorithms in the billboard model automatically
satisfy joint differential privacy.
Furthermore, we view implementations in the billboard model as preferable to
arbitrary jointly differentially private implementations. This is because
algorithms in the billboard model only need the ability to publish sanitized
messages to all players, and do not need a secure channel to communicate the
mechanisms' output to each player (though of course, there still needs to be a
secure channel from the player to the mechanism). The work of \citet{MM09} and
some of the results of \citet{GLMRT10} can be viewed as previous algorithms
implemented in this mold.
The algorithm of \citet{job-matching} extends to the general allocation problem
when players have gross substitute preferences, and our private algorithm does
as well. We note that this class of preferences is the natural limit of our
approach, which makes crucial use of equilibrium prices as a coordinating
device: in general, when agents have valuations over bundles of goods that do
not satisfy the gross substitutes condition, Walrasian equilibrium prices may not
exist.
Finally, we give lower bounds showing that our results are qualitatively tight:
not only is the problem impossible to solve under the standard
differential privacy, to get any non-trivial solution even under {\em joint}
differential privacy, it is necessary to assume that there are multiple copies
of each type of good. Our lower bounds are all fundamentally reductions to
database reconstruction attacks. Our lower bound for joint-differentially
private algorithms may be of general interest, as we believe it forms a good
template for other lower bounds for joint differential privacy.
We first state our main result informally in the special case of max-matchings,
which we prove in \Cref{sec:matchings}. We prove our more general theorem for
allocation problems with gross substitutes preferences in \Cref{sec:extensions}.
Here, privacy is protected with respect to a single agent $i$ changing her
valuations $v_{ij}$ for possibly \emph{all} goods $j$.
\begin{theorem*}[Informal]
There is a computationally efficient $\varepsilon$-joint differentially private
algorithm which computes a matching of weight $\mathrm{OPT}-\alpha n$ in
settings in which there are $n$ agents and $k$ types of goods, with $s$ copies
of each good when:
\[
s \geq O\left(\frac{1}{\alpha^3 \varepsilon} \cdot \polylog \left(n, k,
\frac{1}{\alpha} \right) \right).
\]
In certain settings, the welfare guarantee can be improved to $(1-\alpha)\OPT$.
\end{theorem*}
We complement this result with several lower bounds in \Cref{sec:lowerbounds}.
We show that no algorithm can solve the private max-matchings problem to
non-trivial accuracy under the standard constraint of differential privacy. We
also show that even under joint differential privacy, it is necessary to assume
that there are multiple copies of each item.
\begin{theorem*}[Informal]
No joint differentially private algorithm can compute matchings of weight
greater than $\mathrm{OPT} - \alpha n$ on instances in which there are $n$
agents and $s$ copies of each good, when
\begin{mathdisplayfull}
s \leq O\left(\fullfrac{1}{\sqrt{\alpha}}\right).
\end{mathdisplayfull}
\end{theorem*}
In particular, no algorithm can compute matchings of weight $\mathrm{OPT} -
o(n)$ on instances for which the supply $s = O(1)$. In addition, we show that
when goods have supply only $s = O(1)$, it is not even possible to compute the
equilibrium prices privately under standard differential privacy.
\subsection{Related Work}
Differential privacy, first defined by \citet{DMNS06}, has become a standard
``privacy solution concept'' in the theoretical computer science literature.
There is far too much work to survey comprehensively; for a textbook
introduction, see \citet{DR13}.
The privacy of our algorithms relies on work by \citet{DNPR10} and
\citet{chan-counter}, who show how to release a running count of a stream of
bits under \emph{continual observation}---i.e., report the count as the stream
is revealed, provide high accuracy at every point in time, while keeping the
transcript differentially private.
Beginning with \citet{DN03}, much work in differential privacy has focused on
answering numeric valued queries on a private dataset (e.g.,
\citet{DMNS06,BLR08,HR10}, among many others). In contrast, work on private
combinatorial optimization problems has been sporadic (but not non-existant,
e.g., \citet{NRS07,GLMRT10}). Part of the reason is that many combinatorial
optimization problems are impossible to solve under differential privacy
(including the allocation problems we consider in this paper). To sidestep this
problem, we employ the solution concept of {\em joint differential privacy}.
First formalized by \citet{kearns-largegame}, similar ideas are present in the
vertex and set-cover algorithms of \citet{GLMRT10}, the private recommendation system of
\citet{MM09}, and the analyst private data analysis algorithms of
\citet{DNV12,HRU13}.
The utility of our algorithm relies on analysis due to \citet{job-matching}, who
study the problem of matching {\em firms} to \emph{workers} when the firms have
preferences that satisfy the \emph{gross substitutes} condition. They give an
algorithm based on simulating simultaneous ascending auctions that converge to
\emph{Walrasian equilibrium prices}, together with a corresponding matching. In
this respect, our approach is complete: \citet{GS99} show that gross substitutes
preferences are precisely the set of preferences for which Walrasian equilibrium
prices are guaranteed to exist.
While our approximate equilibrium achieves good approximation to the optimal
welfare at the expense of certain incentive properties, our work is closely
related to recent work on privately computing various kinds of equilibrium in
games (e.g., correlated equilibrium \citep{kearns-largegame}, Nash equilibrium
\citep{RR13}, and minmax equilibrium \citep{HRU13}). These works belong to a
growing literature studying the interface of game theory and differential
privacy; for a recent survey, see \citet{PR13}.
\section{Lower Bounds}
\label{sec:lowerbounds}
Our lower bounds all reduce to a basic database reconstruction lower bound for
differential privacy.
\begin{restatable}{theorem}{reconstructionbeta} \label{thm:reconstruction}
Let mechanism $\cM \colon \{0,1\}^n \rightarrow \{0,1\}^n$ be
$(\varepsilon, \delta)$-differentially private, and suppose that for all
database $D$, with probability at least $1-\beta$, $\|\cM (D) -
D \|_1 \leq \alpha n$. Then,
\[
\alpha \geq 1 - \frac{e^\varepsilon + \delta}{(1+e^\varepsilon) (1-\beta)}
:= c(\varepsilon, \delta, \beta).
\]
\end{restatable}
In other words, no $(\varepsilon, \delta)$-private mechanism can reconstruct more
than a fixed constant fraction of its input database. For $\varepsilon, \delta,
\beta$ small, $c(\varepsilon, \delta, \beta) \sim 1/2$. Informally, this theorem
states that a private reconstruction mechanism can't do much better than
guessing a random database. Note that this holds even if the adversary doesn't
know which fraction was correctly reconstructed. This theorem is folklore; a
proof can be found in \thelongref{recons-details}.
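For intuition about the constant, a quick numeric check of $c(\varepsilon, \delta, \beta)$ as defined above (a throwaway snippet, not part of any construction):

```python
import math

def c(eps, delta, beta):
    # the reconstruction constant from the theorem above
    return 1 - (math.exp(eps) + delta) / ((1 + math.exp(eps)) * (1 - beta))

# as eps, delta, beta -> 0 the bound tends to 1/2:
# c(0, 0, 0) == 0.5, and c(0.1, 1e-6, 0.01) is about 0.47
```

So for any reasonable privacy parameters, a private mechanism must err on close to half of the input bits, matching the informal statement that it cannot do much better than random guessing.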
Our lower bounds will all be proved using the following pattern:
\begin{itemize}
\item First, we describe how to convert a database $D \in \{0, 1\}^n$ to
a market, by specifying the bidders, the goods, and the valuations $v_{ij}
\in [0, 1]$ on goods.
\item Next, we analyze how these valuations change when a single bit in $D$ is
changed. This will control how private the matching algorithm is with
respect to the original database, when applied to this market.
\item Finally, we show how to output a database guess $\hat{D}$ from the
matching produced by the private matching algorithm.
\end{itemize}
This composition of three steps will be a private function from $\{0, 1\}^n
\rightarrow \{0, 1\}^n$, so we can apply \Cref{thm:reconstruction} to lower
bound the error. This will in turn imply a lower bound on the error of the
matching algorithm.
\subsection{Standard Differential Privacy}
Note that \Cref{alg:matching} produces market clearing prices under standard
differential privacy. We will first show that this is not possible if each good
has unit supply. Recall that prices correspond to an {\em $(\alpha, \beta,
\rho)$-approximate matching equilibrium} if all but $\rho$ bidders can be
allocated to a good such that their utility (valuation less price) is within
$\alpha$ of their favorite good (\Cref{matching-eq}). We will ignore the $\beta$
parameter, which controls how many goods are left unsold.
\iffull\else (We defer the proof to the full version.)\fi
\begin{theorem} \label{lb-prices}
Let $n$ bidders have valuations $v_{ij} \in [0, 1]$ for $n$ goods. Suppose
that mechanism $\cM$ is $(\varepsilon, \delta)$-differentially private, and
calculates prices corresponding to an $(\alpha, \beta, \rho)$-approximate
matching equilibrium for $\alpha < 1/2$ and some $\beta$ with probability
$1 - \gamma$. Then,
\begin{mathdisplayfull}
\rho \geq \frac{1}{2} c(2 \varepsilon, \delta(1 + e^\varepsilon), \gamma).
\end{mathdisplayfull}%
Note that this is independent of $\alpha$.
\end{theorem}
\iffull
\begin{proof}
Let $D \in \{0, 1\}^{n/2}$ be a private database and construct the following
market. For each bit $i$, we construct the following gadget, consisting of two
goods $\mathbf{0}_i, \mathbf{1}_i$ and two bidders, $b_i, \bar{b_i}$. Both bidders have
valuation $D_i$ for good $\mathbf{1}_i$, $1 - D_i$ for good $\mathbf{0}_i$,
and $0$ for the other goods. Evidently, there are $n$ bidders and $n$ goods.
Note that changing a bit $i$ in $D$ changes the valuation of two bidders in
the market: $b_i$ and $\bar{b_i}$. Therefore, mechanism $\cM$ is $(2\varepsilon,
\delta(1 + e^\varepsilon))$-differentially private with respect to $D$. Let the
prices be $p_{0i}, p_{1i}$. To guess the database $\hat{D}$, we let $\hat{D}_i
= 1$ if $p_{1i} > 1/2$, otherwise $\hat{D}_i = 0$.
By assumption, $\cM$ produces prices corresponding to an $(\alpha, \beta,
\rho)$-approximate matching equilibrium, with probability $1 - \gamma$. We do
not have access to the matching, but we know the prices must correspond to
{\em some} matching $\mu$. Then, for all but $\rho n$ gadgets, $\mu$ matches
both bidders to their $\alpha$-approximate favorite good, and both goods are matched
to bidders who receive $\alpha$-approximate favorite goods.
Consider such a gadget $i$. We will show that exactly one of
$p_{0i}$ or $p_{1i}$ is greater than $1/2$, and this expensive good
corresponds to bit $D_i$. Consider one of the bidders in this gadget, and
suppose he prefers good $g_+$ with price $p_+$, while he received good $g_-$
with price $p_-$. Since he receives an $\alpha$-approximate favorite good,
%
\begin{mathdisplayfull}
(1 - p_+) - (0 - p_-) \leq \alpha,
{\iffull \qquad \else \;\; \fi} \text{so} {\iffull \qquad \else \;\; \fi}
p_+ - p_- \geq 1 - \alpha > 1/2.
\end{mathdisplayfull}%
%
So $p_+ > 1/2$ and $p_- < 1/2$. Note that good $g_+$ is in the gadget, while
good $g_-$ may not be. So, one of the goods in the gadget has price strictly
greater than $1/2$. The other good in the gadget is an
$\alpha$-approximate favorite
good for some bidder. All bidders have valuation $0$ for the good, hence its
price must be strictly less than $1/2$.
Thus, the reconstruction procedure will correctly produce the bit for each such
gadget, and so will miss at most $\rho n$ bits with probability at least $1 -
\gamma$. The combined reconstruction algorithm is a map from $\{0, 1\}^{n/2}
\rightarrow \{0, 1\}^{n/2}$, and $(2\varepsilon, \delta(1 +
e^\varepsilon))$-differentially private. By \Cref{thm:reconstruction},
\begin{mathdisplayfull}
2 \rho \geq c(2 \varepsilon, \delta(1 + e^\varepsilon), \gamma).
\end{mathdisplayfull}%
\end{proof}
\fi
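A toy rendering of this reduction may help; the market encoding and the thresholding rule follow the proof, while all identifiers below are our own:

```python
def gadget_valuations(D):
    """Valuations for the market built from database D in the proof above.

    Gadget i has goods ('one', i), ('zero', i) and bidders ('b', i),
    ('bbar', i); both bidders value good 1_i at D_i and good 0_i at 1 - D_i.
    """
    vals = {}
    for i, bit in enumerate(D):
        for bidder in (('b', i), ('bbar', i)):
            vals[(bidder, ('one', i))] = bit
            vals[(bidder, ('zero', i))] = 1 - bit
    return vals

def reconstruct_from_prices(n_bits, price_of_one):
    """Guess bit i as 1 iff the price of good 1_i exceeds 1/2."""
    return [1 if price_of_one[i] > 0.5 else 0 for i in range(n_bits)]
```

For example, with $D = [1, 0]$ the proof shows any approximate equilibrium puts the high price on the demanded good of each gadget, so prices like $(0.8, 0.1)$ for goods $\mathbf{1}_0, \mathbf{1}_1$ recover $[1, 0]$ exactly.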
\subsection{Separation Between Standard and Joint Differential Privacy}
While we can compute an approximate maximum-weight matching under joint privacy
when the supply of each good is large {\iffull \else \\ \fi} (\Cref{matching-acc}), this is not
possible under standard differential privacy even with infinite supply.
(In fact, it is not possible with finite supply either.)
\begin{theorem} \label{lb-alloc}
Let $n$ bidders have valuations $v_{ij} \in \{0, 1\}$ for $2$ goods with
infinite supply. Suppose that mechanism $\cM$ is $(\varepsilon,
\delta)$-differentially private, and computes a matching with weight at least
$\OPT - \alpha n$ with probability $1 - \gamma$. Then,
\begin{mathdisplayfull}
\alpha \geq c(\varepsilon, \delta, \gamma).
\end{mathdisplayfull}%
\end{theorem}
\begin{proof}
Let $D \in \{0, 1\}^{n}$. We assume two goods, $\mathbf{0}$ and $\mathbf{1}$.
We have one bidder $b_i$ for each bit $i \in [n]$, who has valuation $D_i$ for
$\mathbf{1}$, and valuation $1 - D_i$ for $\mathbf{0}$. Since changing a bit
changes a single bidder's valuation, applying $\cM$ to this market is
$(\varepsilon, \delta)$-private with respect to $D$. To guess the database
$\hat{D}$, we let $\hat{D}_i$ be $0$ if $b_i$ is matched to $\mathbf{0}$, $1$
if $b_i$ is matched to $\mathbf{1}$, and arbitrary otherwise.
Note that the maximum welfare matching assigns each $b_i$ the good
corresponding to $D_i$, and achieves social welfare $\OPT = n$. If $\cM$ computes a
matching with welfare $\OPT - \alpha n$, it must give all but an
$\alpha$ fraction of bidders $b_i$ the good corresponding to $D_i$. So, the
reconstructed database will miss at most $\alpha n$ bits with probability $1 -
\gamma$, and by \Cref{thm:reconstruction},
\begin{mathdisplayfull}
\alpha \geq c(\varepsilon, \delta, \gamma).
\end{mathdisplayfull}%
\end{proof}
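The reconstruction step of this proof is mechanical enough to spell out; a hypothetical sketch (our names, not the paper's):

```python
def reconstruct_from_matching(D, matching):
    """matching[i] is 'one', 'zero', or None (unmatched); guess 0 if unsure.

    Returns the guessed database, the number of wrong bits, and the
    welfare of the matching, to check that welfare OPT - alpha*n forces
    at most alpha*n reconstruction errors (here OPT = n = len(D)).
    """
    guess = [1 if matching[i] == 'one' else 0 for i in range(len(D))]
    errors = sum(g != d for g, d in zip(guess, D))
    welfare = sum(1 for i, d in enumerate(D)
                  if matching[i] == ('one' if d else 'zero'))
    return guess, errors, welfare
```

For instance, with $D = [1, 0, 1]$ and one bidder given the wrong good, welfare is $2 = \OPT - 1$ and exactly one reconstructed bit is wrong, illustrating how welfare loss upper bounds reconstruction error.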
Note that this gives a separation: under joint differential privacy,
\Cref{alg:matching} can release a matching with welfare $\OPT - \alpha
n$ for any $\alpha$, provided supply $s$ is large enough (by \Cref{matching-welfare}). This
is not possible under standard differential privacy, even with {\em infinite}
supply.
\subsection{Joint Differential Privacy}
Finally, we show that a large supply assumption is necessary in order to compute
an additive $\alpha$ maximum welfare matching under joint differential privacy.
\begin{theorem} \label{lb-jp}
Let $n$ bidders have valuations $v_{ij} \in [0, 1]$ for $k$ types of goods
with supply $s$ each. Suppose mechanism $\cM$ is $(\varepsilon, \delta)$-joint
differentially private for $\varepsilon, \delta < 0.1$, and calculates a matching
with welfare at least $\OPT - \alpha n$ with probability $1 - \gamma$ for
$\gamma < 0.01$, and all $n, k, s$. Then, $s = \Omega(\sqrt{1/\alpha}).$
\end{theorem}
\iffull
\begin{proof}
Let $k = n/(s+1)$.
Given a private database $D \in \{0, 1\}^k$, construct the following market.
For each bit $i$, we construct a gadget with two goods $\mathbf{0}_i,
\mathbf{1}_i$, each with supply $s$. Each gadget has a distinguished bidder
$b_i$ and $s$ identical bidders, all labeled $\bar{b_i}$. Let bidder $b_i$,
whom we call the {\em real bidder}, have valuation $D_i$ for $\mathbf{1}_i$, and $1 -
D_i$ for $\mathbf{0}_i$. Bidders $\bar{b_i}$, which we call the {\em spy bidders}, all
have the same valuation: $\eta = \frac{1}{4s}$
for $\mathbf{0}_i$ or $\mathbf{1}_i$ drawn at random, and $0$ for all other
goods (in and out of the gadget). We say a bidder {\em prefers} a good if they
have positive valuation for the good.
Note that changing a bit in $D$ changes a single bidder's valuation. Also note
that the spy bidders' valuations do not depend on $D$. Hence, by joint
differential privacy of $\cM$, the function that maps the above market
through $\cM$ to the allocation of just the spy bidders is $(\varepsilon,
\delta)$-differentially private with respect to an entry change in $D$.
We will describe how to guess $\hat{D}$ based on just the spy bidders' joint
view, i.e., the goods they are assigned. This reconstruction procedure will
then be $(\varepsilon, \delta)$-differentially private, and we can apply
\Cref{thm:reconstruction} to lower bound the error of $\cM$. For every bit $i
\in [k]$,
let $\hat{D}_i$ be $1$ if the spy bidders in gadget $i$ are all assigned to
$\mathbf{0}_i$, $0$ if the spy bidders in gadget $i$ are all assigned to
$\mathbf{1}_i$, and uniformly random otherwise.
We'll say that a gadget {\em agrees} if the spy bidders and real bidder prefer
the same good. Gadgets that don't agree, {\em disagree}. Let $w$ be the
number of gadgets that agree.
By construction, gadgets agree independently at random with probability $1/2$.
Hence, Hoeffding's inequality gives
\[
\prob{\left|w - \frac{k}{2}\right| \leq \lambda k} \geq 1 -
2\exp(-2 \lambda^2 k)
\]
for some $\lambda$ to be chosen later; condition on this event. With
probability at least $1 - \gamma$, mechanism $\cM$ computes a matching with
welfare at least $\OPT - \alpha n$; condition on this event as well. Note
that the optimum welfare is $1 + (s - 1) \eta$ for gadgets that agree, and $1
+ s \eta$ for gadgets that disagree, hence $\OPT = w (1 + (s-1) \eta) + (k -
w) (1 + s \eta)$ in total.
For each gadget, there are several possible allocations. Intuitively, an
assignment gives social welfare, but may also lead to a bit being
reconstructed. Let $RB(\mu) = k - \|D - \hat{D}\|_1$ be the number of bits
reconstructed correctly when the matching is $\mu$. We'll argue that any
matching $\mu$ with nearly optimal social welfare must result in a large
expected number of correct bits $\mathbb{E}[RB(\mu)]$. Note that
\[
\mathbb{E}[RB(\mu)] = \sum_{i \in [k]} \prob{D_i = \hat{D}_i},
\]
so we argue gadget by gadget.
First, suppose the gadget $i$ agrees. The matching $\mu$ can give the
preferred good to the bidder, the spies, or neither. If the preferred good
goes to the bidder, this gives at most $1 + (s -1) \eta$ social welfare. Not
all the spies get the same good, so
\begin{mathdisplayfull}
\prob{D_i = \hat{D}_i} = \fullfrac{1}{2}.
\end{mathdisplayfull}%
If the preferred good goes to the spies, then this contributes $s \eta$ to
social welfare, and
\begin{mathdisplayfull}
\prob{D_i = \hat{D}_i} = 0.
\end{mathdisplayfull}%
Note that it doesn't matter whether the bidder is assigned in $\mu$, since the
social welfare is unchanged, and the reconstruction algorithm doesn't have
access to the bidder's allocation. There are other possible allocations, but
they are dominated by these two choices (they get less social welfare for
higher reconstruction probability).
Now, suppose gadget $i$ disagrees. There are several possible allocations.
First, both the bidder and the spies may get their favorite good. This leads
to $1 + s \eta$ welfare, and
\begin{mathdisplayfull}
\prob{D_i = \hat{D}_i} = 1.
\end{mathdisplayfull}%
Second, the bidder may be assigned their favorite good, and at most $s -1$
spies may be assigned their favorite good. This leads to $1 + (s - 1) \eta$
welfare, with
\begin{mathdisplayfull}
\prob{D_i = \hat{D}_i} = \fullfrac{1}{2}.
\end{mathdisplayfull}%
Again, there are other possible allocations, but they lead to less social
welfare or higher reconstruction probability. We call these four allocations
{\em optimal}.
Let $a_1, a_2$ be the fractions of gadgets that agree and receive the two
optimal agreeing allocations, and $d_1, d_2$ be the fractions of gadgets that
disagree and receive the two optimal disagreeing allocations. Let $t$ be the
fraction of agreeing gadgets. The following linear program minimizes $(1/k)
\mathbb{E}[RB(\mu)]$ over all matchings $\mu$ achieving an
$\alpha$-approximate maximum welfare matching, for supply $s$.
\begin{align*}
LP_s & := & \text{minimize: } & \frac{1}{2}a_1 + d_1 + \frac{1}{2}d_2 \\
& & \text{such that: } & a_1 + a_2 \leq t \\
& & & d_1 + d_2 \leq 1 - t \\
& & & \frac{1}{2} - \lambda \leq t \leq \frac{1}{2} + \lambda \\
& & & (1 + (s - 1) \eta) a_1 + s \eta a_2
+ (1 + s \eta) d_1 + ( 1+ (s - 1) \eta) d_2 \\
& & & \geq t (1 + (s - 1) \eta) + (1 - t)
(1 + s \eta) - \frac{\alpha n}{k}
\end{align*}
The last constraint is the welfare requirement, the second to last constraint
is from conditioning on the number of agreeing gadgets, and the objective is
$(1/k) \mathbb{E}[RB(\mu)]$.
Plugging in $\eta = \frac{1}{4s}, \lambda = 1/128, \alpha = \frac{k}{16ns}$
and solving, we find
\[
(a_1, a_2, d_1, d_2, t) = \left(\frac{65}{128}, 0, \frac{31}{128},
\frac{1}{4}, \frac{65}{128}\right)
\]
is a feasible solution for all $s$, with objective $\alpha' = 159/256$. To
show that this is optimal, consider the dual problem:
\begin{align*}
DUAL_s & := & \text{maximize: } &
- \rho_2
+ \left( \frac{1}{2} - \lambda \right) \rho_3
- \left( \frac{1}{2} + \lambda \right) \rho_4
+ \left( 1 + s \eta - \frac{\alpha n}{k}\right) \rho_5 \\
& & \text{such that: } & - \rho_1 + (1 + (s - 1) \eta)
\rho_5 \leq \frac{1}{2} \\
& & & - \rho_1 + s \eta \rho_5 \leq 0 \\
& & & - \rho_2 + (1 + s \eta) \rho_5 \leq 1 \\
& & & - \rho_2 + (1 + (s -1) \eta)
\rho_5 \leq \frac{1}{2} \\
& & & \rho_1 - \rho_2 + \rho_3 -
\rho_4 + \eta \rho_5 \leq 0
\end{align*}
We can directly verify that
\[
(\rho_1, \rho_2, \rho_3, \rho_4, \rho_5) = \left( \frac{5}{2} s - 1,
\frac{5}{2}s - 1, 0, \frac{1}{2}, 2s \right)
\]
is a dual feasible solution with objective $\alpha' = 159/256$.
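The feasibility claims and the matching primal/dual objectives above are routine but easy to get wrong by hand. As a sanity check (ours, not part of the formal argument), both solutions can be verified with exact rational arithmetic for any fixed supply $s$:

```python
from fractions import Fraction as F

def check_lp_solutions(s):
    """Verify primal/dual feasibility and the common objective 159/256
    for the solutions claimed in the text, at supply s."""
    eta, lam, an_k = F(1, 4 * s), F(1, 128), F(1, 16 * s)  # an_k = alpha*n/k
    a1, a2, d1, d2, t = F(65, 128), F(0), F(31, 128), F(1, 4), F(65, 128)
    # Primal feasibility
    assert a1 + a2 <= t and d1 + d2 <= 1 - t
    assert F(1, 2) - lam <= t <= F(1, 2) + lam
    lhs = ((1 + (s - 1) * eta) * a1 + s * eta * a2
           + (1 + s * eta) * d1 + (1 + (s - 1) * eta) * d2)
    assert lhs >= t * (1 + (s - 1) * eta) + (1 - t) * (1 + s * eta) - an_k
    primal = F(1, 2) * a1 + d1 + F(1, 2) * d2
    # Dual feasibility
    r1, r2, r3, r4, r5 = F(5 * s, 2) - 1, F(5 * s, 2) - 1, F(0), F(1, 2), F(2 * s)
    assert -r1 + (1 + (s - 1) * eta) * r5 <= F(1, 2)
    assert -r1 + s * eta * r5 <= 0
    assert -r2 + (1 + s * eta) * r5 <= 1
    assert -r2 + (1 + (s - 1) * eta) * r5 <= F(1, 2)
    assert r1 - r2 + r3 - r4 + eta * r5 <= 0
    dual = (-r2 + (F(1, 2) - lam) * r3 - (F(1, 2) + lam) * r4
            + (1 + s * eta - an_k) * r5)
    # Objectives agree, certifying optimality by weak duality.
    assert primal == dual == F(159, 256)
```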
We know that $\cM$ calculates an additive $\alpha$-approximate maximum welfare
matching. While the allocations to each gadget may not be an optimal
allocation, suboptimal allocations all have less social welfare and larger
$RB$. So, we know the objective of $LP_s$ is a lower bound for
$(1/k)\,\mathbb{E}[RB(\cM)]$.
Thus, $\mathbb{E}[RB(\cM)] \geq k \alpha'$ for any supply $s$. Since $RB$ is
the sum of $k$ independent, $0/1$ random variables, another Hoeffding bound
yields
\begin{mathdisplayfull}
\prob{ RB(\cM)/k \geq \alpha' - \lambda' } \geq 1 - 2 \exp (-2\lambda'^2 k).
\end{mathdisplayfull}%
Set $\lambda' = 1/256$, and condition on this event. Taking everything
together, any matching mechanism $\cM$ which finds a matching with weight at
least $\OPT - \alpha n$ failing with at most $\gamma$ probability gives an
$(\varepsilon, \delta)$-private mechanism taking database $D$ to $\hat{D}$, such
that
\begin{mathdisplayfull}
\frac{1}{k} \cdot \|D - \hat{D}\|_1 \leq 1 - (\alpha' - \lambda') = 49/128
\end{mathdisplayfull}%
with probability at least $1 - \gamma - 2\exp(-2 \lambda^2 k) - 2\exp(-2
\lambda'^2 k)$.
For $\varepsilon, \delta < 0.1$ and $\gamma < 0.01$, this contradicts
\Cref{thm:reconstruction} for large $k$. Note that the failure probability
and accuracy do not depend directly on $s$, since $\lambda, \lambda', \alpha'$
are constants. Hence,
\[
\alpha \gg \frac{k}{16ns} = \frac{1}{16s(s + 1)}
\]
uniformly for all $s$, and $s = \Omega(\sqrt{1/\alpha})$ as desired.
\end{proof}
\else
We will only sketch the idea here, deferring the full proof to the full version.
Given a database $D \in \{0, 1\}^n$, we will have one real bidder, $m$ ``spy''
bidders, and two goods for each bit. The real bidder will have valuation for one
of the two goods determined by the private data $D$, while the spy bidders will
all have the same preference for one of the two goods, set uniformly at random
(independent of the private data). By arranging the valuations of the spy
bidders appropriately, we can show that any algorithm that achieves good welfare
must serve many of the spy bidders. When the spy bidder and the true bidder
prefer the same good (which happens half of the time), the spy bidders can learn
about the true bidder's preferences when they don't get their preferred good. By
taking the joint view of spy bidders, we can reconstruct a large enough portion
of the database to contradict \Cref{thm:reconstruction}: Under {\em joint}
differential privacy, the view of the spy bidders should satisfy {\em standard}
differential privacy with respect to the data from outside the coalition, i.e.,
the private data.
\fi
\section{Private Max-Weight Matching}
\label{sec:matchings}
In this section, we study the special case of unit demand valuations. Though our
later algorithm for gross substitutes valuations generalizes this case, we first
present our algorithm in this simpler setting to highlight the key features of
our approach.
Consider a matching market with $n$ bidders and $k$ different types of goods,
where each good has supply $s$ and bidder $i$ has valuation $v_{ij}\in [0,1]$
for good $j$. Some agents may not end up being matched to a good: to simplify
notation, we will say that unmatched agents are matched to $\perp$, a special
dummy good.
To reach a maximum weight matching, we first formulate an intermediate goal: we
want to privately compute prices $p\in [0,1]^k$ and an allocation of the goods
$\mu\colon [n]\rightarrow [k] \cup \{ \perp \}$ such that \emph{most} bidders
are matched with their \emph{approximately} favorite goods \emph{given the
prices} and each over-demanded good almost clears, where a
good is {\em over-demanded} if its price is strictly positive.\footnote{%
This is the notion of approximate Walrasian equilibrium we will use.}
We will show that if this intermediate goal is met, then in fact we have
computed an approximate maximum weight matching.
\begin{definition} \label{matching-eq}
A price vector $p\in [0,1]^k$ and an assignment $\mu\colon
[n]\rightarrow [k] \cup \{ \perp \}$ of bidders to goods
is an {\em $(\alpha, \beta, \rho)$-approximate matching equilibrium} if
\begin{enumerate}
\item All but a $\rho$ fraction of bidders $i$ are matched to an {\iffull \else \\ \fi}
$\alpha$-approximate favorite good: i.e., $v_{i \mu(i)} - p_{\mu(i)} \geq
v_{ij} - p_j - \alpha$ for every good $j$, for at least $(1-\rho)n$
bidders $i$ (we call these bidders {\em satisfied});
\item the number of bidders assigned to any type of good does not exceed its
supply; and
\item each over-demanded good clears except for at most $\beta$ supply.
\end{enumerate}
\end{definition}
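A direct, if informal, way to read this definition is as a checker. The sketch below is our own; the list-based encoding of valuations, prices, and the assignment is an assumption, and unmatched bidders are read as having utility $0$:

```python
def is_matching_equilibrium(v, mu, p, supply, alpha, beta, rho):
    """Check the three conditions of an (alpha, beta, rho)-approximate
    matching equilibrium.  v[i][j] in [0,1] are valuations, mu[i] is a
    good index or None for the dummy good, p[j] are prices, and supply
    is the common supply s."""
    n, k = len(v), len(p)
    def utility(i):
        return 0.0 if mu[i] is None else v[i][mu[i]] - p[mu[i]]
    # Condition 1: at least (1 - rho) n bidders are alpha-satisfied.
    satisfied = sum(
        1 for i in range(n)
        if all(utility(i) >= v[i][j] - p[j] - alpha for j in range(k))
    )
    if satisfied < (1 - rho) * n:
        return False
    # Condition 2: no good exceeds its supply.
    load = [sum(1 for i in range(n) if mu[i] == j) for j in range(k)]
    if any(load[j] > supply for j in range(k)):
        return False
    # Condition 3: every over-demanded good (p_j > 0) clears up to beta.
    return all(load[j] >= supply - beta for j in range(k) if p[j] > 0)
```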
\subsection{Overview of the Algorithm}
Our algorithm takes in the valuations as input, and outputs a trajectory of
prices that can be used by the agents to figure out what they are matched to.
Throughout, we will sometimes talk of the bidders performing some action, but
this actually means that our algorithm simulates the actions of the bidders
internally---the actual agents do not interact with our algorithm.
\Cref{alg:matching} ({{\bf PMatch}}\xspace) is a variant of a {\em deferred acceptance}
algorithm first proposed and analyzed by \citet{job-matching}, which runs $k$
simultaneous ascending price auctions: one for each type of good. At any given
moment, each type of good has a {\em proposal price} $p_j$. In rounds (passing
through each bidder once in some fixed, publicly known order), unsatisfied
bidders bid on a good that maximizes their utility at the current prices: that
is, a good $j$ that maximizes $v_{ij} - p_j$. (This is the $\mathbf{Propose}$
function.)
The $s$ most recent bidders for a type of good are tentatively matched to that
type of good (these are the current \emph{high bidders}). A bidder tentatively
matched to a good becomes unmatched from it once that good receives $s$
subsequent bids (he has been \emph{outbid}).
Every $s$ bids on a good increases its price by a fixed increment $\alpha$.
Bidders keep track of which good they are matched to (in the variable $\mu$), if
any, and can determine whether they are currently matched or unmatched by
looking at a count of the number of bids received by the last good they bid on.
To implement this algorithm privately, we count the number of bids each good has
received using private counters. Unsatisfied bidders can infer the prices of all
goods based on the number of bids each has received, and from this information,
they determine which good to bid on (their favorite good at the given prices).
Their bid is recorded by sending a ``1'' to the appropriate counter. (This is
the $\mathbf{Bid}$ function.) Matched bidders remember the reading of the bid
counter on the good they are matched to at the time that they last bid (in the
variable $d_i$); when the counter ticks $s$ bids past this initial count, the
bidder concludes that he has been outbid, and becomes unmatched. The final
matching is communicated implicitly: the real agents observe the full published
price trajectory, and simulate what good they would have been matched to had
they bid according to the published prices.
Since the private counters are noisy, more than $s$ bidders may be matched
to a good. To maintain feasibility, the auction is run with some supply $m$
withheld: i.e., it is run as if the supply of each good were $s-m$, rather than
$s$. The {\em reserved supply} $m$ is used to satisfy the demand of all bidders
who believe themselves to be matched to each type of good; the number of such
bidders is at most $s$, with high probability.
Our algorithm stops as soon as fewer than $\rho n$ bidders place bids in a
round. We show that this early stopping condition does not significantly harm
the welfare guarantee of the matching, while it substantially reduces the
{\em sensitivity} of the counters: no bidder ever bids more than $O(1/(\alpha\rho))$
times in total. Crucially, this is independent of both the number of types of
goods $k$, and the number of bidders $n$. This greatly improves the accuracy of
the prices: the degree to which we have to perturb the bid counts to protect
privacy is proportional to the sensitivity of the counters.
To privately implement this stopping condition, we maintain a separate counter
($\text{counter}(0)$) which counts the number of unsatisfied bidders throughout
the run of the algorithm. At the end of each proposal round, bidders who are
unsatisfied will send ``$1$'' to this counter, and bidders who are matched will
send ``$0$''. If this counter increases by less than roughly $\rho n$ in any
round, we conclude the algorithm. (This is the $\mathbf{CountUnsatisfied}$
function.)
\begin{algorithm}[h!]
\caption{${{\bf PMatch}}\xspace(\alpha, \rho, \varepsilon)$} \label{alg:matching}
\begin{algorithmic}
\STATE{\textbf{Input:}
Bidders' valuations
$(\{v_{1j}\}_{j=1}^k, \ldots, \{v_{nj}\}_{j=1}^k)$}
\STATE{\textbf{Initialize: for bidder $i$ and good $j$,}
\begin{mathpar}
T = \frac{8}{\alpha \rho},
\and
\varepsilon' = \frac{\varepsilon}{2T},
\and
E = \frac{2\sqrt{2}}{\varepsilon'} (\log{nT})^{5/2}
\log\left(\frac{4k}{\gamma} \right),
\and
m = 2 E + 1
\and
\text{counter}(j) = \textbf{Counter}(\varepsilon', nT)
\and
p_j = c_j= 0,
\\
\mu(i) = \emptyset,
\and
d_i = 0,
\and
\text{counter}(0) = \textbf{Counter}(\varepsilon', nT)
\end{mathpar}
}
\STATE{$\mathbf{Propose}$ $T$ times; \textbf{Output:} prices $p$ and allocation $\mu$.}
\vspace{1ex}
\hrule
\begin{minipage}{0.49\textwidth}
\vspace{1ex}
\STATE{\textbf{Propose:}}
\FORALL{bidders $i$}
\IF{$\mu(i) = \emptyset$}
\STATE{Let $\mu(i) \in \argmax_j v_{ij} - p_j$, breaking ties arbitrarily}
\IF{$v_{i \mu(i)} - p_{\mu(i)} \leq 0$}
\STATE{Let $\mu(i) := \perp$ and $\textbf{Bid}(\mathbf{0})$.}
\ENDIF
\STATE{\textbf{else} Save $d_i := c_{\mu(i)}$ and
$\textbf{Bid}(\mathbf{e_{\mu(i)}})$.}
\ENDIF
\STATE{\textbf{else} $\textbf{Bid}(\mathbf{\mathbf{0}})$}
\ENDFOR
\INDSTATE[0]{\textbf{CountUnsatisfied}}
\end{minipage}
\hspace{1ex}
\vrule
\hspace{1ex}
\begin{minipage}{0.49\textwidth}
\vspace{1ex}
\STATE{\textbf{Bid:} On input bid vector $\mathbf{b}$}
\FORALL{goods $j$}
\STATE{Feed $\mathbf{b}_j$ to $\text{counter}(j)$.}
\STATE{Update count $c_j := \text{counter}(j)$.}
\IF{$c_j \geq (p_j/\alpha + 1) (s - m)$}
\STATE{Update $p_j := p_j + \alpha$.}
\ENDIF
\ENDFOR
\STATE{}
\STATE{\textbf{CountUnsatisfied:}}
\FORALL{bidders $i$}
\IF{$\mu(i) \neq \perp$ \text{ and } $c_{\mu(i)} - d_i \geq s - m$}
\STATE{Feed 1 to counter$(0)$; Let $\mu(i) := \emptyset$}
\ENDIF
\STATE{\textbf{else} Feed 0 to counter$(0)$.}
\ENDFOR
\IF{\text{counter}$(0)$ increases by less than $\rho n - 2E$}
\STATE{Halt; For each $i$ with $\mu(i) = \emptyset$, let $\mu(i) =
\perp$}
\ENDIF
\end{minipage}
\end{algorithmic}
\end{algorithm}
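To make the dynamics concrete, the following is a much-simplified, self-contained simulation of the ascending-price loop (our own sketch, not the algorithm itself): it omits the reserved supply $m$, folds \textbf{Bid} and the outbid check into one pass, and replaces the tree-based private counters with an optional Gaussian perturbation as a stand-in; with \texttt{noise=0} it exhibits the exact, non-private dynamics.

```python
import random

def pmatch_sketch(v, s, alpha, rho, noise=0.0, seed=0):
    """Toy simulation of the ascending-price dynamics of PMatch.

    v[i][j] in [0,1]; s is the supply per good; prices rise in steps of
    alpha; the loop stops once fewer than rho * n bidders bid in a round.
    """
    rng = random.Random(seed)
    n, k = len(v), len(v[0])
    p = [0.0] * k            # proposal prices
    count = [0] * k          # true bid counts per good
    mu = [None] * n          # tentative match (None = currently unmatched)
    d = [0] * n              # counter reading at bidder i's last bid
    out = [False] * n        # bidder prefers to remain unmatched

    def reading(j):          # counter reading; noise > 0 mimics privacy
        return count[j] + (rng.gauss(0, noise) if noise > 0 else 0)

    for _ in range(int(8 / (alpha * rho)) + 1):   # T rounds, as in the text
        bids = 0
        for i in range(n):
            if out[i] or mu[i] is not None:
                continue
            j = max(range(k), key=lambda g: v[i][g] - p[g])
            if v[i][j] - p[j] <= 0:
                out[i] = True                     # drops out of the auction
                continue
            count[j] += 1
            bids += 1
            mu[i], d[i] = j, reading(j)
            if reading(j) >= (p[j] / alpha + 1) * s:  # ~s bids: raise price
                p[j] = round(p[j] + alpha, 10)
        for i in range(n):   # bidders outbid by >= s later bids are unmatched
            if mu[i] is not None and reading(mu[i]) - d[i] >= s:
                mu[i] = None
        if bids < rho * n:   # early stopping: few unsatisfied bidders
            break
    return mu, p
```

On a trivially separable instance (each bidder strongly prefers a distinct good), the simulation settles immediately at the first positive price.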
\vspace{3ex}
\subsection{Privacy Analysis}
In this section, we show that the allocation (implicitly) output by our
algorithm satisfies joint differential privacy with respect to a single bidder
changing \emph{all} of her valuations. We first show a basic but useful lemma:
to show joint differential privacy, it is sufficient to show that the output
sent to each agent $i$ is an arbitrary function only of some global signal that
is computed under the standard constraint of differential privacy, together with
agent $i$'s private data. We call this the {\em billboard model}: some message
is viewable by all agents, as if placed on a public billboard, and this message
is differentially private. In our case, the price history over the course of the
auction is the differentially private message posted on the billboard. Combined
with their personal private valuation, each agent can compute their personal
allocation.
\begin{lemma}[Billboard Lemma] \label{billboard}
Suppose $\cM : \cD \rightarrow \cR$ is $(\varepsilon, \delta)$-differentially
private. Consider any set of functions $f_i : \cD_i \times \cR \rightarrow \cR'$,
where $\cD_i$ is the portion of the database containing $i$'s data. The
composition $\{ f_i (\Pi_i D, \cM(D)) \}$ is $(\varepsilon, \delta)$-joint
differentially private, where $\Pi_i$ is the projection to $i$'s data.
\end{lemma}
\begin{proof}
We need to show that for any agent $i$, the view of the other agents is
$(\varepsilon, \delta)$-differentially private when $i$'s private data is
changed. Suppose databases $D, D'$ are $i$-neighbors, so $\Pi_j D = \Pi_j D'$
for $j \neq i$. Let $\cR_{-i}$ be a set of views of the bidders besides $i$.
Let $\cR^* = \{ r \in \cR \mid \{ f_j( \Pi_j D, r) \}_{-i} \in \cR_{-i} \}$.
Then, we need
\begin{align*}
&\Pr[ \{ f_j (\Pi_j D, \cM(D)) \}_{-i} \in \cR_{-i} ]\\
\leq
e^\varepsilon &\Pr[ \{ f_j (\Pi_j D', \cM(D')) \}_{-i} \in \cR_{-i} ] + \delta \\
=
e^\varepsilon &\Pr[ \{ f_j (\Pi_j D, \cM(D')) \}_{-i} \in \cR_{-i} ] + \delta \\
\text{and so\ } &\Pr [ \cM(D) \in \cR^* ] \leq e^\varepsilon \Pr [\cM(D') \in \cR^*] + \delta,
\end{align*}
but this is true since $\cM$ is $(\varepsilon, \delta)$-differentially private.
\end{proof}
\iffull
\begin{restatable}{theorem}{counterpriv} \label{prices-privacy}
The sequence of prices and counts of unsatisfied bidders released by
{\iffull \else \\ \fi} ${{\bf PMatch}}\xspace(\alpha, \rho, \varepsilon)$ satisfies $\varepsilon$-differential
privacy.
\end{restatable}
\iffull
\begin{proof}[Proof Sketch]
\else
\begin{proof}[Sketch]
\fi
We give a rough intuition here, and defer the full proof to
\Cref{counter-details}. Note that the prices can be computed from the noisy
counts, so it suffices to show that these counts are private. Since no bidder
bids more than $T \approx 1/(\alpha\rho)$ times in total, the \emph{total}
sensitivity of the $k$ price streams to a single bidder's valuations is only
$O(1/(\alpha \rho))$ (independent of $k$) even though a single bidder could in
principle bid $\Omega(1/\alpha)$ times on each of the $k$ streams. Hence the
analysis of these $k$ simultaneously running counters is akin to the analysis
of answering {\em histogram queries}---multiple queries whose joint
sensitivity is substantially smaller than the sum of their individual
sensitivities.
By setting the counter for each good with privacy parameter $\varepsilon' =
\varepsilon/2T$, the prices should be $\varepsilon/2$ differentially private. By the
same reasoning, setting the unsatisfied bidders counter with privacy parameter
$\varepsilon' = \varepsilon/2T$ also makes the unsatisfied bidders count
$\varepsilon/2$ private. Thus, these outputs together satisfy
$\varepsilon$-differential privacy.
While this intuition is roughly correct, there are some technical details.
Namely, \citet{chan-counter} show privacy for a single counter with
sensitivity $1$ on a non-adaptively chosen stream. Since intermediate
outputs (i.e., prices) from our counters will affect the future streams (i.e.,
future bids) for other counters, this is not sufficient. In fact, it is
possible to prove privacy for multiple counters running on adaptively chosen
streams, where the privacy parameter depends only on the joint sensitivity of
the streams, and not on the number of streams. We show this using largely
routine arguments; details can be found in \Cref{counter-details}.
\end{proof}
\else
With this lemma, the privacy proof is largely routine. We defer the details to
the full version.
\fi
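For intuition, a counter in the style of \citet{chan-counter} can be sketched as follows (an illustration of the binary mechanism, not the exact construction used here): each dyadic interval of time steps receives one Laplace-perturbed partial sum, so each stream element influences only $O(\log T)$ noise terms, and each released count aggregates $O(\log T)$ of them.

```python
import math, random

class BinaryCounter:
    """Sketch of a binary-tree continual counter: the running count at
    time t is released as a sum of O(log T) Laplace-noised dyadic partial
    sums.  Splitting the budget evenly across levels is our simplification."""

    def __init__(self, eps, T, seed=0):
        self.L = max(1, math.ceil(math.log2(T + 1)))  # tree depth
        self.scale = self.L / eps       # each level gets eps / L of the budget
        self.rng = random.Random(seed)
        self.t = 0
        self.alpha = [0.0] * (self.L + 1)      # true dyadic partial sums
        self.alpha_hat = [0.0] * (self.L + 1)  # their noised versions

    def _laplace(self):
        u = self.rng.random() - 0.5
        z = max(1.0 - 2.0 * abs(u), 1e-12)     # guard against log(0)
        return self.scale * math.copysign(-math.log(z), u)

    def feed(self, bit):
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1  # lowest set bit of t
        self.alpha[i] = sum(self.alpha[:i]) + bit
        for j in range(i):                       # lower levels are consumed
            self.alpha[j] = self.alpha_hat[j] = 0.0
        self.alpha_hat[i] = self.alpha[i] + self._laplace()

    def read(self):
        # the binary expansion of t picks out the dyadic blocks covering [1, t]
        return sum(self.alpha_hat[j]
                   for j in range(self.L + 1) if (self.t >> j) & 1)
```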
\begin{theorem} \label{matching-privacy}
${{\bf PMatch}}\xspace(\alpha, \rho, \varepsilon)$ is $\varepsilon$-joint differentially private.
\end{theorem}
\iffull
\begin{proof}[Proof Sketch]
\else
\begin{proof}[Sketch]
\fi
Note that given the sequence of prices, counts of unsatisfied bidders, and the
private valuation of any bidder $i$, the final allocation to that bidder can
be computed by simulating the sequence of bids that bidder $i$ would make:
these are determined by the price when bidder $i$ is slotted to bid, and by
whether the halting condition has been met. Bidder $i$'s final allocation is
simply the final item that he bids on. The prices and halting condition are
computed as a deterministic function of the noisy counts, which are
$\varepsilon$-differentially private\iffull by \Cref{prices-privacy}\else \fi.
So, \Cref{billboard} shows that {{\bf PMatch}}\xspace is $\varepsilon$-joint differentially
private.
\end{proof}
\subsection{Utility Analysis}
In this section, we compare the weight of the matching produced by
{{\bf PMatch}}\xspace with OPT. As an intermediate step, we first show that the
resulting matching \emph{paired with the prices} output by the algorithm forms
an approximate matching equilibrium. We next show that any such matching must be
an approximately max-weight matching.
The so-called ``first welfare theorem'' from general equilibrium theory
guarantees that an exact (i.e., a $(0,0,0)$-) matching equilibrium gives an exact
maximum weight matching. Compared to this ideal, {{\bf PMatch}}\xspace loses
welfare in three ways. First, a $\rho$ fraction of bidders may end up
unsatisfied. Second, the matched bidders are not necessarily matched to goods
that maximize their utility given the prices, but only to goods that do so
approximately (up to additive $\alpha$). Finally, the auction sets aside part
of the supply to handle over-allocation from the noisy counters, which
may not end up being sold (say, if the counters are accurate or actually
under-allocate). That is, we compute an equilibrium of a market with reduced
supply, so our welfare guarantee requires that the supply $s$ be significantly
larger than the necessary reserved supply $m$.
The key performance metric is \emph{how much} supply is needed to achieve a
given welfare approximation in the final matching. On the one hand, we will show
later that the problem is impossible to solve privately if $s = O(1)$
(\Cref{sec:lowerbounds}). On the other hand, the problem is trivial if $s \geq
n$: every agent can be simultaneously matched to her favorite good with no
coordination; this is trivially both optimal and private. Our algorithm will
achieve positive results
when $s \geq \polylog(n)$.
\begin{theorem} \label{matching-welfare}
Let $\alpha>0$, and $\mu$ be the matching computed by ${{\bf PMatch}}\xspace(\alpha/3,
\alpha/3, \varepsilon)$. Let $\OPT$ denote the weight of the optimal (max weight)
matching. Then, if the supply satisfies
\[
s \geq \frac{16 E' + 4}{\alpha} = O\left( \frac{1}{\alpha^3 \varepsilon}
\cdot \polylog \left( n, k, \frac{1}{\alpha}, \frac{1}{\gamma}
\right)\right),
\]
and $n > s$, the matching $\mu$ has social welfare at least $\OPT - \alpha
n$ with probability $\geq 1-\gamma$, where
\[
E' = \frac{288\sqrt{2}}{\alpha^2 \varepsilon}
\left(\log\left(\frac{72n}{\alpha^2} \right)\right)^{5/2}
\log\left(\frac{4k}{\gamma} \right).
\]
\end{theorem}
\begin{remark}
Our approximation guarantee here is \emph{additive}. In Section
\ref{sec:extensions}, we show that if we are in the \emph{unweighted} case
where $v_{ij} \in \{0,1\}$, the above guarantee can be made
\emph{multiplicative}, unusual in the context of differential privacy. That
is, we can find a matching $\mu$ with welfare at least
$(1-\alpha)\mathrm{OPT}$. Also, the second assumption $n > s$ is minimal, as
the problem is trivially solvable for $s\geq n$.
\end{remark}
The proof follows from the following lemmas.
\iffull\else (We defer some proofs to the full version.) \fi
\begin{lemma} \label{matching-eq-alpha}
We call a bidder who wants to continue bidding {\em unsatisfied}; otherwise
bidder $i$ is {\em satisfied}. At termination of \\ ${{\bf PMatch}}\xspace(\alpha,\rho,\varepsilon)$,
all satisfied bidders $i$ are matched to a good $\mu(i)$ such that
\[
v_{i,\mu(i)} - p_{\mu(i)} \geq \max_j (v_{i,j} - p_j) - \alpha .
\]
\end{lemma}
\iffull
\begin{proof}
Fix any satisfied bidder $i$ matched to $j^* = \mu(i)$. At the time that
bidder $i$ last bid on $j^*$, by construction, $v_{ij^*} - p_{j^*} \geq
\max_{j}(v_{ij}-p_j)$. Since $i$ remained matched to $j^*$, its price could
only have increased by at most $\alpha$, and the prices of other goods $j \neq
j^*$ could only have increased. Hence, at completion of the algorithm,
\begin{mathdisplayfull}
v_{i,\mu(i)} - p_{\mu(i)} \geq \max_{j}(v_{ij}-p_j) - \alpha
\end{mathdisplayfull}%
for all matched bidders $i$.
\end{proof}
\fi
\begin{lemma} \label{matching-eq-beta}
Assume all counters have error at most $E$ throughout the run of
${{\bf PMatch}}\xspace(\alpha, \rho, \varepsilon)$. Then the number of bidders assigned to any good
is at most $s$, and each over-demanded good clears except for at most $\beta$
supply, where
\[
\beta = 4 E + 1= O\left(\frac{1}{\alpha\rho \varepsilon}\cdot \polylog \left(
\frac{1}{\alpha}, \frac{1}{\rho}, \frac{1}{\gamma},k,n \right)\right).
\]
\end{lemma}
\begin{proof}
Since the counter for each under-demanded good never exceeds $s-m$, we know that
each under-demanded good is matched to no more than $s-m+E < s$ bidders.
Consider any counter $c$ for an over-demanded good. Let $t$ be a time
step in counter $c$ such that
\begin{mathdisplayfull}
c(nT) - c(t + 1) \leq s - m < c(nT) - c(t).
\end{mathdisplayfull}%
Note that the bidders who bid after time $t$ are the only bidders matched to
this good at time $nT$. Let $\sigma$ be the true bid stream for this good,
so the total number of bidders allocated to this good at time $nT$ is
\begin{align*}
c_\sigma(nT) - c_\sigma(t) & \leq c_\sigma(nT) - c_\sigma(t + 1) + 1 \\
& \leq (c(nT) + E) - (c(t + 1) - E) + 1 \\
& \leq s - m + 2E + 1 = s.
\end{align*}
Similarly, we can lower bound the number of bidders allocated to this good:
\begin{align*}
&c_\sigma(nT) - c_\sigma(t) \\
& = (c_\sigma(nT) - c(nT)) + (c(nT) - c(t)) + (c(t) - c_\sigma(t)) \\
& > s - m - 2E > s - 4E - 1.
\end{align*}
Therefore, every over-demanded good clears except for at most $\beta =
4E + 1$ supply, which gives the dependence
\begin{align*}
\beta &= \frac{16\sqrt{2}}{\alpha \rho \varepsilon} \left(\log\left(\frac{6n}{\alpha\rho} \right)\right)^{5/2}
\log\left(\frac{4k}{\gamma} \right) + 1 \\
&=
O\left(\frac{1}{\alpha\rho \varepsilon}\cdot \polylog \left(
\frac{1}{\alpha}, \frac{1}{\rho}, \frac{1}{\gamma},k,n \right)\right).
\end{align*}
\end{proof}
\begin{lemma} \label{matching-eq-rho}
Assume all counters have error at most $E$ throughout the run of
${{\bf PMatch}}\xspace(\alpha, \rho, \varepsilon)$. Then at termination, all but a $\rho$ fraction of
bidders are satisfied, so long as $s \geq 8 E + 1$ and $n \geq 8E/\rho$.
\end{lemma}
\begin{proof}
First, we claim that the total number of bids made over the course of the
algorithm is bounded by $3n/\alpha$.
We account separately for the under-demanded goods (those with price 0 at the
end of the auction) and the over-demanded goods (those with positive price).
For the under-demanded goods, since their prices remain 0 throughout the
algorithm, their corresponding noisy counters never exceeded $(s-m)$.
Since no bidder is ever unmatched after having been matched to an
under-demanded good, the set of under-demanded goods can receive at most one bid
from each agent; together the under-demanded goods can receive at most $n$
bids.
Next, we account for the over-demanded goods. Note the bidders matched to
these goods are precisely the bidders who bid within $s- m$ ticks of the final
counter reading. Since the counter has error bounded by $E$ at each time step,
this means at least $s - m - 2E$ bidders end up matched to each over-demanded
good. Since no agent can be matched to more than one good there can be at
most $n/(s-m-2E)$ over-demanded goods in total.
Likewise, we can account for the number of price increases per over-demanded
good. Prices never rise above $1$ (because any bidder would prefer to be
unmatched than to be matched to a good with price larger than $1$). Therefore,
since prices are raised in increments of $\alpha$, each over-demanded good can
have its price incremented at most $1/\alpha$ times. Since there can be at
most $(s - m + 2E)$ bids between each price update (again, corresponding to $s
- m$ ticks of the counter), the total number of bids received by all of the
over-demanded goods in total is at most
\[
\frac{n}{s-m-2E}\cdot \frac{1}{\alpha}\cdot (s-m+2E).
\]
Since each bid is either on an under or over-demanded good, we can upper
bound the \emph{total} number of bids $B$ by
\[
B \leq n + \frac{n}{\alpha} \left( \frac{s - m + 2E}{s - m - 2E}\right) =
\frac{n}{\alpha} \left(\alpha + \frac{s- m +2E}{ s-m-2E}\right).
\]
We set the reserved supply to be $m = 2 E + 1$ and by assumption, we have $s
\geq 8E+1$. Since we are only interested in cases where $\alpha < 1$, we
conclude
\begin{equation} \label{match-bid-ub}
B \leq n + \frac{n}{\alpha} \left( \frac{s - m + 2E}{s - m -
2E} \right) \leq n + \frac{2n}{\alpha} \leq \frac{3n}{\alpha}.
\end{equation}
Now, consider the halting condition. There are two cases: either the algorithm
halts early, or it does not. We claim that at termination, at most $\rho n $
bidders are unsatisfied. The algorithm halts early if at any round of
\textbf{CountUnsatisfied}, counter$(0)$ (which counts the number of unsatisfied
bidders) increases by less than $\rho n - 2E$. So if the algorithm halts
early, there must be at most $\rho n - 2 E + 2E = \rho n$ unsatisfied bidders.
Otherwise, suppose the algorithm does not halt early. At the start of each
round there must be at least $\rho n - 4E$ unsatisfied bidders. Not all of
these bidders must bid during the \textbf{Propose} round since price increases
while they are waiting to bid might cause them to no longer demand any item,
but this only happens if bidders prefer to be unmatched at the new prices.
Since prices only increase, these bidders remain satisfied for the rest of the
algorithm. If the algorithm runs for $R$ rounds and there are $B$ true bids,
\begin{mathdisplayfull}
B \geq R (\rho n - 4E) - n.
\end{mathdisplayfull}%
Combined with our upper bound on the number of bids (\Cref{match-bid-ub}) and
our assumption $\rho n \geq 8E$, we can upper bound the number of rounds $R$:
\[
R \leq \left(\frac{3n}{\alpha} + n\right) \cdot \left( \frac{1}{\rho n - 4E}
\right) \leq \left(\frac{4n}{\alpha}\right) \left(\frac{2}{\rho n}\right) =
\frac{8}{\alpha \rho} := T
\]
Thus, running the algorithm for $T$ rounds leads to all but $\rho n$ bidders
satisfied.
\end{proof}
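As a sanity check on the arithmetic in this proof (ours, not part of the argument), the three numeric steps can be verified with exact rationals:

```python
from fractions import Fraction as F

def check_bid_bounds(E, s, n, alpha, rho):
    """Verify: with m = 2E + 1 and the stated assumptions, the ratio of
    bids per price increase is at most 2, total bids are at most
    3n / alpha, and the number of rounds is at most 8 / (alpha * rho).
    alpha and rho should be passed as Fractions."""
    m = 2 * E + 1
    assert s >= 8 * E + 1 and rho * n >= 8 * E and alpha < 1
    ratio = F(s - m + 2 * E, s - m - 2 * E)
    assert ratio <= 2                    # since s - m >= 6E
    B = n + (n / alpha) * ratio          # upper bound on total bids
    assert B <= 3 * n / alpha
    R = (3 * n / alpha + n) / (rho * n - 4 * E)
    bound = F(8) / (alpha * rho)
    assert (4 * n / alpha) * F(2) / (rho * n) == bound
    assert R <= bound
```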
\begin{lemma} \label{matching-acc}
With probability at least $1 - \gamma$, ${{\bf PMatch}}\xspace(\alpha, \rho, \varepsilon)$ computes an
$(\alpha, \beta, \rho)$-matching equilibrium, where
\[
\beta = 4E+1 = O\left(\frac{1}{\alpha\rho \varepsilon}\cdot \polylog \left(
\frac{1}{\alpha}, \frac{1}{\rho}, \frac{1}{\gamma},k,n \right)\right)
\]
so long as $s \geq 8E + 1 \mbox{ and } n \geq 8E/\rho$.
\end{lemma}
\iffull
\begin{proof}
By \Cref{counter-error}, counter$(0)$ is $\left( \lambda_1,
\gamma/2\right)$-useful, and each of the $k$ good counters is
$\left(\lambda_2 , \gamma/2 \right)$-useful, where
\[
\lambda_1 = \frac{2\sqrt{2}}{\varepsilon'}
(\log{nT})^{5/2}\log\left(\frac{4}{\gamma} \right)
\quad \text{and} \quad
\lambda_2 = \frac{2\sqrt{2}}{\varepsilon'} (\log{nT})^{5/2}
\log\left(\frac{4k}{\gamma} \right).
\]
Since we set $E = \lambda_2 > \lambda_1$, all counters are $(E,
\gamma/2)$-useful, and thus with probability at least $1 - \gamma$, all
counters have error at most $E$. The theorem then follows by
\Cref{matching-eq-alpha,matching-eq-beta,matching-eq-rho}.
\end{proof}
\fi
With these lemmas in place, it is straightforward to prove the welfare theorem
(\Cref{matching-welfare}).
\iffull
\begin{proof}[Proof of \Cref{matching-welfare}]
\else
\begin{proof}[of \Cref{matching-welfare}]
\fi
By \Cref{matching-acc}, ${{\bf PMatch}}\xspace(\alpha/3, \alpha/3, \varepsilon)$
calculates a matching $\mu$ that is an $(\alpha/3, \beta,
\alpha/3)$-approximate matching equilibrium with probability at least
$1-\gamma$, where $\beta = 4E' + 1$. Let $p$ be the prices at the end of the
algorithm, and $S$ be the set of satisfied bidders. Let $\mu^*$ be the
optimal matching achieving welfare $\sum_{i=1}^n v_{i,\mu^*(i)} =
\mathrm{OPT}$. We know that $|S|\geq (1-\alpha/3)n$ and
\[
\sum_{i\in S} (v_{i \mu(i)} - p_{\mu(i)}) \geq \sum_{i\in
S}(v_{i\mu^*(i)} - p_{\mu^*(i)}) - \alpha|S|/3.
\]
Let $N^*_j$ and $N_j$ be the number of goods of type $j$ matched in matchings $\mu^*$
and $\mu$ respectively, and let $G$ be the set of over-demanded goods at prices $p$.
Since each over-demanded good clears except for at most $\beta$ supply, and
since each of the $n$ agents can be matched to at most 1 good, we know that
$|G|\leq n/(s-\beta)$. Since the true supply in $\OPT$ is at most $s$, we also
know $N^*_j - N_j \leq \beta$ for each over-demanded good $j$. Finally, by
definition, under-demanded goods $j$ have price $p_j = 0$. So,
\begin{align*}
\sum_{i \in S} v_{i\mu^*(i)} - \sum_{i \in S} v_{i\mu(i)}
& \leq \sum_{i\in S} p_{\mu^*(i)} - \sum_{i\in S} p_{\mu(i)} + \alpha|S|/3 \\
& = \sum_{j \in G} p_j (N^*_j - N_j) + \alpha |S|/3 \\
& \leq \sum_{j \in G} \beta + \alpha |S|/3 \leq \frac{n \beta}{s-\beta} +
\alpha |S|/3.
\end{align*}
If $s \geq 4\beta/\alpha$, the first term is at most $\alpha n/3$. Finally,
since all but $\alpha n/3$ of the bidders belong to $S$, and their valuations
are upper bounded by 1, we can
conclude:
\[
\sum_i v_{i\mu^*(i)} - \sum_{i} v_{i\mu(i)} \leq \alpha n/3 + \alpha|S|/3 +
\alpha n/3 \leq \alpha n .
\]
Unpacking $\beta$ from \Cref{matching-acc}, we get the stated bound on supply.
\end{proof}
\section{Preliminaries}
\subsection{The Allocation Problem}
We consider allocation problems defined by a set of goods $G$, and
a set of $n$ agents $[n]$. Each agent $i \in [n]$ has a \emph{valuation
function} $v_i:2^G\rightarrow [0,1]$ mapping bundles of goods to values.
A \emph{feasible allocation} is a collection of sets $S_1,\ldots,S_n \subseteq G$
such that $S_i \cap S_j = \emptyset$ for each $i \neq j$: i.e., a partition of
goods among the agents. The
\emph{social welfare} of an allocation $S_1,\ldots,S_n$ is defined to be
$\sum_{i=1}^n v_i(S_i)$, the sum of the agent's valuations for the allocation;
we are interested in finding allocations which maximize this quantity. Given an
instance of an allocation problem, we write $\mathrm{OPT} =
\max_{S_1,\ldots,S_n}\sum_{i=1}^n v_i(S_i)$ to denote the social welfare of the
optimal feasible allocation.
A particularly simple valuation function is a \emph{unit demand valuation},
where bidders demand at most one item. Such valuation functions take the form
$v_i(S) = \max_{j \in S} v_i(\{j\})$, and can be specified by numbers $v_{i,j} =
v_i (\{j\})\in [0,1]$, which represent the value that bidder $i$ places on good
$j$. When bidders have unit demand valuations, the allocation problem
corresponds to computing a maximum weight matching in a bipartite graph.
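To make the reduction concrete, the sketch below solves a tiny unit demand instance by brute force (illustrative only; the valuation numbers are hypothetical, and realistic instances would use a polynomial-time algorithm such as the Hungarian method):

```python
from itertools import permutations

def max_weight_matching(v):
    """Brute-force welfare maximization for unit demand valuations.
    v[i][j] is the value bidder i places on good j (one copy of each
    good, square instance for simplicity). Returns (welfare, match)
    where match[i] is the good assigned to bidder i."""
    n = len(v)
    best_welfare, best_match = float("-inf"), None
    for perm in permutations(range(n)):
        welfare = sum(v[i][perm[i]] for i in range(n))
        if welfare > best_welfare:
            best_welfare, best_match = welfare, perm
    return best_welfare, best_match

# Bidder 0 prefers good 1, bidder 1 prefers good 0.
v = [[0.2, 0.9],
     [0.8, 0.3]]
welfare, match = max_weight_matching(v)  # welfare 1.7, match (1, 0)
```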
Our results will also hold for {\em gross substitute valuations}, which include
unit demand valuations as a special case. Informally, for gross substitute
valuations, any set of goods $S'$ that are in a most-demanded bundle at some set
of prices $p$ remain in a most-demanded bundle if the prices of \emph{other}
goods are raised, keeping the prices of goods in $S'$ fixed. Gross substitute
valuations are a standard class of valuation functions: they are a strict
subclass of submodular functions, and they are precisely the valuation functions
with Walrasian equilibria in markets with indivisible goods \citep{GS99}.
Before giving the formal definition, we first introduce some notation. Given a
vector of prices $\{p_g\}_{g \in G}$, the (quasi-linear) \emph{utility} that
player $i$ has for a bundle of goods $S_i$ is defined to be $u_i(S_i, p) =
v_i(S_i) - \sum_{j \in S_i} p_j$.\footnote{%
This is a natural definition of utility if agents must pay for the bundles
they buy at the given prices. In this paper we are concerned with the purely
algorithmic allocation problem, so our algorithm will not actually charge
prices. However, prices will be a convenient abstraction throughout our
work.}
Given a vector of prices $p$, for each agent $i$, we can define his set of
\emph{most demanded bundles}: $\omega(p) = \arg\max_{S \subseteq G} u_i(S, p)$.
Given two price vectors $p, p'$, we write $p \preceq p'$ if $p_g \leq
p'_g$ for all $g$.
\begin{definition} \label{def-gs}
A valuation function $v_i:2^G\rightarrow [0,1]$ satisfies the {\em gross
substitutes condition} if for every pair of price vectors $p \preceq p'$, and
for every set of goods $S \in \omega(p)$, if $S' \subseteq S$ satisfies $p'_g
= p_g$ for every $g \in S'$, then there is a set $S^* \in \omega(p')$ with $S'
\subseteq S^*$.
\end{definition}
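As a small illustration of the condition (for unit demand valuations only, with hypothetical numbers, not a general test of the definition): a unit demand bidder's most-demanded good stays demanded when the prices of \emph{other} goods are raised.

```python
def demanded_goods(values, prices):
    """Most-demanded bundles of a unit demand bidder, as the set of
    single goods maximizing quasi-linear utility values[j] - prices[j]
    (the empty set if every good has negative utility)."""
    utils = [v - p for v, p in zip(values, prices)]
    best = max(utils)
    if best < 0:
        return set()  # only the empty bundle is demanded
    return {j for j, u in enumerate(utils) if u == best}

values = [0.9, 0.7, 0.4]   # hypothetical unit demand valuation
p_low  = [0.3, 0.2, 0.1]   # good 0 is the unique most-demanded good
p_high = [0.3, 0.5, 0.8]   # raise only the prices of goods 1 and 2
# demanded_goods(values, p_low) == {0}; good 0 remains demanded at p_high
```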
Finally, we will always consider markets with multiple copies of each type of good. Two
goods $g_1,g_2 \in G$ are \emph{identical} if for every bidder $i$ and for every
bundle $S \subseteq G$, $v_i(S \cup \{g_1\}) = v_i(S \cup \{g_2\})$: i.e., the two
goods are indistinguishable according to every valuation function. Formally, we
say that a set of goods $G$ consists of $k$ {\em types} of goods with $s$ {\em
supply} if there are $k$ representative goods $g_1,\ldots,g_k \in G$ such that
every good $g' \in G$ is identical to one of $g_1,\ldots,g_k$, and for each
representative good $g_i$, there are $s$ goods identical to $g_i$ in $G$. For
simplicity of presentation we assume throughout the paper that the supply of
each good is the same, but this is not necessary; all of our results continue
to hold when the supply $s$ denotes the \emph{minimum} supply of any type of
good.
\subsection{Differential Privacy Preliminaries}
Although it is impossible to solve the allocation problem under standard
differential privacy (see \Cref{sec:lowerbounds}), standard differential privacy
plays an essential role in our analysis, so we begin with its definition.
Suppose agents have valuation functions $v_i$ from a class of functions $C$. A
database $D \in C^n$ is a vector of valuation functions, one for each of the $n$
bidders. Two databases $D, D'$ are $i$-\emph{neighbors} if they differ in only
their $i$'th index: that is, if $D_j = D'_j$ for all $j \neq i$. If two
databases $D, D'$ are $i$-neighbors for some $i$, we say that they are
\emph{neighboring databases}. We will be interested in randomized algorithms
that take a database as input, and output an element from some range $\cR$. Our
final mechanisms will output sets of $n$ bundles (so $\cR = (2^G)^n$), but
intermediate components of our algorithms will have different ranges.
\begin{definition}[\citet{DMNS06}]
An algorithm $\cM:C^n\rightarrow \cR$ is {\em $(\varepsilon,\delta)$-differentially
private} if for every pair of neighboring databases $D, D' \in C^n$ and for
every subset of outputs $S \subseteq \cR$,
\[
\Pr[\cM(D) \in S] \leq \exp(\varepsilon)\Pr[\cM(D') \in S] + \delta.
\]
If $\delta = 0$, we say that $\cM$ is {\em $\varepsilon$-differentially private}.
\end{definition}
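For intuition, the standard primitive behind such guarantees (and behind the private counters used later) is Laplace noise: releasing a numeric query of sensitivity $\Delta$ with $\mathrm{Lap}(\Delta/\varepsilon)$ noise is $\varepsilon$-differentially private. A minimal sketch:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, seed=None):
    """epsilon-DP release of a numeric query: add Laplace noise with
    scale sensitivity/epsilon (larger epsilon means less noise)."""
    rng = np.random.default_rng(seed)
    return true_answer + rng.laplace(scale=sensitivity / epsilon)

# A counting query has sensitivity 1: changing one person's data
# changes the count by at most 1.
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5, seed=0)
```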
When the range of a mechanism is also a vector with $n$ components (e.g., $\cR =
(2^G)^n$), we can define \emph{joint differential privacy}: this requires that
simultaneously for all $i$, the \emph{joint} distribution on outputs given to
players $j \neq i$ is differentially private in the input of agent $i$. Given a
vector $x = (x_1,\ldots,x_n)$, we write $x_{-i} =
(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$ to denote the vector of
length $n-1$ which contains all coordinates of $x$ except the $i$'th coordinate.
\begin{definition}[\citet{kearns-largegame}]
An algorithm $\cM:C^n \rightarrow (2^G)^n$ is {\em $(\varepsilon,\delta)$-joint
differentially private} if for every $i$, for every pair of $i$-neighbors $D, D'
\in C^n$, and for every subset of outputs $S \subseteq (2^G)^{n-1}$,
\[
\Pr[\cM(D)_{-i} \in S] \leq \exp(\varepsilon)\Pr[\cM(D')_{-i} \in S] + \delta.
\]
If $\delta = 0$, we say that $\cM$ is {\em $\varepsilon$-joint differentially
private}.
\end{definition}
Note that this is still an extremely strong definition that protects $i$ from
arbitrary coalitions of adversaries---it weakens the constraint of differential
privacy only in that the output given specifically to agent $i$ is allowed to be
sensitive in the input of agent $i$.
\subsection{Differentially Private Counters}
The central tool in our matching algorithm is the private streaming counter
proposed by \citet{chan-counter} and \citet{DNPR10}. Given a bit stream $\sigma =
(\sigma_1, \ldots , \sigma_T)\in \{0,1\}^T$, a streaming counter $\cM(\sigma)$
releases an approximation to $c_\sigma(t) = \sum_{i=1}^t\sigma_i$ at every time
step $t$. We can define what it means for a streaming counter to be accurate.
\begin{definition}
A streaming counter $\cM$ is {\em $(\alpha, \beta)$-useful} if with
probability at least $1 - \beta$, for each time $t \in [T]$,
\[
\left| \cM(\sigma)(t) - c_\sigma(t) \right| \leq \alpha.
\]
\end{definition}
For the rest of this paper, let ${{\bf Counter}}\xspace(\varepsilon, T)$ denote the Binary
Mechanism of \citet{chan-counter}, instantiated with parameters $\varepsilon$ and
$T$. The mechanism produces a monotonically increasing count, and satisfies the
following accuracy guarantee. \iffull (Further details may be found in
\Cref{counter-details}.)
\else
(Further details may be found in the full version.)
\fi
\begin{restatable}[\citet{chan-counter}]{theorem}{counteraccuracy}
\label{counter-error}
For $\beta > 0$, ${{\bf Counter}}\xspace(\varepsilon, T)$ is
$\varepsilon$-differentially private with respect to a single bit change in the
stream, and $(\alpha, \beta)$-useful for
\[
\alpha = \frac{2\sqrt{2}}{\varepsilon} \ln \left( \frac{2}{\beta}\right)
\left(\sqrt{\log(T)}\right)^5.
\]
\end{restatable}
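A simplified sketch of how such a counter works (this follows the dyadic-interval idea of the Binary Mechanism but is not the exact instantiation of \citet{chan-counter}; in particular, the noise calibration here is only heuristic): each dyadic interval's partial sum is released once with Laplace noise, the count at time $t$ sums the $O(\log t)$ noisy partial sums covering $[1, t]$, and a running maximum enforces monotonicity.

```python
import numpy as np

def binary_mechanism(stream, epsilon, seed=0):
    """Simplified private streaming counter over a 0/1 stream.
    Each dyadic interval's partial sum is noised once; the count at
    time t sums the noisy partial sums of the dyadic decomposition
    of [1, t], so each stream bit affects at most `levels` releases."""
    rng = np.random.default_rng(seed)
    T = len(stream)
    levels = max(1, int(np.ceil(np.log2(T + 1))))
    scale = levels / epsilon               # split epsilon across levels
    prefix = np.concatenate([[0], np.cumsum(stream)])
    noisy = {}                             # (low, high) -> noisy partial sum
    counts = []
    for t in range(1, T + 1):
        total, i = 0.0, t
        while i > 0:                       # dyadic decomposition of [1, t]
            low = i & (i - 1)              # drop lowest set bit: (low, i]
            if (low, i) not in noisy:
                noisy[(low, i)] = prefix[i] - prefix[low] + rng.laplace(scale=scale)
            total += noisy[(low, i)]
            i = low
        counts.append(max(total, counts[-1] if counts else 0.0))  # monotone
    return counts
```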
\section{Conclusion and Open Problems}
In this paper we gave algorithms to accurately solve the private
allocation problem when bidders have gross substitute valuations. Our results
are qualitatively tight: it is not possible to strengthen our approach
to standard differential privacy (from joint differential privacy), nor is it
possible to solve even max-matching problems to non-trivial accuracy under joint
differential privacy with constant supply. Moreover, our approach cannot be
pushed any further: our algorithm fundamentally relies on computing Walrasian
equilibrium prices for the underlying market, and such prices are not guaranteed
to exist for valuation functions beyond the gross substitutes class. This does
not mean that the allocation problem cannot be solved for more general valuation
functions, only that fundamentally new ideas would be needed.
Along with \citet{kearns-largegame} and other works in the joint privacy model,
our work adds compelling evidence that substantially more is possible under the
relaxation of \emph{joint} differential privacy, as compared to the standard
notion of differential privacy. For both the allocation problem studied here
and the equilibrium computation problem studied in \citet{kearns-largegame},
non-trivial results are impossible under differential privacy, while strong
results can be derived under joint-differential privacy. Characterizing the
power of joint differential privacy, as compared to the standard differential
privacy, continues to be a fascinating direction for future work.
More specifically, in this paper we achieved joint differential privacy via the
{\em billboard lemma}: we showed that the allocation given to each player can be
derived as a deterministic function only of 1) a differentially private message
revealed to all players, and 2) their own private data. However, this is not
necessarily the only way to achieve joint differential privacy. How much further
does the power of joint differential privacy extend beyond the billboard model?
\subsection*{Acknowledgments}
The authors would like to thank Cynthia Dwork, Sudipto Guha, Moritz Hardt,
Sanjeev Khanna, Scott Kominers, Mallesh Pai, David Parkes, Adam Smith, and Kunal
Talwar for helpful discussions. In particular, we would like to thank Scott
Kominers for suggesting the connection to Kelso and Crawford, and Adam Smith
for discussions on the ``billboard model'' of privacy. Finally, we thank the
anonymous reviewers.
\iffull
\bibliographystyle{plainnat}
\else
\bibliographystyle{acmtrans}
\fi
\section{Introduction}
Recent advancements in the field of digital media have resulted in a surge of interest in multimodal learning. Multimodal learning aims to learn well-unified representations from different modalities such as language, vision, or audio and projects them into a common low-dimensional space. For example, visual question answering needs an understanding of both vision and language \citep{antol2015vqa,goyal2017making,khattab2020colbert}; video highlight detection exploits video and audio features to identify the exciting moments \citep{videohightlight2021, videohightlight22021}; emotion recognition requires a fusion of spoken words, facial expressions, and voice \citep{busso2008iemocap, emotion-multimodal2018}.
In information retrieval, the conventional retrieval tasks focus on unimodal learning, including text-to-text \citep{nguyen2016ms,kwiatkowski2019natural} and image-to-image \citep{philbin2007object,nister2006scalable,jegou2008hamming} retrieval. Both the texts and images contain comprehensive information, requiring the model to compute semantic representations of a single modality and match the unimodal document-query pair. Prior works \citep{khattab2020colbert,boytsov-nyberg-2020-flexible,zhang2021poolingformer,singh2021end,zheng2017sift,gordo2016deep,babenko2014neural} have improved retrieval performance and assisted users in searching for the requested documents. While beneficial, these tasks suffer from a key limitation: the text representations and the image features exist in their own separate spaces. In real-world applications, users may use texts or images as queries to retrieve relevant data in the other modality. Therefore, cross-modal retrieval has recently attracted considerable attention from researchers.
\input{Image/fig-framework}
Cross-modal retrieval aims to retrieve a relevant unimodal document given a query from another modality. Image-text retrieval is a fundamental challenge in cross-modal retrieval, and most existing methods \citep{dou2021empirical,li2021align,tan2019lxmert,chen2020uniter} train their models on the COCO \citep{lin2014microsoft} and Flickr30K \citep{plummer2015flickr30k} datasets. These datasets include images and their captions. However, unlike text-to-text retrieval, the texts in these datasets only contain image-related information. Nowadays, many online materials and documents contain both texts and images, such as Wikipedia, news, blogs, social media posts, and commercial websites. Also, considering that most people use text-based queries to search multimedia documents on a search engine, a search query can consist of keywords, captions, or both. Retrieval frameworks should therefore be able to handle text-based queries carrying different modality information.
We introduce Mr. Right, as shown in Figure~\ref{fig:framework}, a new comprehensive and challenging retrieval dataset, which provides text-image documents and text-based queries with different modality information: text-related, image-related, or mixed. For documents, we collect paragraphs and photos from the Wikipedia-based Image Text Dataset \citep{srinivasan2021wit}. For queries, to the best of our knowledge, there is no existing dataset containing our proposed three types of queries. Therefore, we hire Amazon Mechanical Turk (AMT) workers to construct a total of 3k annotated queries for each type. Moreover, we also provide 350k auto-generated queries for model pre-training before learning from annotated queries. In our training paradigm, we introduce document-query contrastive learning (DQC) and document-query matching (DQM) to fuse text and image features into multimodal representations.
Finally, we propose a full-ranking benchmark for Mr. Right. Similar to TR datasets \citep{lin2014microsoft}, our full-ranking evaluation contains three types of annotated queries, for which models must retrieve the most relevant document from the entire corpus. We create the benchmark based on our multimodal framework, comparing against prior text-to-text retrieval (TR) and text-to-image retrieval (IR) baselines as well as human performance. Our results show that multimodal retrieval (MR) can perform better with the help of extended information from different modalities. Interestingly, incorporating these modalities into unified representations requires a careful balance. However, it remains challenging to achieve retrieval performance comparable to the human benchmark.
With Mr. Right, we take a significant step toward establishing a novel benchmark for evaluating the capabilities of multimodal retrieval systems. To the best of our knowledge, Mr. Right is the first multimodal retrieval dataset that explores multimodal documents and text-based queries with different modality information, and it is open-sourced to welcome methods of all kinds.
\section{Evaluation Details}
\label{appendix-evaluation}
\subsection{Benchmark tasks}
\label{appendix-benchmark}
In this section, we provide the benchmarks of Mr. Right for humans, baseline retrieval models, and our multimodal framework. There are three types of tasks, detailed as follows:
\paragraph{Task1: Text-related query} This task aims to follow previous text retrieval datasets \citep{nguyen2016ms,kwiatkowski2019natural,yang2018hotpotqa,soboroff2018trec,thorne2018fever}. Users mostly search for documents relying on the keywords from document paragraphs or text-based information. In our dataset, text-related queries contain name entities (person, date, organization, location) or factual knowledge (relations, terminologies).
\paragraph{Task2: Image-related query} This task aims to follow previous text-to-image retrieval datasets \citep{sharma2018conceptual,lin2014microsoft,plummer2015flickr30k}. With only a vague impression of an object's appearance, users search for documents based on partial context from the document images. In our dataset, image-related queries are similar to image captions that describe details of objects, such as color, shape, amount, position, or action.
\paragraph{Task3: Mixed query} We propose this task to simulate users searching for documents with both text-related and image-related information. To find the correct document precisely, Mr. Right provides document texts and images so that both modalities are considered for retrieval. Our mixed queries give a brief description covering the document paragraph and photo, and can be viewed as a combination of the corresponding text-related and image-related queries.
\subsection{Baseline models}
\label{appendix-baselines}
\paragraph{Text retrieval models}
To evaluate text retrieval performance with state-of-the-art (SOTA) neural frameworks, we test three approaches in the followings. 1) RoBERTa-base \citep{liu2019roberta}: a pre-trained language model which can encode both documents and queries into the contextualized sentence representations to compute the similarity in the same vector space. 2) DiffCSE \citep{chuang2022diffcse}: current unsupervised SOTA among sentence representation learning methods. 3) all-mpnet-base-v2 (SBERT): current supervised SOTA for sentence embedding tasks and semantic search tasks on SentenceTransformers \citep{reimers-2019-sentence-bert} leaderboard. We evaluate both zero-shot and fine-tuned performance for these models. We train with an in-batch negative loss function and use an AdamW optimizer with learning rate $2 \times 10^{-5}$ and batch size 32 for 30 epochs.
\paragraph{Image retrieval models}
Current image-text retrieval models are highly related to pre-trained vision-and-language models. Trained with natural language supervision, these models demonstrate the ability of cross-modal image retrieval. We evaluate CLIP \citep{radford2021learning} and ALBEF \citep{li2021align} zero-shot as our baselines and also fine-tune them on our dataset. We do not use other VLP models (e.g., METER \citep{dou2021empirical}, ViLT \citep{kim2021vilt}) because most of them require a high computational overhead for evaluation due to the need to calculate matching scores across all image-text pairs. We fine-tune CLIP and ALBEF using an Adam optimizer with learning rate $1 \times 10^{-6}$ and batch size 128 for 40 epochs.
\paragraph{Multimodal retrieval models}
In the absence of existing baselines, we build an ensemble model by integrating the document-query similarity scores from our best TR and IR models. We fuse the scores with a weighted sum whose weight is tuned on the validation set for each task. The final ensemble relevance scores are then used to rank the search results.
\input{Image/fig-attention}
\subsection{Grad-CAM visualizations}
To better understand the multimodal representations of documents, we compute Grad-CAM visualizations on the cross-attention maps of the query-document matching classifier in Figure~\ref{fig:attention}. Given different queries, our model attends to different parts of the image and texts, which correlates well with where humans would look to match the pairs. In the top example, the word ``smiling'' focuses strongly on the face in the image, and the word ``winning'' is related to ``tournament'' and ``Lawyers World Cup'' in the texts. In the bottom example, the words ``army'' and ``tank'' in the queries attend to the corresponding parts of the image, while the words ``Australian'' and ``War'' highlight ``Australian'' and ``fighting'' in the texts, respectively.
\label{appendix-gradcam}
\input{Image/fig-human-evaluation}
\input{Image/fig-fail}
\subsection{Failed examples}
We create the human evaluation by randomly sampling 50 examples from each type of task. As illustrated in Figure~\ref{fig:humaneval}, human annotators have to select the most relevant document from four candidates obtained from our MR model. Each question is answered by three workers. If more than half of the workers give the same answer and match the correct document, we consider that humans answer this question correctly; otherwise, humans fail it.
With the human evaluation, we can distinguish the performance difference between humans and our models in Figure~\ref{fig:fail}. For text-related queries, we find that our model retrieves a related document rather than the exact one. Humans, however, can easily choose the right one by matching text keywords, such as ``9:1'' in the query and ``9-to-1'' in the document of the first example. For image-related queries, our model prefers to retrieve specific color words and ignores the rest, such as the ``blue spire'' and ``green lawn'' of the second example. Humans, on the other hand, can perceive the details of images. For mixed queries, our model may pay attention to the wrong words, such as ``walking'' and ``centres'' in the last two examples. In contrast, humans can recognize the correct document by simultaneously matching text keywords and image context.
\label{appendix-mistake}
\subsection{Compared to auto-generated and human-annotated queries}
\input{Table/Table-ablation-queries}
Since our human-annotated queries have more diverse properties than the auto-generated queries, as shown in Table~\ref{tab:data-category}, we compare their performance on mixed queries with our pre-trained MR model (METER). The results are shown in Table~\ref{tab:query}: auto-generated queries outperform human-annotated queries because the annotated queries are more complex and harder to learn. Therefore, we fine-tune our models on 1k annotated queries to adapt to the human-annotated domain.
\label{appendix-queries}
\section{Proposed Model Details}
\label{appendix-model}
With input document texts $D_T$ and document image $D_I$, we derive the document features $F_d$ from the multimodal encoder $E_d$. Likewise, with input query texts $Q_T$, we obtain the query features $F_q$ from the query encoder $E_q$. Then, we average the variable-length features $F_d$ and $F_q$ to obtain single fixed-size vectors as the document and query representations $R_d$ and $R_q$.
\begin{equation}
F_d = E_d(D_T, D_I) \quad \textrm{and} \quad F_q = E_q(Q_T)
\end{equation}
\begin{equation}
R_d = Average(F_d) \quad \textrm{and} \quad R_q = Average(F_q)
\end{equation}
With the document and query representations $R_d$ and $R_q$, we aim to close the distance between the two vectors by contrastive learning. Therefore, we learn two projection functions $f_d$ and $f_q$ with fully-connected layers and L2-normalization to map their representations into the same space. We calculate the similarity by dot product for all document-query vector pairs in a training batch, treating matched pairs as positive and all other pairs as negative. The contrastive loss $L_{dqc}$ we minimize is the following:
\begin{equation}
Sim(R_d, R_q) = f_d(R_d)^\top f_q(R_q)
\end{equation}
\begin{equation}
P^{i}_{d2q} = \frac{\exp(Sim(R^i_d, R^i_q) / \tau)}{\sum^{N}_{j=1}\exp(Sim(R^i_d, R^j_q) /\tau)},\ \ P^{i}_{q2d} = \frac{\exp(Sim(R^i_q, R^i_d) / \tau)}{\sum^{N}_{j=1}\exp(Sim(R^i_q, R^j_d) /\tau)}
\end{equation}
\begin{equation}
L_{dqc} = - \frac{1}{B} \sum^{B}_{i}\frac{Y^{i}_{d2q}\log(P^{i}_{d2q}) + Y^{i}_{q2d}\log(P^{i}_{q2d})}{2}
\end{equation}
Here, we calculate the normalized softmax loss for both document-to-query and query-to-document classification. The loss is set up with batch size $B$, negative sample size $N(=B)$, and a learnable temperature parameter $\tau$ to scale the logits. For negative document-to-query pairs, $R^i_d$ and $R^j_q$ are the representations of the document in the $i$-th pair and the query in the $j$-th pair, respectively. To organize in-batch negatives more effectively, we keep two queues \cite{li2021align} to store the most recent $K$ representations, which enlarges the number of negative samples. The modification changes the negative sample size $N$ from the batch size $B$ to the queue length $K$.
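The symmetric objective above can be sketched in a few lines of NumPy (illustrative only: the actual model uses learned encoders, projection heads, and queues, and the matching labels $Y_{d2q}$, $Y_{q2d}$ are the in-batch identity):

```python
import numpy as np

def dqc_loss(R_d, R_q, tau=0.07):
    """Symmetric document-query contrastive loss for a batch of B
    document and query vectors. After L2-normalization, matched pairs
    sit on the diagonal of the B x B similarity matrix; every other
    in-batch pair acts as a negative."""
    R_d = R_d / np.linalg.norm(R_d, axis=1, keepdims=True)
    R_q = R_q / np.linalg.norm(R_q, axis=1, keepdims=True)
    sim = R_d @ R_q.T / tau                              # logits
    log_p_d2q = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    log_p_q2d = sim.T - np.log(np.exp(sim.T).sum(axis=1, keepdims=True))
    diag = np.arange(len(sim))
    return -(log_p_d2q[diag, diag] + log_p_q2d[diag, diag]).mean() / 2
```

Perfectly aligned pairs (identical document and query vectors) give a much lower loss than mismatched ones, which is the signal that pulls the two representations together.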
In addition to the contrastive loss, we employ a document-query matching loss to learn a fine-grained similarity between paired documents and queries. The matching loss is a binary cross-entropy loss that predicts whether a pair of document and query is matched or mismatched. We build a 6-layer transformer-based classifier $C$ with input document features $F_d$ and query features $F_q$. Specifically, a special token (\emph{e.g.}, $[CLS]$) is inserted at the beginning of the input sequence, and it learns a global cross-modal representation in the transformers. Then, a linear classifier is added on top of the $[CLS]$ token to predict a binary label. The matching loss is the following:
\begin{equation}
P^i_{dqm}(j=y^{i}_{dqm}) = \frac{\exp(C_j(F_d, F_q))}{\exp(C_0(F_d, F_q))+\exp(C_1(F_d, F_q))}
\end{equation}
\begin{equation}
L_{dqm} = - \frac{1}{B} \sum^B_iY^{i}_{dqm}\log(P^i_{dqm})
\end{equation}
Our full training objective is:
\begin{equation}
L = L_{dqc}+L_{dqm}
\end{equation}
\section{Dataset License}
\label{appendix-license}
Our dataset is under the Creative Commons Attribution Share Alike 4.0 (CC BY-SA 4.0) license.
\section{Maintenance}
\label{appendix-maintenance}
We believe that Mr. Right will assist researchers in building robust multimodal retrieval models and improve the current retrieval systems. We are willing to maintain Mr. Right. If researchers have any problems, they can create an issue from our repository. We also welcome any methods to perform on our benchmark. We bear all responsibility for violations of rights related to Mr. Right.
\section{Related Work}
\input{Table/Table-related-dataset}
In this section, we describe previous retrieval datasets and explain how the existing methods employ transformer-based neural networks \citep{vaswani2017attention} to learn the representations of different modalities. Table~\ref{tab:dataset} shows an overview of the retrieval datasets.
\paragraph{Retrieval dataset} Most previous retrieval datasets only consider single modality (texts or images) documents without a unified representation among multiple domains. We categorize these datasets into unimodal and cross-modal learning. Text retrieval (TR) and image-to-image retrieval are the fundamental challenges in unimodal learning. Previous TR datasets \citep{nguyen2016ms,kwiatkowski2019natural,yang2018hotpotqa,soboroff2018trec,thorne2018fever} comprise a large corpus of text-based documents and related queries. They collect documents from different sources, such as Wikipedia \citep{yang2018hotpotqa,thorne2018fever}, news \citep{soboroff2018trec}, and online articles \citep{nguyen2016ms}. These sources involve diverse and generalized domain knowledge, reflecting real-world situations when users search from an extensive database. To ensure the quality of queries, some works collect the queries from searching logs \citep{nguyen2016ms,kwiatkowski2019natural} or hire crowd workers to generate annotations \citep{yang2018hotpotqa,thorne2018fever}. Similarly, the existing image-to-image datasets \citep{oh2016deep,wah2011caltech,radenovic2018revisiting,wang2011contextual} include several categories of images in the same domain, such as products, birds, and landmarks. Major works \citep{tan2021instance,ramzi2021robust,oh2016deep} randomly sample images from each category as queries, while the remaining images are the documents. As shown in Table~\ref{tab:dataset}, these unimodal datasets have more documents than queries, showing the challenge of ranking large numbers of documents. For cross-modal learning, the existing image-text retrieval (IR) datasets \citep{sharma2018conceptual,lin2014microsoft,plummer2015flickr30k} contain images and their captions. Major works harvest their images and captions from the web. They develop a pipeline to extract, filter, and transform their captions. 
In this task, the number of images equals the number of captions, meaning documents and queries come in pairs. Unlike the unimodal tasks with a large set of documents, cross-modal evaluation is performed on a small set of document-query pairs. Moving beyond single-modality documents, recent work \citep{m5product} has proposed an E-commerce multimodal product retrieval dataset that contains data in more than two modalities.
\paragraph{Retrieval model}
Due to the superior performance of contextualized representations in transformer-based models \citep{vaswani2017attention}, self-attention-based architectures have become the model of choice in natural language processing (NLP) and computer vision (CV). In unimodal retrieval, the existing methods \citep{santhanam2021colbertv2,karpukhin2020dense,xiong2020approximate,qu2020rocketqa} of text-to-text retrieval employ transformers to encode queries and documents into vector representations and compute their similarity. Also, the Vision Transformers \citep{dosovitskiy2020image} reduce the time-consuming process of extracting region features, and the later works \citep{el2021training,li2022hashformer,chen2021transhash} attain excellent results in image-to-image retrieval. In cross-modal retrieval, Vision-and-Language Pre-training (VLP) models have improved performance on IR tasks. The recent CLIP
\citep{radford2021learning} and ALIGN \citep{jia2021scaling} utilize contrastive learning to align the unimodal representations of image-text pairs. Other VLP methods (\emph{e.g.}\ METER \citep{dou2021empirical}, ALBEF \citep{li2021align}, LXMERT \citep{tan2019lxmert}, UNITER \citep{chen2020uniter}) perform multimodal fusion to produce joint representations of text-image pairs, bridging the semantic gap between visual and textual features.
\section{The Mr. Right Dataset}
\label{sec-dataset}
\input{Image/fig-dataset}
Mr. Right aims to construct a new dataset for multimodal retrieval tasks. The dataset focuses on two components: (1) Multimodal documents consist of different modality information, including texts and images; (2) Text-based queries involve text content, image captions, or both. Mr. Right collects documents and annotated/generated queries by extracting, labeling, and filtering.
\subsection{Data Collection}
\paragraph{Wikipedia-based document}
To generate multimodal documents with diverse knowledge domains, we gather paragraphs and photos from the Wikipedia-based Image Text Dataset \citep{srinivasan2021wit}, which consists of 37.6 million entity-rich image-text pairs with 11.5 million unique images. The original dataset includes 108 languages, and we only keep the 1.5 million English entries for simplification. We process this dataset in three steps: (1) image filtering eliminates images with invalid download links, corrupted content, and non-JPEG/PNG formats; (2) text filtering removes repeated pages and deletes contents that never mention the title, ensuring that the document is related to the subject; (3) text reduction extracts the first paragraph as the document to avoid high memory and computational requirements. A whole Wikipedia article can be very long, and we find that the first paragraph usually contains a brief introduction. After filtering, 806,357 multimodal documents remain.
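The three filtering steps can be sketched as follows (a toy illustration; the field names and record layout are hypothetical, not the actual WIT schema):

```python
def filter_documents(records):
    """Toy sketch of the document-cleaning pipeline: (1) keep only
    JPEG/PNG images, (2) drop repeated pages and pages whose text never
    mentions the title, (3) keep only the first paragraph."""
    seen_pages, docs = set(), []
    for r in records:
        if r["image_format"] not in ("JPEG", "PNG"):
            continue                                       # (1) image filtering
        if r["page_id"] in seen_pages:
            continue                                       # (2) repeated page
        seen_pages.add(r["page_id"])
        if r["title"].lower() not in r["text"].lower():
            continue                                       # (2) title not mentioned
        docs.append({"title": r["title"], "image": r["image"],
                     "text": r["text"].split("\n\n")[0]})  # (3) text reduction
    return docs

records = [
    {"page_id": 1, "title": "Mount Fuji", "image_format": "JPEG",
     "image": "fuji.jpg", "text": "Mount Fuji is a volcano.\n\nMore details."},
    {"page_id": 1, "title": "Mount Fuji", "image_format": "JPEG",
     "image": "fuji.jpg", "text": "Mount Fuji is a volcano.\n\nMore details."},
    {"page_id": 2, "title": "Honshu", "image_format": "GIF",
     "image": "honshu.gif", "text": "Honshu is an island."},
    {"page_id": 3, "title": "Kyoto", "image_format": "PNG",
     "image": "kyoto.png", "text": "A city in Japan."},
]
docs = filter_documents(records)  # only the first record survives
```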
\paragraph{Human-annotated query}
A query may relate to the document text, the document image, or both. To generate three types of queries based on multimodal documents, we hire qualified crowd-workers from Amazon's Mechanical Turk (see Appendix~\ref{data-collection}). The annotators are shown a text, an image, and then both successively, and come up with queries based on their first impression. To ensure the consistency and quality of annotations, we give the following guidelines: (1) Word count limitation. Each query should be between 10 and 100 words so that it carries enough information. (2) No title. Annotators should avoid including the title because it is the result that users want to retrieve. (3) No copied phrases from the passage. A real-world query may involve ambiguous meanings or terms, so it is better to paraphrase the sentences. (4) Adjectives and nouns required in the image query. Instead of only naming the objects in the image, we ask crowd-workers to describe details such as colors, shapes, and actions. Annotated query examples can be found in Figure~\ref{fig:dataset}.
\paragraph{Auto-generated query}
Annotating queries for the whole multimodal dataset would be time-consuming. To address this, we develop a pipeline that extracts snippets of the document texts as text-related queries and generates captions of the document images as image-related queries. As shown in Figure~\ref{fig:dataset}, Wikipedia content generally follows a specified format: the first sentence begins with the title and a short introduction, followed by the details. We therefore apply dependency parsing with the spaCy API to detect the verb that depends on the title in the first sentence. We then take the adjectives and nouns after that verb as the text-related query and remove that snippet from the first sentence of the document, which forces models to learn the information from the remaining text. For image-related queries, each image in the original Wikipedia dataset has its own annotation; however, many annotations relate to the document title instead of the image content, and some contain proper names, which are difficult to learn from the scene context. We therefore replace them with image captions generated by BLIP \citep{li2022blip}, which outperforms a variety of methods on vision-and-language tasks. As shown in Figure~\ref{fig:dataset}, the generated caption is closer to the image content and resembles a query that humans would use to search for visual information. To generate mixed queries, we concatenate the former two queries, which contain text information and image context respectively.
\subsection{Annotated Query Validation}
\paragraph{Rule-based filtering} A well-formed query should have multiple part-of-speech (POS) tags. Therefore, annotated query candidates without nouns, or with only one noun and no adjectives or verbs, are discarded. Since queries directly copied from the documents are trivial to retrieve, we drop text-related and mixed query candidates that overlap heavily with the document texts: we remove queries whose longest-common-substring (LCS) length with the document is larger than 40 and whose LCS ratio (divided by query length) is larger than 0.6. As for image-related queries, we remove candidates that include knowledge beyond the document image context, i.e., queries containing proper names such as a particular person's identity or location. These three filters discard around 10\% of the candidates.
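The overlap filter can be sketched with a standard longest-common-substring computation. The paper's wording leaves the exact combination of the two thresholds slightly ambiguous; this sketch assumes both must be exceeded for a query to be dropped:

```python
def lcs_length(a, b):
    """Length of the longest common (contiguous) substring of a and b."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def keep_query(query, doc_text, max_len=40, max_ratio=0.6):
    """Drop queries that overlap too heavily with the document text.

    Assumes both thresholds must be exceeded for removal (the combination
    rule is an interpretation of the paper's description).
    """
    lcs = lcs_length(query, doc_text)
    return not (lcs > max_len and lcs / len(query) > max_ratio)
```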
\paragraph{Human filtering}
In our task, each query should correspond to a unique document. To ensure uniqueness, we use the text retrieval model BM25 and the image retrieval model CLIP to search for relevant documents given text-related and image-related queries, respectively. For simplicity, we retrieve the top-10 relevant candidates from the whole set of multimodal documents and prioritize examining queries without unique document pairs, i.e., those with close ranking scores for different documents. After filtering out these queries, we efficiently validate whether the semantic meaning between the query and the correct document is unique. After validation, 25\% of the query sets are discarded and 3,047 annotated query sets remain.
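For the text side of this shortlisting step, a minimal BM25 ranker over tokenized documents might look as follows. The parameter values $k_1=1.5$ and $b=0.75$ are common defaults, not values reported in the paper:

```python
import math
from collections import Counter

def bm25_rank(query_tokens, docs_tokens, k1=1.5, b=0.75, top_k=10):
    """Rank documents (lists of tokens) for a tokenized query with BM25.

    Returns the indices of the top_k highest-scoring documents.
    """
    n = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n
    # document frequency of each term
    df = Counter(t for d in docs_tokens for t in set(d))
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return order[:top_k]
```

A production system would use an inverted index instead of scoring every document, but the scoring formula is the same.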
After finishing the collection and validation stage, our dataset contains 806,357 multimodal documents, 351,979 auto-generated query sets, and 3,047 human-annotated query sets. Each query set is mapped to one document and contains three types of proposed queries. We further split 1k human-annotated sets for fine-tuning and 2k for testing as shown in Table~\ref{tab:dataset}.
\input{Table/Table-data-analysis}
\input{Table/Table-data-category}
\subsection{Dataset Analysis}
\paragraph{Quantitative analysis}
Table~\ref{tab:data-analysis} presents statistics for our annotated and generated queries, showing vocabulary sizes, average lengths, and top-3 named entity recognition (NER) tags. Each type of query has the same amount. The vocabulary sizes show that text-related queries use more diverse words than image-related queries, indicating that our image descriptions are composed of a limited set of illustrative words. Furthermore, texts carry sparser information than images, so annotated text-related queries need to be longer. The top-3 NER tags suggest that humans favor distinct entities such as countries and dates, while our generation approach mainly focuses on affiliations in the text-related queries. For image-related queries, both humans and the BLIP model prefer to describe the number of objects.
\paragraph{Qualitative analysis}
We heuristically identify the query properties covered in the dataset to recognize the differences between human-annotated and auto-generated queries. We randomly sample 100 queries of each of the three types and present the results in Table~\ref{tab:data-category}. We split the properties of text-related queries into three categories: paraphrase means that queries use different words and clause structures from the documents; keyword extraction indicates that queries only include important terms; duplication means that queries are reorganized document phrases. For efficiency, our auto-generation copies snippets of the document texts as text-related queries. For image-related queries, some annotators focus on describing the most conspicuous object, while others include multiple objects and their adjectives. Auto-generation produces more image-related queries with multiple objects, possibly because the BLIP model learns captioning from large amounts of data and tends to describe image details. The difference between annotated and generated mixed queries is also apparent: humans may fuse the image description with the text content or connect them with prepositions, whereas our generated mixed queries rely only on concatenation. Overall, annotated queries are more diverse and have well-formed sentence structures, but the annotation process is time-consuming and expensive. Our auto-generated queries ensure efficiency, and the experimental results show that they are effective.
\subsection{Benchmark}
To simulate real-world retrieval problems, we create Mr. Right's benchmark over the whole corpus of 800k Wikipedia documents. This full-ranking setting guarantees that the model can handle large numbers of multimodal documents. Further, search queries may contain text keywords or image descriptions, so we present three retrieval tasks with the corresponding queries: text-related, image-related, and mixed. See Appendix~\ref{appendix-benchmark} for more task details.
\subsection{Evaluation Metrics}
Retrieval tasks might be precision-focused or recall-focused, depending on the requirements of real-world applications. In Mr. Right, relevance between documents and queries is binary, and retrieving relevant documents from our large corpus across different modalities is challenging. Following previous image-text retrieval tasks \citep{dou2021empirical,li2021align,tan2019lxmert,chen2020uniter}, we report recall@$k$ as our performance metric. Further, to take the rank of documents into account, we also use MRR (Mean Reciprocal Rank), a standard binary rank-aware metric in text retrieval tasks. In our experiments, we compute recall with $k=1,5,10$ and MRR@10 for all models and assess their performance.
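With binary relevance, both metrics reduce to simple functions of the ranked document ids; a minimal sketch:

```python
def recall_at_k(ranked_ids, gold_id, k):
    """1 if the gold document appears in the top-k ranked ids, else 0."""
    return int(gold_id in ranked_ids[:k])

def mrr_at_k(ranked_ids, gold_id, k=10):
    """Reciprocal rank of the gold document if it is in the top-k, else 0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == gold_id:
            return 1.0 / rank
    return 0.0

def evaluate(all_rankings, gold_ids, ks=(1, 5, 10)):
    """Average recall@k and MRR@10 over all queries (binary relevance)."""
    n = len(gold_ids)
    recalls = {k: sum(recall_at_k(r, g, k) for r, g in zip(all_rankings, gold_ids)) / n
               for k in ks}
    mrr = sum(mrr_at_k(r, g) for r, g in zip(all_rankings, gold_ids)) / n
    return recalls, mrr
```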
\section{Multimodal Retrieval}
\label{sec-model}
With Mr. Right, the next step is to set up the retrieval task based on the multimodal documents and text-based queries. We illustrate our retrieval formulation (Section \ref{Retrieval Formulation}) and model architecture (Section \ref{Model Architecture}). Then we describe our two training objectives (Section \ref{Training Objectives}).
\subsection{Retrieval Formulation}
\label{Retrieval Formulation}
Given a document $D$ with a paragraph text $D_{T}$ and an image $D_{I}$, we use a multimodal encoder to fuse $D_{T}$ and $D_{I}$ into a single fixed-size multimodal vector representation $R_{d}$. Also, we encode a text-based query $Q_{T}$ into a fixed-size vector representation with our text encoder. To establish our retrieval task, we need to encode all the documents $\{(D^1_{T}, D^1_{I}), (D^2_{T}, D^2_{I}),...,(D^N_{T}, D^N_{I})\}$ and queries $\{Q^1_{T},Q^2_{T},...,Q^M_{T}\}$ into $\{R^1_{d},R^2_{d},...,R^N_{d}\}$ and $\{R^1_{q},R^2_{q},...,R^M_{q}\}$ respectively. With these representations, we compute the cosine similarity scores between documents and queries and find the most similar document for each query. In this scenario, we can build offline indexing for document representations and compute query representations online for real-world applications.
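The scoring step itself reduces to normalized dot products between the offline document index $\{R^i_d\}$ and the online query representations $\{R^j_q\}$. A minimal NumPy sketch, including the mean pooling used later to obtain fixed-size representations:

```python
import numpy as np

def pool(features):
    """Average size-variant token features into one fixed-size vector."""
    return np.asarray(features).mean(axis=0)

def retrieve(doc_reps, query_reps):
    """Cosine-similarity scores between every query and every document.

    doc_reps: (N, d) document representations R_d (the offline index).
    query_reps: (M, d) query representations R_q (computed online).
    Returns an (M, N) score matrix and the best document index per query.
    """
    d = doc_reps / np.linalg.norm(doc_reps, axis=1, keepdims=True)
    q = query_reps / np.linalg.norm(query_reps, axis=1, keepdims=True)
    scores = q @ d.T
    return scores, scores.argmax(axis=1)
```

At scale, the `argmax` would be replaced by an approximate nearest-neighbor index over the normalized document vectors.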
\subsection{Model Architecture}
\label{Model Architecture}
\paragraph{Document (Multimodal) Encoder}
To encode both document texts and images into unified multimodal representations, we leverage previous pre-trained VLP models for initialization in our framework. These models have learned a common low-dimensional space for embedding vision and language features. In these models, a vision encoder (\emph{e.g.} CNNs or vision transformers \citep{dosovitskiy2020image}) and a text encoder (\emph{e.g.} BERT \citep{devlin2018bert} or RoBERTa \citep{liu2019roberta}) extract modality-specific features, and a fusion module (\emph{e.g.} co-attention or merge-attention \cite{dou2021empirical}) integrates both into a unified feature. Therefore, we can treat these VLP models as a black-box multimodal encoder $E_M$ that outputs size-variant multimodal document features $F_d$, whose size depends on the length of the input text $D_T$ and the dimensions of the image $D_I$. To derive a single fixed-size representation $R_d$ for each document, we simply average the size-variant document features $F_d$.
\paragraph{Query (Text) Encoder}
Since our queries are text-based $Q_T$, we create a query encoder $E_q$ and share the parameters from the text encoder of our multimodal encoder. This ensures the text representations of queries are similar to document texts. Like a multimodal encoder, we take the average of the query features $F_q$ to obtain query representations $R_q$.
\subsection{Training Objectives}
\label{Training Objectives}
In this section, we introduce document-query contrastive learning (DQC) and document-query matching (DQM) to project document and query representations into the same space.
\paragraph{Document-Query Contrastive learning}
Contrastive learning has been widely used to train VLP models \cite{radford2021learning,jia2021scaling,li2021align}, increasing the similarity scores between parallel pairs. From the document and query representations $R_d$ and $R_q$, we learn two projection functions $f_d$ and $f_q$, each a fully-connected layer, to map the representations into the same space. We then calculate the cosine similarities between document and query pairs in a training batch; the matched pairs are positive while all other pairs are negative. Based on these pairs, we minimize the contrastive loss $L_{dqc}$ as an in-batch cross-entropy loss. To organize in-batch negatives more effectively, we keep two queues \cite{li2021align} that store the most recent $K$ representations, enlarging the number of negative samples per batch.
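Ignoring the momentum queues, the in-batch part of $L_{dqc}$ can be sketched as a symmetric InfoNCE-style loss. The temperature value below is illustrative, not a value reported in the paper:

```python
import numpy as np

def dqc_loss(doc_proj, query_proj, temperature=0.07):
    """Symmetric in-batch contrastive (InfoNCE-style) loss.

    doc_proj, query_proj: (B, d) projected representations f_d(R_d), f_q(R_q);
    the i-th document and i-th query form the only positive pair.
    """
    d = doc_proj / np.linalg.norm(doc_proj, axis=1, keepdims=True)
    q = query_proj / np.linalg.norm(query_proj, axis=1, keepdims=True)
    logits = (q @ d.T) / temperature                     # (B, B) cosine similarities
    # cross-entropy with the diagonal as targets: match each query to its doc
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_q2d = -np.diag(log_sm).mean()
    # and, symmetrically, each document to its query
    log_sm_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_d2q = -np.diag(log_sm_t).mean()
    return (loss_q2d + loss_d2q) / 2
```

In the actual training loop the negatives would also include the $K$ queued representations, and the loss would be computed with a numerically stabilized log-softmax in the deep-learning framework of choice.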
\paragraph{Document-Query Matching}
To further learn a fine-grained similarity between paired documents and queries, we build a binary classifier $C$ to predict whether the output features of the document encoder $F_d$ and the query encoder $F_q$ are matched. Specifically, we use a 6-layer transformer model and insert a special token $[CLS]$ at the head of the input sequence to capture global information. We then employ a linear classifier on this token, followed by softmax, to predict a two-class label and compute the matching loss $L_{dqm}$ as a binary cross-entropy loss. Motivated by ALBEF \cite{li2021align}, we sample online hard negative pairs for each document and query from the contrastive similarity distribution. In addition to these real negative documents, we produce two pseudo negative documents by combining the positive document and the sampled negative document into pairs of a positive image with a negative text and vice versa. Hence, for each query, we have a positive document, a sampled negative document, and two pseudo negative documents.
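The negative-construction scheme can be sketched as follows, where `sim` is the query-document similarity matrix from DQC and each pair is an (image index, text index, label) triple. This flat data layout is a hypothetical simplification of the actual batch format:

```python
import numpy as np

def build_dqm_batch(sim, rng):
    """For each query i, sample one hard negative document from the
    contrastive similarity distribution (excluding the positive i), then
    form the four DQM pairs: the positive document, the sampled negative,
    and two pseudo negatives mixing the positive and negative documents.

    sim: (B, B) query-document similarity matrix.
    Returns, per query, a list of (doc_image_idx, doc_text_idx, label) triples.
    """
    batch = []
    for i in range(sim.shape[0]):
        probs = np.exp(sim[i].astype(float))
        probs[i] = 0.0                       # never sample the positive itself
        probs /= probs.sum()
        j = rng.choice(len(probs), p=probs)  # harder negatives sampled more often
        batch.append([
            (i, i, 1),   # positive: matched image and text
            (j, j, 0),   # sampled hard negative document
            (i, j, 0),   # pseudo negative: positive image + negative text
            (j, i, 0),   # pseudo negative: negative image + positive text
        ])
    return batch
```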
\section{Experiments}
\label{sec-exp}
\subsection{Dataset}
Mr. Right has both auto-generated and human-annotated queries. We first pre-train our models on the 350k auto-generated document-query pairs to learn the multimodal representations. Further, we fine-tune our learned model on the human-annotated 1k training pairs with 10\% as our validation set.
\subsection{Baselines}
We compare our proposed multimodal retrieval framework with TR and IR baselines. We only collect existing dense retrieval \citep{denseretrieval2018} approaches for a fair comparison. Additionally, we develop the MR baseline with the ensemble of TR and IR baselines. Text retrieval models only consider the document texts; image retrieval models only focus on the document images; multimodal retrieval models perceive both document texts and images. All the baseline models are described in Appendix~\ref{appendix-baselines}.
\subsection{Experiment Setup}
\label{Experimental Setup}
We train our framework using existing VLP models, including METER \citep{dou2021empirical}, ALBEF \citep{li2021align}, and ViLT \citep{kim2021vilt}, to make use of their multimodal pre-trained weights. Pre-training lasts for 40 epochs and fine-tuning for 20 epochs on 8 NVIDIA V100 GPUs. Our optimizer is AdamW with a weight decay of 0.02; the learning rate is warmed up to $5 \times 10^{-5}$ in the first epoch and decayed to $1 \times 10^{-7}$ following the scheduler. We also use a gradient clipping value of 0.5 and a queue size of 9,600 for DQC. For image augmentation, we use random crops of size 288$\times$288 or 384$\times$384, depending on the pre-trained VLP model, and apply RandAugment \citep{cubuk2020randaugment}. For texts, we truncate queries to a maximum length of 40 tokens and documents to 128. To simulate real-world user queries, we randomly select text-related, image-related, or mixed queries during training.
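The learning-rate schedule described above can be sketched as follows. The paper does not name the decay scheduler, so linear decay to the floor is an assumption in this sketch:

```python
def learning_rate(step, steps_per_epoch, total_steps,
                  peak=5e-5, floor=1e-7):
    """Warm up linearly to `peak` over the first epoch, then decay to `floor`.

    Linear decay is assumed here; the paper only states that the rate is
    warmed up in the first epoch and decayed following a scheduler.
    """
    warmup = steps_per_epoch
    if step < warmup:
        return peak * (step + 1) / warmup
    frac = (step - warmup) / max(1, total_steps - warmup)
    return floor + (peak - floor) * (1.0 - frac)
```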
\input{Table/Table-result-model}
\subsection{Results and Analysis}
\paragraph{Compared to TR/IR}
We present the retrieval results in Table~\ref{tab:main}. We compare our method against TR/IR models and discuss the performance differences across the three query types. The table shows that TR and IR have difficulty responding to queries of the opposite modality: TR obtains worse results on image-related queries, and IR on text-related ones. This is likely because their documents only contain unimodal information, either texts or images. In contrast, MR mitigates this problem: it achieves performance comparable to TR on text-related queries and scores higher than IR on image-related queries. This improvement indicates that MR can perform better thanks to the extended information from different modalities. Further, when queries are mixed, MR exploits the advantage of multimodal representations and achieves superior performance compared to TR and IR.
\paragraph{Multimodal representation} \label{exp-mr} We integrate fine-tuned SBERT and ALBEF as an ensemble MR model, which shows performance comparable to our MR models. Although incorporating TR and IR models performs well across different types of queries, the vector size of the document representations grows linearly with the number of modalities, and the best weighted combination of their output scores must be tuned. In contrast, our proposed multimodal representation unifies information from multiple domains into a feature of standard size. To understand the multimodal representations, we compute Grad-CAM visualizations (see Appendix~\ref{appendix-gradcam}) on the attention maps of document texts and images given different types of queries. The attention heat is highly correlated with where humans would look to match the corresponding query. In Table~\ref{tab:main}, we find a trade-off in our framework when dealing with text-related and image-related queries simultaneously: comparing MR with different backbone VLP models, performance trades off between the two query types. This may stem from the limited size of our unified representation; we cannot encode all of the document text and image information at once, only a balance between them.
\input{Table/Table-result-human}
\paragraph{Human evaluation}
Besides model performance, we also present human evaluation results compared to our MR models in Table~\ref{tab:human}. We randomly select 50 samples for each query type. To retrieve related documents for human evaluation efficiently, we use our MR model (METER) to obtain the top-3 relevant candidates alongside the correct document and construct a four-choice question with one correct answer. Table~\ref{tab:human} shows that humans reach 89.3\% accuracy. This validates the reliability of our dataset, but it also shows there is room for improvement for models on Mr. Right. This may be because humans can understand the various query properties in Table~\ref{tab:data-category}, extract the crucial text content, or perceive the image scene context in detail. To see how retrieval results differ between our models and humans, we provide failure examples in Appendix~\ref{appendix-mistake}. We also provide a performance comparison of our auto-generated and human-annotated queries in Appendix~\ref{appendix-queries}.
\section{Conclusion}
In this paper, we propose Mr. Right, a multimodal retrieval dataset for information retrieval. Mr. Right covers three types of text-based search queries with different modality information (text-related, image-related, and mixed) to simulate real-world search situations. Further, our dataset provides documents with texts and images for developing multimodal representations. We build an end-to-end multimodal retrieval model for Mr. Right to unify features across modalities. Compared to previous text and image retrieval frameworks, multimodal retrieval shows improvements on different queries and highlights the balance between modalities. However, current multimodal models still have a significant gap to human performance, showing the potential of Mr. Right as a challenge in multimodal retrieval. We believe Mr. Right can bring new insights to information retrieval and lead to more robust retrieval systems.
\section{Limitations and Future Work}
In Mr. Right, we only consider text-based queries, which may limit the search modalities from users. We can expand our dataset with additional domain queries and documents such as images, audio, and video. Further, Mr. Right focuses on the materials in Wikipedia. We can explore other sources such as news, blogs, or commercial websites. Mr. Right is a preliminary attempt to explore multimodal retrieval, and there are still challenges we need to analyze and study in future work.
\section{Supplementary Materials for Mr. Right}
\label{sec:A}
We provide the following detailed sections and materials that complement the discussions in the main paper. Code and dataset are available at \url{https://github.com/hsiehjackson/Mr.Right}
\begin{itemize}[leftmargin=0.35cm]
\item Establishment of the datasheet for Mr. Right in Appendix~\ref{appendix-datasheet}
\item Details of the evaluation process in Appendix~\ref{appendix-evaluation}.
\item Designs of the proposed model in Appendix~\ref{appendix-model}.
\item Confirmations of the data license in Appendix~\ref{appendix-license}.
\item Maintenance of Mr. Right in Appendix~\ref{appendix-maintenance}.
\end{itemize}
\section{Datasheets}
\label{appendix-datasheet}
\subsection{Motivation}
Information retrieval is a fundamental and essential challenge in real-world applications. In the past, researchers focused on unimodal retrieval because previous datasets only included data with a single modality, such as text-to-text and image-to-image retrieval datasets. They design robust and effective frameworks to improve the performance of these retrieval tasks. However, humans perceive the world with different modalities, such as language, vision, or audio. Due to multimedia development, humans have begun to utilize one modality to search for another modality. For example, image-text retrieval is a challenge in which models need to learn a common representation between images and texts and retrieve the most relevant documents. Further, sometimes we may need to combine different modalities and understand the meaning together. To accelerate the advancement of retrieval on multimodal learning, we propose Mr. Right, which contains multimodal documents and three types of text-based queries according to the real-world context. It has 806,357 multimodal documents, 351,979 auto-generated queries, and 3,047 human-annotated queries for each type.
\subsection{Collection Process}
\label{data-collection}
\paragraph{Multimodal document}
We construct Mr. Right based on the Wikipedia-based Image Text (WIT) Dataset \citep{srinivasan2021wit}. The original dataset includes Wikipedia articles and Wikipedia image links in 108 languages. Each article has a page title, a page description, and a reference image description. The dataset has already filtered the image-text pairs based on restrictions such as text length, image size, and image format. However, Wikipedia updates its content frequently, some image URLs are outdated, and some pages exist in different versions. Therefore, we create our own pipeline to filter WIT and obtain the multimodal documents. The process is explained in the following:
\begin{itemize}[leftmargin=0.35cm]
\item Download the Wikipedia CSV file \citep{srinivasan2021wit} and keep English articles whose titles appear in the content. There are about 1,479,330 English documents. We download the images using the Python \textit{multiprocessing} and \textit{urllib2} modules. During downloading, we find that some image URLs are invalid, probably because Wikipedia has updated the links. Corrupted images are also discarded. After downloading, 953,042 images remain, occupying 1.5~TB.
\item Discard documents with the same title. Analyzing the composition of the remaining document candidates, we find that some documents share the same title and similar content, because pages are updated over time and exist in different versions. To avoid one query mapping to multiple correct documents, we filter out these repeated documents. Finally, we obtain 806,357 multimodal documents, i.e., text-image pairs with rich semantic information.
\end{itemize}
\input{Image/fig-datasheet-annotation}
\newpage
\paragraph{Human-annotated query}
As shown in Figure \ref{fig:datasheetannotation}, the original WIT image reference annotations contain the page title or page description rather than the image context. In real-world applications, user queries may be image descriptions that include image objects, colors, background, or people's actions. Further, user queries may involve multimodal information, such as an image caption fused with text content. To our knowledge, no existing retrieval dataset contains mixed queries. Therefore, we hire annotators from Amazon Mechanical Turk to produce human-annotated queries. We require annotators to be Masters to ensure label quality, and only annotators with at least 50 approved HITs and an 80\% HIT approval rate are allowed. We pay \$0.25 USD per assignment, which includes a text-related query, an image-related query, and a mixed query. To reward hardworking annotators, we also provide an additional bonus. After annotation, the statistics show that workers' average time per assignment (three types of queries) is 6 minutes 34 seconds. More details can be seen in Figure \ref{fig:template}. In total, we have paid \$3,687.24 USD (including platform fees) to annotate 4,276 assignments.
To further ensure the quality of Mr. Right, we provide guidelines and examples to human annotators. They have to read the guidelines first before labeling. Guidelines indicate that a query should meet some restrictions to simulate the possible real-world searching queries, and annotators can come up with the queries based on their habits by following the guidelines. The annotation template is illustrated in Figure \ref{fig:template}, and the guidelines are described as follows:
\begin{itemize}[leftmargin=0.35cm]
\item Words Limit: 10 -- 100
\item Do not include title.
\item Do not copy the sentence from the document.
\item Try your best to paraphrase the words.
\item Include image information such as color, gender, action, etc.
\item Include adjectives and nouns for images.
\end{itemize}
\input{Image/fig-template}
\paragraph{Auto-generated query}
Coming up with search queries is time-consuming. Therefore, we propose auto-generation for training queries. For text-related queries, we extract a snippet from the first sentence. Our analysis finds that most Wikipedia passages start with the page title and a brief introduction, so we exploit this format to extract the sentence's crucial information. We use the spaCy API with the en\_core\_web\_lg package to parse the sentence and detect the verb that depends on the title, and take the snippet after the verb as the query. To increase the robustness of models, we also remove the snippet from the document text, which means models have to learn the representation from the remaining text and still be able to match the document-query pairs. For image-related queries, we use the BLIP \citep{li2022blip} model, which outperforms many VLP frameworks on image captioning. We run the model on 351,979 images and produce one caption per image. Images are resized to 384$\times$384, and we use beam search with a beam size of 3 and a maximum generation length of 30. For mixed queries, it is still challenging to produce a query that fuses the text and image information; for efficiency, we concatenate the text and image queries as mixed queries.
\input{Image/fig-filter}
\subsection{Filtering}
In Figure~\ref{fig:filter}, we show examples of annotated query validation, including rule-based and human filtering. In the first three examples, we filter out queries through POS tagging. In the fourth example, we drop queries by calculating the LCS. In the last two examples, we use CLIP and BM25 to help humans discard queries that map to ambiguous documents.
\input{Image/fig-analysis}
\subsection{Usage}
We split Mr. Right into five files: \textit{multimodal\_documents.json}, \textit{multimodal\_pretrain\_pairs.json}, \textit{multimodal\_finetune\_pairs.json}, \textit{multimodal\_val\_queries.json}, and \textit{multimodal\_test\_queries.json}. The file \textit{multimodal\_documents.json} contains document ids, titles, texts, and image URLs. We do not provide image files directly due to copyright issues. In \textit{multimodal\_pretrain\_pairs.json}, we provide our auto-generated queries and the edited document texts; we also include the original document texts in this file to keep the usage of Mr. Right flexible. Researchers can create their own model frameworks and train on our auto-generated document-query pairs or produce other effective data.
In \textit{multimodal\_finetune\_queries.json}, we randomly sample human-annotated document pairs for fine-tuning. In \textit{multimodal\_val\_queries.json} and \textit{multimodal\_test\_queries.json}, they include corresponding document ids and human-annotated queries. The examples of multimodal document-query pairs are shown in Figure~\ref{fig:analysis-category}. All of our source codes are uploaded to GitHub. Researchers can download json files from our repository. We also offer our training codes.
\label{sec:usage}
It has long been known that the formation of hydrogen bonds between
molecules or ionic groups is responsible for drastic changes in
a wide variety of system properties, such as structural phase
transformations and proton ordering phenomena \cite{blinc,aksenov}. In
addition, proton transport phenomena in H-bonded materials and the
superionic properties discovered in some hydrogen-bonded
crystals (for example, the M$_3$H(AO$_4$)$_2$ class, where
M=Rb, Cs, NH$_4$; A=Se, S) are closely related to hydrogen-bonded network
rearrangement. On heating, these crystals transform into a superionic
conducting phase with a statistically disordered hydrogen-bonded network
(Fig.~1(a)).
with low activation energy ($\sim 0.1$~eV). In this case protonic
conductivity increases significantly to the value about 0.1~$\Omega^{-1}
\cdot$~cm$^{-1}$. It is generally accepted \cite{belushkin} that the
two-stage conduction mechanism is required to sustain proton transport. The
intrabond proton tunnelling along the hydrogen bridge is connected with the
transfer of ionic positive and negative charged defects, whereas the
intermolecular proton transfer
due to reorientations of molecular group with proton leads to the breaking
of the hydrogen bond and creation of a new one between another pair of
molecular complexes. It should be noted that the formation of the hydrogen
bridge induces the distortion of groups involved in
hydrogen bond towards the proton that results in the shortening of the
bond \cite{pietraszko}. By this means the
protonic polaron is localized between distorted ionic groups in the
low-temperature ferroelastic phases, giving rise in this case to the
dimerized structure. As has been shown in Ref.~\onlinecite{pavlenko},
the small-radii
polaron is formed due to the strong coupling of proton with optical
stretching vibration modes of the oxygen ions. It is evident that such an
transformations from the superionic phase occurring in systems on cooling
have the mixed (displacive and order-disorder) character.
Theoretical investigations of various
ferroelectric-type orderings in hydrogen-bonded systems have generally
been based on pseudospin Ising-type models, with pseudospin-phonon
interactions additionally included to describe the coupling of protons
with lattice vibration modes. In particular, a quantum
double-well chain with a quartic symmetric double-well potential has been
used to model the transition from the symmetry-broken to the
symmetry-restored ground state in hydrogen halides
HX (X=F, Br, Cl) \cite{wang},
which consist of hydrogen-bonded chains with weak
interchain coupling. The dynamics of both ionic and orientational defects
created by the rotations of molecular groups in hydrogen halides has been
studied within a classical approach based on a soliton model
\cite{savin}.
It must be emphasized that taking the two-stage transport mechanism into
account renders the pseudospin formalism unsuitable for describing the
proton subsystem, since the number of protons can differ from the
number of possible (virtual) hydrogen bonds, and the proton occupancy of
each bond can in principle differ from unity due to reorientational
hopping and the consequent possibility of proton migration along the chain.
Such a situation is observed, for example, in superionic materials of the
M$_3$H(AO$_4$)$_2$ type, which transform on cooling into a dielectric state
with a dimerized structure \cite{pietraszko2}.
It should be noted that this type
of transition to a dielectric state is
reminiscent of that of electronic systems in which Peierls
instabilities are observed. Many works have studied the
metal-insulator Peierls transitions in electron-phonon systems, which are
unstable against the electron-phonon interactions \cite{peierls,rise}. It
is well known that the Peierls instability occurs with the
formation of a Peierls gap at $\ve{k}=\pm \ve{k}_F$ ($\ve{k}_F$ is the Fermi
wave vector) in the electronic energy band, connected with the condensation
of electronic charge-density waves and with structural lattice distortion
modulations with $\ve{q}=2\ve{k}_F$. The
appearance of the insulating state together with the structural
transformation can be modelled in the framework of the Holstein
electron-phonon model without additionally including anharmonic terms in
the lattice potential.
Recent investigations of Peierls transitions in electron-phonon systems
have prompted us to study similar effects in several hydrogen-bonded
solids. On the one hand, parallel sequences of (001) planes
are formed in the superionic state of M$_3$H(AO$_4$)$_2$ crystals.
These hexagonal conducting planes consist of AO$_4$ groups connected by
virtual hydrogen bonds (see Fig.~1(a)). In the low-temperature phases
the frozen-in hydrogen bonds with only one index $f$ ($f=1,2,3$) form
well-defined sequences of dimers, which leads to the appearance of
parallel dimerized chain arrays consisting of ionic groups linked
by the $f$th hydrogen bond (see Fig.~1(b)).
\begin{figure}[htbp]
\epsfxsize=4.cm
\epsfysize=3.5cm
\centerline{\epsffile{fig1a.eps}}
\epsfxsize=4.cm
\epsfysize=3.5cm
\centerline{\epsffile{fig1b.eps}}
\caption{(a) Hydrogen-bonded network in (001) plane of M$_3$H(AO$_4$)$_2$
crystal group; the solid lines indicate the possible type of dimerized
structure which can appear with ($f=3$)th H-bonds frozen-in.
(b) Structure of H-bonded dimer in one of low-temperature dimerized phases.}
\label{fig1}
\end{figure}
To analyze the influence of the coupling between protons and ionic-group
displacements we consider a simplified model,
namely a quasi-one-dimensional quantum double-well chain along one of the
proton pathways (for instance, the virtual hydrogen-bond sequence
$\ldots-1-3-1-3-\ldots$). As an initial step, we neglect the interproton
repulsion, which is justified for low proton concentration (in our
case each proton is averaged over three virtual hydrogen
bonds). However, we take into account the possibility of proton exchange
between the selected chain and its surroundings. On the other hand, besides
these superionic compounds we also analyze in this work the influence of
ionic-group displacements on the proton subsystem behavior in
quasi-one-dimensional solid hydrogen halides. We reveal possible
symmetry-broken phases with proton charge disproportionation arising from a
Holstein coupling to AO$_4$ ionic groups or X atoms.
We compare our conclusions with the results of theoretical studies of
pressure effects in M$_3$H(AO$_4$)$_2$ \cite{sinitsyn} and hydrogen
halides \cite{wang,jansen}. Although the first step of our
analysis consists of a quasi-one-dimensional chain study, we believe our
results can also be relevant for other hydrogen-bonded materials.
\section{DESCRIPTION OF THE MODEL}
The object of our consideration is the chain shown in Fig.~2(a).
However, to avoid the geometric complexities introduced by the kinks
of such a zig-zag chain, we consider in our model a linear chain (see
Fig.~2(b), where two neighboring chains are shown). The process of
proton transfer in the double-well H-bond potential is represented as
quantum tunnelling between two proton states with intrabond transfer integral
$\Omega_0$
\begin{equation}
\Omega_0\sum_l(c_{la}^+ c_{lb}+c_{lb}^+ c_{la}), \label{h1}
\end{equation}
where $c_{l\nu}^+$, $c_{l\nu}$ denote proton creation and annihilation
operators in the position ($l$, $\nu=a,b$) of the chain. Besides that, we
describe the interbond reorientational proton hopping in two-level
approximation as the quantum tunnelling effect with hopping amplitude
$\Omega_R$
\begin{equation}
\Omega_R\sum_l(c_{l+1,a}^+ c_{lb}+c_{lb}^+ c_{l+1,a}).\label{h2}
\end{equation}
In this way, within the orientational-tunnelling model proposed in
Ref.\ \onlinecite{jps}, the two-stage proton migration mechanism can be
considered as the sequential migration of ionic and orientational defects.
Since such a double-well chain is just one structural component of the
system, we also admit the possibility of proton exchange between the chain
and its surroundings by treating the system thermodynamics in the
grand canonical ensemble, with the proton chemical potential included,
\begin{equation}
-\mu \sum_{l,\nu} n_{l\nu} \label{h3}
\end{equation}
which is to be determined, at the given proton concentration in the chain,
from the corresponding equation for the chemical potential.
\begin{figure}[htbp]
\epsfxsize=9.cm
\epsfysize=3.cm
\centerline{\epsffile{fig2a.eps}}
\epsfxsize=7.5cm
\epsfysize=3.cm
\centerline{\epsffile{fig2b.eps}}
\null\vspace{0.1in}
\caption{(a) Zig-zag hydrogen-bonded chain in hydrogen halides, arrows
indicate the possible path of proton migration along the chain.
(b) Simplified model chains, the anti-phase and in-phase displacements
of ionic groups identified by solid and dashed arrows.}
\label{fig2}
\end{figure}
Our main interest is to analyze the influence of the longitudinal
optical ionic-group vibration modes on the ground state of the proton
subsystem.
However, it was noted in Ref.~\onlinecite{springborg2} that the
interactions between protons of neighboring chains can lead to the
appearance of three-dimensional ordering. A more detailed analysis of the
interchain proton interaction effect in this model, together with
the determination of stability conditions for the existence of
phases with different ordering types at finite temperatures, will be
presented elsewhere \cite{pavlenko2}.
We consider the anti-phase stretching vibration mode which changes the
H-bond length in the chain, as indicated in Fig.~2(b) by solid arrows.
Besides that, we also take into account the optical in-phase vibrations of
the ionic groups in the chain, which induce their displacements with respect
to the surrounding chains, as indicated in Fig.~2(b) by dashed arrows.
The coupling to the first type of displacements leads to an equal change of
the depths of the potential wells ($l$, $a$) and ($l$, $b$) within the H-bond
\begin{equation}
\sum_{l,q} \tau_l^{(1)}(q)(n_{la}+n_{lb})(b_{q,1}+
b_{-q,1}^+), \label{h4}
\end{equation}
whereas the coupling of protons to the other optical mode produces a
difference between the depths of these potential minima
\begin{equation}
\sum_{l,q} \tau_l^{(2)}(q)(n_{la}-n_{l-1,b})(b_{q,2}+
b_{-q,2}^+). \label{h5}
\end{equation}
Here $\tau_l^{(1)}(q)=-2ig_1\sqrt{{\hbar}/{2MN\omega_1(q)}}
\sin\frac{1}{2}qd \exp[iq(l+1/2)d]$ and
$\tau_l^{(2)}(q)=g_2\sqrt{{\hbar}/{2MN\omega_2(q)}}
\exp[iqld]$ where $g_1$ and $g_2$ are corresponding coupling
constants, $M$ is the effective ionic group mass, $N$ denotes the number
of hydrogen bonds in chain and $d$ is a lattice
spacing. Furthermore, we take the dispersionless approximation for the
phonon frequencies, $\omega_1(q)=\omega_1$ and $\omega_2(q)=\omega_2$, and
assume the harmonic approximation for the
lattice vibration energies
\begin{equation}
\hbar\omega_1 \sum_{q} b_{q,1}^+ b_{q,1}+
\hbar\omega_2 \sum_{q} b_{q,2}^+ b_{q,2}. \label{h6}
\end{equation}
Let us first consider the case of an isolated chain without coupling
to the phonon bath. Since the Hamiltonian (\ref{h1})-(\ref{h3}) can be
diagonalized exactly, the proton energy spectrum
\begin{equation}
\varepsilon_\nu (k)=\pm |t_{k}|, \hspace{0.05in}
|t_{k}|=\sqrt{\Omega_0^2+\Omega_R^2+2\Omega_0 \Omega_R \cos{kd}}
\end{equation}
forms two energy bands with the bandwidth
$\Delta \varepsilon=\Omega_0+\Omega_R-|\Omega_0-\Omega_R|$. The energy gap
in this case is $\Delta_{ab}=2|\Omega_0-\Omega_R|$. Eliminating one of the
elementary transport processes by setting the hopping amplitude $\Omega_0=0$
or $\Omega_R=0$, we see that both energy bands degenerate into two
energy levels, describing quantum fluctuations between the two
corresponding system states. It is clear that in the case
$\bar{n}=\frac{1}{N} \sum\limits_{l\nu} \bar{n}_{l\nu}=1$
(one proton per bond on average) the lower band is filled and the chemical
potential $\mu$ lies midway between the bands, so that the material is in a
dielectric state. Such a
situation can be realized in hydrogen halides. However, for
$\bar{n}=\frac{1}{2}$ only half of the lower band is filled, which
corresponds to the case of a protonic conductor, as occurs for example in
the superionic phases of superprotonic crystals.
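As an illustrative numerical check (not part of the original analysis), the spectrum $\varepsilon_\nu(k)=\pm|t_k|$ reproduces the quoted bandwidth and gap formulas; the parameter values below are arbitrary:

```python
import numpy as np

# Illustrative hopping amplitudes (arbitrary units); lattice spacing d = 1.
Omega0, OmegaR, d = 0.5, 0.14, 1.0

k = np.linspace(-np.pi / d, np.pi / d, 4001)
t_k = np.sqrt(Omega0**2 + OmegaR**2 + 2.0 * Omega0 * OmegaR * np.cos(k * d))

bandwidth = t_k.max() - t_k.min()      # width of each band +/- |t_k|
gap_ab = 2.0 * t_k.min()               # gap between the two bands

assert np.isclose(bandwidth, Omega0 + OmegaR - abs(Omega0 - OmegaR))
assert np.isclose(gap_ab, 2.0 * abs(Omega0 - OmegaR))
```

The extrema of $|t_k|$ at $kd=0$ and $kd=\pi$ give $\Omega_0+\Omega_R$ and $|\Omega_0-\Omega_R|$, from which both formulas follow.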
In what follows we discuss the consequences of the proton-phonon coupling,
focusing on the analysis of two physically different cases:
$\bar{n}=\frac{1}{2}$ (quarter-filled two-band model) and $\bar{n}=1$
(half-filled two-band model).
\section{BROKEN-SYMMETRY SOLUTIONS}
\subsection{Case $\bar{n}=\frac{1}{2}$}
Let us now focus on the case of quarter filling, when half of the
lower proton band is filled (one proton per two bonds). The
macroscopically condensed phonon state is then predominantly stabilized at
$q^*=2k_F=\pi/d$ \cite{peierls,rise} and is characterized by the
expectation values of the phonon creation and annihilation operators
\begin{equation}
\langle B_{q,1} \rangle=\langle b_{q,1}+b_{-q,1}^+ \rangle=
\frac{\Delta}{g_1} \sqrt{N} \delta_{q,q^*}, \label{d1}
\end{equation}
where $\Delta$ denotes the distortion order parameter, which is to be
determined from the stationarity conditions of the free energy.
Since the condensation of the displacements (\ref{d1}) leads to a doubling
of the unit cell, using the Fourier transformation
$c_{l\nu (i)}=\frac{1}{\sqrt{N/2}} \sum\limits_{k} c_{k\nu (i)}
{\rm e}^{ikld}$, where the index
$i=\{+,- \}$ labels the ($l=2m$) or ($l=2m+1$)th cell,
the Hamiltonian in the condensed state with static periodic distortions
(\ref{d1}) (adiabatic treatment) is given by
\begin{eqnarray}
H=(-\mu+\tilde{\Delta}) \sum\limits_{k\nu} n_{k\nu (+)}-
(\mu+\tilde{\Delta}) \sum\limits_{k\nu} n_{k\nu (-)}+ \nonumber\\
\frac{1}{8}N \frac{\tilde{\Delta}^2}{E_0}+
\Omega_0\sum_{k,i}(c_{ka(i)}^+ c_{kb(i)}+c_{kb(i)}^+ c_{ka(i)})+ \label{hc}\\
\Omega_R\sum\limits_k \sum\limits_{i\neq i'}
(c_{ka(i)}^+c_{kb(i')}{\rm e}^{-ikd}+c_{kb(i')}^+ c_{ka(i)}{\rm
e}^{ikd}),\nonumber
\end{eqnarray}
where $E_0={(\hbar g_1)^2}/{2M (\hbar\omega_1)^2}$ is the protonic polaron
binding energy well known from polaron theory \cite{polarons}, which
appears in the expression $\frac{1}{8}{\tilde{\Delta}^2}/{E_0}$ for the
elastic energy per H-bond, and
$\tilde{\Delta}=4\Delta \sqrt{\hbar/2M\omega_1}=4\Delta \sqrt{E_0
\hbar \omega_1}/g_1$. A similar result is obtained when we consider
the second type of ionic-group displacements; in this case
$\langle B_{q,2} \rangle=\langle b_{q,2}+b_{-q,2}^+ \rangle=
\frac{\Delta'}{g_2} \sqrt{N} \delta_{q,q^*}$ and the Hamiltonian
in the condensed state has a form similar to (\ref{hc}) with
$\tilde{\Delta} \rightarrow \tilde{\Delta}'=2\Delta'\sqrt{\hbar/2M\omega_2}$
and $E_0 \rightarrow E_0'={(\hbar g_2)^2}/{2M (\hbar\omega_2)^2}$.
Since the inclusion of the coupling to the second phonon mode merely
renormalizes the binding energy $E_0$ in the Hamiltonian,
in what follows we focus on the analysis of (\ref{hc}) with only one
type of displacements taken into account.
Introducing the double-time one-fermion diagonal Green functions,
one can rigorously obtain the proton density of states
\begin{eqnarray}
&&\rho(\varepsilon)=\frac{2}{\pi} \frac{|\varepsilon| \cdot
|t_1-\varepsilon^2|}{B_1 B_2} \left\{ \Theta (\varepsilon \cdot {\rm
sgn}(\varepsilon) -\sqrt{t_1-t_2^0})- \right.\nonumber\\
&&\Theta (\varepsilon \cdot {\rm sgn}(\varepsilon)-
\sqrt{(\Omega_0-\tilde{\Delta})^2+\Omega_R^2})+ \label{dos}\\
&& \Theta(\varepsilon \cdot {\rm
sgn}(\varepsilon)-\sqrt{t_1+t_2^0})-\nonumber\\
&& \left. \Theta(\varepsilon \cdot {\rm sgn}(\varepsilon)-
\sqrt{(\Omega_0+\tilde{\Delta})^2+\Omega_R^2}) \right\}\nonumber
\end{eqnarray}
where
\begin{eqnarray}
B_1=\sqrt{(t_1-\varepsilon^2)^2-4\Omega_0^2\tilde{\Delta}^2},\\
B_2=\sqrt{4\Omega_0^2(\tilde{\Delta}^2+\Omega_R^2)-(t_1-\varepsilon^2)^2}
\nonumber
\end{eqnarray}
and the following notations are introduced:
$t_1=\Omega_0^2+\Omega_R^2+\tilde{\Delta}^2$,
$t_2^0=2\Omega_0\sqrt{\Omega_R^2+\tilde{\Delta}^2}$ and
$\Theta(x)$ is the Heaviside step function.
The expression for the ground-state energy
can be obtained easily from (\ref{hc}) and (\ref{dos}):
\begin{eqnarray}
F=\frac{N}{8}\frac{\tilde{\Delta}^2}{E_0}-\sum\limits_k \sqrt{t_1+2\Omega_0
\sqrt{\tilde{\Delta}^2+\Omega_R^2 \cos^2{kd}}}.
\end{eqnarray}
To determine the stable phase, the equation ${\partial F} / {\partial
\tilde{\Delta}}=0$ should be solved.
It appears that, besides $\tilde{\Delta}=0$, this equation has
an additional nonzero solution $\tilde{\Delta} \neq 0$ for
$g_1>g_P$, where $g_P$ is the crossover proton-phonon coupling strength.
The solution $\tilde{\Delta}\neq 0$ corresponds to the global minimum
of $F$ and thus implies the stabilization of a structural distortion
with amplitude $u_l=\sqrt{{\hbar}/{2MN\omega_1}} \langle
B_{q^*} \rangle=\frac{\tilde{\Delta}}{2g_1} (-1)^l$ (see Fig.~3(a)).
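The stationarity analysis can be sketched by a direct grid minimization of the free energy per bond. The snippet below is an illustration, not the authors' computation: we take the analytically tractable limit $\Omega_0 \to 0$, where $\tilde{\Delta}=\pm\sqrt{4E_0^2-\Omega_R^2}$, normalize the $k$ sum so that quarter filling corresponds to a weight $1/2$ per bond, and use arbitrary parameter values (in units of $\hbar\omega_1$):

```python
import numpy as np

def free_energy_per_bond(Delta, Omega0, OmegaR, E0, nk=1001):
    """F/N for the quarter-filled chain: elastic term plus the filled
    lowest subband (assumed weight 1/2 per bond)."""
    kd = np.linspace(-np.pi / 2, np.pi / 2, nk)      # folded Brillouin zone
    t1 = Omega0**2 + OmegaR**2 + Delta**2
    band = np.sqrt(t1 + 2.0 * Omega0 * np.sqrt(Delta**2
                                               + OmegaR**2 * np.cos(kd)**2))
    return Delta**2 / (8.0 * E0) - 0.5 * band.mean()

OmegaR, Omega0 = 0.5, 0.0       # Omega_0 -> 0: analytically solvable limit
Deltas = np.linspace(0.0, 4.0, 4001)

E0 = 1.0                        # strong coupling, E0 > Omega_R / 2
F = np.array([free_energy_per_bond(D, Omega0, OmegaR, E0) for D in Deltas])
D_star = Deltas[F.argmin()]     # dimerized minimum
assert abs(D_star - np.sqrt(4.0 * E0**2 - OmegaR**2)) < 1e-2

E0_weak = 0.2                   # weak coupling, E0 < Omega_R / 2
F_weak = np.array([free_energy_per_bond(D, Omega0, OmegaR, E0_weak)
                   for D in Deltas])
assert Deltas[F_weak.argmin()] == 0.0   # only the uniform solution survives
```

The crossover at $E_0=\Omega_R/2$ separating the two regimes is reproduced by the two runs.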
Let us discuss the average proton occupancies of the bonds
and the band structure. For $g_1>g_P$ each proton band splits into two
subbands
\begin{eqnarray}
\varepsilon_{a(+/-)}(k)=\mp
\sqrt{t_1+2\Omega_0\sqrt{\tilde{\Delta}^2+\Omega_R^2 \cos^2{kd}}},\\
\varepsilon_{b(+/-)}(k)=\pm
\sqrt{t_1-2\Omega_0\sqrt{\tilde{\Delta}^2+\Omega_R^2 \cos^2{kd}}}\nonumber
\end{eqnarray}
as shown in Fig.~3(b), where the proton density of states
in the disordered and dimerized phases is represented.
The Peierls energy gap between the two lower subbands (and, equally,
between the two upper ones) is $\Delta_1=\sqrt{t_1+2\Omega_0\tilde{\Delta}}-
\sqrt{t_1-2\Omega_0\tilde{\Delta}} \approx {2\Omega_0\tilde{\Delta}}/
{\sqrt{\Omega_0^2+\Omega_R^2}}$, which tends to zero for $\Omega_0 \rightarrow
0$. In this case $\tilde{\Delta}=\pm \sqrt{4E_0^2-\Omega_R^2}$, and the
phase transition (change in the nature of the ground state)
occurs when the localization energy $E_0 \sim
(g_1^*)^2=\frac{1}{2} \Omega_R$. The energy gap between the second and
third subbands increases for $g_1>g_P$:
\begin{eqnarray*}
\Delta_{ab}=2\sqrt{t_1-2\Omega_0\sqrt{\tilde{\Delta}^2+\Omega_R^2}}.
\end{eqnarray*}
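The small-$\tilde{\Delta}$ approximation for $\Delta_1$ quoted above can be checked against the exact subband extrema; the parameter values are illustrative:

```python
import numpy as np

# Illustrative parameters; Dt (tilde-Delta) is small for the expansion.
Omega0, OmegaR, Dt = 0.5, 0.14, 0.05
t1 = Omega0**2 + OmegaR**2 + Dt**2

kd = np.linspace(-np.pi, np.pi, 4001)
s = np.sqrt(Dt**2 + OmegaR**2 * np.cos(kd)**2)
band1 = -np.sqrt(t1 + 2.0 * Omega0 * s)      # lowest subband
band2 = -np.sqrt(t1 - 2.0 * Omega0 * s)      # second subband

gap_exact = band2.min() - band1.max()        # direct gap, at cos(kd) = 0
gap_formula = np.sqrt(t1 + 2 * Omega0 * Dt) - np.sqrt(t1 - 2 * Omega0 * Dt)
gap_approx = 2.0 * Omega0 * Dt / np.sqrt(Omega0**2 + OmegaR**2)

assert np.isclose(gap_exact, gap_formula, atol=1e-5)
assert abs(gap_approx - gap_exact) / gap_exact < 0.05
```

Both subband extrema occur at $\cos kd=0$, where $s$ takes its minimum value $\tilde{\Delta}$, so the gap is direct and matches the closed-form expression.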
\begin{figure}[htbp]
\epsfxsize=5.5cm
\epsfysize=5.cm
\centerline{\epsffile{fig3a.eps}}
\null\vspace{0.1in}
\epsfxsize=5.5cm
\epsfysize=5.cm
\centerline{\epsffile{fig3b.eps}}
\null\vspace{0.1in}
\caption{(a) Distortion parameter
$\Delta'=\frac{|\tilde{\Delta}|}{\hbar\omega_1}$
as a function of proton-phonon coupling
$\tilde{g}_1$ for $\tilde{\Omega}_0=0.5$; inset:
dependence of the average proton occupancies on $\tilde{g}_1$ for
$\tilde{\Omega}_R=0.14$. (b) Proton density of states
$\tilde{\rho}(\varepsilon)=\frac{\rho(\varepsilon)}{\hbar\omega_1}$
($\tilde{\varepsilon}=\frac{\varepsilon}{\hbar\omega_1}$),
dashed and dotted curves indicate the cases of $\Delta'=0.5$ and
$\Delta'=0.0$ respectively.}
\label{fig3}
\end{figure}
With further increase of $g_1>g_P$ the proton chemical potential $\mu$
remains centered between the two lowest subbands, which points to the
appearance of an insulating state. We see from the inset in
Fig.~3(a) that the distortion stability is
\begin{figure}[htbp]
\epsfxsize=8.cm
\epsfysize=1.cm
\centerline{\epsffile{fig4.eps}}
\caption{Dimerized structure which appears in the case of quarter-filled
chain.}
\label{fig4}
\end{figure}
accompanied by the formation of a proton charge-density-wave state in
which $\langle n_{la}\rangle=\langle n_{lb} \rangle=\frac{1}{4}(1+(-1)^l)$,
which means the formation of the dimerized structure shown in Fig.~4.
Consider further the ground-state phase diagrams
($\tilde{g}_1={g_1}/{\hbar\omega_1}$,
$\tilde{\Omega}_0={\Omega_0}/{\hbar\omega_1}$) and
($\tilde{g}_1$, $\tilde{\Omega}_R={\Omega_R}/{\hbar\omega_1}$)
represented in Fig.~5. We see the strong influence of the amplitude
$\Omega_R$ on the stability of the dimerized state: increasing $\Omega_R$
suppresses the dimerization. At $\Omega_R \rightarrow 0$ (without
\begin{figure}[htbp]
\epsfxsize=4.5cm
\epsfysize=4.cm
\centerline{\epsffile{fig5a.eps}}
\null\vspace{0.1in}
\epsfxsize=4.5cm
\epsfysize=4.cm
\centerline{\epsffile{fig5b.eps}}
\null\vspace{0.1in}
\caption{Ground-state phase diagrams (a) ($\tilde{g}_1$,
$\tilde{\Omega}_0$) and (b) ($\tilde{g}_1$,
$\tilde{\Omega}_R$). The notations PD and PU denote the dimerized
and uniform phases respectively.}
\label{fig5}
\end{figure}
reorientational hopping) the system is brought immediately into the
dimerized state. Only for finite values of $\Omega_R$ does the uniform
disordered phase begin to appear and the ``metal''-insulator transition
occur.
It is necessary to mention that the hopping amplitudes $\Omega_0$
and $\Omega_R$ depend strongly on external pressure. In particular,
the value of $\Omega_0$ decreases with pressure, as deduced from quantum
mechanical calculations \cite{scheiner} as well as from experimental
measurements \cite{rambaud}. This is associated with the shortening of the
distance between the two potential minima ($l$, $\nu$) in the bond. Thus
we can draw conclusions about the pressure effect on the system state from
the diagrams shown in Fig.~5. Using the values of the parameters
$\Omega_R$, $g_1$ and $\omega_1$ obtained in Ref.~\onlinecite{pavlenko}
($\Omega_R/\hbar\omega_1
\approx 0.14$ and $\hbar^2 g_1^2/2M(\hbar\omega_1)^3 \approx 3.8$), we
find that for this set of parameters the dimerized state remains stable
at $T=0$ under pressure.
It is interesting that a similar picture has been
observed in M$_3$H(AO$_4$)$_2$ materials in experimentally measured baric
dependencies at low temperatures \cite{sinitsyn}. Nevertheless, we note
that as $g_1$ decreases and approaches the critical value
$g_1^*=\sqrt{\Omega_R/2}$, a pressure-induced transition from the dimerized
to the uniform state occurs. This effect appears due to the weaker
proton-phonon coupling and, as a result, the tendency of the protons to
delocalize along the chain.
\subsection{Case $\bar{n}=1$}
Let us discuss the other case, when on average one proton per bond is
present. According to Peierls theory \cite{peierls} such a system is highly
susceptible to a lattice modulation at $q^*=0$. It should be noted that
in this case only the second type of optical vibrations (the interchain
mode) contributes to the condensation of lattice distortions.
The Hamiltonian in condensed phase has the form
\begin{eqnarray}
H=&&\sum\limits_{k}\left[(-\mu+\tilde{\Delta})n_{ka}-
(\mu+\tilde{\Delta}) n_{kb}\right]+
\frac{1}{8}N \frac{\tilde{\Delta}^2}{E_0}+\\
&&\sum_{k}(t_k c_{ka}^+ c_{kb}+t_k^* c_{kb}^+ c_{ka}).\nonumber
\end{eqnarray}
In this case the density of proton states
\begin{eqnarray}
&&\rho(\varepsilon)=\frac{2}{\pi} \frac{|\varepsilon|}
{\sqrt{4\Omega_0^2\Omega_R^2-(\varepsilon^2-t_1)^2}} \times \\
&&\left(\Theta(\varepsilon \cdot {\rm
sgn}(\varepsilon)-\sqrt{\tilde{\Delta}^2+
(\Omega_0-\Omega_R)^2})- \right.\nonumber\\
&& \left. \Theta(\varepsilon \cdot {\rm sgn}(\varepsilon)-
\sqrt{\tilde{\Delta}^2+(\Omega_0+\Omega_R)^2})\right)
\nonumber
\end{eqnarray}
points to the two-band structure
\begin{equation}
\varepsilon_\nu (k)=\pm \sqrt{\tilde{\Delta}^2+|t_k|^2}
\end{equation}
with the Peierls energy gap $\Delta_{ab}=2\sqrt{\tilde{\Delta}^2+
(\Omega_0-\Omega_R)^2}$. The chemical potential is always centered between
the two bands, i.e., $\mu=0$.
The equation determining
$\tilde{\Delta} \neq 0$, which follows from the stationarity condition
for $F$, reads:
\begin{equation}
\frac{1}{4E_0}=\frac{1}{N}\sum_k
\frac{1}{\sqrt{\tilde{\Delta}^2+|t_k|^2}}. \label{del2}
\end{equation}
The nonzero solution, which appears for $g_2>g_P$, corresponds to the
formation of a proton charge-density wave in the chain together with
the stabilization of the distortions $u_l=\tilde{\Delta}/2g_2$ (see Fig.~6).
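Equation (\ref{del2}) is readily solved by bisection, since its right-hand side decreases monotonically in $\tilde{\Delta}$. The sketch below uses arbitrary illustrative parameters; a nonzero root exists only when $1/4E_0$ lies below the right-hand side at $\tilde{\Delta}=0$, i.e., for sufficiently strong coupling:

```python
import numpy as np

Omega0, OmegaR, E0 = 0.8, 0.5, 1.0   # illustrative parameters
kd = np.linspace(-np.pi, np.pi, 4001)
tk2 = Omega0**2 + OmegaR**2 + 2.0 * Omega0 * OmegaR * np.cos(kd)  # |t_k|^2

def rhs(Delta):
    """Right-hand side of the gap equation: (1/N) sum_k (Delta^2+|t_k|^2)^(-1/2)."""
    return np.mean(1.0 / np.sqrt(Delta**2 + tk2))

target = 1.0 / (4.0 * E0)
assert rhs(0.0) > target        # coupling strong enough for a nonzero root

lo, hi = 0.0, 100.0             # rhs is monotonically decreasing: bisect
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rhs(mid) > target else (lo, mid)
Delta_star = 0.5 * (lo + hi)
assert abs(rhs(Delta_star) - target) < 1e-8
```

For $\Omega_0\neq\Omega_R$ the band $|t_k|$ is gapped, so the right-hand side stays finite at $\tilde{\Delta}=0$ and the ordered solution indeed requires a finite coupling strength.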
\begin{figure}[htbp]
\epsfxsize=8.cm
\epsfysize=1.cm
\centerline{\epsffile{fig6.eps}}
\caption{Broken-symmetry structure which appears in the case of
half-filled chain.}
\label{fig6}
\end{figure}
Typical dependencies of the average proton occupancies
$\langle n_{l\nu} \rangle$ are represented in Fig.~7. It is interesting
that the system is now invariant under the interchange
$\Omega_0 \leftrightarrow \Omega_R$. Thus it is sufficient to analyze the
system behavior as a function of $\Omega_0$, for instance, at a given
fixed value of $\Omega_R$. The ground-state phase diagram
($\tilde{g}_2={g_2}/{\hbar\omega_2}$,
$\tilde{\Omega}_0={\Omega_0}/{\hbar\omega_2}$)
(see Fig.~8) differs essentially from the case
$\bar{n}=\frac{1}{2}$. The phase-equilibrium curve has a characteristic
salient point at $\Omega_0=\Omega_R$. The drastic decrease of $g_P$
in the vicinity of $\Omega_0=\Omega_R$ is connected with the fact that the
transfer anisotropy $|\Omega_0-\Omega_R|$ forms an additional transverse
field which competes with the stabilization of the ordering. This
anisotropy vanishes at $\Omega_0=\Omega_R$, which leads to a lowering of
\begin{figure}[htbp]
\epsfxsize=6.cm
\epsfysize=6.cm
\centerline{\epsffile{fig7.eps}}
\caption{Average proton occupancies as functions of $\tilde{g}_2$ for
$\tilde{\Omega}_0=0.8$; bold and thin curves indicate the cases
$\tilde{\Omega}_R=0.5$ and $\tilde{\Omega}_R=2.5$, respectively.}
\label{fig7}
\end{figure}
the crossover proton-phonon coupling energy $g_P$ required for the
stabilization of the ordering. The interpretation of the diagram
($\tilde{g}_2$, $\tilde{\Omega}_0$) with
respect to the pressure effect is particularly interesting.
\begin{figure}[htbp]
\epsfxsize=6.cm
\epsfysize=6.cm
\centerline{\epsffile{fig8.eps}}
\caption{Ground-state phase diagrams ($\tilde{g}_2$, $\tilde{\Omega}_0$).
The notation PO indicates the symmetry-broken phase with proton ordering
on the hydrogen bonds; inset: the region $\Omega_0 \sim \Omega_R$ in more
detail.}
\label{fig8}
\end{figure}
A second-order transition from the uniform to the ordered state occurs
under pressure (with decreasing $\Omega_0$) for $g_2>g_2^*=\sqrt{\Omega_R}/2$.
However, in the region $g_2<g_2^*$ an additional reentrant transition
from the symmetry-broken to the uniform state appears (see Fig.~9).
\begin{figure}[htbp]
\epsfxsize=6.cm
\epsfysize=6.cm
\centerline{\epsffile{fig9.eps}}
\caption{Distortion parameter as a function of $\tilde{\Omega}_R$
for $\tilde{g}_2=0.32$ and $\tilde{\Omega}_0=0.5$; inset:
corresponding dependencies of the average proton occupancies.}
\label{fig9}
\end{figure}
In this case the region of stability of the symmetry-broken phase narrows
as $g_2$ decreases. We note that first-principles
calculations \cite{jansen,springborg2,springborg}
and the results of Monte Carlo simulations \cite{wang} in
quasi-one-dimensional hydrogen halides show a transition from the
symmetry-broken phase shown in Fig.~6 to the uniform symmetric phase under
pressure at low temperatures. Thus our results in the vicinity of
$\Omega_0 \approx \Omega_R$
and for sufficiently weak proton-phonon coupling $g_2<g_2^*$
are in qualitative agreement with the conclusions of
Refs.~\onlinecite{wang,jansen},
confirming a proper treatment of the quantum effects in these
hydrogen-bonded materials.
\section{CONCLUSIONS}
In the present work the lattice effect on the ground-state properties
of a quantum quasi-one-dimensional hydrogen-bonded chain
is analyzed in the framework of the two-stage orientational-tunnelling model.
The interaction of protons with two different types of optical displacements
of the surrounding ionic groups is considered.
We show that when the proton-phonon coupling energy becomes large,
the system undergoes a transition from the disordered to a broken-symmetry
phase.
Two different cases of proton concentration have been analyzed,
$\bar{n}=1/2$ and $\bar{n}=1$. It is shown that in the first case a
Peierls transition to the dimerized phase occurs, whereas in the second
one we obtain a transformation into a proton-ordered state.
The influence of the two different transport amplitudes on the ground-state
properties is also studied. We compare
our ground-state phase diagrams with experimental investigations of the
pressure effect in superprotonic systems and hydrogen halides at low
temperatures.
\section*{Acknowledgements}
This work is partially supported by INTAS Grant No.~95-0133.
\section{Introduction}
The pnictide superconductor SrPtAs~\cite{nishikubo:2011} ($T_c=2.4K$) has attracted attention recently due to the time-reversal-symmetry (TRS) breaking occurring with the onset of superconductivity.\cite{youn:2012, goryo:2012, biswas:2013,fischer:2014a,matano:2014, akbari:2014, wang:2014, bruckner:2014,tutuncu:2014}
Unlike other pnictide superconductors, SrPtAs has a hexagonal crystal structure.
This has important consequences for the unconventional superconducting order parameters which can be classified with respect to the irreducible representations of the hexagonal point group and are only allowed to mix within the same representation.\cite{sigrist:1991}
A particularly interesting example is the degeneracy of $d_{xy}$ and $d_{x^2-y^2}$ which allows for a TRS-breaking ($d+id$)-wave superconducting state.\cite{fischer:2014a}
SrPtAs crystallizes in the $P6_3/mmc$ space group (\#194). This space group is non-symmorphic, i.e., some point group operations have to be combined with non-trivial translations to map the crystal onto itself, with a generating point group isomorphic to $D_{6h}$.
Note that considering a unit cell containing two Pt-As layers with an inversion center in between leads to a point group $D_{3d}\subset D_{6h}$.~\cite{goryo:2012}
However, focusing solely on this unit cell and its point group misses the symmetry operations in $D_{6h}\setminus D_{3d}$ and with it half the irreducible representations.
The full symmetry also has to be considered to construct a (tight-binding) Hamiltonian, which is responsible for the gap mixing on a microscopic level.
In this article, we present a comprehensive symmetry analysis of the superconducting order parameters in {SrPtAs} using the generating point group $D_{6h}$.
In section \ref{sec:sym} we first elaborate on the symmetry of SrPtAs and discuss a tight-binding model for illustration and to introduce basis functions.
In section \ref{sec:op}, we use the symmetry to classify gap functions and analyze their intermixing.
Finally, we discuss and summarize the resulting gap functions in section \ref{sec:discussion}.
\begin{figure}[b]
\centering
\subfigure[]{
\includegraphics[width=0.25\textwidth]{lattice3d}
}
\subfigure[]{
\includegraphics[width=0.2\textwidth]{lattice}
}
\caption{(a) 3D crystal structure only showing Pt (red) and As (blue) sites. (b) Symmetry of a single layer with a three-fold rotation axis (triangle) and 3 two-fold rotation axes and 3 mirror planes (only one of each shown).}
\label{fig:lattice}
\end{figure}
\section{Symmetry and tight-binding Hamiltonian}
\label{sec:sym}
SrPtAs possesses a hexagonal structure composed of honeycomb Pt-As layers, see Fig.~\ref{fig:lattice}(a), which are spaced by Sr layers.
In a single honeycomb layer, the A and B sites are occupied by Pt and As, respectively, such that there is no center of inversion.
However, in the neighboring layers, the Pt and As sites are interchanged, and the alternating stacking of the layers results in global centers of inversion between each two neighboring layers.
Due to this stacking, the crystal has a non-symmorphic space group.
An important consequence of this is that it is not possible to choose a unit cell that possesses the full generating point group $D_{6h}$ of the system.
To illustrate this property and better understand the resulting spin-orbit coupling and its effects, we here follow Ref.~\onlinecite{fischer:2011b} and choose as a starting point a single Pt-As layer, Fig.~\ref{fig:lattice}(b). The point group of such a single layer contains the symmetry operations $\{E, 2C_3, 3C_2', \sigma_h, 2 S_3, 3\sigma_v\}=D_{3h}$, in particular it lacks inversion $i$. The full crystal, however, has a generating point group that is isomorphic to $D_{\rm 6h}$. While the symmetry transformations of $D_{\rm 3h}$ with respect to the Pt-As layer leave the full crystal invariant, the elements of $D_{\rm 6h}\setminus D_{\rm 3h}=\{2C_6, C_2, 3C_2'', i, 2S_6, 3\sigma_d\}$ interchange the two distinct layers and thus have to be combined with a translation along the $z$ axis by $\tilde{c} = c/2$.
Following this construction, we discuss a tight-binding Hamiltonian with `$s$' orbitals on the Pt sites.~\cite{youn:2012, youn:2012b}
Starting from a single layer (with point group $D_{3h}$), the Hamiltonian contains a hopping term on the triangular (Pt) lattice,
\begin{equation}
\mathcal{H} = t \sum_{\textbf{k} s} \Big[\sum_{n}\cos(\textbf{T}_n\cdot\textbf{k})\Big]c^{\dag}_{\textbf{k} s}c^{\phantom{\dag}}_{\textbf{k} s},
\label{eq:Hsingle}
\end{equation}
where we have introduced the lattice vectors
\begin{eqnarray}
\textbf{T}_1 &=& a(0,1,0),\\
\textbf{T}_2 &=& a(\sqrt{3}/2, -1/2, 0),\\
\textbf{T}_3 &=& a(-\sqrt{3}/2, -1/2, 0)
\label{eq:Ts}
\end{eqnarray}
[see Figure \ref{fig:vectors}] and $c^\dag_{\textbf{k} s}$ creates an electron with momentum $\textbf{k}$ and spin $s$. In addition, there is a spin-orbit-coupling (SOC) term due to the As positions,
\begin{equation}
\mathcal{H}^{\rm soc} = \alpha_{\rm so}\sum_{\textbf{k}, s, s'}\Big[\sum_{n}\sin(\textbf{T}_n\cdot\textbf{k})\Big]c^{\dag}_{\textbf{k} s}\sigma^3_{ss'}c^{\phantom{\dag}}_{\textbf{k} s'}
\label{eq:HSOC}
\end{equation}
with $\sigma^i$ the Pauli matrices acting in spin space.
Note that from a symmetry perspective, the hopping term~\eqref{eq:Hsingle} transforms as $A_{1g}$ in $D_{6h}$, while the spin-orbit term~\eqref{eq:HSOC} transforms as $B_{1u}$. In $D_{\rm 3h}$, however, both transform as A$_{1g}$ and are thus symmetry allowed here.
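As a quick numerical consistency check (an illustration, not part of the paper), the hopping form factor of Eq.~\eqref{eq:Hsingle} is invariant under the three-fold rotation, while the SOC form factor of Eq.~\eqref{eq:HSOC} is odd in the in-plane momentum:

```python
import numpy as np

a = 1.0
T = a * np.array([[0.0, 1.0],
                  [np.sqrt(3) / 2, -0.5],
                  [-np.sqrt(3) / 2, -0.5]])   # in-plane parts of T_n

def c_sum(k):   # hopping form factor, sum_n cos(T_n . k)
    return np.cos(T @ k).sum()

def s_sum(k):   # SOC form factor, sum_n sin(T_n . k)
    return np.sin(T @ k).sum()

def R(phi):     # in-plane rotation matrix
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

k = np.array([0.37, -1.21])
# A C3 rotation permutes the T_n, so both form factors are invariant under it.
assert np.isclose(c_sum(R(2 * np.pi / 3) @ k), c_sum(k))
assert np.isclose(s_sum(R(2 * np.pi / 3) @ k), s_sum(k))
# The sine sum is odd under k -> -k, as expected for an odd-parity (u) function.
assert np.isclose(s_sum(-k), -s_sum(k))
```

The rotation acts on the form factors through $\textbf{T}_n\cdot R\textbf{k}=(R^{T}\textbf{T}_n)\cdot\textbf{k}$, and $R^{T}$ simply permutes the three lattice vectors.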
\begin{figure}[tb]
\centering
\includegraphics[width=0.25\textwidth]{vectors}
\caption{Lattice vectors with $\textbf{T}_n$ connecting Pt sites within the same layer and $\textbf{t}_n$ the in-plane vectors connected to nearest-layer hopping.}
\label{fig:vectors}
\end{figure}
To treat the layer staggering of the 3D structure, we divide SrPtAs into two sublattices, the even and odd layers.~\cite{fischer:2011b} Instead of working in layer space, we fold the Brillouin zone (BZ) in the $z$ direction (with respect to stacked layers without staggering) by introducing the operators
\begin{equation}
c_{\alpha\textbf{k} s} = \left\{\begin{array}{ll} c_{\textbf{k} s} & \alpha=1 \\ c_{\textbf{k}+\textbf{Q} s}&\alpha=2\end{array}\right.,
\label{eq:caks}
\end{equation}
where $\textbf{Q}=(0,0,\pi)$ (setting $\tilde{c}=1$), and Pauli matrices ($\tau$) acting in $\{\textbf{k}, \textbf{k}+\textbf{Q}\}$ space (see Appendix).
The spin-independent Hamiltonian can then be written as
\begin{equation}
\mathcal{H} = \sum_{\alpha,\alpha'}\sum_{\textbf{k}, s}\vphantom{\sum}'\mathcal{H}_{\textbf{k}\alpha\alpha'}c^{\dag}_{\alpha\textbf{k} s}c^{\phantom{\dag}}_{\alpha'\textbf{k} s},
\label{eq:H0}
\end{equation}
where the sum $\sum_{\textbf{k}}'$ runs over the folded BZ. $\mathcal{H}_{\textbf{k}\alpha\alpha'}$ consists of a trivial intra-sublattice hopping
\begin{equation}
\mathcal{H}_{\textbf{k}}^{\rm intra} = [t\sum_{n}\cos(\textbf{T}_n\cdot\textbf{k}) + t_z' \cos(2k_z)]\tau^0,
\label{eq:Hintra}
\end{equation}
and an inter-sublattice hopping connecting neighboring layers,
\begin{equation}
\mathcal{H}_{\textbf{k}}^{\rm inter} = t_z\cos(k_z)[\sum_{n}\cos(\textbf{t}_n\cdot\textbf{k})\tau^3 + \sum_{n}\sin(\textbf{t}_n\cdot\textbf{k})\tau^2].
\label{eq:Hinter}
\end{equation}
Note the momentum structure due to the fact that Pt sites of neighboring layers do not sit on top of each other [see Fig.~\ref{fig:vectors}].
Finally, the spin-orbit-coupling term has opposite sign on the two sublattices, reading in this basis
\begin{equation}
\mathcal{H}_{\textbf{k}}^{\rm soc} = \alpha_{\rm so} \sum_{n}\sin(\textbf{T}_n\cdot\textbf{k}) \sigma^3\otimes\tau^1.
\label{eq:Hsoc}
\end{equation}
The nature of the $\tau$ matrices and their transformation behavior is summarized in Table~\ref{tab:taus} and a detailed derivation of the tight-binding terms can be found in the appendix.
Note that all the bands resulting from this Hamiltonian are doubly degenerate and $S_z$ is conserved. Other spin-orbit-coupling terms that break $S_z$ are connected to inter-layer hopping and are small; we will discuss their consequences later.
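The double degeneracy noted above can be checked directly: in each spin sector the Hamiltonian is $\epsilon\,\tau^0 + a\,\tau^3 + b\,\tau^2 \pm g\,\tau^1$, and the eigenvalues $\epsilon \pm \sqrt{a^2+b^2+g^2}$ do not depend on the sign of $g$. The sketch below verifies this numerically with illustrative (not fitted) hopping parameters and an assumed set of in-plane vectors $\textbf{T}_n$, $\textbf{t}_n$ at $120^\circ$ to each other:

```python
import numpy as np

# Pauli matrices in spin (sigma) and {k, k+Q} (tau) space
s0, s3 = np.eye(2), np.diag([1.0, -1.0])
t0 = np.eye(2)
t1 = np.array([[0, 1], [1, 0]], dtype=complex)
t2 = np.array([[0, -1j], [1j, 0]])
t3 = np.diag([1.0, -1.0]).astype(complex)

# Illustrative (not fitted) parameters; T_n, t_n assumed at 120 degrees
t, tzp, tz, a_so = 1.0, 0.3, 0.5, 0.4
T = [np.array([np.cos(2*np.pi*n/3), np.sin(2*np.pi*n/3)]) for n in range(3)]
tn = [v / np.sqrt(3) for v in T]  # assumed in-plane inter-layer vectors

def H(kx, ky, kz):
    """Bloch Hamiltonian H_intra + H_inter + H_soc in the folded BZ."""
    k = np.array([kx, ky])
    eps = t*sum(np.cos(v @ k) for v in T) + tzp*np.cos(2*kz)
    a = tz*np.cos(kz)*sum(np.cos(v @ k) for v in tn)
    b = tz*np.cos(kz)*sum(np.sin(v @ k) for v in tn)
    g = a_so*sum(np.sin(v @ k) for v in T)
    return np.kron(s0, eps*t0 + a*t3 + b*t2) + g*np.kron(s3, t1)

# Bands come in degenerate pairs: the spin-up/down blocks differ only
# in the sign of the tau^1 SOC term, which leaves eigenvalues unchanged.
for kpt in [(0.3, 0.7, 0.2), (1.1, -0.4, 1.0)]:
    ev = np.sort(np.linalg.eigvalsh(H(*kpt)))
    assert np.allclose(ev[0], ev[1]) and np.allclose(ev[2], ev[3])
```

The precise lattice vectors only enter through the form factors, so the degeneracy argument is insensitive to the assumed geometry.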
In terms of symmetry, the Hamiltonian has to transform trivially, i.e., as a scalar, under all the symmetry transformations of the crystal (corresponding to $A_{1g}$ in $D_{6h}$). This is achieved here through the combination of a momentum and spin part with a $\tau$ matrix such that the whole Hamiltonian transforms as $A_{1g}$, i.e. either $A_{1g}\otimes A_{1g}$ [see Eq.~\eqref{eq:Hintra}] or $B_{1u}\otimes B_{1u}$ [Eq.~\eqref{eq:Hsoc}].\cite{footnote:dorbitals} In addition, note that by construction a function with $f(\textbf{k}) = f(\textbf{k}+\textbf{Q})$ [$f(\textbf{k}) = - f(\textbf{k}+\textbf{Q})$] has to be combined with $\tau^0$ or $\tau^1$ ($\tau^2$ or $\tau^3$), respectively.
\begin{table}[tb]
\centering
\begin{tabular}{cccc}
\hline\hline
& intra-sublattice & inter-sublattice & IR \\
\hline
intra-band & $\tau^0$ & $\tau^3$ & $A_{1g}$\\
inter-band & $\tau^1$ & $\tau^2$ & $B_{1u}$\\
\end{tabular}
\caption{Classification of the Pauli matrices $\tau^a$ defined in the space $\{\textbf{k}, \textbf{k}+\textbf{Q}\}$ of the two bands in the folded Brillouin zone. The sublattice refers to even and odd layers.}
\label{tab:taus}
\end{table}
\section{Gap Classification}
\label{sec:op}
A general gap function in the $\{\textbf{k}, \textbf{k}+\textbf{Q}\}$ basis of Eq.~\eqref{eq:caks} can be written as
\begin{equation}
\Delta_{ss'}^{\alpha\alpha'}(\textbf{k}) = \{\psi_a(\textbf{k})(i\sigma^y) + [\vec{d}_a(\textbf{k})\cdot\vec{\sigma}](i\sigma^y)\}_{ss'}\otimes\tau^a_{\alpha\alpha'},
\label{ea:generalgap}
\end{equation}
and the spin-singlet part $\psi_a(\textbf{k})$ and the spin-triplet part $\vec{d}_a(\textbf{k})$ can be classified according to the irreducible representations of $D_{6h}$. Note that due to the SOC, we have to classify the combined momentum and spin part for the spin-triplet gap functions.
For the full classification, we use the same scheme as for the Hamiltonian, namely that gaps transform as $R\otimes R'$ with $R'$ the irreducible representation $\tau^a$ transforms as, i.e. either $A_{1g}$ or $B_{1u}$ (Table~\ref{tab:taus}).
However, there are two requirements for the gap functions $\psi_a(\textbf{k})$ and $\vec{d}_a(\textbf{k})$: (1) As noted in the previous section, our construction requires $\psi_a(\textbf{k})$ and $\vec{d}_a(\textbf{k})$ for $a=2,3$ to change sign under $\textbf{k}\mapsto\textbf{k}+\textbf{Q}$ but not for $a=0,1$. (2) The Pauli principle has to be satisfied, requiring that for an even spin-singlet and odd spin-triplet function $a=0,1,3$ (``triplet in $\{\textbf{k}, \textbf{k}+\textbf{Q}\}$''), and $a=2$ for odd spin-singlet and even spin-triplet order parameters (``singlet in $\{\textbf{k}, \textbf{k}+\textbf{Q}\}$'').
A list of possible (tight-binding) functions for the spin-singlet functions $\psi_{a}(\textbf{k})$ respecting the crystal's symmetry for the various irreducible representations of $D_{6h}$ is given in Table~\ref{tab:bfpsi}. Table~\ref{tab:bfdz} lists spin-triplet gap functions with a $d$ vector in the $\hat{z}$ direction. Finally, Table~\ref{tab:bfdxy} lists spin-triplet order parameters with in-plane $d$ vector. Note that this corresponds to the classification of Ref.~\onlinecite{sigrist:1991} when looking at their expansion in $k_x, k_y, k_z$, e.g.,
\begin{eqnarray}
\sum_n \omega_n\sin(\textbf{T}_n\cdot\textbf{k}) &\sim& (k_x - i k_y),\\
\sum_n \omega_n\cos(\textbf{T}_n\cdot\textbf{k})&\sim& [(k_x^2-k_y^2) + 2 i k_xk_y]
\label{eq:expansion}
\end{eqnarray}
($\omega_n \!=\! e^{i 2\pi n/3}$), and similarly for the other combinations.
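The leading small-$k$ behaviour of these form factors can be verified numerically. In the sketch below the $\textbf{T}_n$ are assumed to be unit vectors at angles $2\pi n/3$; with this particular convention the two sums come out proportional to $(k_x + i k_y)$ and $(k_x - i k_y)^2$, i.e. the complex conjugates of the combinations quoted above (a different orientation convention for the $\textbf{T}_n$ conjugates both expressions):

```python
import numpy as np

# Assumed convention: T_n at angles 2*pi*n/3 with |T_n| = 1 (the physical
# SrPtAs vectors may differ by a rotation, which conjugates the chirality).
w = [np.exp(2j*np.pi*n/3) for n in range(3)]
T = [np.array([np.cos(2*np.pi*n/3), np.sin(2*np.pi*n/3)]) for n in range(3)]

def f_sin(k):
    return sum(wn*np.sin(v @ k) for wn, v in zip(w, T))

def f_cos(k):
    return sum(wn*np.cos(v @ k) for wn, v in zip(w, T))

# Small-k expansions for this convention:
#   f_sin ~ (3/2)(kx + i ky),   f_cos ~ -(3/8)(kx - i ky)^2
for ang in np.linspace(0.1, 6.0, 7):
    k = 1e-3*np.array([np.cos(ang), np.sin(ang)])
    kp, km = k[0] + 1j*k[1], k[0] - 1j*k[1]
    assert np.allclose(f_sin(k)/kp, 1.5, atol=1e-5)
    assert np.allclose(f_cos(k)/km**2, -3/8, atol=1e-3)
```

In either convention the linear and quadratic combinations pick up the same phase under a threefold rotation of $\textbf{k}$, as required for partners of a two-dimensional representation.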
\begin{table}
\centering
\begin{tabular}{c|c}
$\Gamma^+$ & $\psi_{0,1,3}(\textbf{k})$\\
\hline
$A_{1g}$ & $1$, $\cos k_z \sum_n\cos(\textbf{t}_n\cdot\textbf{k})$, $\sum_n\cos(\textbf{T}_n\cdot\textbf{k})$\\
$A_{2g}$ & - \\
$B_{1g}$ & - \\
$B_{2g}$ & - \\
$E_{1g}$ & $\{\sin k_z\sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}), \sin k_z \sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})\}$ \\
& $\{\sin 2 k_z\sum_n\omega_n\sin(\textbf{T}_n\cdot\textbf{k}), \sin 2k_z \sum_n\omega_n^*\sin(\textbf{T}_n\cdot\textbf{k})\}$ \\
$E_{2g}$ & $\{\cos k_z\sum_n \omega_n\cos(\textbf{t}_n\cdot\textbf{k}),\cos k_z\sum_n \omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})\}$ \\
& $\{\sum_n \omega_n\cos(\textbf{T}_n\cdot\textbf{k}),\sum_n \omega_n^*\cos(\textbf{T}_n\cdot\textbf{k})\}$ \\
\hline
$\Gamma^-$ & $\psi_{2}(\textbf{k})$ \\
\hline
$A_{1u}$ & - \\
$A_{2u}$ & $\sin k_z\sum_n\cos(\textbf{t}_n\cdot\textbf{k})$\\
$B_{1u}$ & $\cos k_z\sum_n\sin(\textbf{t}_n\cdot\textbf{k})$\\
$B_{2u}$ & - \\
$E_{1u}$ & $\{\cos k_z \sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}), \cos k_z\sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})\}$ \\
$E_{2u}$ & $\{\sin k_z\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}), \sin k_z \sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})\}$ \\
\end{tabular}
\caption{Basis functions for $\psi_a(\textbf{k})$ in $D_{6h}$ with $\omega_n = \exp(i 2\pi n/3)$. Note that the table only respects the Pauli principle. In addition, it is required that $\psi_3(\textbf{k})=-\psi_3(\textbf{k}+\textbf{Q})$. }
\label{tab:bfpsi}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|c}
$\Gamma^+$ & $\vec{d}_{2}(\textbf{k})$\\
\hline
$A_{1g}$ & - \\
$A_{2g}$ & $\hat{z}\cos k_z \sum_n\cos(\textbf{t}_n\cdot\textbf{k})$\\
$B_{1g}$ & - \\
$B_{2g}$ & - \\
$E_{1g}$ & $\{\hat{z}\sin k_z\sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}), \hat{z}\sin k_z \sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})\}$ \\
$E_{2g}$ & $\{\hat{z}\cos k_z \sum_n \omega_n\cos(\textbf{t}_n\cdot\textbf{k}),\hat{z}\cos k_z \sum_n \omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})\}$ \\
\hline
$\Gamma^-$ & $\vec{d}_{0,1,3}(\textbf{k})$ \\
\hline
$A_{1u}$ & $\hat{z}\sin k_z\sum_n\cos(\textbf{t}_n\cdot\textbf{k})$, $\hat{z}\sin 2k_z$ \\
$A_{2u}$ & - \\
$B_{1u}$ & $\hat{z}\sum_n\sin(\textbf{T}_n\cdot\textbf{k})$\\
$B_{2u}$ & $\hat{z}\cos k_z \sum_n\sin(\textbf{t}_n\cdot\textbf{k})$\\
$E_{1u}$ & $\{\hat{z}\cos k_z \sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}), \hat{z}\cos k_z\sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})\}$ \\
& $\{\hat{z}\sum_n\omega_n\sin(\textbf{T}_n\cdot\textbf{k}), \hat{z} \sum_n\omega_n^*\sin(\textbf{T}_n\cdot\textbf{k})\}$ \\
$E_{2u}$ & $\{\hat{z}\sin k_z\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}), \hat{z}\sin k_z \sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})\}$ \\
\end{tabular}
\caption{Basis functions for $d^z_a(\textbf{k})$ in $D_{6h}$. Note that the table only respects the Pauli principle. In addition, it is required that $\vec{d}_3(\textbf{k})=-\vec{d}_3(\textbf{k}+\textbf{Q})$.}
\label{tab:bfdz}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|c}
$\Gamma^+$ & $\vec{d}_{2}(\textbf{k})$\\
\hline
$A_{1g}$ & $\sin k_z[\hat{x}_{+}\sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}) + \hat{x}_{-}\sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})]$ \\
$A_{2g}$ & $\sin k_z [\hat{x}_{+}\sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}) -\hat{x}_{-}\sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})]$ \\
$B_{1g}$ & $\cos k_z[\hat{x}_{+}\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}) - \hat{x}_{-}\sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})]$ \\
$B_{2g}$ & $\cos k_z[\hat{x}_{+}\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}) + \hat{x}_{-}\sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})]$ \\
$E_{1g}$ & $\{\hat{x}\cos k_z\sum_n\cos(\textbf{t}_n\cdot\textbf{k}), \hat{y}\cos k_z \sum_n\cos(\textbf{t}_n\cdot\textbf{k})\}$ \\
$E_{2g}$ & $\{\sin k_z\hat{x}_{-}\sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k}), \sin k_z\hat{x}_{+}\sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})\}$ \\
\hline
$\Gamma^-$ & $\vec{d}_{0,1,3}(\textbf{k})$ \\
\hline
$A_{1u}$ & $\hat{x}_{+}\sum_n\omega_n\sin(\textbf{T}_n\cdot\textbf{k}) -\hat{x}_{-}\sum_n\omega_n^*\sin(\textbf{T}_n\cdot\textbf{k})$ \\
$A_{2u}$ & $\hat{x}_{+}\sum_n\omega_n\sin(\textbf{T}_n\cdot\textbf{k}) + \hat{x}_{-}\sum_n\omega_n^*\sin(\textbf{T}_n\cdot\textbf{k})$ \\
$B_{1u}$ & $\sin k_z[\hat{x}_{+}\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}) + \hat{x}_{-}\sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})]$ \\
$B_{2u}$ & $\sin k_z[\hat{x}_{+}\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}) - \hat{x}_{-}\sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})]$ \\
$E_{1u}$ & $\{\hat{x}_{-}\sin 2 k_z, \hat{x}_{+}\sin 2 k_z\}$ \\
& $\{\hat{x}_{-}\sin k_z \sum_n\cos(\textbf{t}_n\cdot\textbf{k}), \hat{x}_{+}\sin k_z\sum_n\cos(\textbf{t}_n\cdot\textbf{k})\}$ \\
$E_{2u}$ & $\{\hat{x}_{-}\sum_n\omega_n\sin(\textbf{T}_n\cdot\textbf{k}), \hat{x}_{+}\sum_n\omega^*_n\sin(\textbf{T}_n\cdot\textbf{k})\}$ \\
\end{tabular}
\caption{Basis functions for $\vec{d}_a(\textbf{k})$ in $D_{6h}$. For simplicity we have introduced $\hat{x}_{\pm} = (\hat{x}\pm i\hat{y})$. Note that the table only respects the Pauli principle. In addition, it is required that $\vec{d}_3(\textbf{k})=-\vec{d}_3(\textbf{k}+\textbf{Q})$.}
\label{tab:bfdxy}
\end{table}
Utilizing Tables~\ref{tab:bfpsi}-\ref{tab:bfdxy}, we can now construct order parameters belonging to any given irreducible representation of $D_{\rm 6h}$. In the following, we look at some examples for illustration, namely $A_{1g}$, $B_{1u}$, and $E_{2g}$, which have been discussed for SrPtAs,~\cite{goryo:2012, biswas:2013, akbari:2014, fischer:2014a, matano:2014, wang:2014,bruckner:2014} starting with $A_{1g}$: assuming intra-layer coupling, the gap function has a dominant term of the form
\begin{equation}
\Delta^{(0)}_{A_{1g}}(\textbf{k}) = \psi_{0}^{A_{1g}}(\textbf{k})(i\sigma^y)\otimes\tau^0
\label{eq:A1g}
\end{equation}
with
\begin{equation}
\psi^{A_{1g}}_{0}(\textbf{k})= A + B \sum_n\cos(\textbf{T}_n\cdot\textbf{k}).
\label{eq:psiA1g0}
\end{equation}
$A$ and $B$ are coefficients depending on the pairing interaction and bandstructure.
Order parameters with the same symmetry can be mixed in by the Hamiltonian, i.e.,
\begin{multline}
\Delta'_{A_{1g}}(\textbf{k}) = \tilde{\psi}_{0}^{A_{1g}}(\textbf{k})(i\sigma^y)\otimes\tau^0 + \psi_{3}^{A_{1g}}(\textbf{k})(i\sigma^y)\otimes\tau^3\\ + \psi^{B_{1u}}_2(\textbf{k})(i\sigma^y)\otimes\tau^2 + [\vec{d}{\,}^{B_{1u}}_{1}(\textbf{k})\cdot\vec{\sigma}](i\sigma^y)\otimes\tau^1.
\label{eq:A1gb}
\end{multline}
These include an additional intra-sublattice spin-singlet
\begin{equation}
\tilde{\psi}^{A_{1g}}_{0}(\textbf{k}) \propto \cos 2k_z,
\end{equation}
a trivial ($A_{1g}$) and a non-trivial ($B_{1u}$) inter-layer spin-singlet
\begin{equation}
\psi^{A_{1g}}_{3}(\textbf{k}) \propto \cos k_z \sum_n\cos(\textbf{t}_n\cdot\textbf{k}),
\label{eq:psiA1g3}
\end{equation}
and
\begin{equation}
\psi_2^{B_{1u}}(\textbf{k}) \propto \cos(k_z)\sum_n\sin(\textbf{t}_n\cdot\textbf{k}),
\label{eq:psibB1u}
\end{equation}
as well as an intra-layer spin-triplet gap function
\begin{equation}
\vec{d}{\,}^{B_{1u}}_{1}(\textbf{k}) \propto \hat{z}\sum_n\sin(\textbf{T}_n\cdot\textbf{k}).
\label{eq:dB1u1}
\end{equation}
Note that as a result of the intermixing of singlet and triplet gap functions, the resulting order parameter is non-unitary, i.e., $\Delta \Delta^\dag \not\propto\sigma^0\otimes\tau^0$.
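The non-unitarity can be seen already with constant illustrative amplitudes for the $\tau^0$ singlet and $\tau^1$ triplet components of Eq.~\eqref{eq:A1gb}: the cross term generates a $\sigma^3\otimes\tau^1$ contribution to $\Delta\Delta^\dag$. A minimal sketch (the amplitudes below are arbitrary numbers, not solutions of a gap equation):

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0, -1.0])
t0 = np.eye(2)
t1 = np.array([[0, 1], [1, 0]], dtype=complex)

# Illustrative amplitudes at one k-point: a singlet psi_0 (tau^0) mixed
# with a triplet d^z_1 (tau^1), as in the A_1g example of the text.
psi0, dz1 = 1.0, 0.4
iE = 1j*s2  # i sigma^y
Delta = psi0*np.kron(iE, t0) + dz1*np.kron(s3 @ iE, t1)

prod = Delta @ Delta.conj().T
ident = np.eye(4)
# Delta Delta^dag = (psi0^2 + dz1^2) 1 + 2 psi0 dz1 sigma^3 x tau^1,
# which is not proportional to the identity for psi0*dz1 != 0.
assert not np.allclose(prod, prod[0, 0]*ident)
expected = (psi0**2 + dz1**2)*ident + 2*psi0*dz1*np.kron(s3, t1)
assert np.allclose(prod, expected)
```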
In a similar way, the $f$-wave gap function of $B_{1u}$ symmetry has a dominant intra-layer part
\begin{equation}
\Delta^{(0)}_{B_{1u}}(\textbf{k}) = [\vec{d}{\,}^{B_{1u}}_{0}(\textbf{k})\cdot\vec{\sigma}](i\sigma^y)\otimes \tau^0
\end{equation}
with
\begin{equation}
\vec{d}{\,}^{B_{1u}}_{0}(\textbf{k}) \propto \hat{z}\sum_n\sin(\textbf{T}_n\cdot\textbf{k}).
\label{eq:fB1u1}
\end{equation}
The additionally allowed term is
\begin{equation}
\Delta'_{B_{1u}}(\textbf{k}) = \psi^{A_{1g}}_{1}(\textbf{k})(i\sigma^y)\otimes\tau^1, \label{eq:b1u}
\end{equation}
an intra-sublattice spin-singlet gap function with
\begin{equation}
\psi^{A_{1g}}_{1}(\textbf{k})= A + B \sum_n\cos(\textbf{T}_n\cdot\textbf{k}) + C\cos 2k_z.
\label{eq:psiA1g1}
\end{equation}
Finally, we look at the ($d\pm id$)-wave order parameter transforming as $E_{2g}$. The gap function reads
\begin{equation}
\Delta^{(0)}_{E_{2g},\pm}(\textbf{k}) = \psi^{E_{2g}}_{0, \pm}(\textbf{k})(i\sigma^{y})\otimes\tau^0
\label{eq:psiE2g}
\end{equation}
with
\begin{equation}
\psi^{E_{2g}}_{0,+}(\textbf{k}) \propto \sum_n\omega_n \cos (\textbf{T}_n\cdot \textbf{k})
\label{eq:psiE2g0}
\end{equation}
and $\Delta_{E_{2g},-}(\textbf{k}) = [\Delta_{E_{2g},+}(\textbf{k})]^*$.
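As a sanity check, the dominant form factor of Eq.~\eqref{eq:psiE2g0} vanishes on the $\Gamma$ and $K$, $K'$ lines, consistent with symmetry-imposed point nodes on Fermi surfaces crossing these lines. A minimal sketch, assuming the $\textbf{T}_n$ are unit vectors at angles $2\pi n/3$ so that the BZ corner sits at $K=(4\pi/3,0)$:

```python
import numpy as np

# Assumed in-plane vectors at 120 degrees with |T_n| = 1; for this
# convention the corner of the hexagonal BZ sits at K = (4*pi/3, 0).
w = [np.exp(2j*np.pi*n/3) for n in range(3)]
T = [np.array([np.cos(2*np.pi*n/3), np.sin(2*np.pi*n/3)]) for n in range(3)]

def psi_E2g(k):
    """Dominant d+id form factor, sum_n w_n cos(T_n . k)."""
    return sum(wn*np.cos(v @ k) for wn, v in zip(w, T))

Gamma = np.zeros(2)
K = np.array([4*np.pi/3, 0.0])
Kp = -K

# The d+id singlet vanishes on the Gamma and K/K' lines ...
for kpt in (Gamma, K, Kp):
    assert abs(psi_E2g(kpt)) < 1e-12

# ... but is generically nonzero elsewhere.
assert abs(psi_E2g(np.array([0.5, 0.3]))) > 1e-3
```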
We can additionally mix in
\begin{multline}
\Delta'_{E_{2g},\pm}(\textbf{k}) = \psi^{E_{2g}}_{3, \pm}(\textbf{k})(i\sigma^{y})\otimes\tau^3\\ + \psi^{E_{1u}}_{\pm}(\textbf{k})(i\sigma^{y})\otimes\tau^2 + [\vec{d}{\,}^{E_{1u}}_{\pm}(\textbf{k})\cdot\vec{\sigma}](i\sigma^y)\otimes\tau^1,
\label{eq:psiE2gb}
\end{multline}
including an inter-layer $(d\pm id)$ spin-singlet gap function
\begin{equation}
\psi^{E_{2g}}_{3,+}(\textbf{k}) \propto\cos k_z \sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k}),
\label{eq:psiE2g3}
\end{equation}
as well as $p\pm ip$ gap functions with spin-singlet
\begin{equation}
\psi^{E_{1u}}_{+}(\textbf{k}) \propto \cos k_z \sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k})
\label{eq:psiE1u}
\end{equation}
and spin-triplet form
\begin{equation}
\vec{d}{\,}^{E_{1u}}_{+} = \hat{z} \sum_n \omega_n \sin(\textbf{T}_n\cdot \textbf{k}).
\label{eq:dE1u}
\end{equation}
Before we finish by discussing the above gap functions, we note that the conservation of $S_z$ led to a block-diagonal form of the mean-field Hamiltonian. However, this conservation is not imposed by symmetry. Indeed, we can add an additional SOC term to the Hamiltonian,
\begin{multline}
\mathcal{H}_{\rm soc}' = \alpha_{\rm so}'\sin k_z \Big[\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k})(\sigma^1+i\sigma^2)\otimes\tau^2 \\
+ \sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})(\sigma^1-i\sigma^2)\otimes\tau^2\Big].\label{eq:fullsoc}
\end{multline}
This term also allows us to additionally mix in spin-triplet order parameters with in-plane $d$ vectors from Table~\ref{tab:bfdxy}.
The $A_{1g}$ and $B_{1u}$ gap function can have an additional inter-layer spin-triplet gap contribution
\begin{multline}
\vec{d}{\,}^{B_{1u}}_{1/3}(\textbf{k}) \propto \sin k_z[\hat{x}_{+}\sum_n\omega_n\cos(\textbf{t}_n\cdot\textbf{k})\\ + \hat{x}_{-}\sum_n\omega_n^*\cos(\textbf{t}_n\cdot\textbf{k})],
\label{eq:db1u3}
\end{multline}
with $\hat{x}_{\pm}=\hat{x}\pm i\hat{y}$. For the $B_{1u}$, there is in addition an inter-layer spin-triplet gap function
\begin{multline}
\vec{d}^{A_{1g}}_{2}(\textbf{k}) \propto \sin k_z[\hat{x}_{+}\sum_n\omega_n\sin(\textbf{t}_n\cdot\textbf{k})\\ + \hat{x}_{-}\sum_n\omega_n^*\sin(\textbf{t}_n\cdot\textbf{k})].
\label{eq:dA1g}
\end{multline}
Similarly, we can add the $E_{2g}$ and the $E_{1u}$ gap functions of Table~\ref{tab:bfdxy} with the corresponding $\tau$ matrices into the $d\pm id$ order parameter.
\begin{table}[tb]
\centering
\begin{tabular}{cccc}
\hline\hline
Irr. Rep. & no SOC & $S_z$ SOC & full SOC\\
\hline
$E_{2g}$ & C & A & D \\
$B_{1u}$ & AIII & AIII (A) & DIII (D) \\
\end{tabular} \caption{Topological classification~\cite{schnyder:2008,kitaev:2009,ryu:2010} of the $E_{2g}$ and $B_{1u}$ pairing states
in SrPtAs for a mean-field Hamiltonian without spin-orbit coupling, the dominant spin-orbit coupling with $S_z$ conservation, and the full spin-orbit coupling, respectively. The classification in brackets for $B_{1u}$ corresponds to a time-reversal-symmetry-breaking mixing of the various basis functions.}
\label{tab:topology}
\end{table}
\section{Discussion and Summary}
\label{sec:discussion}
We finish our analysis by discussing two important properties of the order parameters above, namely their nodal structure and their topological classification~\cite{schnyder:2008,kitaev:2009,ryu:2010}. For this purpose, we first only consider the dominant in-plane contributions given in Eqs.~\eqref{eq:psiA1g0}, \eqref{eq:fB1u1}, and \eqref{eq:psiE2g0}. In a second step, we will discuss consequences of additional intermixing. As the $A_{1g}$ order parameter is trivial, we only discuss the $E_{2g}$ and the $B_{1u}$ gap functions.
We first consider the $B_{1u}$ order parameter. A gap of pure $f$-wave symmetry [$\vec{d}_0^{B_{1u}}(\textbf{k})$] has line nodes on any pocket around the $\Gamma$-$A$ line. However, admixture of the other components, e.g. the $\vec{d}_2^{A_{1g}}(\textbf{k})$ and the $\psi_1^{A_{1g}}(\textbf{k})$ components, can lift these nodes. Triplet superconductors in general do not have symmetry-imposed line nodes, which is known as Blount's theorem.~\cite{blount:1985} Note, however, the difference from the $p$-wave situation on a square lattice: while in the latter case two gap functions related by lattice symmetries are mixed, the $B_{1u}$ channel requires either a singlet component or an inter-layer gap mixed in. Both are subdominant and hence there will still be a significantly suppressed gap along lines on the Fermi surfaces around $\Gamma$-$A$. From a topological point of view, the `pure' $B_{1u}$ gap has a block-diagonal form and belongs to class AIII. Mixing through the spin-orbit coupling conserving $S_z$ leaves this classification invariant, while the full spin-orbit coupling of the form~\eqref{eq:fullsoc} destroys the block-diagonal form and leads to an order parameter of class DIII. Finally, a TRS-breaking combination of the various gap functions, which could be realized at a second transition,~\cite{sigrist:1998} would lead to classes A and D, respectively (see Table~\ref{tab:topology}).
Due to the mixing of $d_{xy}$ and $d_{x^2-y^2}$ basis functions, the spin-singlet gap with $E_{2g}$ symmetry only has symmetry-imposed point nodes, where the lines parallel to $z$ through $K$, $K'$, and $\Gamma$ intersect with the Fermi surfaces. The pure $d+id$ gap belongs to class C according to the topological classification. The admixture of a triplet $p$-wave, Eq.~\eqref{eq:dE1u}, then changes the classification to A. This class is trivial in three dimensions, but has a $\mathbb{Z}$ classification in two dimensions. For Fermi surfaces with point nodes, as is the case for SrPtAs, this state is thus a Weyl superconductor. At the point nodes at the BZ boundary, the low-energy excitations can be described as pairs of Majorana-Weyl fermions with a linear spectrum, which do not mix due to $S_z$ conservation.~\cite{fischer:2014a} Note, however, that the full spin-orbit-coupling term again destroys the block-diagonal form of the mean-field Hamiltonian, resulting in an order parameter belonging to class D (see Table~\ref{tab:topology}). A direct consequence is that the two linear branches of the Majorana-Weyl spectrum can in principle mix, leading to a more complicated nodal and low-energy structure. The gapless character in the nodal region is, however, still protected, as the two nodes at the same point in momentum space carry an equal topological charge.
To summarize, we have presented a comprehensive symmetry analysis of possible gap functions of SrPtAs. Our results can be used for a thorough analysis of instabilities and response functions such as the spin-susceptibility in this complicated material and can help to determine the intriguing superconducting order parameter in SrPtAs.
\section*{Acknowledgment}
We would like to thank Yuval Baum, Titus Neupert and Manfred Sigrist for helpful discussions. MHF was supported by the Swiss Society of Friends of the Weizmann Institute of Science.
JG is financially supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science, Grant No. 23540437.
\section{Introduction}
\label{sec:intro}
Jets, and more generally speaking outflows, are a widespread phenomenon in many different systems, from protostars to supermassive black holes.
Until recently, outflows were believed to be launched by extraction of rotational energy either from a magnetically-threaded disk \citep{BlandfordPayne:1982} or from a rotating black hole \citep{BlandfordZnajek:1977}. In fact, it was common to think that there were at least two separate kinds of jets: the magnetically-dominated jets from black holes and the pressure-dominated jets from any other jetted source.
However, with the advent of cutting-edge GRMHD simulations and the first post-processed emission spectra associated with them \citep{Moscibrodzka:2016,Liska:2017, Davelaar:2019}, it is becoming clear that there is no such dichotomy and, most likely, at least in black hole systems, the two mechanisms can coexist. Furthermore, the emission is likely dominated by the outer, more mass-loaded jet sheath rooted onto the accretion disk, whereas the inner core of the jet is lighter and magnetically-dominated \citep{Moscibrodzka:2016}.
Similarly, when the jet-launching object is a protostar or a (non-BH) compact object, the outflow is likely to be a composition of a stellar wind \citep[e.g.][]{Shu:1994} or an equivalent Blandford-Znajek process for highly magnetized neutron stars \citep{Parfrey:2016} and a disk-driven outflow (e.g. \citealp{PudritzNorman:1983,ContopoulosLovelace:1994,Ferreira:1997,VTST:2000}).
Semi-analytical models describing the various launching mechanisms listed above have been continuously developed in parallel with simulations because they capture the underlying physics while allowing a time-efficient exploration of the parameter space and fitting of astrophysical sources.
However, in order to make the equations treatable with a semi-analytical approach, the dimensionality of the problem is reduced by assuming symmetries in the system and a non-linear separation of variables is performed.
The separation of variables is commonly referred to as the \emph{self-similarity} assumption. There are two distinct classes of self-similar models depending on how the separation of variables is carried out \citep{VlahakisTsinganos:1998}: the \emph{meridional} self-similar models \citep[e.g.][]{Sauty:1994,Trussoni:1997,Sauty:1999,Chantry:2018}, where the independent variable is $r$, and the \emph{radial} self-similar models \citep[e.g.][]{ContopoulosLovelace:1994,Ferreira:1997,VTST:2000, VlahakisKonigl:2003, Polko:2010,Polko:2013,Polko:2014,Ceccobello:2018}, where the independent variable is $\theta$.
In both classes of self-similar models we are left with a mixed system of differential and algebraic equations describing the accelerating flow along a magnetic field line threading a rotating disk. To determine the motion of a fluid element, one needs to solve simultaneously the force balance along and perpendicular to a streamline.
This is a notoriously cumbersome problem, which can be tackled by introducing further simplifications, such as assuming a fixed structure of the magnetic field and/or neglecting the gas pressure force and/or using asymptotic extensions of the models to replace the region of the solutions at large distance from the disk.
In \citet[][hereafter Paper I]{Ceccobello:2018}, we presented our newly developed algorithm to self-consistently solve the poloidal and transverse forces, given by the Bernoulli and Grad-Shafranov equations respectively, for a relativistic fluid in the presence of gravity, under the assumption of radial self-similarity. We showed that with our numerical algorithm it is possible to obtain solutions with a broad variety of jet structures and dynamical properties and work is ongoing to couple these solutions with a radiative code and apply those to black hole systems (Lucchini et al. submitted).
In this paper, we adopt the equations presented in \citet[VTST00 from now on]{VTST:2000} and we adapt our algorithm, described in Paper I, to perform a parameter study to model astrophysical sources with more moderate speeds, such as young stellar objects (YSOs) and evolved stars outflows.
In Sec.~\ref{sec:eqnMet} we summarize the basic equations and give a short description of the algorithm. In Sec.~\ref{sec:parstu}, we show the results of our parameter space exploration and discuss the solution properties as they transition from cold jets to hot ones. In Sec.~\ref{sec:app}, we show an example of an application to the post-AGB star W43A and we give the selection criteria we used to isolate the solutions that better resemble the jet of W43A and discuss the characteristics of the selected jet configuration in relation to the source.
Finally in Sec.~\ref{sec:discussion}, we summarise the study presented in this paper.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1]
\draw [thick, black,<->] (0,5) node[left] {$z$} --(0,0) -- (7,0) node[right] {$\varpi$};
\draw[<->] (0,2.5) node[right,xshift=0.5cm,yshift=0.16cm] {$\varpi\equiv \varpi_{\rm A} G$} -- (3.6,2.5);
\draw[ very thick] (2,0) to [out=63,in=-100] (4.5,4.8) node[black,above] {$\alpha=1$};
\draw[gray] (3.5,0) to [out=63,in=-100] (6.0,4.8);
\draw[gray] (1.0,0) to [out=63,in=-100] (3.5,4.8);
\draw[dashdotted,->] (0,0) -- (2.3,0.5) node[right] {MSP};
\draw[dashdotted,->] (0,0) -- (2.7,1.2) node[right] {AP};
\draw[dashdotted,->] (0,0) -- (4.42,4.5) node[right] {MFP};
\draw (0,0.5) node[above,xshift=0.3cm] {$\theta$} to [out=20,in=130] (0.4,0.4);
\draw (5.05,2.45) node[right,xshift=0.2cm] {$\psi$} to [out=-20,in=90] (5.30,2);
\draw (4.75,2) -- (5.8,2);
\end{tikzpicture}\hfill
\caption{System of coordinates we adopt to describe a solution of the MHD system of equations presented in VTST00. We identify a solution with the ``reference'' streamline, labelled by $\alpha \equiv \varpi_{\rm A}^2/\varpi_*^2 = 1$. The two dependent variables $(M,G)$, together with all the other quantities describing the system, are functions of $\theta$ (the independent variable in radial self-similarity), which is the angle between a point on the streamline and the $z$-axis. The angle $\psi$ is defined by the tangent to the streamline and the horizontal axis, while the distance from a point on the streamline to the $z$-axis is defined by its cylindrical radius $\varpi$ in units of the Alfv\'en cylindrical radius $\varpi_{\rm A}$.}
\label{fig:angles}
\end{figure}
\section{Equations and numerical method}
\label{sec:eqnMet}
\subsection{Problem description}
\label{ssec:problem}
The equations that we are going to solve with our numerical algorithm are the ones describing an axisymmetric, radial self-similar, non-relativistic, disk-driven outflow with non-negligible enthalpy (Fig.~\ref{fig:angles}). Since we adopted the prescription given in \citet{VTST:2000}\footnote{We will use $\Gamma$ for the polytropic index and $F$ for the power law exponent, instead of the symbols $\gamma$ and $x$ as was done in \citetalias{VTST:2000}, to maintain the same convention we had for the relativistic equations in Paper I.}, we present here just a brief summary. In Appendix \ref{app:scaling} we report the conversion from dimensionless to physical quantities as a function of the input parameters and the scaling relations. The dependent variables of the equations described in \citetalias{VTST:2000} are the poloidal Mach number $M$, the dimensionless cylindrical radius, $G$, and the angle describing the inclination of the streamline with respect to the disk plane, $\psi$. These are all functions of $\theta$ once their radial dependence has been defined as power laws of the function $\alpha \equiv \varpi_{\rm A}^2/\varpi_*^2$, where $\varpi_{\rm A}$ is the cylindrical radius at the Alfv\'en point and $\varpi_*$ is the chosen scaling length of the problem, effectively the cylindrical radius of the Alfv\'en point on the streamline with $\alpha=1$ (see Fig.~\ref{fig:angles}).
To obtain a full solution, i.e. a streamline rooted at the disk midplane and terminating at infinity, the adopted numerical scheme must handle three singular points that are present in the Bernoulli and Grad-Shafranov equations when solved simultaneously: the Alfv\'en point (AP) and the magnetosonic fast/slow points (MFP/MSP)\footnote{Note that after the separation of variables, they are points (not surfaces) on a single streamline, and they are \emph{modified} because their positions and the definitions of the phase speeds of the slow and fast magnetosonic waves are affected by the geometry of the magnetic field \cite[e.g.][]{Sauty:1994,FerreiraPelletier:1995}}.
At each singular point the equations can be regularized either analytically, in the case of the AP, or numerically, for the MSP and the MFP, analogously to the simpler case of the sonic point in the Parker wind model \citep{Parker:1958}.
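To illustrate how regularity at a singular point pins down the physical branch, consider the textbook isothermal Parker wind (a standard example, not the self-similar MHD system solved here): the transonic solution obeys the Bernoulli-like integral $u^2 - 2\ln u = 4\ln x + 4/x - 3$, with $u=v/c_s$ and $x=r/r_c$, and passes smoothly through the sonic point $u=1$ at $x=1$. A minimal stdlib sketch using bisection:

```python
import math

# Isothermal Parker wind: transonic branch of
#   u^2 - 2 ln u = 4 ln x + 4/x - 3,  u = v/c_s, x = r/r_c,
# regular at the critical (sonic) point x = 1, u = 1.

def rhs(x):
    return 4.0*math.log(x) + 4.0/x - 3.0

def u_transonic(x):
    """Bisection for the transonic root: u < 1 below r_c, u > 1 above."""
    lo, hi = (1e-8, 1.0) if x < 1.0 else (1.0, 50.0)
    f = lambda u: u*u - 2.0*math.log(u) - rhs(x)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        # u^2 - 2 ln u is decreasing on (0,1) and increasing on (1,inf)
        if (f(mid) > 0.0) == (x < 1.0):
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# Regularity at the sonic point selects a unique accelerating solution.
assert abs(u_transonic(1.0) - 1.0) < 1e-6
assert u_transonic(0.5) < 1.0 < u_transonic(2.0)
assert u_transonic(3.0) > u_transonic(2.0)  # monotonically accelerating
```

The MSP and MFP of the self-similar system play an analogous role, except that their positions are themselves unknowns of the problem, which is what makes the full system so much harder to solve.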
The AP has been studied extensively, due to the possibility of manipulating the equations analytically there.
The other two singular points present a more complex case. On the one hand, the position of both the MSP and the MFP is not known before the full solution for a given set of initial parameters is calculated, on the other hand, a full solution cannot be computed without knowing the position of these two singular points and the AP.
Due to this intrinsic difficulty, the MSP and MFP are often neglected by assuming cold flows, i.e. that thermal pressure plays no role in accelerating the flow (no MSP), and/or by adopting a given asymptotic behaviour of the streamline once the flow has become superalfvenic, which effectively pushes the MFP to infinity. Typically, either one or both of the above assumptions are made to avoid dealing with the complexity of determining these singular points.
Moreover, when the MSP and/or the MFP are not removed from the equations, finding solutions across large volumes of the parameter space is a difficult task that requires a solid numerical algorithm capable of recovering the unknown positions of the singular points and properly handling the equations at these locations for wide ranges of the input parameters.
However, the role of the MFP in self-similar theories is fundamental when solving the Bernoulli and Grad-Shafranov equations combined, because it is the singular point where the flow loses causal contact with the source \citep{LCB:1992,Bogovalov:1999,Meier:2012}.
Downstream of the MFP the flow starts to focus rapidly towards the polar axis up until the last recollimation point (LRP). We identify the LRP with the region where the jet terminates in our solutions (see Paper I).
This region has been connected in relativistic jets with the standing shock/particle acceleration regions in active galactic nuclei and in stellar-mass black hole systems \citep[e.g.][]{Ceccobello:2018,Cohen:2014, Meier:2012,Polko:2010,Markoff:2001,Markoff:2005,Markoff:2010}.
\citet{WeberDavis:1967} showed that there can be multiple families of solutions with different velocity profiles, crossing either none or one/two/three singular points. We are looking at those that cross all three points, which are characterised by an increasing poloidal Mach number.
\citetalias{VTST:2000} were the first to calculate complete solutions with all these characteristics for the non-relativistic case.
\subsection{Non-relativistic MHD system of equations}
\label{ssec:equations}
The Bernoulli and Grad-Shafranov equations for a steady-state axisymmetric system describe the energy flux balance along the poloidal direction and the equilibrium configuration of the magnetic field lines.
Both can be derived from the conservation of momentum equation, which describes the forces acting on a streamline:
\begin{equation}
\rho(\boldsymbol{V}\cdot \nabla) \boldsymbol{V} -\frac{1}{4\pi}\boldsymbol{B} \times (\nabla\times \boldsymbol{B}) + \nabla P -\rho \nabla \frac{\mathcal GM}{r} =0
\label{eq:consMom}
\end{equation}
where $\rho, P, \boldsymbol{V}, \boldsymbol{B}$ are the density, pressure, velocity and magnetic field of the flow. $\mathcal{G}$ and $\mathcal{M}$ are the gravitational constant and the mass of the central object, respectively.
If we adopt either cylindrical ($\boldsymbol{\hat z}, \boldsymbol{\hat \varpi}, \boldsymbol{\hat \phi}$) or spherical coordinates ($\boldsymbol{\hat r}, \boldsymbol{\hat \theta}, \boldsymbol{\hat \phi}$), the poloidal and perpendicular unit vectors ($\hat{\boldsymbol{b}}$, $\hat{\boldsymbol{n}}$) can be written as follows
\begin{eqnarray}
&\hat{\boldsymbol{n}} = \cos(\psi)\boldsymbol{\hat z} - \sin(\psi) \boldsymbol{\hat \varpi} = \cos(\theta+\psi)\hat{\boldsymbol{r}} - \sin(\theta+\psi)\hat{\boldsymbol{\theta}}\\
&\hat{\boldsymbol{b}} = \sin(\psi)\boldsymbol{\hat z} + \cos(\psi) \boldsymbol{\hat \varpi} = \sin(\theta+\psi)\hat{\boldsymbol{r}}+ \cos(\theta+\psi)\hat{\boldsymbol{\theta}} \\
&\hat{\boldsymbol{\phi}} = \hat{\boldsymbol{\phi}}.
\end{eqnarray}
The projection of Eq.~\ref{eq:consMom} along $\hat{\boldsymbol{b}}$, the Bernoulli equation, describes how the different types of energies can be converted to one another. The projection of Eq.~\ref{eq:consMom} along $\hat{\boldsymbol{n}}$, the Grad-Shafranov equation or transfield equation, provides the shape of the magnetic field lines.
The projections of the Bernoulli and the transfield equation can be rewritten using the scaling equations given in Appendix \ref{app:scaling} and then rearranged in the following form:
\begin{equation}
A_i\frac{dM^2}{d\theta} + B_i\frac{d\psi}{d\theta} = C_i
\label{eqn:detform}
\end{equation}
with $i=1$ representing the coefficients of the Bernoulli equation and $i=2$ the coefficients of the transfield equation.
The Bernoulli and transfield equations arranged in the way described above can further be recast into a system of two first-order differential equations for the evolution of the poloidal Mach number $M(\theta) = \sqrt{4\pi \rho} V_p/B_p$ and the angle $\psi$ describing the inclination of the streamline with respect to the horizontal axis:
\begin{align}
\frac{dM^2}{d\theta} & = \frac{\mathcal{N_{\rm 1}}}{\mathcal{D}} = \frac{B_2 C_1 - B_1 C_2}{ A_1B_2- A_2 B_1}\label{eqn:dM2dt}\\
\frac{d\psi}{d\theta} & = \frac{\mathcal{N_{\rm 2}}}{\mathcal{D}} = \frac{ A_1 C_2 - A_2 C_1}{ A_1B_2- A_2 B_1}, \label{eqn:dpsidt}
\end{align}
with the numerators $\mathcal{N}_i$ ($i=1,2$) and the denominator $\mathcal{D}$ being functions of the coefficients $A_i,B_i,C_i$ ($i=1,2$) which are given in Appendix~\ref{app:numsden}.
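The algebra behind Eqs.~\ref{eqn:dM2dt}-\ref{eqn:dpsidt} is simply Cramer's rule applied to the $2\times2$ linear system of Eq.~\ref{eqn:detform}. A minimal sketch, with placeholder numbers standing in for the lengthy MHD coefficients $A_i, B_i, C_i$ of Appendix~\ref{app:numsden}:

```python
def derivatives(A1, B1, C1, A2, B2, C2):
    # Cramer's rule for the 2x2 system A_i*dM2 + B_i*dpsi = C_i (i = 1, 2)
    D = A1 * B2 - A2 * B1   # denominator: vanishes at the MSP, AP and MFP
    N1 = B2 * C1 - B1 * C2  # numerator of dM^2/dtheta
    N2 = A1 * C2 - A2 * C1  # numerator of dpsi/dtheta
    return N1 / D, N2 / D

# placeholder coefficients, not the actual MHD expressions
dM2, dpsi = derivatives(2.0, 1.0, 3.0, 1.0, -1.0, 0.0)
```

At a singular point both numerators and the denominator vanish, so the ratio must be obtained as a limit; this is exactly what the regularity conditions enforce.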
As described in Paper I, in order to minimise the intrinsic errors we chose not to solve Eq.~\ref{eqn:dpsidt}, but instead derive $\psi(\theta)$ from the Bernoulli integral Eq.~\ref{eqn:BernConst} \citepalias[see also][and Appendix]{VTST:2000} from the MSP to the LRP.
Upstream of the MSP, the streamlines can undergo oscillations, depending on the given set of input parameters, so the sign of $\cos(\psi + \theta)$ can change. Hence Eq.~\ref{eqn:dpsidt} must be integrated with care in this region to ensure the correct radial profile of the solutions from the disk to the MSP.
Additionally, we solve a differential equation for the unknown function $G(\theta)$, defined as the cylindrical distance from the polar axis of a streamline labelled by $\alpha$, normalised to its value at the Alfv\'en point. The equation for $G$ is the following
\begin{align}
G(\theta)& = \frac{\varpi}{\varpi_\alpha} = \frac{\varpi}{\varpi_\star} \alpha^{-1/2}\\
\frac{dG^2}{d\theta} &= \frac{2 G^2\cos(\psi)}{\sin(\theta)\cos(\psi+\theta)}.
\label{eqn:dG2dt}
\end{align}
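For a streamline of constant inclination, Eq.~\ref{eqn:dG2dt} can be integrated in closed form, which provides a convenient check of any numerical integrator; in particular, for $\psi=0$ one finds $G^2\propto\tan^2\theta$. A short sketch, using a standard fourth-order Runge-Kutta integrator (the step count and angles are arbitrary illustrative choices):

```python
import math

def dG2_dtheta(theta, G2, psi):
    # right-hand side of Eq. (dG2dt)
    return 2.0 * G2 * math.cos(psi) / (math.sin(theta) * math.cos(psi + theta))

def rk4(f, theta0, G2_0, theta1, n, psi):
    # classic fourth-order Runge-Kutta from theta0 to theta1
    h = (theta1 - theta0) / n
    t, y = theta0, G2_0
    for _ in range(n):
        k1 = f(t, y, psi)
        k2 = f(t + h / 2, y + h * k1 / 2, psi)
        k3 = f(t + h / 2, y + h * k2 / 2, psi)
        k4 = f(t + h, y + h * k3, psi)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# toy check: for psi = 0 the equation integrates to G^2 proportional to tan^2(theta)
theta_A = math.radians(60.0)
theta = math.radians(70.0)
G2 = rk4(dG2_dtheta, theta_A, 1.0, theta, 1000, 0.0)
analytic = (math.tan(theta) / math.tan(theta_A)) ** 2
```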
The solution of these equations depends on six parameters: $\Gamma, F, k_{\rm VTST},\lambda_{\rm VTST}, \mu_{\rm VTST}$ and $\epsilon_{\rm VTST}$ \citepalias[see][]{VTST:2000}.
The first parameter $\Gamma$ is the polytropic index in the equation of state $q = P/\rho^{\Gamma}$, where $q$ is the specific gas entropy and a constant of motion of the problem. The parameter $F$ determines the initial current distribution in the radial direction, $-\varpi B_\phi = \mathcal C_2 \sin(\theta) r^{F-1}$, which is an increasing or decreasing function of $r$ depending on the value of $F$. This parameter also determines the radial dependence of the magnetic field lines through $B\sim \varpi^{F-2}$. $k_{\rm VTST}$ is proportional to the ratio between the Keplerian speed and the poloidal flow speed at the Alfv\'en radial distance, and is often referred to as the \emph{mass loss parameter} (see e.g. \citealp{Ferreira:1997}, but also \citetalias{VTST:2000}). $\lambda_{\rm VTST}$ is the specific angular momentum in units of $V_\star \varpi_\star$ and $\mu_{\rm VTST}$ is proportional to the gas entropy.
The parameters $k_{\rm VTST}, \mu_{\rm VTST}$ and $\lambda_{\rm VTST}$ are defined by the following relations:
\begin{equation}
k_{\rm VTST} = \sqrt{\frac{\mathcal{GM}}{\varpi_\star V_\star^2}}; \quad \mu_{\rm VTST} = \frac {8 \pi P_\star}{B_\star^2}; \quad
\lambda_{\rm VTST} = \frac{L}{V_\star \varpi_\star}.
\end{equation}
It is worth noticing that the starred quantities used throughout the paper are scaling factors and can be related to the quantities calculated at the AP on the reference streamline ($\alpha=1$), namely $\rho_\star = \rho_{\rm A}$, $\varpi_\star = \varpi_{\rm A}$ and
\begin{equation}
\left(B_*,V_*\right) = -\frac{\cos(\theta_{\rm A} + \psi_{\rm A})}{\sin(\theta_{\rm A})}\left(B_{p,\rm A}, V_{p,\rm A}\right), \quad {\rm with}\quad B_* = \sqrt{4\pi\rho_*}\, V_*.
\end{equation}
Finally, $\epsilon_{\rm VTST}$ is the sum of kinetic, enthalpy, gravitational and Poynting energy flux densities per unit of mass flux density, rescaled by $\alpha^{-1/2}V^{2}_\star$, i.e.
\begin{align}
\epsilon_{\rm VTST} &= \frac{\alpha^{1/2}}{V_\star^2} E = \left[\epsilon_{\rm{K}, p} + \epsilon_{\rm{K}, \phi} +\epsilon_{\rm{T}} +\epsilon_{\rm{M} }+ \epsilon_{\rm G} \right] \nonumber\\
& = \left[\frac{1}{2} \left(\frac{M^2}{G^2}\frac{\sin(\theta)}{\cos(\theta+\psi)}\right)^2 + \frac{1}{2}\left(\frac{\lambda_{\rm VTST}}{G^2}\frac{G^2 - M^2}{1-M^2}\right)^2 \right. \nonumber\\
& \left. \quad+ \frac{\mu_{\rm VTST}}{2} \frac{\Gamma}{\Gamma-1}M^{2(1-\Gamma)} + \lambda_{\rm VTST}^2\frac{1-G^2}{1-M^2}\right.\nonumber\\
& \left.\quad - k_{\rm VTST}^2 \frac{\sin(\theta)}{G}\right]. \label{eqn:BernConst}
\end{align}
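Eq.~\ref{eqn:BernConst} can be transcribed directly into a small routine. The sketch below (with illustrative argument values, not fitted parameters) evaluates the five energy terms away from the AP, where the ratios involving $1-M^2$ are well defined:

```python
import math

def bernoulli(M2, G2, theta, psi, k, mu, lam, Gamma):
    # Eq. (BernConst): poloidal + toroidal kinetic, thermal,
    # Poynting and gravitational energy fluxes per unit mass flux
    s = math.sin(theta) / math.cos(theta + psi)
    e_kp = 0.5 * (M2 / G2 * s) ** 2
    e_kphi = 0.5 * (lam / G2 * (G2 - M2) / (1.0 - M2)) ** 2
    e_th = 0.5 * mu * Gamma / (Gamma - 1.0) * M2 ** (1.0 - Gamma)
    e_m = lam ** 2 * (1.0 - G2) / (1.0 - M2)
    e_g = -k ** 2 * math.sin(theta) / math.sqrt(G2)
    return e_kp + e_kphi + e_th + e_m + e_g

# cold, slow, non-rotating limit: only the gravitational term survives
eps = bernoulli(1e-6, 4.0, math.radians(30.0), math.radians(30.0),
                1.0, 0.0, 0.0, 5.0 / 3.0)
```

In the limit $\lambda_{\rm VTST}=\mu_{\rm VTST}=0$ and $M^2\to0$, only the gravitational term $-k_{\rm VTST}^2\sin\theta/G$ survives, here $-0.25$.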
The total energy flux per unit mass can be rescaled with the Alfv\'en poloidal velocity as
$2 E/V_{\rm A,p}^2 = 2\epsilon_{\rm VTST} V_*^2\alpha^{-1/2}/V_{\rm A,p}^2$, which becomes
\begin{equation}
\tilde \epsilon = 2\epsilon_{\rm VTST} \frac{\cos^2(\theta_{\rm A}+\psi_{\rm A})}{\sin^2(\theta_{\rm A})}
\end{equation}
and, with the use of de l'H\^opital's rule to regularise the indeterminate terms (see Eq.~\ref{eqn:delHop}), we can evaluate it at the AP and obtain the Alfv\'en Regularity Condition \citepalias[ARC, see][]{VTST:2000} in the compact form
\begin{align}
\tilde{\epsilon} =& 1 + \frac{\cos^2(\theta_{\rm A}+\psi_{\rm A})}{\sin^2(\theta_{\rm A})}\left[- 2k_{\rm VTST}^2 \sin(\theta_{\rm A}) + \mu_{\rm VTST}\frac{\Gamma}{\Gamma-1} \right.\nonumber\\
& \left. +\lambda_{\rm VTST}^2\left(1+g_{\rm A}^2\right)\right].
\label{eqn:ARCrescaled}
\end{align}
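In practice the ARC is a single scalar constraint among the parameters at the AP. A direct transcription of Eq.~\ref{eqn:ARCrescaled} could read as follows (here $g_{\rm A}$ is passed as an input, since its value follows from Eq.~\ref{eqn:delHop}; the test values are arbitrary):

```python
import math

def arc_epsilon(theta_A, psi_A, k, mu, lam, Gamma, g_A):
    # Alfven Regularity Condition, Eq. (ARCrescaled):
    # rescaled total energy-to-mass flux ratio at the AP
    c2 = math.cos(theta_A + psi_A) ** 2 / math.sin(theta_A) ** 2
    return 1.0 + c2 * (-2.0 * k ** 2 * math.sin(theta_A)
                       + mu * Gamma / (Gamma - 1.0)
                       + lam ** 2 * (1.0 + g_A ** 2))

# two limits, theta_A + psi_A = 90 deg or vanishing k, mu, lam,
# both reduce to the pure poloidal kinetic value eps_tilde = 1
e1 = arc_epsilon(math.radians(60.0), math.radians(30.0),
                 5.0, 2.0, 10.0, 5.0 / 3.0, 0.7)
e2 = arc_epsilon(math.radians(60.0), math.radians(45.0),
                 0.0, 0.0, 0.0, 5.0 / 3.0, 0.5)
```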
The function $g_{\rm A}$ is the \emph{fastness} parameter calculated at the AP. A general definition of the fastness parameter given by \citet{PelletierPudritz:1992} is
\begin{equation}
\frac{V_\phi}{\varpi} = \Omega (1- g)
\label{eqn:g}
\end{equation}
where
\begin{equation}
\Omega = \frac{1}{\varpi} \left(V_\phi - V_p \frac{B_\phi}{B_p}\right)
\label{eqn:Omega}
\end{equation}
is the angular frequency of the streamline, which is a constant of motion of the problem.
The fastness parameter gives a measure of how large the angular velocity of the gas is in relation to the angular velocity of the magnetic surface on which it moves.
We can derive $g_{\rm A}$ by applying de l'H\^opital's rule to the indeterminate forms
\begin{align}
\left.\frac{1-G^2}{1-M^2}\right|_{\rm A} & \equiv g_{\rm A} = \frac{2 \cos(\psi_{\rm A})}{p_{\rm A} \sin(\theta_{\rm A})\cos(\theta_{\rm A}+\psi_{\rm A})}\\
\left.\frac{G^2-M^2}{1-M^2}\right|_{\rm A}& = 1 - g_{\rm A}
\label{eqn:delHop}
\end{align}
where $p_{\rm A} = dM^2/d\theta |_{\rm A}$.
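Numerically, $g_{\rm A}$ is the ratio of the slopes of $G^2$ and $M^2$ at the AP, where the slope of $G^2$ follows from Eq.~\ref{eqn:dG2dt} with $G=1$. A small sketch (with illustrative angles and $p_{\rm A}$, not a fitted solution):

```python
import math

theta_A, psi_A, p_A = math.radians(60.0), math.radians(40.0), -3.0

# slope of G^2 at the AP from Eq. (dG2dt) with G = 1
q_A = 2.0 * math.cos(psi_A) / (math.sin(theta_A) * math.cos(theta_A + psi_A))

# de l'Hopital limit of (1 - G^2)/(1 - M^2): the ratio of the slopes
g_A = q_A / p_A

# numerical check: linearise M^2 and G^2 just off the AP
dt = 1e-7
ratio = (1.0 - (1.0 + q_A * dt)) / (1.0 - (1.0 + p_A * dt))
```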
In the following section, we summarise the method we developed in Paper I, which we now adapt to solve the non-relativistic equations. For the details of the algorithm, we refer the interested reader to Paper I. Indeed, there is no substantial difference in the mechanics of the algorithm, although the non-relativistic equations are noticeably easier to handle.
\begin{table}
\setlength\tabcolsep{4.5 pt}
\setlength\extrarowheight{5pt}
\caption{Model parameters. The parameters are equivalent to \citetalias{VTST:2000}, but we changed the notation of some of them to avoid confusion with the relativistic parameters and physical quantities described in Paper I.}
\label{tab:modpars}
\begin{tabular}{l | m{5cm} }
\hline
Input parameters &\\
\hline
$F$ & exponent of the radial scaling of the current\\
$\Gamma$ & polytropic index of the gas \\
$\theta_{\rm A}$ & angular distance of the AP from the jet axis\\
$\psi_{\rm A}$ & inclination of the streamline with respect to the horizontal axis at the AP\\
$k_{\rm VTST}$ & mass loss parameter \\
\hline
Fitted parameters & \\
\hline
$\theta_{\rm MFP}$ & angular distance of the MFP from the jet axis \\
$\theta_{\rm MSP}$ & angular distance of the MSP from the jet axis \\
$\mu_{\rm VTST} $ & scaling of the gas-to-magnetic pressure ratio\\
$\lambda_{\rm VTST}$ & specific angular momentum in
units of $V_\star \varpi_\star$ \\
\hline
\end{tabular}
\end{table}
\subsection{Method}
\label{ssec:met}
In Paper I, we described a new numerical method to find solutions to the relativistic radial self-similar MHD equations for a disk-launched jet in the presence of gravity \citep{VlahakisKonigl:2003, Polko:2014}.
As discussed in Sec.~\ref{ssec:problem}, even under the simplifying assumption of self-similarity, solving self-consistently and simultaneously the Bernoulli and Grad-Shafranov equations is known to be a rather difficult task because of the singular surfaces.
At the location of the singular points, Eqs.~\ref{eqn:dM2dt}-\ref{eqn:dpsidt} are indeterminate but finite, e.g.
\begin{equation}
\frac{dM^2}{d\theta} = \frac{\mathcal{N}_1}{\mathcal{D}} = \frac{0}{0} = {\rm finite}. \label{eqn:windSing}
\end{equation}
However, only at the AP can one derive an analytical expression that gives the finite value of the derivative of the poloidal Mach number (the Alfv\'en Regularity Condition, ARC). The location of the AP and $G^2_{\rm A}, M^2_{\rm A}, dG^2/d\theta |_{\rm A}$ and $dM^2/d\theta |_{\rm A}$ can be determined from the values of the input parameters and the ARC (Eq.~\ref{eqn:ARCrescaled}).
The regularity conditions at the MFP and MSP can exclusively be derived numerically together with their position on the streamline.
As a result, the most frequent approach is to determine all the unknown functions and parameters at the AP and then integrate the system with a shooting method towards the other two singular points. However, given the high accuracy needed to determine the values of the parameters and the intrinsic numerical difficulty of treating the 0/0 form under these conditions, this method presents serious drawbacks: it does not allow one to easily find and convincingly identify solutions at the required accuracy threshold, and it therefore impedes a full exploration of the parameter space.
The structure of our numerical method is the following:
\begin{itemize}
\item[1.] We guess the locations of the critical points, $\theta_{\rm MSP}$ and $\theta_{\rm MFP}$, and derive values for $M^2, G^2$ and their derivatives given by the condition that the numerators and the denominator of Eq.~\ref{eqn:windSing}, and of the similar equation for $\psi$, i.e. $d\psi/d\theta = \mathcal{N}_2/\mathcal{D} = 0/0$, are zero at the MSP/MFP of choice.
\item[2.] We integrate away from the AP, MSP and MFP towards the midpoints $\theta_{\rm mid, MSP} = (\theta_{\rm A} + \theta_{\rm MSP})/2$ and $\theta_{\rm mid, MFP} = (\theta_{\rm A} + \theta_{\rm MFP})/2$.
\item[3.] We determine the parameters that give a match at the midpoints using the Bayesian open-source code \texttt{multinest} \citep{Multinest:2008, Multinest:2009,Multinest:2013}.
\end{itemize}
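The midpoint-matching strategy of steps 1-3 can be illustrated on a toy two-point problem: integrate from both boundaries towards a midpoint and adjust the unknown boundary value until the two branches meet. The sketch below uses a simple linear ODE and bisection in place of the full MHD system and \texttt{multinest}:

```python
import math

def integrate(f, x0, y0, x1, n=200):
    # classic fourth-order Runge-Kutta from x0 to x1
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: -y                    # toy ODE standing in for the MHD system
x_mid = 0.5
left = integrate(f, 0.0, 1.0, x_mid)   # shoot from the "MSP" side, y(0) = 1

def mismatch(a):                       # shoot backwards from the "MFP" side, y(1) = a
    return integrate(f, 1.0, a, x_mid) - left

# bisection on the unknown boundary value, standing in for the fit
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
a = 0.5 * (lo + hi)                    # converges to exp(-1) for this toy problem
```

In the real problem the unknowns are the positions of the singular points and the remaining parameters of Tab.~\ref{tab:modpars}, and the mismatch at the two midpoints is minimised by \texttt{multinest} rather than by bisection.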
The specific choice of input parameters and fitted parameters is given in Tab.~\ref{tab:modpars}. Once a particular family of solutions is specified through the choice of $F,\Gamma, \theta_{\rm A},\psi_{\rm A}$ and $k_{\rm VTST}$, we identify the location of the MSP and MFP and the best-fit values of the remaining parameters $\mu_{\rm VTST}$ and $\lambda_{\rm VTST}$, and we extend the solutions upstream of the MSP towards the disk midplane and downstream of the MFP towards the last recollimation point (LRP). In Paper I, we defined this point as the last point we were able to calculate with our algorithm. The last few integration points before the LRP seem to indicate the onset of a recollimation shock, where the fluid is compressed in a small section around the polar axis. Indeed, we noticed that the denominator of Eq.~\ref{eqn:dM2dt} and Eq.~\ref{eqn:dpsidt} approaches zero again, while the numerators do not. This means that the derivatives of both $M^2$ and $\psi$ become infinite close to the LRP, making the integration towards this (singular) point impossible.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/lambdamu_all}\hfill\\
\includegraphics[width=0.88\linewidth, angle=0]{FIG/lambda_mu_k30_largerLab}
\caption{\emph{Upper panel:} Grid of solutions presented in the angular momentum and entropy plane, i.e. the $(\lambda_{\rm VTST}$,$\mu_{\rm VTST})$-plane for $\Gamma=5/3$, $F=0.75$. The location ($\theta_{\rm A}$) of the AP, the collimation ($\psi_{\rm A}$) at the AP, and the mass loss parameter $k_{\rm VTST}$ are allowed to vary within a chosen grid. In each $k_{\rm VTST}=$ constant subset, the lines connect solutions with constant $\theta_{\rm A}$ and variable $\psi_{\rm A}$. \emph{Lower panel:} Same plot for a subset of solutions with $k_{\rm VTST} =3.0$. Neighbouring lines differ in $\theta_{\rm A}$ by 5 degrees. Along some of the lines we have indicated the value of $\psi_{\rm A}$ to illustrate how sensitive the equations are to a small change of this angle. In particular, a tiny change in $\psi_{\rm A}$ translates into a large step in $\mu_{\rm VTST}$ when $\lambda_{\rm VTST}$ is large.}
\label{fig:lambdaMu}
\end{figure}
\section{Parameter study}
\label{sec:parstu}
Given the wealth of solutions that we are able to retrieve using this algorithm, we focus on a grid of solutions obtained by fixing the adiabatic index $\Gamma$ to 5/3, the exponent of the radial scaling of the current $F$ to $0.75$, as in \citet[][hereafter BP]{BlandfordPayne:1982}, and the mass loss parameter $k_{\rm VTST}$ between 1.5 and 5.0 in steps of 0.5. We note that the resulting solutions will generally differ from the stereotypical BP-like solution because we include gas pressure and the crossing of all three singular points.
For each $k_{\rm VTST}$, we seek solutions with all the allowed combinations of $\theta_{\rm A}$ and $\psi_{\rm A}$, which are the angles determining the position and the collimation of the streamline at the Alfv\'en point, respectively. In Fig.~\ref{fig:lambdaMu} we show the distribution of these solutions in the plane of dimensionless angular momentum and entropy, i.e. the $(\lambda_{\rm VTST}$, $\mu_{\rm VTST})$-plane.
Each line represents solutions for a constant $k_{\rm VTST}$ and $\theta_{\rm A}$, while only $\psi_{\rm A}$ varies.
As the upper panel of Fig.~\ref{fig:lambdaMu} shows, although our solutions cover a good extent of this region of the parameter space, a few series could not be completed because the MSP moves below the disk midplane, e.g. for $k_{\rm VTST}=2.5$ (gray line); such solutions are physically not meaningful.
In the lower panel of Fig.~\ref{fig:lambdaMu} we show how the collimation angle at the AP, $\psi_{\rm A}$, is changing for a few lines on which the position of the AP, $\theta_{\rm A}$, is constant and the mass loss parameter, $k_{\rm VTST}$, has been set to 3.0 for all the lines. We see that the parameters of a solution change significantly with only a small change in $\psi_{\rm A}$. This is particularly true in the top part of the figure where the dimensionless angular momentum, $\lambda_{\rm VTST}$, is large.
We only find a solution when the sum of the angles $\theta_{\rm A}+ \psi_{\rm A}$ is roughly within the interval $93\degree$-$111\degree$. This range varies depending on the value of $k_{\rm VTST}$ (see Tab.~\ref{tab:angles}). In general the allowed range of this sum is between $90\degree$ and $180\degree$ to ensure that the derivative of the poloidal Mach number is negative, i.e. the fluid is accelerating at the Alfv\'en point. For a constant location of the AP, $\theta_{\rm A}$, the collimation angle $\psi_{\rm A}$ is small when the entropy $\mu_{\rm VTST}$ approaches zero and is large when the angular momentum $\lambda_{\rm VTST}$ approaches zero. As we will discuss later, the combination of these two angles ultimately determines the dynamics and the geometry of the jet, and the narrower range of their sum that we find is likely due to the minimum and maximum energy fluxes allowed in this region of the parameter space (see Fig.~\ref{fig:RatioEne}).
\begin{table}
\centering
\setlength\tabcolsep{4.5 pt}
\setlength\extrarowheight{5pt}
\caption{The maximum value of $\theta_{\rm A}$ and the sum $\theta_{\rm A}+ \psi_{\rm A} $ with a constant $k_{\rm VTST}$. The minimum of $\theta_{\rm A}=10\degree$ and of the sum $\theta_{\rm A}+ \psi_{\rm A} =93\degree$ are the same for all $k_{\rm VTST}$ values.}
\begin{tabular}{l | c c}
\hline
$k_{\rm VTST}$ & $\theta_{\rm A, max}$ & $(\theta_{\rm A}+ \psi_{\rm A})_{\rm max}$ \\
\hline
1.5 & 30\degree & 98\degree \\
\hline
2.0 & 45\degree & 102\degree \\
\hline
2.5 & 65\degree & 106\degree \\
\hline
3.0 & 65\degree & 107\degree \\
\hline
3.5 & 70\degree & 109\degree \\
\hline
4.0 & 75\degree & 110\degree \\
\hline
4.5 & 75\degree & 110\degree \\
\hline
5.0 & 80\degree & 111\degree \\
\hline
\end{tabular}
\label{tab:angles}
\end{table}
The lowest value of the sum, i.e. $93\degree$ (small $\theta_{\rm A}$, large $\psi_{\rm A}$), coincides with the jet configurations with the lowest total energy-to-mass flux ratios, which is around $\sim V_{\rm A,p}^2/2$ at the jet base ($z=0$) for $k_{\rm VTST}$ of the order of unity (Eq.~\ref{eqn:ARCrescaled}). These solutions have little-to-no magnetic field ($\lambda_{\rm VTST} \rightarrow 0$) and represent a tenuous jet (low total energy flux, see Sec.~\ref{ssec:trends} and Sec.~\ref{ssec:hotcold}) supported by some (small $\mu_{\rm VTST}$) gas pressure, which provides the balance to gravity.
By varying the two angles within the allowed range, we recover a large collection of solutions where we see low-energy hot jets transform into cold and fast jets with a large angular momentum ($\lambda_{\rm VTST}\gtrsim 20$) and a relatively small contribution of the gas pressure (small $\mu_{\rm VTST}$) to the total energy.
The large variety of physical properties within this sample of solutions provides an ideal framework to study the different jet configurations and to devise a method for the comparison of such solutions to astrophysical sources.
\subsection{General trends}
\label{ssec:trends}
In this Section we discuss some general properties and trends observed while inspecting the whole ensemble of solutions.
In Fig.~\ref{fig:maxEneRotFrame} we show how the total energy is divided up between rotational energy and generalized pressure \citep{Ferreira:1997}.
The rotational energy is the difference between the total energy in an inertial frame and the total energy in a frame rotating with a frequency $\Omega$ (Eq.~\ref{eqn:Omega}), i.e.
\begin{equation}
E_{\rm rot} = L\Omega = \varpi_{\rm A}^2\Omega^2.
\end{equation}
In the above equation, $L$ is the angular momentum, defined as
\begin{equation}
L = \varpi \left(V_\phi - \frac{B_\phi}{4\pi \rho}\frac{B_p}{V_p} \right),
\label{eqn:L}
\end{equation}
which is also a constant of motion along the streamline.
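Both constants of motion are straightforward to evaluate. In the hydrodynamic limit $B_\phi=0$ they reduce to $\Omega=V_\phi/\varpi$ and $L=\varpi V_\phi$, so that $E_{\rm rot}=L\Omega=V_\phi^2$, which the sketch below verifies (with arbitrary illustrative values):

```python
import math

def omega(varpi, Vphi, Vp, Bphi, Bp):
    # angular frequency of the streamline, Eq. (Omega)
    return (Vphi - Vp * Bphi / Bp) / varpi

def ang_mom(varpi, Vphi, Vp, Bphi, Bp, rho):
    # specific angular momentum, Eq. (L): gas term plus magnetic torque term
    return varpi * (Vphi - Bphi * Bp / (4.0 * math.pi * rho * Vp))

# hydrodynamic limit: B_phi = 0
Om = omega(2.0, 3.0, 1.0, 0.0, 1.0)
L = ang_mom(2.0, 3.0, 1.0, 0.0, 1.0, 1.0)
E_rot = L * Om
```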
Fig.~\ref{fig:maxEneRotFrame} shows the total energy in the rotating frame rescaled by the poloidal kinetic energy at the AP (blue dots):
\begin{equation}
2\frac{E - L\Omega}{V_{\rm A,p}^2} = \tilde \epsilon - \tilde \epsilon_{\rm rot}
\end{equation}
and the rotational energy rescaled, $2L\Omega/V_{\rm A,p}^2= \tilde\epsilon_{\rm rot}$ (yellow dots) versus the rescaled total energy in the inertial frame $\tilde\epsilon = 2E/V_{{\rm A},p}^2$.
All the points lie on two narrow curves. The solutions highlighted in the bottom panel of Fig.~\ref{fig:lambdaMu} are marked as red crosses in Fig.~\ref{fig:maxEneRotFrame}. The total energy in the rotating frame, $\tilde\epsilon - \tilde\epsilon_{\rm rot}$ (blue dots), otherwise called the generalized pressure \citep{Ferreira:1997,PelletierPudritz:1992}, achieves a maximum when the rotational energy, $\tilde \epsilon_{\rm rot}$ (yellow dots), is negligible. Since the total energy flux sustaining a jet, i.e. the Bernoulli constant, is positive, the generalized pressure can change sign depending on the relative contribution of the rotational energy, $\tilde \epsilon_{\rm rot}$, to the total energy.
As the rotational energy, $\tilde \epsilon_{\rm rot}$, increases, it approaches equipartition with the generalized pressure which occurs in the regime where the latter is still positive. When the sign flip occurs, we start to see a dominant contribution of the magnetic energy in the Bernoulli equation (Eq.~\ref{eqn:BernConst}).
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/EneTotEneRotRescaled_base}\hfill
\caption{The rotational energy as a function of the total energy scaled with the poloidal velocity at Alfv\'en for all solutions (yellow dots). In blue is shown the total energy in a frame rotating with the constant angular frequency $\Omega$ versus the total energy in the inertial frame, also scaled with the poloidal velocity at Alfv\'en. The red crosses are solutions with $k_{\rm VTST} =3.0$ and $\theta_{\rm A}=60\degree$. They will be used to discuss other trends later on.}
\label{fig:maxEneRotFrame}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/EneThEmagEneTotRescaled}\hfill
\caption{
Ratio between the thermal energy and magnetic energy as a function of the total energy with respect to the poloidal kinetic energy at Alfv\'en for all solutions.}
\label{fig:RatioEne}
\end{figure}
Based on the ratio between the thermal energy and the magnetic energy flux we distinguish three categories of solutions: thermally-dominated hot, equipartition/centrifugal and magnetically-dominated cold jets (see Fig.~\ref{fig:RatioEne}). We show this ratio at the disk midplane (blue squares) and at the MSP (pink crosses) for the full sample of solutions versus the total energy rescaled with the poloidal kinetic energy. The vertical lines are drawn to guide the eye.
We see that the distribution of the jet models in this plane is very similar between $z=0$ and the MSP.
The hot jets are low-energy solutions: as $E_{\rm TH}/E_{\rm M}$ increases, the total energy flux, $\tilde \epsilon$, remains constant at its minimum value.
When the thermal and magnetic energy fluxes are roughly at equipartition, the total energy increases steadily as the solutions become more magnetically-dominated. As we enter the cold regime, the magnetic energy grows more rapidly for a small variation of the input parameters (see bottom panel of Fig.~\ref{fig:lambdaMu}), but the jet configurations no longer increase much in total energy, which approaches its maximum. Given this correspondence between total energy and the hot/cold regime, we will use the two classifications interchangeably throughout the paper.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/EneTotEneCompRescaled_base}\\
\includegraphics[width=1\linewidth]{FIG/EneTotEneCompRescaled_msp}\hfill
\caption{The total energy at the base (top panel) and at the MSP (bottom panel) split up in its components for a series of solutions for $k_{\rm VTST} =3.0$ and $\theta_{\rm A}=60\degree$. All energies have been scaled with the poloidal kinetic energy at Alfv\'en.}
\label{fig:EneCompsSeries}
\end{figure}
In Fig.~\ref{fig:EneCompsSeries} we show the different contributions to the total energy at the base (top panel) and at the MSP (bottom panel) for a series of solutions with $k_{\rm VTST} =3.0$ and $\theta_{\rm A}=60\degree$. The trends discussed here are also observed in other series. We start by noticing that when the magnetic energy is larger, the total energy is larger too. When the total energy is low, the gravitational energy and the thermal energy dominate with almost equal magnitude, cancelling each other. Only at higher total energies does the thermal energy become negligible. Apart from the most energetic solutions, the kinetic energy consists mainly of the poloidal component. At higher energies the poloidal component of the velocity of the gas leaving the midplane is relatively low, while the toroidal speed gives the largest contribution to the total kinetic energy.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/VCompRescaled_base}\\
\includegraphics[width=1\linewidth]{FIG/VCompRescaled_msp}\hfill
\caption{Velocities at the base (top panel) and at the MSP (bottom panel) scaled by the poloidal velocity at Alfv\'en as a function of the scaled total energy for a series of solutions for $k_{\rm VTST} =3.0$ and $\theta_{\rm A}=60\degree$.}
\label{fig:VelCompSeries}
\end{figure}
In Fig.~\ref{fig:VelCompSeries} we present the components of the velocity and of the angular frequency $\Omega$ of the streamlines (Eq.~\ref{eqn:Omega}) for the same series of solutions presented in Fig.~\ref{fig:EneCompsSeries}. At lower total energies the poloidal velocity is relatively large with respect to the toroidal velocity. As a consequence, even when the ratio of the magnetic field components (gray line with dots) is low, i.e. the magnetic field is almost not twisted at all, the second term on the rhs of the equation describing $\Omega$ (Eq.~\ref{eqn:Omega}, magenta line with stars) is dominant, while the toroidal velocity (brown line with pentagons) is negative and smaller. This means that the gas is lagging behind the rotation of the disk and the magnetic field is weak, while $\Omega$ is at its minimum.
Only the very last solution with the highest energy of this series is rotating at Keplerian speed, which can be seen by noticing that the last dot of the pink line with crosses ($V_{\rm k}$)
coincides with the last point of the brown line with pentagons ($V_\phi$)
in the top panel of Fig.~\ref{fig:VelCompSeries}.
As expected from the non-negligible contribution of the enthalpy, the overwhelming majority of the solutions in this ensemble is sub-Keplerian at the disk midplane, with an increasingly larger deviation as the solutions become warmer.
This means that the typical approximation $\Omega\sim\Omega_{\rm gas} = V_\phi/\varpi \sim \Omega_{\rm k}$ cannot be taken as a general property of this sample of solutions. Only a small fraction of the solutions presented in this paper can be considered corotating with the disk, like for instance the last three high-energy solutions in Fig.~\ref{fig:VelCompSeries}, where we see that $\Omega\varpi$ (green line with crosses) matches $V_\phi$ (brown line with pentagons), while $-V_{p}B_\phi/B_p$ (magenta line with stars) is close to zero.
From a geometrical point of view, the radial profile of the streamlines varies depending on how hot the jet is, typically with highly oscillating jet bases for cold jets while no oscillations are present for warm and hot jets (see Fig.~\ref{fig:streamlines}). This is a consequence of the oscillatory nature of the transverse component of the forces that define the collimation of the streamline. We will discuss this topic in detail in Section~\ref{ssec:hotcold}.
Since such oscillations are likely unstable, and considering that the MSP is a more robust point in our solutions, we identify the MSP with the jet base from now on.
Different jet configurations can also be classified based on the amount of acceleration that the gas experiences from the MSP to the MFP, the latter being the point where the flow loses causal contact with the source and the flow upstream. In Fig.~\ref{fig:deltaV}, we plot all the solutions, divided in subgroups with constant $k_{\rm VTST}$, in the plane defined by the increase in the poloidal velocity experienced by the matter from the MSP to the MFP and the rescaled total energy flux.
The low-energy flux, pressure-driven solutions also have low $\Delta V/V_{\rm p, MFP}$, since they are characterised by large poloidal velocities at the MSP which do not increase much when approaching the MFP.
As the energy flux increases, the poloidal velocity decreases (see bottom panel of Fig.~\ref{fig:VelCompSeries}) and the increment of the velocity $\Delta V/V_{\rm p, MFP}$ approaches 1.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/streamlinesCold2Hot_base}\hfill
\caption{Examples of streamlines for a series of solutions with increasing $\mu_{\rm VTST}$.}
\label{fig:streamlines}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/deltaVpolperc_mspmfp_vs_EtotRescaled}\hfill
\caption{Fractional increase of the poloidal velocity between the MSP and the MFP versus the rescaled total energy.}
\label{fig:deltaV}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/EkinDelta_vs_EneTot_t65k40_APMSP}\hfill
\includegraphics[width=1\linewidth]{FIG/EkinDelta_vs_EneTot_t65k40_MFPAP}
\caption{\emph{Upper panels:} Relative increment of the energy fluxes with respect to the total energy flux, $(E_{\cdot,\rm AP} - E_{\cdot,\rm MSP})/E_{\rm tot}$, versus the total energy rescaled from the MSP to the AP. The small left panel is a zoom around zero. The small right panel shows the relative increment of the components of the angular momentum with respect to the total angular momentum between the MSP and the AP versus the total energy rescaled. The solutions are obtained for $\theta_{\rm A}=65\degree$ and $k_{\rm VTST}=4.0$. \emph{Lower panels:} Same as the upper panel but between the AP and the MFP.}
\label{fig:deltaE}
\end{figure}
Similarly, the acceleration of the flow is also traced by the increase in the poloidal kinetic energy. In Fig.~\ref{fig:deltaE}, we show the relative increment/decrement of the energy fluxes between the MSP and the AP (first phase of the acceleration, top three panels) and the AP and the MFP (second phase of acceleration, bottom three panels) in a transition from hot to cold solutions (low-to-high energy flux). In the first phase of the acceleration, hot solutions are driven by the thermal energy which suffers the largest decrement. However, as highlighted by the zoom around zero, a fraction of the thermal energy is transferred to the magnetic energy, which is increasing for hot solutions with energy fluxes $< 5$. This behaviour is followed closely by the relative increment/decrement of the components of the angular momentum. For these hot solutions the hydrodynamical component of $L$ decreases, while the magnetic component increases, showing that the angular momentum of the gas is transferred to the angular momentum of the magnetic field. Such additional channel of energy transfer has been seen in simulations such as e.g. \citet{Komissarov:2009,Cayatte:2014} and in Paper I. This effect is seen as well in the bottom panel of Fig.~\ref{fig:energyfluxes} as a small rise in the magnetic energy around the AP. As the jet models move to higher-energy configurations, the magnetic energy increases while the thermal energy is still important, leading to an increasing poloidal kinetic energy. The peak of the poloidal kinetic energy occurs in correspondence to $\Delta E_{\rm M}/E_{\rm tot} \simeq \Delta E_{\rm TH}/E_{\rm tot}$. Then, it decreases again due to a decrease in toroidal kinetic energy. In the second phase of the acceleration, the thermal energy still dominates for hot low-energy solutions. Equipartition/MC and cold solutions instead are accelerated all the way from the MSP to the MFP by the magnetic field. 
In the upper part of the jet, the relative increments of the components of the angular momentum do not change sign and the magnetic angular momentum is always transferred to the gas component.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/deltaPoints_vs_EtotRescaled}\hfill
\includegraphics[width=1\linewidth]{FIG/Vpline_t60k40}
\includegraphics[width=1\linewidth]{FIG/Vpline_t30k40}
\caption{\emph{Upper panel}: Distance between the MFP and MSP vs the total energy rescaled $\tilde\epsilon = 2 E_{\rm tot}/V_{p,\rm A}^2$. The lines connect solutions with constant angular position of the AP ($\theta_{\rm A}$) and the colours mark different values of the mass loss parameter $k_{\rm VTST}$. The red dashed line is a series of solutions obtained with $\theta_{\rm A} = 50\degree$ and $k_{\rm VTST}=4.0$. \emph{Middle panel}: Poloidal velocity scaled with the Alfv\'en poloidal velocity ($\tilde V_p = V_p/V_{p,\rm A}$) along the streamline for a set of solutions with $\theta_{\rm A} = 50\degree$ and $k_{\rm VTST}=4.0$. The solutions in this plot have a rescaled total energy flux ($\tilde \epsilon$) going from 1.5 to 10.5 and distances between the MSP and MFP changing by a factor of $\sim50$. \emph{Bottom panel}: Same as the middle panel, but for $\theta_{\rm A} = 30\degree$. The solutions in this plot have a roughly constant rescaled total energy flux ($\tilde \epsilon\sim1.3$) and distances between the MSP and MFP changing by one order of magnitude, going from $7900$ to $72500$. In both the middle and bottom panels, the second solution has the largest $\Delta z$, but not the largest $\tilde\epsilon$.}
\label{fig:deltaZ}
\end{figure}
Moreover, since the portion of the flow downstream of the MFP may already be affected by a shock, given the loss of causal contact with the flow upstream, we take the distance between the MSP and the MFP as a proxy for the total jet length. The top panel of Fig.~\ref{fig:deltaZ} shows that low-energy solutions can be as short as $10^2/\varpi_*$ and as long as $10^6/\varpi_*$. As the total energy increases, this interval narrows by $\sim$2 orders of magnitude ($10^3-7\times10^4$). We note that when the streamlines become more vertical (increasing $\psi_{\rm A}$), the total energy decreases (the lines in the plot are drawn for constant angular position of the AP, $\theta_{\rm A}$), while increasing $\theta_{\rm A}$ (from top to bottom) makes $\Delta z$ decrease. If we focus on one of the most extended lines across the energy range, for instance the red dashed line, obtained for an intermediate constant value of the angular position of the AP ($\theta_{\rm A} = 50\degree$) and a fixed mass loss parameter ($k_{\rm VTST} = 4.0$), we see a correlation between the distance between the MSP and the MFP and the total energy: the higher the energy, the larger the distance, until it reaches an almost constant length ($\sim 20000/\varpi_*$). We note, though, that the maximum of $\Delta z$ does not coincide with the highest energy in the line. Therefore, beyond a certain total energy, the jets do not grow taller, but their $\Delta V/V_{p,\rm MSP}$ increases, as shown in the middle panel of Fig.~\ref{fig:deltaZ} and in Fig.~\ref{fig:deltaV}. Low-energy hot solutions increase in length by a factor of 10 as the collimation angle, $\psi_{\rm A}$, increases, maintaining their velocity increment roughly constant (bottom panel of Fig.~\ref{fig:deltaZ}).
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/plasmaBmassLoad_EtotRescaled_k45_msp}\hfill
\caption{Plasma-$\beta$ (black lines with dots, top panel) and mass load $\eta$ (red lines with crosses, bottom panel) vs total energy flux rescaled for all the solutions with $k_{\rm VTST} = 4.5$ at the MSP. Each line connects solutions with constant $\theta_{\rm A}$ and increasing $\psi_{\rm A}$. The arrows show the approximate direction of increasing $\theta_{\rm A}$ and $\psi_{\rm A}$.}
\label{fig:plasmaBmassloadKconst}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FIG/plasmaBmassLoad_EtotRescaled_t45_msp}\hfill
\caption{Plasma-$\beta$ (lines with dots, top panel) and mass load $\eta$ (lines with crosses, bottom panel) vs total energy flux rescaled for all the solutions with $\theta_{\rm A}=45\degree$ and $k_{\rm VTST} = 3.0$ (blue), $3.5$ (pink), $4.0$ (gray), $4.5$ (green), $5.0$ (yellow) at the MSP. Each line connects solutions with constant $\theta_{\rm A}=45\degree$ and varying $\psi_{\rm A}$.}
\label{fig:plasmaBmassloadTconst}
\end{figure}
Lastly, we discuss the variation of the plasma-$\beta$ and mass load $\eta$ at the MSP, which we identify with the jet base as discussed above.
These two quantities are given by
\begin{equation}
\beta = \frac{P}{B^2/8\pi} \qquad {\rm and} \qquad \eta = \frac{4\pi \rho V_p\varpi \Omega}{B_p^2},
\end{equation}
following the definitions of e.g. \citet{Anderson:2005,Spruit:1996}.
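These two diagnostics are simple to evaluate numerically in Gaussian units; the sketch below is only illustrative (the input values in the final line are made up for the sanity check, not taken from any of our solutions):

```python
import math

def plasma_beta(P, B):
    """Plasma beta = gas pressure over magnetic pressure, P / (B^2 / 8 pi)."""
    return P / (B**2 / (8.0 * math.pi))

def mass_load(rho, V_p, varpi, Omega, B_p):
    """Mass load eta = 4 pi rho V_p varpi Omega / B_p^2 (Gaussian units)."""
    return 4.0 * math.pi * rho * V_p * varpi * Omega / B_p**2

# Equipartition example: gas pressure equal to magnetic pressure gives beta = 1
beta_eq = plasma_beta(1.0, math.sqrt(8.0 * math.pi))
```

The helpers simply transcribe the definitions above; in practice they would be applied to the MSP values of each solution.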
Since the general trend is the same within subsets of solutions with constant $k_{\rm VTST}$, we present here the series of solutions obtained for $k_{\rm VTST}=4.5$ and for Alfv\'en position angle, $\theta_{\rm A}$, going from $10\degree$ to $75\degree$ roughly from the bottom up (Fig.~\ref{fig:plasmaBmassloadKconst}), and then discuss how they change for increasing $k_{\rm VTST}$ at constant $\theta_{\rm A}$ (Fig.~\ref{fig:plasmaBmassloadTconst}).
Solutions in both figures have increasing collimation angle, $\psi_{\rm A}$, along each line from left to right (from $28\degree$ to $83\degree$ in Fig.~\ref{fig:plasmaBmassloadKconst} and from $48\degree$ to $57\degree$ in Fig.~\ref{fig:plasmaBmassloadTconst}).
In Fig.~\ref{fig:plasmaBmassloadKconst}, we see that thermally-dominated, low-energy-flux solutions have the largest plasma-$\beta$ ($\sim1$). Then, as the collimation angle, $\psi_{\rm A}$, decreases, the energy flux increases and the plasma-$\beta$ first decreases. For the Alfv\'en angular positions for which more $\psi_{\rm A}$ values are allowed, the plasma-$\beta$ remains constant for many consecutive solutions of increasing energy flux. However, when the solutions become magnetically-dominated, the plasma-$\beta$ drops.
The mass load has a minimum which coincides with the beginning of the plasma-$\beta$ plateaux, and then rises again towards higher energy fluxes.
The relatively large mass load of the low-energy-flux solutions is due to the high density of the gas, while a similar value is reached for the high-energy-flux solutions because the magnetic field is more tightly wound up ($|B_\phi/B_p| >1$, see e.g. \citealt{Anderson:2005,Spruit:1996}).
In Fig.~\ref{fig:plasmaBmassloadTconst}, we show how the same quantities vary in relation to an increase in $k_{\rm VTST}$.
The plasma-$\beta$ and the mass load, $\eta$, show a similar behaviour with respect to the mass loss parameter, $k_{\rm VTST}$: the larger $k_{\rm VTST}$, the larger the plasma-$\beta$ and the mass load. However, we notice that $\eta$ has a weaker dependence on $k_{\rm VTST}$ both at low and high energy fluxes, while the plasma-$\beta$ responds to a change in $k_{\rm VTST}$ more homogeneously across the energy flux interval.
\begin{table}
\centering
\setlength\tabcolsep{4.5 pt}
\setlength\extrarowheight{5pt}
\caption{Parameters of the solutions used in Sec.~\ref{ssec:hotcold}. The solutions have the following common parameters $k_{\rm VTST}=3.0$, $\theta_{\rm A}=60\degree$, $\Gamma=5/3$ and $F=0.75$. The Cold Jet and the Hot Jet models are the extremes of the series, while the MC Jet is an intermediate one which is closest to the classical magneto-centrifugal jets encountered in the literature.}
\begin{tabular}{l | c c c c c}
\hline
Model & $\mu_{\rm VTST}$ & $\lambda_{\rm VTST}$ & $\theta_{\rm MFP}$ & $\theta_{\rm MSP}$ & $\psi_{\rm A}$ \\
\hline
Cold Jet & $7.58\times 10^{-2}$ & 17.3853 & 0.11803 & 1.2462 & 37.07 \\
\hline
MC Jet & 0.5396 & 16.8723 & 0.11778 & 1.2578 & 37.09 \\
\hline
Hot Jet & 6.5510 & 1.6861 & 0.12347 & 1.3852 & 46.00 \\
\hline
\end{tabular}
\label{tab:extremes}
\end{table}
\subsection{Hot and cold jets}
\label{ssec:hotcold}
To illustrate the qualitative changes of the outflow properties along a series of solutions for increasing collimation angle $\psi_{\rm A}$, we describe the transition by looking at plots of the components of the Bernoulli equation (Eq.~\ref{eqn:BernConst}) for the two extreme solutions and an intermediate one, which resembles a more classical magneto-centrifugally launched jet. We will refer to these solutions as the Cold, magneto-centrifugal (MC) and Hot Jet models and list their parameters in Tab.~\ref{tab:extremes}.
As shown in Fig.~\ref{fig:energyfluxes}, the energy fluxes along the poloidal direction are substantially different going from the Cold (\emph{upper panel}) to the Hot (\emph{lower panel}) Jet solution.
The cold jet has a high Poynting-to-enthalpy flux ratio. The magnetic energy is then converted into kinetic energy downstream of the AP. Upstream of the MSP, all the energy fluxes oscillate, following the oscillations of the radial profile of the streamline (see Fig.~\ref{fig:streamlines}). The intermediate MC jet solution has qualitatively the same characteristics as the cold one, but the oscillations are gone.
The hot jet shows a smoother behaviour of the energy fluxes along the streamline. The enthalpy is dominant and roughly equal to gravity in absolute value, with opposite sign. Just downstream of the AP, the thermal energy flux is initially the main source of energy being transformed into kinetic energy and into magnetic energy, which shows a small increase, as discussed in Sec.~\ref{ssec:trends}. Then, the magnetic energy flux takes over the final acceleration. For a constant mass loss parameter, $k_{\rm VTST}$, the total energy flux is $\sim 2$ orders of magnitude larger for the cold jet. This larger energy reservoir allows the cold jet to extend in length a factor of $\sim$100 more than the hot jet, when the same reference scale length, $\varpi_*$, is applied.
The forces acting along ($\hat b$) and perpendicular ($\hat n$) to the streamline highlight the transition from cold to hot jet configurations. Here we give the compact form of the forces in both direction, while we provide the full derivation in Appendix~\ref{app:forces}.
\begin{align}
\hat b:& \frac{\rho}{2} \frac{\partial V_p^2}{\partial l} = \rho V_\phi^2 \frac{\cos(\psi)}{\varpi} - \frac{\partial P}{\partial l} + \rho \frac{\partial }{\partial l}\left(\frac{\mathcal{GM}}{r}\right) \nonumber\\
& \qquad \qquad - \frac{1}{8\pi}\frac{\partial B_\phi^2}{\partial l} - B_\phi^2\frac{\cos(\psi)}{4\pi\varpi} \label{eqn:poloidalForces}\\
\hat n: & \left(\rho V_p^2 - \frac{B_p^2}{4\pi}\right)\frac{\partial \psi}{\partial l} = + \rho V_\phi^2 \frac{\sin(\psi)}{\varpi} - \frac{\partial P}{\partial n} + \rho \frac{\partial }{\partial n}\left(\frac{\mathcal{GM}}{r}\right) \nonumber\\
& \qquad \qquad \qquad \qquad - \frac{1}{8\pi}\frac{\partial }{\partial n}\left(B_p^2 + B_\phi^2\right) + B_\phi^2\frac{\sin(\psi)}{4\pi\varpi} \label{eqn:transvForces}
\end{align}
The term on the lhs of the Eq.~\ref{eqn:poloidalForces} is the acceleration along the streamline, the first term on the rhs is the centrifugal force, the second term is the gas pressure force, the third term is the gravitational force and the last two terms are the magnetic pressure gradient and the magnetic tension.
On the lhs of Eq.~\ref{eqn:transvForces} there is the derivative of the angle $\psi$ along the streamline. The inverse of this derivative is also called the collimation radius, $R_c = (\partial\psi/\partial l)^{-1}$.
On the rhs there are: the centrifugal force, the gas pressure force, the gravitational force and the magnetic pressure gradient and the magnetic tension.
In the following discussion, we refer to accelerating/collimating forces when such terms are positive, and to decelerating/decollimating forces when they are negative.
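The sign convention above is easy to check numerically: from a sampled profile $\psi(l)$, a finite-difference derivative gives $\partial\psi/\partial l$ and hence the collimation radius $R_c$. A minimal sketch (the linear toy profile is invented for illustration, not one of our solutions):

```python
import numpy as np

def collimation_radius(l, psi):
    """R_c = (d psi / d l)^(-1): positive where the streamline is collimating,
    negative where it is decollimating."""
    dpsi_dl = np.gradient(psi, l)
    with np.errstate(divide="ignore"):
        return 1.0 / dpsi_dl

# Toy streamline: psi grows linearly at 0.01 rad per unit length,
# so R_c = 100 everywhere (a gently collimating flow).
l = np.linspace(0.0, 10.0, 101)
psi = 0.01 * l
R_c = collimation_radius(l, psi)
```

On a real solution, the sign changes of `R_c` along $l$ mark the alternating collimating/decollimating segments shown in Fig.~\ref{fig:transfieldF}.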
In Fig.~\ref{fig:transfieldF}, we show the forces perpendicular to the streamline and in Fig.~\ref{fig:poloidalF} the forces along the streamline for the same three solutions.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{FIG/EnergyComp_1725_L}\\
\includegraphics[width=0.9\linewidth]{FIG/EnergyComp_1727_L}\\
\includegraphics[width=0.9\linewidth]{FIG/EnergyComp_1753_L}
\caption{Energy fluxes along the streamline. The green line is Poynting flux, the pink line is the enthalpy energy flux, the brown line is the kinetic energy flux and the dashed purple line is the gravitational energy with reversed sign. The total energy flux is shown as a solid black line. Note that the $y$-axis scale is different in the three plots, while the x-axis scale is the same. From top to bottom the solutions go from cold to hot. The parameters are listed in Tab.~\ref{tab:extremes}.}
\label{fig:energyfluxes}
\end{figure}
The cold jet has a troublesome start, since it lacks a vertical velocity component that allows for a straightforward launching (top panel in Fig.~\ref{fig:zoomF}).
At the very beginning, the jet is decollimating ($\partial \psi/\partial l < 0$, black thin line) under the action of the gas pressure force (pink line). Soon, the gas pressure gradient changes sign and, together with the other positive forces, i.e. gravity (purple line), the centrifugal force (brown line) and the magnetic tension (teal line), collimates the jet against the magnetic pressure gradient. Around the peak of gravity and the centrifugal force, the pressure gradient becomes negative but smaller in modulus, resulting in a converging streamline (thick solid black line). After that, the previous configuration of the forces is mirrored on the right side of the peak until, shortly before the MSP, the streamline starts to decollimate again. However, downstream of the MSP the sign switches again when the magnetic tension becomes dominant, keeping the jet collimated until the magnetic pressure gradient also becomes positive, about halfway between the AP and the MFP (top panel of Fig.~\ref{fig:transfieldF}).
The MC Jet model shows the same behaviour downstream of the MSP, while it presents no oscillations in the region between the disk and the MSP.
The Hot Jet is always collimating. Until the AP, gravity is the main force driving the collimation against the gas pressure gradient that remains negative until past the AP. Beyond this point, the magnetic forces become dominant in keeping the jet focused.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{FIG/TransverseForces_1725_L}\\
\includegraphics[width=0.9\linewidth]{FIG/TransverseForces_1727_L}\\
\includegraphics[width=0.9\linewidth]{FIG/TransverseForces_1753_L}
\caption{Forces perpendicular to the streamline. The black line shows the derivative of the collimation angle along the streamline. The lines are solid when the force is providing collimation (positive) and dashed when it is instead decollimating (negative). From top to bottom the solutions go from cold to hot. The parameters are listed in Tab.~\ref{tab:extremes}.}
\label{fig:transfieldF}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{FIG/PoloidalForces_1725_L}\\
\includegraphics[width=0.9\linewidth]{FIG/PoloidalForces_1727_L}\\
\includegraphics[width=0.9\linewidth]{FIG/PoloidalForces_1753_L}
\caption{Forces along the streamline. The black line is the poloidal acceleration along the streamline. The lines are solid when the force is providing collimation (positive) and dashed when it is instead decollimating (negative). From top to bottom the solutions go from cold to magneto-centrifugal to hot. The parameters are listed in Tab.~\ref{tab:extremes}.}
\label{fig:poloidalF}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{FIG/TransverseForces_1725_L_zoom_sline}\\
\includegraphics[width=0.9\linewidth]{FIG/PoloidalForces_1725_L_zoom_sline}
\caption{Zoom of the forces along the streamline (bottom) and perpendicular (top) to it for the Cold Jet model (Tab.~\ref{tab:extremes}). The thick solid black line is the streamline.}
\label{fig:zoomF}
\end{figure}
The poloidal forces also present oscillations in the Cold Jet model, while they do not in the MC and Hot Jet models.
In the top panel of Fig.~\ref{fig:poloidalF}, we see some more moderate oscillations for the initial segment of the cold jet. In the upstream region of the MSP, the jet is initially slowly accelerating ($\partial V_p^2/\partial l>0$).
Then the pressure force (pink line) becomes negative and gravity (purple line) pulls the fluid back towards the centre (the thick solid black line in Fig.~\ref{fig:zoomF} shows the streamline focussing towards the axis), increasing its speed, while the poloidal motion of the flow (thin black line) is actually decelerating (bottom panel in Fig.~\ref{fig:zoomF}). At the minimum radius of the streamline, all the forces change sign and the jet starts to accelerate, driven by a combination of the centrifugal (brown line) and magnetic (teal line) forces. Halfway between the MSP and the AP, the magnetic force takes over and sustains the acceleration for the remaining (and larger) fraction of the jet extent.
In the intermediate jet solution, the flow is accelerated by the gas pressure force (and, for a small segment just downstream of the MSP, by the centrifugal and magnetic forces) until halfway between the MSP and the AP, where the magnetic force again drives the acceleration of the jet until the LRP.
The Hot Jet is instead decelerating until past the MSP; then the pressure provides acceleration, working against the gravitational pull. Finally, halfway between the AP and the MFP, the magnetic force becomes the dominant accelerating force for the rest of the jet length.
\section{Proof of concept: application to W43A}
\label{sec:app}
In this section we describe how to compare our solutions to an astrophysical source, the water fountain W43A.
W43A is a pre-planetary nebula (PPN; plural, PPNe), located at a distance of 2.2 kpc from the sun \citep{Tafoya20}, that is thought to be hosting an Asymptotic Giant Branch (AGB) star \citep{ImaiDiamond:2005, Tafoya20}. It has been observed that during the transition from the AGB to planetary nebula (PN) phases, the star's ejecta change from a roughly spherically symmetric wind to an envelope with a highly non-spherical configuration \citep{BalickFrank:2002}. These non-spherical post-AGB or PPNe envelopes often exhibit (collimated) bipolar outflows and/or jets, which are most likely formed at the time the star leaves the AGB \citep[e.g.][]{ST98}. The origin of the non-spherical outflows around W43A and other PPNe is a matter of debate, and is typically thought to involve a common envelope evolution (CEE) phase \citep[e.g.][]{NB06}. It is suggested that W43A also hosts a close companion embedded in the circumstellar envelope of the AGB star, likely a main sequence star or a white dwarf, although such a companion has not been directly observed \citep[e.g.][]{Imai02,ImaiDiamond:2005,Tafoya20}. The binary interaction between the two stars is expected to lead to the ejection of the envelope. During this phase, both a circumbinary disk and an accretion disk around the companion can form. It has been proposed that fast outflows, either collimated or wide, can be launched before, during and/or after the common envelope phase, contributing to the evolution of the system by heating and mechanically re-disturbing the material of the envelope, possibly leading to its ejection \citep{Chamandy:2018, Soker:2020}. A scenario in which jets are launched at the onset of the short-lived water fountain phase of the life cycle of W43A seems plausible considering the current properties of the source \citep{Tafoya20}.
Following the argument that \citet{Sahai:2017} used for the water fountain IRAS 16342-3814, if we were to assume that radiation pressure is the main force responsible for the launching and acceleration of the jets of W43A, we could estimate the timescale for ejecting such radiation-driven jets as
\begin{equation}
\tau_{\rm rad} = \frac{Pc}{L}
\end{equation}
where $P$ is the total momentum, $L$ is the luminosity of the source and $c$ is the speed of light.
The momentum derived from observational constraints is $P\sim 3.06\times 10^{37}$ g cm/s. Adopting a luminosity of 6000 $L_\odot$ given by \citet{DuranRojas:2014}, we obtain a timescale of $\sim 1268$ yr which is almost 20 times larger than the dynamical timescale ($t_{\rm dyn}\sim 65$ yr) estimated by \citet{Tafoya20}. Thus,
radiation can be ruled out as the mechanism responsible for launching and accelerating the jet.
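In cgs units this estimate can be reproduced directly; the constants below are standard values, and small differences in the adopted solar luminosity shift the result by only a few years:

```python
# tau_rad = P c / L: timescale for radiation pressure to supply the momentum P
P = 3.06e37            # total jet momentum [g cm/s], from the observations
c = 2.99792458e10      # speed of light [cm/s]
L_sun = 3.828e33       # solar luminosity [erg/s]
L = 6000.0 * L_sun     # luminosity of W43A (Duran-Rojas et al. 2014)
yr = 3.1557e7          # Julian year [s]

tau_rad = P * c / (L * yr)   # [yr]
t_dyn = 65.0                 # dynamical timescale [yr]
# tau_rad is ~1.27e3 yr, i.e. ~20 times t_dyn: radiation driving is too slow
```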
Several mechanisms have been proposed to produce collimated jets, many of which make use of magnetic fields to drive, or at least to strongly contribute to, the acceleration and collimation of the material from a rotating object, i.e., a star, a compact object or a disk \citep[e.g.][]{Shu:2000,BlandfordZnajek:1977,BlandfordPayne:1982, Ferreira:1997, Parfrey:2016} and a similar contribution has been proposed for PPNe as well \citep[e.g.][]{GarciaSegura:2005}.
\subsection{Observational constraints}
\label{ssec:obsConstr}
Recent observations by \citet{Tafoya20} show that W43A possesses a dense ($n\sim2\times10^7$ cm$^{-3}$), collimated ($z/\varpi \sim 20$, where $\varpi$ is the radius of the jet and $z$ is its height) molecular jet. The molecular jet inclination angle with respect to the plane of the sky is 35$\degree$, and its position angle (P.A.; with respect to the north) is 68$\degree$. The jet extends with constant collimation angle out to a distance from the central source of $\approx$1600~AU, and it is surrounded by two lobes of shocked material with a lower density ($n\sim 3\times10^6$ cm$^{-3}$) (see Fig.~\ref{fig:W43A}).
W43A is known to host maser emission from different chemical species, such as OH,
H$_{2}$O and SiO. The OH masers are located on an expanding torus of radius $\sim500$ AU with an expansion velocity of $\sim18$ km/s and a velocity separation of $\sim 16$ km/s. The density required for the excitation of the OH masers at that distance is $\sim10^4-10^6$ cm$^{-3}$ \citep{Elitzur:1992}. The H$_2$O maser emission is observed at the two regions where the jet seems to be interacting with the lobes. The H$_2$O maser spots have velocities $\sim 150$ km/s and hydrogen densities $\sim10^8-10^{10}$ cm$^{-3}$ \citep{Imai02, Vlemmings06a, Vlemmings06b}. SiO masers have also been observed, at $\sim 70$ AU from the star \citep{ImaiDiamond:2005}, and were modelled as an expanding shell of shocked material surrounding a high velocity outflow. The magnetic field in the material surrounding W43A has been measured using observations of the Zeeman splitting of H$_2$O and OH masers \citep{Vlemmings06a,Amiri10}. The magnetic field strength measured in the H$_2$O maser regions is $\sim200$~mG \citep{Vlemmings06a}.
The magnetic field in the maser regions is likely enhanced, due to compression of the field lines in the shocked interaction region between the jet and the surrounding medium. The H$_2$ number density in the lobes around the jet is estimated to be $3\times10^6$~cm$^{-3}$ and that in the surrounding shell is $5\times10^8$~cm$^{-3}$ \citep[][ Fig.~\ref{fig:W43A}]{Tafoya20}. Using these densities to update the uncompressed magnetic field estimates from \citet{Vlemmings06a} and \citet{Amiri10} and assuming a typical H$_2$O maser region number density of $10^9$~cm$^{-3}$ and a magnetic C-shock, we find a magnetic field strength ranging from $\sim0.6$~mG, when the shock occurs in the lower-density material of the lobes, to $\sim100$~mG, when the shock occurs in the denser shell surrounding the lobes. Since the exact maser density is unknown, the uncertainty on these values is large. Although it is unclear exactly which component of the magnetic field is traced by the H$_2$O maser measurements, the linear polarisation direction and evidence of change in sign of the measured magnetic field across the jet indicate that the masers likely probe the toroidal magnetic field \citep{Amiri10}.
We refer to the bipolar high velocity outflow traced by the H$_2$O masers as the \emph{molecular jet} of W43A.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{FIG/W43A_sketch_v3}
\caption{Sketch of the maser emission regions of W43A.}
\label{fig:W43A}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{FIG/W43A_sketch_source_AGB}
\caption{Sketch of the central region of W43A.}
\label{fig:W43Asource}
\end{figure}
\subsection{Modelling assumptions}
\label{ssec:modAssumpt}
How such a molecular jet is launched and how it maintains its collimation throughout its length have not yet been established.
We hypothesise that what is shaping the molecular jet of W43A is a \emph{disk-driven MHD jet}.
More specifically, we assume that an MHD jet is launched by an accretion disk formed around a white dwarf companion ($\mathcal M = 0.6 M_\odot$, $\mathcal R = 0.01 R_\odot$) orbiting the AGB star (see Fig.~\ref{fig:W43Asource}).
In this scenario, the MHD jet is accelerated outwards and entrains material from the surroundings, building up a more mass loaded, slower cocoon which is observed as a molecular jet \citep[e.g.][]{Hardee:1996,Rosen:1999}.
In this paper we show that the properties of disk-driven MHD jets can be very diverse. In order to reduce the allowed range of such properties, we compare our solutions to the observational constraints on the observed molecular jet of W43A. If the MHD jet is driving the molecular jet, its momentum has to be at least equal to, or larger than, the momentum carried by the molecules ($P_j \ge P_{j,\rm obs}$).
Since the way in which the composition of the MHD jet relates to the molecular content is unknown, we assume that its hydrogen number density is at most equal to the hydrogen density estimated from the CO mass ($n \le n_{\rm obs}$).
With a lower density, the MHD jet is also likely to travel faster than the water maser spots in that region ($V_{\rm H_{2}O} \ge V_{\rm H_{2}O, obs}$).
Finally, we adopt the full range of the toroidal magnetic field strength ($B_\phi = 0.6-100$ mG) to look for solutions which meet the requirements listed above and extend for 2000 AU.
Given these uncertainties, we choose to compare the momentum rate carried by our solutions with the one estimated from the observational constraints, thereby avoiding a direct comparison between densities and velocities, which would require additional assumptions, such as on the ionization fraction or the intrinsic speed of the jet.
This allows us to predict the general properties of the MHD jet needed to drive the molecular jet sheath surrounding it.
To estimate the momentum rate from observations, we approximate the molecular jet of W43A as a full cylinder with a length of 2000 AU and a radius of 45 AU. The momentum rate can be estimated as $\dot P = M_j V_{\rm H_{2}O}/t_{\rm dyn}$, which for a jet with a total mass of $\sim 10^{-3}$ M$_{\odot}$, a velocity of 150 km/s and a dynamical timescale of 65 years is $\sim 0.0023$ M$_\odot$/yr km/s. This is equivalent to a total momentum over 65 years of $P\sim 3.06\times 10^{37}$ g cm/s.
We note that this estimated value of the momentum of W43A lies within the range ($10^{35}-10^{40}$ g cm/s) reported by \citet{BlackmanLucchini:2014} for a sample of pre-planetary nebulae showing high-velocity and extreme high-velocity outflows.
Finally, we derive a mass loss rate of $\dot M_j = \pi \varpi_0^2\rho V_{\rm H_{2}O} \sim 1.58\times10^{-5}$ M$_\odot$/yr.
For these calculations, we have considered a constant density and velocity both along the jet axis and across the jet radius. In the next section, we will discuss how to derive averaged quantities in physical units from a single scale-invariant streamline, which is what we call a solution.
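The arithmetic of this estimate can be checked with a few lines; the exact total momentum depends mildly on the adopted solar-mass constant (we use $1.989\times10^{33}$ g here):

```python
M_sun = 1.989e33   # solar mass [g]
km_s = 1.0e5       # cm/s

M_j = 1.0e-3       # jet mass [M_sun]
V = 150.0          # jet velocity [km/s]
t_dyn = 65.0       # dynamical timescale [yr]

Pdot = M_j * V / t_dyn              # momentum rate [M_sun km/s / yr] ~ 0.0023
P_tot = M_j * M_sun * V * km_s      # total momentum [g cm/s] ~ 3e37
```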
\subsection{Scaling of the solutions}
\label{ssec:selection}
We use a sample of roughly 1500 solutions mapping the parameter space, and we start by selecting those that satisfy the criterion:
\begin{equation}
\left(\frac{\varpi}{z}\right)_{\rm H_{2}O} = \tan(\theta_{\rm H_{2}O}),
\label{eqn:thetaH2O}
\end{equation}
which is equivalent to determining which solutions terminate beyond the H$_2$O spots ($\theta_{\rm LRP} < \theta_{\rm H_{2}O}$).
The quantity that determines $\theta_{\rm H_{2}O}$ is the jet cylindrical radius at $z_{\rm H_{2}O} = 1000$ AU, while we keep the jet height at the water maser spots constant.
Observational constraints on the cylindrical radius at H$_2$O vary from $\sim10$ AU \citep{Imai02} to $ \sim45$ AU \citep{Tafoya20}.
We treat the cylindrical radius at H$_2$O as a free parameter within the above interval.
Since our solutions are calculated in dimensionless units, e.g. ($\varpi/\varpi_*, V/V_*, B/B_*, \rho/\rho_*, ...$), the first step to compare them with a physical system is to introduce a characteristic length to scale them.
We use the cylindrical radius of the jet at the H$_2$O masers spots as reference length to scale the cylindrical radius of each solution, $\varpi$, through the relation
\begin{equation}
\varpi_* = \left(\frac{\varpi}{G}\right)_{\rm H_{2}O}
\label{eqn:wscale}
\end{equation}
for the reference streamline ($\alpha=1$, see Sec.~\ref{ssec:equations}).
Once the reference length is fixed, the scaling of the velocity is also defined as
\begin{equation}
V_* = \sqrt{\frac{\mathcal {G M}}{\kappa_{\rm VTST}^2\varpi_*}}
\end{equation}
for a central object of given mass ($\mathcal M = 0.6 M_\odot$ for a white dwarf).
We use the observational constraints on the toroidal magnetic field component to determine the maximum and minimum $B_*$ as
\begin{equation}
B_* = \frac{B_{\phi,\rm{obs}}}{B_{\phi,\rm{H_{2}O}}},
\end{equation}
where $B_{\phi,\rm{obs}}$ are the values of the updated magnetic field limits, 0.6 and 100 mG, discussed in Sec.~\ref{ssec:obsConstr} and $B_{\phi,\rm{H_{2}O}}$ is the value of the toroidal component of the magnetic field
for the reference line of our solutions at the position of the H$_2$O maser spot. Then we introduce a third value of $B_{\phi, \rm obs}$ that matches the momentum rate deduced from observational constraints.
Finally, we derive the scaling for the mass density and the pressure from the above as follows:
\begin{equation}
\rho_* = \frac{B_*^2}{4\pi V_*^2}\qquad \rm{and} \qquad P_* = \frac{\mu_{\rm VTST} B_*^2 }{8\pi}.
\end{equation}
\subsection{Integrated quantities}
\label{ssec:intQuant}
As is generally the case for self-similar models, the properties of the jet at a given radius are derived from the reference streamline and extended with the appropriate radial dependence in the form of a power law of the parameter $\alpha$ (see Sec.~\ref{sec:eqnMet}; \citealt{FerreiraPelletier:1993,VlahakisTsinganos:1998}).
This parameter is defined as
\begin{equation}
\alpha = \left(\frac{\varpi_\alpha}{\varpi_\star}\right)^2 = \left(\frac{\varpi(\theta)}{\varpi_\star G(\theta)}\right)^2
\label{eqn:alpha}
\end{equation}
where $\varpi(\theta)$ is the radial profile of the reference streamline, $G(\theta) = \varpi(\theta)/\varpi_\alpha$, $\varpi_\alpha$ is the cylindrical radius at the AP for the streamline with a given $\alpha$ and $\varpi_\star$ is the scaling length defined in Eq.~\ref{eqn:wscale} using the criterion described in Eq.~\ref{eqn:thetaH2O}.
We give the radial scalings for all the relevant quantities in Appendix~\ref{app:scaling}.
Using these relations, a given solution can be extended to infinity and towards the polar axis. Expanding a solution over the radial direction is necessary to calculate quantities such as the jet mass loss and the momentum rate which require an integration over a surface perpendicular to the jet axis. Since the geometry of the equations that we adopted has a singularity on the polar axis, for the following calculations of integrated quantities we will consider a \emph{flux tube} defined by inner and outer cylindrical radii, $\varpi_{\rm in}$ and $\varpi_{\rm out}$, or, equivalently, $\alpha_{\rm in}$ and $\alpha_{\rm out}$.
Once the scaling length $\varpi_*$ is defined, the inner and outer radii are determined, and so are the streamline labels $\alpha_{\rm in}$ and $\alpha_{\rm out}$, through Eq.~\ref{eqn:alpha}.
First, we evaluate a density-weighted average velocity for each jet solution at the height of the H$_2$O maser spots over the flux tube area as follows
\begin{equation}
\langle V_{\rm H_2O}\rangle = \frac{ 2\pi z_{\rm H_{2}O}^2\int_{\theta_{\rm LRP}}^{\theta_{\rm H_{2}O}} \rho V\frac{\sin(\theta)}{\cos^3(\theta)}d\theta}{ 2\pi z_{\rm H_{2}O}^2 \int_{\theta_{\rm LRP}}^{\theta_{\rm H_{2}O}} \rho\frac{\sin(\theta)}{\cos^3(\theta)}d\theta},
\label{eqn:avVH2O}
\end{equation}
where the relation between $\theta$ and $\alpha$ (or $\varpi$) is defined as
\begin{equation}
\alpha(\theta) = \left(\frac{z_{\rm H_{2}O}\tan(\theta)}{\varpi(\theta)}\right)^2,
\end{equation}
where $\varpi(\theta)$ is calculated on the reference streamline with $\alpha=1$.
We note that, since the velocity decreases with increasing $\alpha$, we expect this average to be dominated by the inner streamlines in the flux tube, while the streamlines close to the outer edge of the flux tube will be slower. For this reason, it is more meaningful to compare the density-averaged velocity with the observed (almost constant) velocity.
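As an illustration, the density-weighted average above can be discretised with a simple trapezoidal rule; the profiles fed to the function below are toy power laws, not our actual solutions, and the common factor $2\pi z_{\rm H_2O}^2$ cancels between numerator and denominator.

```python
import math

def weighted_average_velocity(theta, rho, V):
    """Trapezoidal discretisation of the density-weighted average of V
    over the flux tube; the weight is rho * sin(theta) / cos(theta)^3."""
    w = [r * math.sin(t) / math.cos(t)**3 for r, t in zip(rho, theta)]
    num = sum(0.5 * (w[i] * V[i] + w[i + 1] * V[i + 1]) * (theta[i + 1] - theta[i])
              for i in range(len(theta) - 1))
    den = sum(0.5 * (w[i] + w[i + 1]) * (theta[i + 1] - theta[i])
              for i in range(len(theta) - 1))
    return num / den
```

For a constant velocity profile the average reduces to that constant independently of the density profile, while for a velocity decreasing outwards the result is pulled towards the fast inner streamlines, as argued above.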
The mass loss rate of the jet can be derived as the mass flux flowing through the $z=0$ surface of the flux tube as follows
\begin{equation}
\dot{M}_j = \int_{\varpi_{\rm in}}^{\varpi_{\rm out}} \rho V \cdot dA.
\end{equation}
The mass loss rate depends on the value of the toroidal magnetic field under consideration. Since there is still considerable uncertainty on the strength of the toroidal component of the magnetic field, we associate with each solution three mass loss rates, corresponding to the minimum and maximum $B_\phi$ in Sec.~\ref{ssec:obsConstr} and to the minimum value of $B_\phi$ for which we find matching solutions (see Fig.~\ref{fig:PdotZH2O}).
Similarly, we give three values for the momentum rate of each jet configuration.
The momentum rate of a jet model is
\begin{equation}
\dot{P} = 2\pi z_{\rm H_{2}O}^2 \int_{\theta_{\rm LRP}}^{\theta_{\rm H_{2}O}} \rho V^2 \frac{\sin(\theta)}{\cos^3(\theta)}d\theta,
\end{equation}
where the integration is done over all the streamlines contributing to the flux tube above the H$_2$O maser spot.
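The two flux integrals above can be sketched numerically with a trapezoidal rule; the constant profiles in the test of the usage below are placeholders chosen so that the results can be checked analytically, not actual jet solutions.

```python
import math

def trapz(y, x):
    """Plain trapezoidal rule for tabulated y(x)."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def mass_loss_rate(w, rho, Vz):
    """Mdot = integral of rho * V_z * 2 pi w dw over the z = 0 annulus."""
    return trapz([r * v * 2.0 * math.pi * wi for r, v, wi in zip(rho, Vz, w)], w)

def momentum_rate(theta, rho, V, z):
    """Pdot = 2 pi z^2 * integral of rho V^2 sin(theta)/cos(theta)^3 dtheta."""
    integrand = [r * v * v * math.sin(t) / math.cos(t)**3
                 for r, v, t in zip(rho, V, theta)]
    return 2.0 * math.pi * z**2 * trapz(integrand, theta)
```

For constant $\rho$ and $V$ the first integral reduces to $\rho V \pi(\varpi_{\rm out}^2-\varpi_{\rm in}^2)$, which provides a simple check of the discretisation.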
\subsection{Comparison results}
\label{ssec:results}
In Fig.~\ref{fig:PdotZH2O} we present the total jet height ($z_{\rm LRP}$) versus the momentum rate of all the solutions in our sample. The black diamond marks the observed $\dot P$ at the observed total jet height.
The shaded gray horizontal and vertical areas show the intervals for $\dot P$ and $z_{\rm LRP}$ we use to define a solution as a good match.
The shaded light yellow area between the blue squares and the magenta triangles defines the range of momentum rates allowed within the interval of the toroidal magnetic field derived from observations. We see that, for any value of $B_\phi$, the solutions fall on a curve with little to no scatter introduced by the variation of the other jet properties.
We produce this plot once we have set the half-width of the jet, but before introducing the other constraints on velocity and density, and we find that a toroidal magnetic field of at least 14 mG at the H$_2$O maser spots is required for the jet solutions to reach a momentum rate comparable to or higher than the observed one.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{FIG/Pdot_vs_JetHeight_multipleB}
\caption{Total jet height versus momentum rate for a jet width of 20 AU at the H$_2$O masers for all the solutions in the sample. The blue squares and pink triangles represent the solutions' $\dot P$ for a toroidal magnetic field of 0.6 and 100 mG, respectively. The yellow dots are the momentum rates of the solutions with the minimum $B_\phi$ (14 mG) that matches the observed $\dot P$. The light yellow area highlights the allowed momentum rates within this interval of $B_\phi$. The shaded gray areas show the acceptance intervals for $z_{\rm LRP}$ and $\dot P$.}
\label{fig:PdotZH2O}
\end{figure}
Among the solutions found at the intersection of the shaded gray areas in Fig.~\ref{fig:PdotZH2O} for the given choice of the jet radius ($\varpi_{\rm H2O}=20$ AU) and toroidal magnetic field ($B_\phi = 14$ mG), we present a sample of 8 jet configurations which satisfy all the constraints on density, velocity, total jet height and momentum rate.
We report the parameters and the relevant scaled quantities in Tab.~\ref{tab:selsol}.
Given the observational constraints (Sec.~\ref{ssec:obsConstr}) and the tight correlation that exists between the jet total extent and the momentum rate, we are left with solutions having the same angular position and collimation angle at the AP, $\theta_{\rm A}=14\degree$ and $\psi_{\rm A}=79\degree$ (and $\psi_0=35\degree$), respectively, for the given choice of the parameter $F$ ($F=0.75$) and the polytropic index of the gas ($\Gamma=5/3$). As a reference, we give the typical BP solution parameters ($k_{\rm BP}= 0.03,~\lambda_{\rm BP}= 30,~\psi_0 = 32\degree$) and we report in Tab.~\ref{tab:selsol} our parameters in BP units. We remind the reader that the equations we adopted differ from the classical BP ones because we do not neglect the enthalpy of the gas.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{FIG/Vprofile_oneB_selected}
\caption{Velocity versus jet height of the scaled reference streamlines for $\alpha=1$, which are the outermost streamlines of our selected jet configurations. The horizontal solid line is the observed velocity at the H$_2$O masers. The thin vertical solid line marks the ``base'' of the jet as seen in the CO observations, the vertical dashed line marks the location of the H$_2$O maser spot, and the thick vertical solid line is at $z = 2000$ AU.}
\label{fig:VprofileW43A}
\end{figure}
\begin{table*}
\centering
\setlength\tabcolsep{4.5 pt}
\setlength\extrarowheight{5pt}
\caption{Parameters of the selected solutions as a good match to W43A. All solutions have models parameters $F=0.75$, $\Gamma=5/3$, $\theta_{\rm A}=14\degree$, $\psi_{\rm A}=79\degree$ and scaling parameters $\varpi_{\rm H2O} = 20$ AU, $z_{\rm H2O} = 1000$ AU and $B_\phi = 14$ mG.}
\begin{tabular}{l | c c c c c c c c}
\hline
Model & S1 & S2 & S3 & S4 & S5 & S6 & S7 & S8 \\
\hline
$k_{\rm VTST}$ & 1.5 & 2.0 & 2.5 & 3.0 & 3.5 & 4.0 & 4.5 & 5.0 \\
$\mu_{\rm VTST}$ & 0.7957 & 1.3122 & 1.9295 & 2.6433 & 3.4504 & 4.3488 & 5.3373 & 6.4149 \\
$\lambda_{\rm VTST}$ & 0.3155 & 0.3185 & 0.3214 & 0.3239 & 0.3261 & 0.3280 & 0.3295 & 0.3307 \\
$\epsilon_{\rm VTST}$ & 11.2599 & 11.4833 & 11.7117 & 11.9394 & 12.1626 & 12.3788 & 12.5862 & 12.7840 \\
$\theta_{\rm MFP}$ & $2.1866\times 10^{-2}$ & $2.1753\times 10^{-2}$ & $2.1623\times 10^{-2}$ & $2.1480\times 10^{-2}$ & $2.1327\times 10^{-2}$ & $2.1166\times 10^{-2}$ & $2.1000\times 10^{-2}$ & $2.0830\times 10^{-2}$ \\
$\theta_{\rm MSP}$ & 0.7494 & 0.6046 & 0.5194 & 0.4631 & 0.4232 & 0.3938 & 0.3712 & 0.3535\\
\hline
$\langle V_{\rm H_{2}O} \rangle$ (km/s) & 1982 & 1510 & 1238 & 1059 & 934 & 843 & 775 & 722 \\
$\langle V_{p,{\rm MSP}} \rangle$ (km/s) & 1405 & 1057 & 859 & 731 & 642 & 578 & 530 & 494 \\
$\langle n_{\rm H_{2}O} \rangle$ (cm$^{-3}$) & $8.05\times 10^{5}$ & $1.39\times 10^{6}$ & $2.10\times 10^{6}$ & $2.87\times 10^{6}$ & $3.72\times 10^{6}$ & $4.60\times 10^{6}$ & $5.49\times 10^{6}$ & $6.37\times 10^{6}$\\
$\dot M_j$ (M$_\odot$/yr) & $1.24\times 10^{-6}$ & $1.58\times 10^{-6}$ & $1.95\times 10^{-6}$ & $2.24\times 10^{-6}$ & $2.64\times 10^{-6}$ & $2.87\times 10^{-6}$ & $3.08\times 10^{-6}$ & $3.26\times 10^{-6}$ \\
$\dot P$ (M$_\odot$/yr)(km/s) & $2.38\times 10^{-3}$ & $2.41\times 10^{-3}$ & $2.41\times 10^{-3}$ & $2.42\times 10^{-3}$ & $2.43\times 10^{-3}$ & $2.45\times 10^{-3}$ & $2.48\times 10^{-3}$ & $2.51\times 10^{-3}$ \\
$P$ (g cm/s) & $3.07\times 10^{37}$ & $3.12\times 10^{37}$ & $3.12\times 10^{37}$ & $3.13\times 10^{37}$ & $3.14\times 10^{37}$ & $3.17\times 10^{37}$ & $3.20\times 10^{37}$ & $3.24\times 10^{37}$ \\
\hline
$k_{\rm BP}$ & 0.0032 & 0.0042 & 0.0052 & 0.0062 & 0.0071 & 0.0080 & 0.0088 & 0.0097 \\
$\mu_{\rm BP}$ & 13.1344 & 8.3814 & 5.9173 & 4.4605 & 3.5194 & 2.8716 & 2.4038 & 2.0532 \\
$\lambda_{\rm BP}$ & 1.6307 & 1.2404 & 1.0059 & 0.8491 & 0.7364 & 0.6513 & 0.5844 & 0.5303 \\
$\epsilon_{\rm BP}$ & $8.3257\times 10^{-2}$ & $4.7334\times 10^{-2}$ & $3.0601\times 10^{-2}$ & $2.1451\times 10^{-2}$ & $1.5895\times 10^{-2}$ & $1.2265\times 10^{-2}$ & $9.7593\times 10^{-3}$ & $7.9553\times 10^{-3}$ \\
\hline
\end{tabular}
\label{tab:selsol}
\end{table*}
We notice that the only other parameter that has not been constrained is the mass loss parameter $k_{\rm VTST}$. This parameter is proportional to the mass-to-magnetic flux ratio, which leads to an uncertainty on the dimensionless angular momentum, $\lambda_{\rm VTST}$, and the parameter regulating the entropy of the gas, $\mu_{\rm VTST}$. Such spread in the values of the above parameters is reflected in the uncertainties on the average velocities and densities at the maser spot. However, the resulting momentum rate among the selected models is roughly constant and very close to the momentum rate estimated from the observations ($\dot P \sim 0.0024-0.0025$ M$_\odot$/yr km/s). We note that substantially larger $\dot P$ could be achieved by adopting a larger toroidal magnetic field.
A shorter jet ($<2000$ AU) would still require a larger $B_\phi$, while a taller jet could yield a larger momentum rate for the same choice of $B_\phi$.
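For reference, the momentum rates quoted above can be converted to CGS units as follows; the solar-mass and year constants are standard round numbers.

```python
MSUN_G = 1.989e33   # solar mass in g
YR_S = 3.156e7      # year in s
KMS_CMS = 1.0e5     # km/s in cm/s

def pdot_to_cgs(pdot):
    """Convert a momentum rate from (Msun/yr) km/s to g cm/s^2."""
    return pdot * MSUN_G / YR_S * KMS_CMS

pdot_cgs = pdot_to_cgs(2.4e-3)   # ~1.5e28 g cm/s^2
```

Dividing the tabulated momentum $P \sim 3.1\times10^{37}$ g cm/s by this rate gives a timescale of roughly 65 yr, consistent with the short-lived nature of this phase.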
In Fig.~\ref{fig:VprofileW43A} we show the velocity profiles of the jet models given in Tab.~\ref{tab:selsol}.
Given the model assumptions in Sec.~\ref{ssec:modAssumpt}, we selected solutions whose velocity profiles along the reference ($\alpha=1$) outermost line of the flux tube largely exceed the average observed velocity of 150 km/s (horizontal black line), so that the MHD jet is able to transfer momentum to and accelerate the molecular cocoon.
The inner streamlines ($\alpha < 1$) have the same acceleration profile, but higher speeds due to Eqs.~\ref{eqn:Vpol} and \ref{def:Vphi} given in Appendix~\ref{app:scaling}.
We notice that the smaller the mass load parameter, $k_{\rm VTST}$, the larger the speed. The uncertainty left on $k_{\rm VTST}$, and therefore on the velocity of the MHD jet layer, can only be removed with further observations of the core of the molecular jet of W43A.
While there is a moderate acceleration taking place from the MSP to shortly downstream of the Alfv\'en point, in the portion of the jet observed through the emission of CO, i.e. from 45 AU to 2000 AU (the region between the thin vertical solid line and the thick vertical solid line in Fig.~\ref{fig:VprofileW43A}), the velocity has already reached its maximum and it stays constant up until the jet tip.
The total velocity is entirely poloidal, while the toroidal component is close to zero along the entire jet extent. Under these circumstances, the magnetic field and the gas are not corotating even upstream of the Alfv\'en surface, which is an indication of a jet driven by thermal pressure (see bottom panels of Figs.~\ref{fig:energyfluxes}-\ref{fig:poloidalF}) as opposed to a magnetically-driven jet (top and middle panels of Figs.~\ref{fig:energyfluxes}-\ref{fig:poloidalF}). Typically these winds are less powerful and they can only achieve high speeds if a large injection speed ($V_{p,{\rm MSP}}$) is provided (see Tab.~\ref{tab:selsol}). Such high injection speeds are comparable to or higher than the initial speeds considered in recent MHD simulations by \citet{Balick:2020}, which are successful in reproducing the qualitative shapes of a sample of pre-planetary nebulae.
In order to make this comparison as complete as possible, we investigate the effect of varying the radius of the jet and of having a main sequence star as the accreting object. Increasing or decreasing the jet radius within the observed range 10-45 AU has the effect of decreasing/increasing the angular position of the AP and increasing/decreasing its collimation angle. The thinnest jet ($\varpi_{\rm H_{2}O} = 10$ AU, $\theta_{\rm A}=10\degree$, $\psi_{\rm A}=83\degree$) allows two values of the mass loss parameter (1.5 and 2.0) instead of eight, limiting the selection to just two models. The thickest jet ($\varpi_{\rm H_{2}O} = 45$ AU) leaves us with solutions having $\theta_{\rm A}\sim25\degree$ and $\psi_{\rm A}=68\degree$ and excludes $k_{\rm VTST}=1.5$.
We also considered the possibility that the central object may be a main sequence star ($M\sim M_\odot$ and $R_\star\sim R_\odot$), finding our conclusions unaltered, as expected by the mild dependence that our scaling scheme has on the mass of the central star.
\section{Summary}
\label{sec:discussion}
In this paper we discussed the adaptation of the numerical algorithm we presented in Paper I to solve the non-relativistic, radial self-similar MHD equations describing a disk-driven outflow.
We focused on the study of a large sample of solutions defined by constant Blandford-Payne-like parameter $F$ ($F=0.75$) and polytropic index $\Gamma=5/3$. We recognized similar patterns within the collection of jet configurations that are ultimately ascribed to the cold-to-hot transition that we find recurrently for similar values of the angular position of the Alfv\'en point and the collimation angle at the same position.
We analysed the behaviour of all the relevant jet quantities undergoing this transition and found that:
\begin{itemize}
\item Cold jets have the largest (dimensionless) angular momentum, the lowest enthalpy and a plasma-$\beta$ much lower than unity. They are therefore magnetically-dominated jets. They have little to no vertical speed upstream of the magnetosonic slow point, but have $|B_\phi/B_p|$ ratios larger than unity. This combination produces twisted streamlines with variable radius, due to the oscillatory behaviour of the transverse forces. The highly wound-up magnetic field is also responsible for the relatively high mass load ($\eta \lesssim 1$) of these solutions. At approximately half-way between the Alfv\'en point and the magnetosonic fast point, a large fraction of the magnetic energy has turned into kinetic energy and the jet becomes kinetically-dominated until the last recollimation point.
\item Magneto-centrifugal jets are similar to cold jets; however, the enthalpy is slightly larger and plays a role in lifting the gas. These jet models do not suffer oscillations upstream of the magnetosonic slow point. From this point on, these models resemble closely the cold jets. They are, however, the most efficient at accelerating the flow, even though their total energy flux is lower than that of the purely cold jets.
\item Hot jets are thermally-dominated jet configurations (plasma-$\beta = P/(B^2/8\pi) \gtrsim 1$), where the magnetic field contributes significantly to the acceleration and collimation of the jet only downstream of the Alfv\'en point. These solutions start off with a large poloidal speed, negative toroidal velocity and $|B_\phi/B_p|\ll 1$. The acceleration is only mild and they have low energy flux densities. Within this regime, we see two types of energy transfer channels that lead to an increase in kinetic energy. In hot jets the gas pressure is responsible for the acceleration in the initial jet segment, which can extend even just downstream of the Alfv\'en point. A fraction of the thermal energy is transferred to the magnetic energy, which is then used for the last acceleration until the tip of the jet.
\end{itemize}
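The three regimes above are distinguished chiefly by the plasma-$\beta$; a schematic classifier follows, where the numerical thresholds are illustrative placeholders rather than the precise boundaries of our solution sample.

```python
import math

def plasma_beta(P_gas, B):
    """plasma beta = gas pressure / magnetic pressure = 8 pi P / B^2 (CGS)."""
    return 8.0 * math.pi * P_gas / B**2

def jet_regime(beta, beta_cold=0.1, beta_hot=1.0):
    """Illustrative split: beta << 1 -> cold (magnetically dominated),
    intermediate -> magneto-centrifugal, beta >~ 1 -> hot (thermally
    dominated). The thresholds are placeholders, not fitted boundaries."""
    if beta < beta_cold:
        return "cold"
    if beta < beta_hot:
        return "magneto-centrifugal"
    return "hot"
```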
We then describe a procedure for the identification of specific jet solutions to be compared to an astrophysical source, in our case the water fountain W43A.
W43A is believed to be an asymptotic giant branch star in the process of becoming a planetary nebula. During this current, short-lived phase, the source is launching collimated molecular jets, the nature of which is debated.
We assume that the jets of W43A are launched as disk-driven ionized inner flows, each surrounded by a molecular jet sheath.
Since the true nature or even the existence of such a jet core is unknown, we adopted the constraints on the molecular gas as upper/lower limits to the corresponding quantities of the atomic jet, namely the size, hydrogen number density, velocity and magnetic field strength, to identify possible jet configurations.
We conducted an exhaustive examination of our sample and we established that, given the observed molecular properties within the jets of W43A and our (large, but finite) collection of solutions, the most suitable jet model for the jets of W43A is a thermally-dominated jet configuration with a high injection speed, but not efficiently accelerating for most of its extent. We found that the strength of the toroidal component of the magnetic field is the parameter that most affects this comparison.
This procedure can be applied to other sources, for which the magnetic field has been difficult to measure, in order to determine a range of plausible magnetic field strengths given observational constraints on density, velocity and jet size.
In future works, we will expand our grid of solutions to the additional two dimensions, namely the radial scaling of the current, $F$, and the polytropic index, $\Gamma$, and compare the full sample to other astrophysical sources.
\section*{Acknowledgements}
CC and WV acknowledge support from the Swedish Research Council (VR).
\section*{Data Availability}
A catalogue of all the solutions that have been found is available on request to the main author.
\bibliographystyle{mnras}
\section{Introduction}
The goal of this article is to discuss the spectral properties
and the $L^\infty$-bounds associated to a mixed local and nonlocal problem,
also in relation to some concrete motivations arising from
population dynamics and mathematical biology.
The methodology that we exploit here relies on functional analysis
and methods from (classical and nonlocal) partial differential
equations. Given the mixed character of the operator taken into account
and the new set of external conditions,
the standard mathematical framework to deal with
partial and integro-differential equations needs to be conveniently
modified to suit this new scenario.\medskip
More specifically, in~\cite{VERO}, we have introduced
a new set of nonlocal Neumann conditions,
extending those previously set forth in~\cite{MR3651008},
with the aim of dealing with a mathematical problem
motivated by ethology and biology.
In particular, in~\cite{VERO}
a biological population was taken into consideration
within an environment which could be partially hostile.
The population competes for the resources
via a logistic equation and diffuses by a possible combination
of classical and nonlocal dispersal processes
(a detailed derivation of the diffusion model
is also presented in the appendix of~\cite{VERO}).
The population can be also provided by an additional birth growth
due to pollination, and the main question targeted in~\cite{VERO}
is whether or not it is possible to {\em rearrange the given
environmental resources (within given upper and lower constraints)
to allow for the survival of the species}.\medskip
The nonlinear
mathematical analysis developed in~\cite{VERO}
also relies on some auxiliary results
from the linear theory, such as {\em spectral decompositions}
and {\em uniform bounds for subsolutions}, which
have their independent interest. We collect here these
results, providing full proofs in detail.
\medskip
The setting in which we work is the following.
We let~$s\in(0,1)$ and~$\alpha$, $\beta\in[0,+\infty)$
with~$\alpha+\beta>0$,
and we consider the mixed operator
\begin{equation}\label{9uyhifd774} -\alpha\Delta +\beta(-\Delta)^s.\end{equation}
As customary, the operator~$(-\Delta)^s$
is the fractional Laplacian
$$ (-\Delta)^s u(x):=\frac12\,\int_{\mathbb{R}^n}\frac{2u(x)
-u(x+\zeta)-u(x-\zeta)}{|\zeta|^{n+2s}}\,d\zeta,$$
where other normalization constants have
been removed to ease the notation
(in any case, additional normalizing
constants do not affect our arguments,
and they can also be comprised into the parameter~$\beta$
in~\eqref{9uyhifd774} if one wishes to do so).\medskip
As a matter of fact, the theory that we develop
here, as well as in~\cite{VERO},
works in greater generality (e.g., one
can replace the fractional Laplacian with
a more general integro-differential operator
with only minor modifications in the main proofs),
but we rather limit ourselves to the paradigmatic
case of the fractional Laplacian for the sake of simplicity
in the exposition. Moreover, the results
obtained are new even in the case of ``purely nonlocal
diffusion'', i.e. when~$\alpha=0$
in~\eqref{9uyhifd774}.
\medskip
In terms of theory and applications,
we recall that operators with mixed classical
and fractional orders have been studied
under different points of views,
see for instance~\cite{MR2095633, MR2180302, MR2129093, MR2243708,
MR2422079, MR2542727,
MR2653895, MR2911421, MR2912450,
MR2928344, MR2963799,
MR3051400, MR3194684, MR3485125,
MR3950697, MR3912710,
2017arXiv170605306D, 2018arXiv181107667D, 2019arXiv190702495A,
biagvecc, ABATANGELO, CABRE}
and the references therein.
Besides their clear mathematical interest,
these operators find natural applications
in biology, in view of the long-jump dispersal
strategies followed by several species,
as confirmed by a number of experimental data, see e.g.~\cite{NATU},
and theoretically studied under several perspectives,
see e.g.~\cite{MR1636644, MR2332679, MR2411225, MR2601079, MR2897881,
MR2924452, MR3026598, MR3035974, MR3082317, MR3169773, MR3285831,
MR3498523, MR3579567, MR3590646,
MR3590678, MR3639140, MR3771424}
(other concrete applications arise in plasma physics,
see~\cite{PhysRevE.87.063106} and the references therein).
\medskip
As usual, the mathematical framework in~\eqref{9uyhifd774}
is endowed by a spatial domain
on which the corresponding equation takes place.
For this,
we take a bounded open set~$\Omega\subset\mathbb{R}^n$
of class~$C^1$.
When~$\beta=0$, we make the additional assumption that
\begin{equation}\label{connected}
{\mbox{$\Omega$ is connected.}}\end{equation}
{F}rom the biological point of view, $\Omega$
represents the natural environment inhabited by
a given biological
population, whose density is described
by a function~$u:\mathbb{R}^n\to\mathbb{R}$
(as customary in nonlocal problems,
one has to prescribe functions in all of the space
to make sense of the fractional diffusive operators). \medskip
We prescribe external conditions to~$u$ in order
to make~$\Omega$ an ecological niche.
To this end, see~\cite{VERO}, we
set a variational formulation related to the
operator in~\eqref{9uyhifd774} which
endows the equation in the set~$\Omega$
with a suitable Neumann condition. The functional
space that we consider is
\begin{equation}\label{Xdefab}
X_{\alpha,\beta}=X_{\alpha,\beta}(\Omega):=
\begin{dcases}
H^1(\Omega) & {\mbox{ if }} \;\beta=0,
\\
H^s_\Omega & {\mbox{ if }} \;\alpha=0,
\\
H^1(\Omega)\cap H^s_\Omega & {\mbox{ if }}\; \alpha \beta\neq 0,
\end{dcases}
\end{equation}
where $$
H^s_\Omega:=
\left\lbrace
u:\mathbb{R}^n\to\mathbb{R} \;{\mbox{ s.t. }}\;
u\in L^2(\Omega) \;{\mbox{ and }}\;
\iint_\mathcal{Q}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy <+\infty
\right\rbrace,
$$
and~${\mathcal{Q}}$ is the cross-shaped set
on~$\Omega$ given by
$$ {\mathcal{Q}}:=\big(\Omega\times\Omega\big)
\cup\big(\Omega\times(\mathbb{R}^n\setminus\Omega)\big)\cup
\big((\mathbb{R}^n\setminus\Omega)\times\Omega\big).$$
We observe
that~$X_{\alpha,\beta}$ is a Hilbert
space with respect to the scalar product
\begin{equation}\label{scalar}\begin{split}
(u,v)_{X_{\alpha,\beta}}&\;:=\int_\Omega u(x)v(x)\,dx+
\alpha \int_\Omega \nabla u(x) \cdot\nabla v(x)\,dx
\\
&\qquad+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy,
\end{split}\end{equation}
for every $u,v\in X_{\alpha,\beta}$.
We also define the seminorm
\begin{equation}\label{seminorm}
[u]^2_{X_{\alpha,\beta}}:=\frac\alpha2
\int_\Omega |\nabla u(x)|^2\,dx
+\frac{\beta}{4}
\iint_\mathcal{Q}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy.
\end{equation}
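In particular, comparing the two definitions above term by term, the norm induced by the scalar product decomposes as

```latex
\|u\|^2_{X_{\alpha,\beta}}
=(u,u)_{X_{\alpha,\beta}}
=\|u\|^2_{L^2(\Omega)}+2\,[u]^2_{X_{\alpha,\beta}},
```

so that the seminorm controls the full norm up to the $L^2$-term.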
Given~$f\in L^2(\Omega)$, we say
that~$u\in X_{\alpha,\beta}$ is a solution of
\begin{equation}\label{LAWEAKXA-S} -\alpha\Delta u+\beta(-\Delta)^s u=f\qquad{\mbox{ in }}\;\Omega\end{equation}
with~$(\alpha,\beta)$-Neumann condition
if
\begin{equation}\label{LAWEAKXA}
\alpha \int_\Omega \nabla u(x) \cdot\nabla v(x)\,dx
+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy
=\int_\Omega f(x)\,v(x)\,dx,\end{equation}
for every~$v\in X_{\alpha,\beta}$.
We remark that, formally, the external condition in~\eqref{LAWEAKXA}
can be detected by taking~$v$ with~$v=0$ in~$\mathbb{R}^n\setminus\overline{\Omega}$
(which produces a normal derivative prescription along~$\partial\Omega$)
and then by taking~$v=0$ in~$\overline\Omega$
(which produces a nonlocal prescription in~$\mathbb{R}^n\setminus\overline\Omega$):
that is, formally, the external condition in~\eqref{LAWEAKXA}
can be written in the form
\begin{equation}\label{NEU-3}
\begin{dcases}
\mathscr{N}_{s} u=0 & \qquad{\mbox{in }}\; \mathbb{R}^n \setminus \overline\Omega,
\\
\displaystyle\frac{\partial u}{\partial \nu}=0 &\qquad{\mbox{on }} \;\partial\Omega,\end{dcases}
\end{equation}
where~$\nu$ is the exterior normal to~$\Omega$,
and we use the notation
\begin{equation}\label{DRSPV}
\mathscr{N}_{s} u(x):=\int_\Omega \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy\qquad\qquad
{\mbox{for every }}\;x\in\mathbb{R}^n \setminus \overline\Omega,\end{equation}
and the first condition in~\eqref{NEU-3}
being dropped when~$\alpha=0$, the second condition
in~\eqref{NEU-3}
being dropped when~$\beta=0$.\medskip
We recall that the nonlocal Neumann prescription
in~\eqref{DRSPV} is precisely the one introduced in~\cite{MR3651008}
in light of probabilistic considerations
(i.e., a particle following a~$2s$-stable
process is sent back to the original domain
by following the same process).
Also, as shown in~\cite{MR3651008},
the setting in~\eqref{DRSPV} provides a coherent
functional analysis setting.\medskip
In the situation treated in this paper,
this setting is superimposed to a classical framework
when~$\alpha\ne0$: in particular,
we remark that, when~$\alpha\ne0$ and~$\beta\ne0$,
both the prescriptions in~\eqref{NEU-3}
are in force, but they do not cause any overdetermined
conditions, and indeed, as shown in~\cite{VERO},
the notion of solutions in this case is well-posed.\medskip
Moreover, we stress that the setting in~\eqref{LAWEAKXA}
provides a ``zero-flux''
condition, in the sense that if~\eqref{LAWEAKXA-S}
has a solution, then necessarily
\begin{equation}\label{ZERAJS-jPP} \int_\Omega f(x)\,dx=0,\end{equation}
as it can be seen by taking~$v:=1$ in~\eqref{LAWEAKXA}.
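Indeed, the constant function $v\equiv1$ belongs to $X_{\alpha,\beta}$, and for this choice both terms on the left-hand side of~\eqref{LAWEAKXA} vanish, since $\nabla v\equiv0$ and $v(x)-v(y)=0$; hence

```latex
0=\alpha \int_\Omega \nabla u(x) \cdot\nabla v(x)\,dx
+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy
=\int_\Omega f(x)\,dx.
```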
\medskip
We now describe in detail the results
stated and proved in this paper.
\subsection{Eigenvalue and eigenfunctions
for the $(\alpha,\beta)$-Neumann condition}
The first set of results that we discuss
here is related to a generalized eigenvalue problem
associated to equation~\eqref{LAWEAKXA-S}
with~$(\alpha,\beta)$-Neumann condition.
Namely, we let~$m:\Omega\to\mathbb{R}$ and
we consider the weighted eigenvalue equation
\begin{equation}\label{probauto}
\begin{dcases}
-\alpha\Delta u +\beta(-\Delta)^su= \lambda mu & \quad{\mbox{
in }}\;\Omega,
\\
{\mbox{with $(\alpha,\beta)$-Neumann condition.}}\end{dcases}
\end{equation}
According to~\eqref{LAWEAKXA}
the notion of solution in~\eqref{probauto} is in the weak sense
in the space~$X_{\alpha,\beta}$: namely
we say that~$u\in X_{\alpha,\beta}$ is a solution of~\eqref{probauto}
if
\begin{equation}\label{ORAGSBNmer}\alpha \int_\Omega \nabla u(x) \cdot\nabla v(x)\,dx
+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy
=\lambda \int_\Omega m(x) u(x)v(x)\,dx,\end{equation}
for every~$v\in X_{\alpha,\beta}$.
\medskip
To deal with the integrability condition of the weight~$m$,
it is convenient to
consider the following ``critical'' exponent:
\begin{equation}\begin{split}\label{qbar}
\underline{q}:=\;&\begin{dcases}
\displaystyle\frac{2^*}{2^*-2} & {\mbox{ if $\beta=0$ and~$n>2$}},\\
\displaystyle\frac{2^*_s}{2^*_s-2} & {\mbox{ if $\beta\ne0$ and~$n>2s$}},\\
1 & {\mbox{ if $\beta=0$ and~$n\le2$, or
if $\beta\ne0$ and~$n\le2s$}},\end{dcases}\\=\;&
\begin{dcases}
\displaystyle\frac{n}{2} & {\mbox{ if $\beta=0$ and~$n>2$}},\\
\displaystyle\frac{n}{2s} & {\mbox{ if $\beta\ne0$ and~$n>2s$}},\\
1 & {\mbox{ if $\beta=0$ and~$n\le2$, or
if $\beta\ne0$ and~$n\le2s$.}}
\end{dcases}
\end{split}\end{equation}
As customary, the exponent~$2^*_s$ denotes the fractional
Sobolev critical exponent for~$n>2s$
and it is equal to~$\frac{2n}{n-2s}$. Similarly,
the exponent~$2^*$ denotes the classical
Sobolev critical exponent for~$n>2$
and it is equal to~$\frac{2n}{n-2}$.
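The second expression in~\eqref{qbar} follows from the first by direct computation; for instance, when $\beta\neq0$ and $n>2s$,

```latex
\frac{2^*_s}{2^*_s-2}
=\frac{\frac{2n}{n-2s}}{\frac{2n}{n-2s}-2}
=\frac{2n}{2n-2(n-2s)}
=\frac{n}{2s},
```

and the case $\beta=0$, $n>2$ is analogous, with $s$ replaced by $1$.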
Furthermore, we suppose that
\begin{equation}\label{feuwtywvv123445}
m\in L^q(\Omega),\quad {\mbox{for some $q \in \big(\underline{q},+\infty\big]$,}}
\end{equation}
where~$\underline{q}$ is given in~\eqref{qbar}.
In this setting, problem \eqref{probauto}
admits a spectral decomposition of classical flavor,
according to the following result:
\begin{prop}\label{PROAUTOVA}
Suppose that~$m^+$, $m^-\not\equiv 0$ and\footnote{As customary, we use the standard
notation
$$ m^+(x):=\max\{0,m(x)\}\qquad {\mbox{ and }}\qquad
m^-(x):=\max\{0,-m(x)\}. $$} that
\begin{equation}\label{yt5645867600000}
\int_\Omega m(x)\,dx\neq 0.\end{equation}
Then, problem \eqref{probauto} admits two unbounded sequences of
eigenvalues:
\[
\cdots\le\lambda_{-2}\leq \lambda_{-1}<\lambda_0=0
<\lambda_1\leq
\lambda_2 \le\cdots\;\;.
\]
In particular, if
$$\int_\Omega m(x)\,dx<0,$$ then
\begin{equation}\label{lopouygbv}
\lambda_1=\min_{u\in X_{\alpha,\beta}}
\left\lbrace [u]^2_{X_{\alpha,\beta}}\, {\mbox{ s.t. }}
\int_\Omega m(x)u^2(x)\,dx=1 \right\rbrace
\end{equation}
where we use the notation in~\eqref{seminorm}.
If instead
$$\int_\Omega m(x)\,dx>0,$$ then
\[
\lambda_{-1}=-\min_{u\in X_{\alpha,\beta}}
\left\lbrace [u]^2_{X_{\alpha,\beta}} \,{\mbox{ s.t. }}
\int_\Omega m(x)u^2(x)\,dx=-1 \right\rbrace.
\]
\end{prop}
The first positive eigenvalue $\lambda_1$, as given by
Proposition~\ref{PROAUTOVA}, has the following structural
properties:
\begin{prop}\label{prop:lambda}
Suppose that~$m^+\not\equiv 0$ and
$$\int_{\Omega} m(x)\,dx<0.$$
Then,
the first positive eigenvalue $\lambda_1$ of \eqref{probauto} is
simple, and the first eigenfunction $e$ can be taken such that~$e\ge0$.
A similar statement holds if $m^-\not\equiv 0$ and
$$ \int_{\Omega} m(x)\,dx>0.$$
\end{prop}
To deal with the eigenvalue problem in~\eqref{probauto},
it is convenient to recall the notation in~\eqref{Xdefab} and
to introduce the space
\begin{equation}\label{defV}
V_m:=\left\lbrace u\in X_{\alpha,\beta}\,{\mbox{ s.t. }}
\int_\Omega m(x)u(x)\,dx=0 \right\rbrace.
\end{equation}
To ease the notation, we will simply write~$V$ instead of~$V_m$ in what follows.
We observe that, in view of~\eqref{ZERAJS-jPP},
\begin{equation}\label{ALLEI}
{\mbox{all the eigenfunctions of problem~\eqref{probauto}
belong to~$V$.}}
\end{equation}
As we will see in Corollary~\ref{G:BOUND},
a global bound holds true for these eigenfunctions.
To obtain this bound, we develop a general theory,
of independent interest, to bound globally from below
the weak subsolutions that fulfill the $(\alpha,\beta)$-Neumann
conditions, as we now discuss in detail.
\subsection{Global uniform bounds for
subsolutions under $(\alpha,\beta)$-Neumann
condition}
We give here an $L^\infty$-result for solutions,
and more general, subsolutions of
equation~\eqref{LAWEAKXA-S}
under~$(\alpha,\beta)$-Neumann condition.
To apply this bound to the eigenfunctions
of problem~\eqref{probauto}, it is also convenient to allow an additional
linear term in the equation that we take into account.
The result that we have is the following one:
\begin{theorem}\label{OSC55}
Let~$V$ be as in \eqref{defV} and~$\underline{q}$ be as in~\eqref{qbar}.
Let~$q\in \left(\underline{q},+\infty\right)$ and~$c$, $f\in L^q(\Omega)$.
Let~$u\in V$ satisfy
\begin{equation}\label{WEAK-DGSUBSOL}
\begin{split}&
\alpha \int_{\Omega} \nabla u \cdot\nabla { v }\,dx
+\frac{\beta}{2}
\iint_{{\mathcal{Q}}}\frac{(u(x)-u(y))({ v }(x)-{ v }(y))}{|x-y|^{{{n}}+2s}}\,dx\,dy
\\&\qquad\le
\int_{\Omega}\big(c(x)u(x)+ f(x)\big)\,{ v }(x)\,dx
\end{split}
\end{equation}
for each~${ v }\in X_{\alpha,\beta}$ such that~${ v }\ge0$ in~$\Omega$.
Then, there exists~$C>0$,
depending on~${n}$, $\alpha$, $\beta$,
$q$, $\Omega$, $\|c\|_{L^q(\Omega)}$ and~$m$ such that
\begin{equation}\label{BOU-o1}
\sup_{\Omega} u^+\le
C\,\left(
\| u^+\|_{L^2(\Omega)}+\|f\|_{L^q(\Omega)}
\right).
\end{equation}
\end{theorem}
In a forthcoming paper, we plan to use Theorem~\ref{OSC55}
as the cornerstone for a regularity theory
for mixed equations under $(\alpha,\beta)$-Neumann conditions.\medskip
As a consequence of~\eqref{ALLEI}
and Theorem~\ref{OSC55} (applied with~$f:=0$ and~$c:=\lambda m$),
we easily obtain the following global bound for eigenfunctions:
\begin{cor}\label{G:BOUND}
All the eigenfunctions of problem~\eqref{probauto}
belong to~$L^\infty(\Omega)$.\end{cor}
In the rest of the paper, we provide full
detailed proofs for
Propositions~\ref{PROAUTOVA}
and~\ref{prop:lambda} (in Section~\ref{AUTOPER})
and for
Theorem~\ref{OSC55} (in Section~\ref{KM:09009936523846765}).
\section{Eigenvalues and eigenfunctions and proof of Propositions~\ref{PROAUTOVA}
and~\ref{prop:lambda}}\label{AUTOPER}
The proofs of Propositions~\ref{PROAUTOVA}
and~\ref{prop:lambda} rely on classical functional analysis,
revisited
in a mixed local-nonlocal framework.
We start these arguments
by pointing out that a Poincar\'e-type inequality holds in the space~$V$
introduced in~\eqref{defV}:
\begin{lem}\label{POI66}
Let~$m$ be such that
\begin{equation}\label{yt5645867600000PRE}
\int_\Omega m(x)\,dx\neq 0.\end{equation}
Then, recalling the notation in~\eqref{seminorm}, we have that
\begin{equation}\label{poincare}
\int_\Omega u^2(x)\, dx\leq C [u]^2_{X_{\alpha,\beta}},
\end{equation}
for every $u\in V$,
where $C>0$ depends only on~$n$, $\Omega$, $s$ and~$m$.
\end{lem}
\begin{proof}
We argue
by contradiction and we suppose that
there exists a sequence of functions~$u_k\in V$ such that
\begin{equation}\label{48450y95uyhjr}
\int_\Omega u_k^2(x)\,dx=1
\end{equation}
and
\begin{equation}\label{poinc1}
[u_k]^2_{X_{\alpha,\beta}}<\frac{1}{k}.
\end{equation}
In particular, the sequence $(u_k)_k$ is bounded in
$X_{\alpha,\beta}$ uniformly in~$k$. As a consequence, from the compact
embedding of
$X_{\alpha,\beta}$ in $L^2(\Omega)$ (see e.g. Corollary~7.2
in~\cite{MR2944369} if~$\alpha=0$), we have that,
up to a subsequence, $u_k$ converges to some function~$u\in L^2(\Omega)$
as~$k\to+\infty$. Moreover, $u_k$ converges to~$u$
a.e. in $\Omega$ as~$k\to+\infty$, and~$|u_k|\le h$ for some~$h\in L^2(\Omega)$
for every~$k\in\mathbb{N}$ (see e.g. Theorem~IV.9 in~\cite{MR697382}).
As a result, since $u_k\in V$, we can apply the Dominated Convergence Theorem
to conclude that
\begin{equation}\label{poinc2}
\int_\Omega m(x)u(x)\,dx=0.
\end{equation}
In addition, we deduce from~\eqref{48450y95uyhjr} that
\begin{equation}\label{48450y95uyhjrbis}
\int_\Omega u^2(x)\,dx=1.\end{equation}
On the other hand, by Fatou's Lemma, the lower semicontinuity
of the~$L^2$-norm and~\eqref{poinc1}, we have that
\begin{equation}\begin{split}\label{45wejyuiniunliol}
&\frac{\alpha}2\int_{\Omega}|\nabla u|^2\,dx
+\frac\beta4\int_\Omega\int_\Omega\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy
\\&\qquad \le \liminf_{k\to+\infty}\left(
\frac{\alpha}2\int_{\Omega}|\nabla u_k|^2\,dx+\frac{\beta}2
\int_\Omega\int_\Omega\frac{|u_k(x)-u_k(y)|^2}{|x-y|^{n+2s}}\,dx\,dy\right)
\le \lim_{k\to+\infty}\frac1k=0.
\end{split}\end{equation}
Now, if~$\beta=0$, this says that
$$ \int_{\Omega}|\nabla u|^2\,dx=0,$$
which implies that~$u$ is constant in~$\Omega$, thanks to~\eqref{connected}.
If instead~$\beta\ne0$, we have
from~\eqref{45wejyuiniunliol} that
$$ \int_\Omega\int_\Omega\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy=0,$$
which gives that~$u$ is constant in $\Omega$.
Hence, in both cases, we have that~$u$ is constant in~$\Omega$.
Moreover, we observe that~$u$ cannot vanish identically in~$\Omega$, in light
of~\eqref{48450y95uyhjrbis}. Using these observations into~\eqref{poinc2}
we conclude that
$$ \int_\Omega m(x)\,dx=0,$$
which is in contradiction with~\eqref{yt5645867600000PRE}.
This completes the proof of
formula~\eqref{poincare}.
\end{proof}
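To illustrate the role of the assumption in~\eqref{yt5645867600000PRE}, one can consider a hypothetical two-point discrete analogue of Lemma~\ref{POI66}. The following Python sketch (given only as an illustration, playing no role in the proofs; the function name and the sample weights are ours) computes the sharp constrained Poincar\'e constant in this toy model:

```python
def discrete_poincare_constant(m1, m2):
    # Two-point toy analogue of Lemma POI66: for vectors (u1, u2) with
    # m1*u1 + m2*u2 = 0, we bound u1^2 + u2^2 by C*(u1 - u2)^2.
    # Writing u2 = -(m1/m2)*u1 (here we assume m2 != 0), the sharp
    # constant is C = (1 + r^2)/(1 + r)^2 with r := m1/m2, and it is
    # finite precisely when m1 + m2 != 0 (the analogue of int_Omega m != 0).
    if m1 + m2 == 0:
        raise ValueError("Poincare-type inequality fails when m1 + m2 = 0")
    r = m1 / m2
    return (1 + r * r) / (1 + r) ** 2

ratio_ok = True
for (m1, m2) in ((1.0, 2.0), (3.0, -1.0), (0.5, 0.25)):
    C = discrete_poincare_constant(m1, m2)
    u1 = 1.0
    u2 = -(m1 / m2) * u1   # enforce the constraint m1*u1 + m2*u2 = 0
    ratio_ok = ratio_ok and u1 * u1 + u2 * u2 <= C * (u1 - u2) ** 2 + 1e-9
```

In this toy model, letting~$m_1+m_2\to0$ makes the constant blow up, mirroring the contradiction argument in the proof above.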
We notice that, thanks to~\eqref{poincare}, the seminorm
in~\eqref{seminorm} is actually a norm on the space~$V$ and
it is equivalent to the norm on~$X_{\alpha,\beta}$
given by~\eqref{scalar}.
Moreover, the scalar product defined as
\begin{equation}\label{uf4t}
\langle u,v\rangle_{X_{\alpha,\beta}}:=\alpha \int_\Omega \nabla u\cdot\nabla v\,dx
+\frac\beta2 \iint_\mathcal{Q}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy \end{equation}
is equivalent to the one in $X_{\alpha,\beta}$
given by~\eqref{scalar}. In this setting, we also denote
$$ \|u\|_V:=\sqrt{\langle u,u\rangle_{X_{\alpha,\beta}}}.$$
To complete the functional setting for the eigenvalue problem
in~\eqref{probauto}, we also remark that~$V$ is closed with respect to the weak
convergence:
\begin{lem}\label{lemmaclosed}
The space~$V$ introduced in~\eqref{defV} is closed with respect
to the weak convergence in~$V$.
\end{lem}
\begin{proof}
We take a sequence of functions~$u_j\in V$ weakly converging to some~$u$,
and we claim that~$u\in V$. Indeed, we have that~$u_j$ weakly
converges to~$u$ in~$X_{\alpha,\beta}$,
and~$u\in X_{\alpha,\beta}$. Furthermore,
by the compact embeddings (see e.g. Corollary~7.2
in~\cite{MR2944369} if~$\alpha=0$),
$u_j \to u$ in~$L^p(\Omega)$ for any~$p\in [1,2^*_s)$ if~$\alpha=0$
and for any~$p\in [1,2^*)$ if~$\alpha\ne0$.
Moreover, $u_j$ converges to~$u$
a.e. in $\Omega$, and~$|u_j|\le h$ for some~$h\in L^p(\Omega)$
(see e.g. Theorem~IV.9 in~\cite{MR697382}).
As a result, since $u_j\in V$, recalling~\eqref{feuwtywvv123445},
we can apply the Dominated Convergence Theorem
to conclude that
\begin{equation*}
\int_\Omega m(x)u(x)\,dx=0,
\end{equation*}
which proves that~$u\in V$, thus completing the proof of Lemma~\ref{lemmaclosed}.
\end{proof}
With this preliminary work, we can give the proofs
of Propositions~\ref{PROAUTOVA} and~\ref{prop:lambda}
by relying on functional analysis methods:
\begin{proof}[Proof of Proposition~\ref{PROAUTOVA}]
We notice that
\begin{equation}\label{notice}
{\mbox{the eigenvalue $\lambda_0=0$ is simple and has only
constant functions as eigenfunctions.}}\end{equation}
Indeed, if~$u$ is an eigenfunction
associated to~$\lambda_0=0$, then, by~\eqref{ORAGSBNmer},
\begin{equation}\label{jietyugvsde3957}
\alpha \int_\Omega \nabla u(x) \cdot\nabla v(x)\,dx
+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy=0,
\end{equation}
for all functions~$v\in X_{\alpha,\beta}$.
In particular, taking~$u$ as test function in~\eqref{jietyugvsde3957},
we obtain that
\begin{equation}\label{jietyugvsde395722}
\alpha \int_\Omega |\nabla u(x)|^2\,dx
+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy=0.
\end{equation}
Now, if~$\beta=0$, formula~\eqref{jietyugvsde395722} implies that
$$ \int_\Omega |\nabla u(x)|^2\,dx =0.$$
This, together with~\eqref{connected}, gives that~$u$ is constant in~$\Omega$,
thus proving~\eqref{notice} in this case.
If instead~$\beta\neq0$, we deduce from~\eqref{jietyugvsde395722} that
$$ \iint_\mathcal{Q}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy=0,$$
which gives that~$u$ is constant, thus proving~\eqref{notice} in this case as well.
Now, to obtain the other
eigenvalues, we restrict to the space~$V$ introduced in~\eqref{defV}.
We point out that the assumption in~\eqref{yt5645867600000}
guarantees that the Poincar\'e inequality in~\eqref{poincare}
holds true on the space~$V$.
Also, we define the linear operator $T:V\to V$ by
\begin{equation}\label{defT}
\langle Tv,w\rangle_{X_{\alpha,\beta}}=\int_\Omega m(x)v(x)w(x)\,dx,
\end{equation}
for every~$v$, $w\in V$.
It is easy to see that~$T$ is symmetric. Furthermore,
we claim that
\begin{equation}\label{compact}
{\mbox{$T$ is compact.}}\end{equation}
To prove this, we let $(u_j)_j$ be a bounded sequence in $V$.
Then, $(u_j)_j$ is a bounded sequence in~$X_{\alpha,\beta}$, and therefore there exists~$u\in
X_{\alpha,\beta}$ such that~$u_j$ weakly converges to~$u$ in~$X_{\alpha,\beta}$
as~$j\to+\infty$. Moreover, from Lemma~\ref{lemmaclosed}, we have that~$u\in V$.
Now, by the compact embeddings,
\begin{equation}\label{poit78676}
{\mbox{$u_j \to u$ in $L^p(\Omega)$ for any~$p\in [1,2^*_s)$ if~$\alpha=0$
and for any~$p\in [1,2^*)$ if~$\alpha\ne0$.}}\end{equation}
Using \eqref{defT} with $v:=u_j-u$ and $w:=Tu_j-Tu$,
we deduce that
\begin{equation}\label{r435fnasdaw25}
\|Tu_j-Tu\|_V^2=\langle T(u_j-u), Tu_j-Tu\rangle_{X_{\alpha,\beta}}
=\int_\Omega m(u_j-u)\big(Tu_j-Tu\big)\,dx.\end{equation}
Now we apply
H\"older's inequality with exponents~$q$, as given in~\eqref{feuwtywvv123445},
$p$, as given by~\eqref{poit78676}, and either~$2^*_s$ if~$\alpha=0$
or~$2^*$ if~$\alpha\ne0$. In this way, using also the continuous embedding
of~$V$ either in~$L^{2^*_s}(\Omega)$ if~$\alpha=0$
or~$L^{2^*}(\Omega)$ if~$\alpha\ne0$,
we obtain from~\eqref{r435fnasdaw25}
that
$$ \|Tu_j-Tu\|_V^2
\leq C\|m\|_{L^q(\Omega)}\|u_j-u\|_{L^p(\Omega)}
\|Tu_j-Tu\|_V,
$$
for some positive constant~$C$ independent of~$j$.
This implies that
\[
\|Tu_j-Tu\|_V\leq C \|m\|_{L^q(\Omega)}\|u_j-u\|_{L^p(\Omega)}.
\]
Accordingly, recalling~\eqref{poit78676}, we obtain
that~$Tu_j\to Tu$ in $V$ as~$j\to+\infty$.
This completes the proof of~\eqref{compact}.
Now we observe that,
in light of~\eqref{ORAGSBNmer}, and recalling~\eqref{uf4t} and~\eqref{defT},
we can write the weak formulation of problem \eqref{probauto} as
\begin{equation}\label{SJNDI-32i3rtjrgnnvnbn}
\langle u,v\rangle_{X_{\alpha,\beta}} =\lambda \langle Tu,v\rangle_{X_{\alpha,\beta}}
\quad {\mbox{ for all }} v\in X_{\alpha,\beta}.
\end{equation}
Therefore, we can apply standard results in spectral theory of
self-adjoint and compact operators to obtain the existence and the
variational characterization of eigenvalues (see
e.g.~\cite[Propo\-si\-tion 1.10]{defi}; see also~\cite{MR576277}
and the references therein for related classical results).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:lambda}]
We first observe that if~$\beta\ne0$ and~$w$ is an eigenfunction
according to~\eqref{probauto}, then
\begin{equation}\label{ENOUGH}
{\mbox{$w\equiv0$ in~$\Omega$ entails that~$w\equiv0$
in the whole of~$\mathbb{R}^n$.}}
\end{equation}
To check this, suppose that~$w\equiv0$ in~$\Omega$
and write~\eqref{probauto}
explicitly as in~\eqref{ORAGSBNmer}, namely
\begin{equation}\label{WEAKSOL-mla}
\begin{split}&
\alpha \int_\Omega \nabla w(x) \cdot\nabla v(x)\,dx
+\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(w(x)-w(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dx\,dy
\\&\qquad=\lambda
\int_\Omega m(x)\,w(x)\,v(x)\,dx
\end{split}
\end{equation}
for all functions~$v\in X_{\alpha,\beta}$.
In particular, choosing~$v:=w$ in~\eqref{WEAKSOL-mla},
$$ 0=
\frac{\beta}{2}
\iint_\mathcal{Q}\frac{(w(x)-w(y))^2}{|x-y|^{n+2s}}\,dx\,dy=
\beta
\iint_{\Omega\times(\mathbb{R}^n\setminus\Omega)}
\frac{w^2(y)}{|x-y|^{n+2s}}\,dx\,dy.$$
Whence, if~$\beta\ne0$, it follows that~$w(y)=0$ for a.e.~$y\in\mathbb{R}^n\setminus\Omega$,
and therefore~$w\equiv0$ in the whole of~$\mathbb{R}^n$,
thus establishing~\eqref{ENOUGH}.
Now, we prove that
\begin{equation}\label{posi22}
{\mbox{all the eigenfunctions corresponding to~$\lambda_1$ do not change
sign.}}
\end{equation}
For this, we let~$u$ be an eigenfunction corresponding to
the first positive eigenvalue $\lambda_1$.
In particular, recalling~\eqref{lopouygbv},
we have that~$u\in X_{\alpha,\beta}$ and
\begin{equation}\label{forse}
\int_{\Omega } m(x)u^2(x)\,dx=1.\end{equation}
If~$u$ is either nonnegative or nonpositive, then~\eqref{posi22} is established.
Hence, we are left with the case in which~$u$ changes sign in~$\Omega$.
In this case, we have that both~$u^+\not\equiv0$
and~$u^-\not\equiv 0$, and we claim that
\begin{equation}\label{posi33}
{\mbox{both~$u^+$ and $u^-$ are
eigenfunctions corresponding to~$\lambda_1$.}}
\end{equation}
To this end,
we notice that
\begin{equation}\label{49vbhgjhb}
\int_{\Omega} u^2(x)\,dx =\int_\Omega (u^+(x))^2\,dx + \int_\Omega (u^-(x))^2\,dx.
\end{equation}
Moreover, recalling~\eqref{seminorm}, by inspection one sees that
\begin{equation}\begin{split}\label{iehtierhgg}&
[u]_{X_{\alpha,\beta}}^2\\=\;& \alpha \int_\Omega |\nabla u|^2\,dx
+\frac\beta2 \iint_\mathcal{Q}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dx\,dy\\
=\;&\alpha \int_\Omega \left(|\nabla u^+|^2+|\nabla u^-|^2\right)\,dx
+\frac\beta2 \iint_\mathcal{Q}\frac{|u^+(x)-u^+(y)|^2}{|x-y|^{n+2s}}\,dx\,dy\\
&\qquad + \frac\beta2 \iint_\mathcal{Q}\frac{|u^-(x)-u^-(y)|^2}{|x-y|^{n+2s}}\,dx\,dy
- \beta \iint_\mathcal{Q}\frac{(u^+(x)-u^+(y))(u^-(x)-u^-(y))}{|x-y|^{n+2s}}\,dx\,dy
\\ \geq\;&[u^+]_{X_{\alpha,\beta}}^2+[u^-]_{X_{\alpha,\beta}}^2.
\end{split}\end{equation}
This and~\eqref{49vbhgjhb} imply that~$u^+$, $u^-\in X_{\alpha,\beta}$.
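The inequality in the last line of~\eqref{iehtierhgg} rests on the elementary pointwise fact that, since~$u^+(x)u^-(x)=0$ for every~$x$, the cross term satisfies $(u^+(x)-u^+(y))(u^-(x)-u^-(y))=-u^+(x)u^-(y)-u^+(y)u^-(x)\le0$. The following Python sketch verifies this identity on random samples (a hypothetical sanity check, not part of the formal argument):

```python
import random

def plus(t):
    # positive part t^+ = max(t, 0)
    return max(t, 0.0)

def minus(t):
    # negative part t^- = max(-t, 0), so that t = t^+ - t^-
    return max(-t, 0.0)

def cross_term(ux, uy):
    # (u^+(x) - u^+(y)) (u^-(x) - u^-(y))
    return (plus(ux) - plus(uy)) * (minus(ux) - minus(uy))

def rhs(ux, uy):
    # -u^+(x) u^-(y) - u^+(y) u^-(x)
    return -plus(ux) * minus(uy) - plus(uy) * minus(ux)

random.seed(0)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
# the identity holds exactly, and the cross term is always nonpositive
checks = all(abs(cross_term(a, b) - rhs(a, b)) < 1e-12 and cross_term(a, b) <= 1e-12
             for a, b in samples)
```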
Also, in light of~\eqref{forse}, we have that
\[ 1=
\int_\Omega m(x)u^2(x)\,dx=\int_\Omega m(x)(u^+(x))^2\,dx
+\int_\Omega m(x)(u^-(x))^2\,dx.
\]
Hence, using this and~\eqref{iehtierhgg}, and
recalling the characterization of~$\lambda_1$ given in~\eqref{lopouygbv},
\begin{equation}\label{disug+-}
\frac{1}{\lambda_1}=\frac{1}{[u]_{X_{\alpha,\beta}}^2}=
\frac{\displaystyle\int_\Omega m(x)u^2(x)\,dx}{[u]_{X_{\alpha,\beta}}^2}
\leq \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx
+\int_\Omega m(x)(u^-(x))^2\,dx}
{[u^+]_{X_{\alpha,\beta}}^2+[u^-]_{X_{\alpha,\beta}}^2}.
\end{equation}
Now we claim that, for any $a_1$, $a_2$, $b_1$, $b_2>0$, either
\begin{equation}\label{ab1}
\frac{a_1+a_2}{b_1+b_2}=\frac{a_1}{b_1}=\frac{a_2}{b_2},
\end{equation}
or
\begin{equation}\label{ab2}
\frac{a_1+a_2}{b_1+b_2}<\max\left\lbrace\frac{a_1}{b_1},
\frac{a_2}{b_2}\right\rbrace.
\end{equation}
Indeed, if $\frac{a_1}{b_1}=\frac{a_2}{b_2}$, then
$$ \frac{a_1+a_2}{b_1+b_2}=\frac{a_2}{b_2}\cdot \frac{\frac{a_1}{a_2}+1}{\frac{b_1}{b_2}+1}=
\frac{a_2}{b_2}\cdot\frac{\frac{a_1}{a_2}+1}{\frac{a_1}{a_2}+1}=\frac{a_2}{b_2},$$
that is~\eqref{ab1}. If instead we suppose that
$\frac{a_1}{b_1}>\frac{a_2}{b_2}$ (the case in which $\frac{a_1}{b_1}<\frac{a_2}{b_2}$
is similar), then
\[
\frac{a_1+a_2}{b_1+b_2}=\frac{b_1(a_1+a_2)}{b_1(b_1+b_2)}
<\frac{a_1b_1+a_1b_2}{b_1(b_1+b_2)}=\frac{a_1(b_1+b_2)}{b_1(b_1+b_2)}=
\frac{a_1}{b_1},
\]
which proves~\eqref{ab2}.
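As a quick numerical sanity check of the dichotomy in~\eqref{ab1} and~\eqref{ab2} (a hypothetical Python sketch on randomly generated positive values, not a substitute for the proof just given):

```python
import random

def mediant(a1, a2, b1, b2):
    # the "mediant" (a1 + a2)/(b1 + b2) appearing in (ab1)-(ab2)
    return (a1 + a2) / (b1 + b2)

random.seed(1)
ok = True
for _ in range(1000):
    a1, a2, b1, b2 = (random.uniform(0.1, 10.0) for _ in range(4))
    m = mediant(a1, a2, b1, b2)
    r1, r2 = a1 / b1, a2 / b2
    if abs(r1 - r2) < 1e-9:
        ok = ok and abs(m - r1) < 1e-6      # equality case, as in (ab1)
    else:
        ok = ok and m < max(r1, r2)         # strict case, as in (ab2)
```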
Now, if we suppose that
$$ \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}{[u^+]_{X_{\alpha,\beta}}^2}>
\frac{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}{[u^-]_{X_{\alpha,\beta}}^2}$$
then we deduce from~\eqref{disug+-} and~\eqref{ab2},
applied here with
\begin{eqnarray*}
&& a_1:= \int_\Omega m(x)(u^+(x))^2\,dx, \quad
a_2:=\int_\Omega m(x)(u^-(x))^2\,dx, \\&& b_1:=[u^+]_{X_{\alpha,\beta}}^2,\quad
{\mbox{ and }}\quad b_2:=[u^-]_{X_{\alpha,\beta}}^2,\end{eqnarray*}
that
$$\frac{1}{\lambda_1} < \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}{[u^+]_{X_{\alpha,\beta}}^2},$$
which contradicts the minimality of~$\lambda_1$.
Similarly, if
$$ \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}{[u^+]_{X_{\alpha,\beta}}^2}<
\frac{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}{[u^-]_{X_{\alpha,\beta}}^2},$$ then
$$\frac{1}{\lambda_1} < \frac{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}{[u^-]_{X_{\alpha,\beta}}^2},$$
which is again a contradiction with the minimality of~$\lambda_1$.
As a consequence, we have that
$$ \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}{[u^+]_{X_{\alpha,\beta}}^2}=
\frac{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}{[u^-]_{X_{\alpha,\beta}}^2}.$$
In this case, we can apply~\eqref{ab1} and we obtain from~\eqref{disug+-} that
$$\frac{1}{\lambda_1} \le \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}{[u^+]_{X_{\alpha,\beta}}^2}
=\frac{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}{[u^-]_{X_{\alpha,\beta}}^2},$$
that is
\begin{equation}\label{-678686uhi}
\lambda_1\ge \frac{[u^+]_{X_{\alpha,\beta}}^2}{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}
=\frac{[u^-]_{X_{\alpha,\beta}}^2}{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}.
\end{equation}
Now, if the inequality in~\eqref{-678686uhi}
is strict, we have a contradiction
with the minimality of~$\lambda_1$. Accordingly,
$$
\lambda_1= \frac{[u^+]_{X_{\alpha,\beta}}^2}{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx}
=\frac{[u^-]_{X_{\alpha,\beta}}^2}{\displaystyle\int_\Omega m(x)(u^-(x))^2\,dx}.
$$
This implies that~$u^+$ and $u^-$ are both
eigenfunctions corresponding to~$\lambda_1$ (unless they are trivial)
thus establishing~\eqref{posi33}.
Our next claim is to prove that
\begin{equation}\label{posi44}
{\mbox{either~$u\equiv u^+$ or~$u\equiv -u^-$.}}\end{equation}
We observe that, if~$\beta=0$, then~\eqref{posi44} follows
from the standard maximum principle for the Laplace operator
(see e.g.~\cite{MR2597943}).
If instead~$\beta\ne0$,
we use~\eqref{posi33} and~\eqref{disug+-}
to see that
\begin{equation*}
\frac{1}{\lambda_1}
\leq \frac{\displaystyle\int_\Omega m(x)(u^+(x))^2\,dx
+\int_\Omega m(x)(u^-(x))^2\,dx}
{[u^+]_{X_{\alpha,\beta}}^2+[u^-]_{X_{\alpha,\beta}}^2}
=\frac{1}{\lambda_1}.
\end{equation*}
In particular, equality holds in the latter formula, and accordingly,
recalling~\eqref{iehtierhgg}, we have that
$$ 0=-\iint_\mathcal{Q}\frac{(u^+(x)-u^+(y))(u^-(x)-u^-(y))}{|x-y|^{n+2s}}\,dx\,dy
=\iint_\mathcal{Q}\frac{2u^+(x)u^-(y)}{|x-y|^{n+2s}}\,dx\,dy.
$$
This gives that
\begin{equation}\label{setuerghdfjbv}
u^+(x)u^-(y) =0\qquad {\mbox{ for all }} (x,y)\in\mathcal{Q}.
\end{equation}
We can also suppose that~$u^+\not\equiv0$ (in~$\mathbb{R}^n$ if~$\beta\ne0$
and in~$\Omega$ if~$\beta=0$),
otherwise~$u\equiv -u^-$ and we are done.
This and~\eqref{ENOUGH} give that~$u^+\not\equiv0$ in~$\Omega$.
Hence, we can take~$\bar{x}\in\Omega$ such that~$u^+(\bar{x})\ne0$.
{F}rom this and~\eqref{setuerghdfjbv}, we obtain that
\begin{equation*}
u^+(\bar{x})u^-(y) =0\qquad {\mbox{ for all }} y\in\mathbb{R}^n.
\end{equation*}
As a consequence, we find that~$u^-\equiv0$ in~$\mathbb{R}^n$,
which establishes~\eqref{posi44}.
In turn, the claim in~\eqref{posi44} implies
the one in~\eqref{posi22}, as desired.
We now prove that~$\lambda_1$ is simple.
First we show that
\begin{equation}\label{GMP}
{\mbox{the geometric multiplicity of
$\lambda_1$ is 1.}}\end{equation} For this, let $u_1$ and $u_2$ be eigenfunctions
corresponding to $\lambda_1$. {F}rom~\eqref{posi22} we know that~$u_2$
does not change sign, hence (up to exchanging $u_2$ with~$-u_2$),
we can suppose that~$u_2\ge0$ (in~$\mathbb{R}^n$, if~$\beta\ne0$,
and in~$\Omega$, if~$\beta=0$).
{F}rom this and~\eqref{ENOUGH}, it follows that
$$ \int_\Omega u_2(x)\,dx>0.$$
As a result, we can define
$$ a:=\frac{\displaystyle\int_\Omega u_1(x)\,dx}{
\displaystyle\int_\Omega u_2(x)\,dx},$$
and we find that
\begin{equation}\label{09qwr8hrhtg} \int_\Omega \big(u_1(x)-au_2(x)\big)\,dx=0.\end{equation}
In addition, from~\eqref{posi22}, we know that the eigenfunction~$u_1-au_2$
does not change sign, and therefore~\eqref{09qwr8hrhtg}
entails that~$u_1-au_2\equiv0$ in~$\Omega$.
This and~\eqref{ENOUGH} show that~$u_1-au_2\equiv0$
also in~$\mathbb{R}^n$ when~$\beta\ne0$, and this proves that~$u_1$
and $u_2$ are linearly dependent, giving~\eqref{GMP},
as desired.
Finally, we prove that
\begin{equation}\label{AGM}
{\mbox{the algebraic multiplicity of
$\lambda_1$ is 1.}}\end{equation}
To this end, we recall the notation in~\eqref{defV} and~\eqref{defT},
and we claim that
\begin{equation}\label{TKEP}
{\rm Ker}\big( (I-\lambda_1 T)^2\big)={\rm Ker}(I-\lambda_1 T),\end{equation}
where~$I$ is the identity in~$V$.
To prove~\eqref{TKEP},
let $u\in {\rm Ker}\big((I-\lambda_1 T)^2\big)$. Then,
setting $U:=u-\lambda_1 Tu$,
we have that~$U-\lambda_1 TU=0$, and accordingly,
by~\eqref{SJNDI-32i3rtjrgnnvnbn}, $U$ is an eigenfunction
corresponding to~$\lambda_1$.
{F}rom this fact and~\eqref{GMP},
we conclude that~$U=te_1$ for
some $t\in \mathbb{R}$, where~$e_1$ is a given
eigenfunction corresponding to $\lambda_1$.
As a result,
\[ t\langle e_1,e_1\rangle_{X_{\alpha,\beta}}=\langle U,e_1\rangle_{X_{\alpha,\beta}}=
\langle u-\lambda_1 Tu,e_1\rangle_{X_{\alpha,\beta}}
= \langle u,e_1-\lambda_1 Te_1\rangle_{X_{\alpha,\beta}}= \langle u,0\rangle_{X_{\alpha,\beta}}=0,
\]
which implies that~$t=0$.
This yields that~$U=0$ and therefore~$u\in
{\rm Ker}(I-\lambda_1 T)$. This shows that~$
{\rm Ker}\big( (I-\lambda_1 T)^2\big)\subseteq{\rm Ker}(I-\lambda_1 T)$,
and the other inclusion is obvious.
The proof of~\eqref{TKEP} is therefore complete.
{F}rom~\eqref{TKEP}, we obtain that for all~$k\in\mathbb{N}$ with~$k\ge1$,
$$
{\rm Ker}\big( (I-\lambda_1 T)^k\big)=
{\rm Ker}(I-\lambda_1 T),$$
and thus
$$ \bigcup_{k=1}^{+\infty}
{\rm Ker}\big( (I-\lambda_1 T)^k\big)=
{\rm Ker}(I-\lambda_1 T).$$
The latter has dimension~1, thanks to~\eqref{GMP},
and therefore the claim in~\eqref{AGM} is established.
\end{proof}
\section{Boundedness of weak subsolutions and proof
of Theorem~\ref{OSC55}}\label{KM:09009936523846765}
For the proof of Theorem~\ref{OSC55},
we give here a general Sobolev inequality for the functions in the space~$V$
introduced in~\eqref{defV}
which can be seen as a natural counterpart of the Poincar\'e
inequality given in
Lemma~\ref{POI66} (the proof has a somewhat classical flavor,
but we provide full details for the sake of completeness):
\begin{lem}\label{NEWSOB}
Let~$m$ be such that
\begin{equation*}
\int_{\Omega} m(x)\,dx\neq0.
\end{equation*}
Let~$\eta$ be the fractional Sobolev
exponent~$2^*_s:=\frac{2n}{n-2s}$ if~$\beta\ne0$ and~$n>2s$, the
classical Sobolev exponent~$2^*:=\frac{2n}{n-2}$ if~$\beta=0$ and~$n>2$,
and~$\eta\ge1$
arbitrary in the other cases.
If~$V$ is as in \eqref{defV} and~$u\in V$, then
\begin{equation}\label{D-SON}
\int_{\Omega} u^\eta(x)\,dx\le C\,
\left( \alpha \,\int_{\Omega} |\nabla u(x)|^2\,dx
+\frac{\beta}{2}\,\iint_{{\mathcal{Q}}}\frac{(u(x)-u(y))^2}{|x-y|^{{{n}}+2s}}\,dx\,dy
\right)^{\frac\eta2},
\end{equation}
where~$C>0$ depends only on~$n$, $\Omega$, $s$ and~$m$.
\end{lem}
\begin{proof} As usual, in this proof we will freely
rename~$C>0$ line after line. First of all, we observe
that the following ``generalized'' Sobolev inequality for
any function~$f\in X_{\alpha,\beta}$ holds true:
\begin{equation}\label{GENESOB-1}
\| f\|_{L^{{\eta_1}}(\Omega)}\le C\,\|f\|_{H^1(\Omega)},
\end{equation}
where~${\eta_1}:=2^*$ if~$n>2$, and~${\eta_1}\ge1$
arbitrary if~$n\le2$. Indeed, when~$n>2$, the claim in~\eqref{GENESOB-1}
is the standard Sobolev embedding (see e.g. Theorem~2
on page~279 of~\cite{MR2597943}).
If instead~$n=2$, we let~$\sigma:=\frac{{\eta_1}}{{\eta_1}+1}\in(0,1)$.
By Proposition~2.2 in~\cite{MR2944369}, we know that
\begin{equation}\label{bfSKDcnv}
\|f\|_{H^\sigma(\Omega)}
\le C\,\|f\|_{H^1(\Omega)}.\end{equation}
Also, we have that~$2\sigma <2=n$ and
$$ 2_\sigma^*=\frac{2n}{n-2\sigma}=\frac{2}{1-\sigma}=
2({\eta_1}+1)\ge{\eta_1}.
$$
Hence, by Theorem~6.7 in~\cite{MR2944369},
we obtain that~$\|f\|_{L^{{\eta_1}}(\Omega)}\le C\|f\|_{H^\sigma(\Omega)}$.
{F}rom this and~\eqref{bfSKDcnv}, we obtain~\eqref{GENESOB-1}
in this case.
Finally, when~$n=1$, we have that~\eqref{GENESOB-1}
is a consequence of Morrey embedding (see e.g. Theorem~5
on page~283 of~\cite{MR2597943}). These considerations
complete the proof of~\eqref{GENESOB-1}.
As a fractional counterpart of~\eqref{GENESOB-1}, we notice that
\begin{equation}\label{GENESOB-2}
\| f\|_{L^{{\eta_s}}(\Omega)}\le C\,\|f\|_{H^s(\Omega)},
\end{equation}
where~${\eta_s}:=2^*_s$ if~$n>2s$, and~${\eta_s}\ge1$
arbitrary if~$n\le2s$.
Indeed, when~$n>2s$, we can use Theorem~6.7
in~\cite{MR2944369} and obtain~\eqref{GENESOB-2}.
If instead~$n\le2s$, the claim in~\eqref{GENESOB-2}
is contained in
Theorem~6.10 of~\cite{MR2944369}.
Now we take~$\eta$ as in the statement of
Lemma~\ref{NEWSOB}
and we claim that
\begin{equation}\label{SMDC}
\| f\|_{L^{{\eta}}(\Omega)}\le C\,\big(
\alpha\|f\|_{H^1(\Omega)}+\beta\|f\|_{H^s(\Omega)}\big).
\end{equation}
Indeed,
if~$\beta\ne0$,
the claim in~\eqref{SMDC} follows from~\eqref{GENESOB-2}.
If instead~$\beta=0$, then necessarily~$\alpha>0$
and thus
the claim in~\eqref{SMDC} is a consequence of~\eqref{GENESOB-1}.
Having proved~\eqref{SMDC}, we can now combine it
with the Poincar\'e inequality in Lemma~\ref{POI66}
in order to complete the proof of~\eqref{D-SON}.
To this end, since~$u\in V$,
Lemma~\ref{POI66} gives that
\begin{equation} \label{JOSDHKFBzv98348ty03rug1}\| u\|_{L^2(\Omega)}\le C\,[u]_{X_{\alpha,\beta}}=
C\,\sqrt{
\alpha \,\int_{\Omega} |\nabla u(x)|^2\,dx
+\frac{\beta}{2}\,\iint_{{\mathcal{Q}}}\frac{(u(x)-u(y))^2}{|x-y|^{{{n}}+2s}}\,dx\,dy
}.\end{equation}
Moreover, by~\eqref{SMDC},
\begin{equation}\label{JOSDHKFBzv98348ty03rug}
\begin{split}
\| u\|_{L^{{\eta}}(\Omega)}\,&\le C\,\big(
\alpha\|u\|_{H^1(\Omega)}+\beta\|u\|_{H^s(\Omega)}\big)\\
&\le C\,\sqrt{
\alpha \,\int_{\Omega} |\nabla u(x)|^2\,dx
+\frac{\beta}{2}\,\iint_{{\mathcal{Q}}}\frac{(u(x)-u(y))^2}{|x-y|^{{{n}}+2s}}\,dx\,dy
}+C\,\|u\|_{L^2(\Omega)}.
\end{split}\end{equation}
Then, we insert~\eqref{JOSDHKFBzv98348ty03rug1}
into~\eqref{JOSDHKFBzv98348ty03rug}, and we obtain~\eqref{D-SON},
as desired.
\end{proof}
Now, we dive into the details of the proof
of Theorem~\ref{OSC55}, which is based
on a suitable choice of test functions
and an iteration argument.
\begin{proof}[Proof of Theorem~\ref{OSC55}] We combine for this proof some
classical and nonlocal techniques, see e.g.~\cite{MR1669352, MR1911531, MR3060890, MR3161511, MR3237774, MR3542614, MR3593528}.
Differently from the previous
literature, we focus here on the case of
the $(\alpha,\beta)$-Neumann conditions.
For the convenience of the reader, we try to make our arguments
as self-contained as possible.
Given~$k\ge0$, we let~$v:=(u-k)^+$.
We claim that
\begin{equation}\label{MASd-sd}
(u(x)-u(y))( v(x)-v(y))\ge(v(x)-v(y))^2.
\end{equation}
To prove this, we can suppose that~$u(x)\ge u(y)$, up to exchanging the roles of~$x$ and~$y$.
Also, if both~$u(x)$ and~$u(y)$ are larger than~$k$,
we have that~$v(x)=u(x)-k$ and~$v(y)=u(y)-k$, and thus~\eqref{MASd-sd}
follows in this case (in fact, with equality instead of inequality).
Therefore, we can suppose that~$u(x)\ge k\ge u(y)$,
whence~$v(x)=u(x)-k$ and~$v(y)=0$,
and then
\begin{eqnarray*}&&
(u(x)-u(y))( v(x)-v(y))-(v(x)-v(y))^2
= (u(x)-u(y))( u(x)-k)-(u(x)-k)^2\\&&\qquad
= \big( (u(x)-u(y))-(u(x)-k)\big)( u(x)-k)=(k-u(y))(u(x)-k)\ge0.
\end{eqnarray*}
This establishes~\eqref{MASd-sd}.
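As an elementary numerical sanity check of~\eqref{MASd-sd} with~$v:=(u-k)^+$ (a hypothetical Python sketch over randomly sampled values, not part of the proof above):

```python
import random

def truncation_gap(ux, uy, k):
    # (u(x)-u(y))(v(x)-v(y)) - (v(x)-v(y))^2 with v := (u-k)^+
    vx = max(ux - k, 0.0)
    vy = max(uy - k, 0.0)
    return (ux - uy) * (vx - vy) - (vx - vy) ** 2

random.seed(2)
gaps = [truncation_gap(random.uniform(-10, 10), random.uniform(-10, 10),
                       random.uniform(0, 5)) for _ in range(1000)]
# the gap is always nonnegative (up to floating-point rounding)
nonneg = all(g >= -1e-12 for g in gaps)
```

When both values exceed the truncation level~$k$, the gap vanishes, in agreement with the equality case noted in the proof.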
By~\eqref{MASd-sd},
\begin{equation}\label{19gtasgbcsd-2}
\iint_{{\mathcal{Q}}}\frac{(u(x)-u(y))({ v }(x)-{ v }(y))}{|x-y|^{{{n}}+2s}}\,dx\,dy\ge
\iint_{{\mathcal{Q}}}\frac{(v(x)-v(y))^2}{|x-y|^{{{n}}+2s}}\,dx\,dy.\end{equation}
In addition,
\begin{eqnarray*}
\int_{\Omega} \nabla u(x)\cdot\nabla{ v }(x)\,dx=
\int_{\Omega} |\nabla v(x)|^2\,dx.
\end{eqnarray*}
Consequently, by~\eqref{WEAK-DGSUBSOL},
\begin{equation}\label{0980980987654tgb}
\begin{split}
{\mathcal{I}}
\; :=\;&
\alpha \,\int_{\Omega} |\nabla v(x)|^2\,dx
+\frac{\beta}{2}\,\iint_{{\mathcal{Q}}}\frac{(v(x)-v(y))^2}{|x-y|^{{{n}}+2s}}\,dx\,dy
\\ \le\;&
\int_{\Omega}\big(c(x)u(x)+ f(x)\big)\,{ v }(x)\,dx\\
\le\;&
\int_{\Omega}\Big( |c(x)|\,|u(x)|\,v(x)+ |f(x)|\,v(x)\Big)\,dx.
\end{split}
\end{equation}
We also remark that
\begin{equation}\label{876janscvd945t}
|u(x)|\,v(x)\le 4(v^2(x)+k^2).
\end{equation}
Indeed, if~$u(x)\le k$, then~$v(x)=0$ and~\eqref{876janscvd945t}
plainly follows. If instead~$u(x)>k$, then~$v(x)=u(x)-k$,
and consequently
\begin{eqnarray*}&& |u(x)|\,v(x)- 4v^2(x)-4k^2=u(x)\,v(x)- 4v^2(x)-4k^2=
(v(x)+k)\,v(x)- 4v^2(x)-4k^2\\&&\qquad=kv(x)-3v^2(x)-4k^2
\le0,\end{eqnarray*}
thus establishing~\eqref{876janscvd945t}.
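Similarly, the elementary bound~\eqref{876janscvd945t} can be sanity-checked numerically (a hypothetical Python sketch, purely illustrative):

```python
import random

def slack(u, k):
    # 4(v^2 + k^2) - |u| v with v := (u-k)^+ and k >= 0;
    # the bound (876janscvd945t) states that this is nonnegative
    v = max(u - k, 0.0)
    return 4.0 * (v * v + k * k) - abs(u) * v

random.seed(3)
vals = [slack(random.uniform(-10, 10), random.uniform(0, 5)) for _ in range(1000)]
nonneg = all(s >= -1e-12 for s in vals)
```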
{F}rom~\eqref{0980980987654tgb} and~\eqref{876janscvd945t},
we conclude that
\begin{equation}\label{0980980987654tgb-22}
{\mathcal{I}}
\le C\,
\int_{\Omega\cap\{v\ne0\}} \Big(
|c(x)|\,v^2(x)+ k^2|c(x)|+
|f(x)|\,v(x)\Big)\,dx ,
\end{equation}
up to renaming~$C>1$.
Now, we denote by~${\mathcal{Z}}$
the Lebesgue measure of the set~$\Omega\cap\{v\ne0\}=\Omega\cap\{u>k\}$
and we let~$\eta$ be as in the statement of Lemma~\ref{NEWSOB},
with the additional requirement that~$\eta>\frac{2q}{q-1}$
if~$\beta\ne0$ and~$n\le2s$, and if~$\beta=0$ and~$n\le2$
(these situations corresponding to ``the other cases''
mentioned in the statement of Lemma~\ref{NEWSOB}).
We claim that
\begin{equation} \label{H66OAK}\frac1{q}+\frac{1}{\eta} < 1.
\end{equation}
Indeed, we use here~\eqref{qbar} and we see that, if~$\beta=0$ and~$n>2$,
$$ \frac1{q}+\frac{1}{\eta}<
\frac1{\underline{q}}+\frac{n-2}{2n}=\frac{2}{n}+\frac{n-2}{2n}
=\frac{n+2}{2n}<1.$$
If instead~$\beta\ne0$ and~$n>2s$,
$$ \frac1{q}+\frac{1}{\eta}<
\frac1{\underline{q}}+\frac{n-2s}{2n}=\frac{2s}{n}+\frac{n-2s}{2n}=\frac{n+2s}{2n}
<1.$$
In all the other cases,
$$ \frac1{q}+\frac{1}{\eta}<
\frac1{q}+\frac{q-1}{q}
=1.$$
These observations prove~\eqref{H66OAK}.
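The exponent computations proving~\eqref{H66OAK} can be double-checked for concrete values of~$n$, $s$ and~$q$. The following Python sketch is only a numerical sanity check; the formulas for~$\underline{q}$ used below (namely~$\underline{q}=n/2$ if~$\beta=0$ and~$\underline{q}=n/(2s)$ if~$\beta\ne0$) are inferred from the computation above, since the definition in~\eqref{qbar} lies outside this excerpt:

```python
def eta(n, s, beta):
    # Sobolev exponent as in Lemma NEWSOB (main cases only)
    if beta != 0 and n > 2 * s:
        return 2 * n / (n - 2 * s)   # fractional exponent 2^*_s
    if beta == 0 and n > 2:
        return 2 * n / (n - 2)       # classical exponent 2^*
    raise ValueError("borderline case: eta can be taken arbitrarily large")

def qbar(n, s, beta):
    # threshold exponent, inferred from the computation proving (H66OAK)
    return n / 2 if beta == 0 else n / (2 * s)

def holder_admissible(n, s, beta, q):
    # the claim (H66OAK): 1/q + 1/eta < 1 whenever q > qbar
    return 1.0 / q + 1.0 / eta(n, s, beta) < 1.0

checks = all(
    holder_admissible(n, s, beta, qbar(n, s, beta) + eps)
    for n in (3, 4, 5)
    for s, beta in ((0.5, 1.0), (0.75, 1.0), (0.5, 0.0))
    for eps in (1e-6, 1.0, 10.0)
)
```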
Now, from~\eqref{H66OAK},
we can define
\begin{equation}\label{ETATT}\eta':=\frac{1}{1-\displaystyle\frac1q-\frac1\eta}
\end{equation}
and we can
exploit the H\"older inequality with exponents~$q$
and~$\eta$ and~$\eta'$, thus finding that
\begin{equation}\label{09oqdwfkkk89PS}
\int_{\Omega} |f(x)|\,v(x)\,dx\le
\|f\|_{L^q(\Omega)}\left(
\int_{\Omega} (v(x))^\eta\,dx
\right)^{\frac1\eta}\,{\mathcal{Z}}^{\frac1{\eta'}}.
\end{equation}
We fix now~$\delta\in(0,1)$, to be taken conveniently small in what
follows,
and we claim that
\begin{equation}\label{KS-dfp}
\int_{\Omega} |f(x)|\,v(x)\,dx\le\delta{\mathcal{I}}
+C_\delta\,
\|f\|_{L^q(\Omega)}^2\,{\mathcal{Z}}^{\vartheta},
\end{equation}
with (recalling~\eqref{ETATT})
\begin{equation}\label{KS-dfp-0983w4}
\vartheta:=\frac2{\eta'}=2\left(1-\frac1q-\frac1\eta\right),\end{equation}
for a suitable~$C_\delta>1$.
Indeed,
using~\eqref{09oqdwfkkk89PS} and Lemma~\ref{NEWSOB},
\begin{eqnarray*}
\int_{\Omega} |f(x)|\,v(x)\,dx&\le&C\,
\|f\|_{L^q(\Omega)}\,\sqrt{\mathcal{I}}\,{\mathcal{Z}}^{\frac1{\eta'}}\\
&\le&\delta\,{\mathcal{I}}+C_\delta\,\Big(
\|f\|_{L^q(\Omega)}\,{\mathcal{Z}}^{\frac1{\eta'}}\Big)^2,
\end{eqnarray*}
which gives~\eqref{KS-dfp}.
Then, combining~\eqref{0980980987654tgb-22}
and~\eqref{KS-dfp}, we find that
\begin{equation*}
\begin{split}&
{\mathcal{I}}
\le C\,
\int_{\Omega \cap\{v\ne0\}} \Big(
|c(x)|\,v^2(x)+ k^2|c(x)|\Big)\,dx +
C\delta\,{\mathcal{I}}+C_\delta\,
\|f\|_{L^q(\Omega)}^2\,{\mathcal{Z}}^{\vartheta}\end{split}\end{equation*}
up to renaming constants.
Consequently, choosing~$\delta$ sufficiently small
(and considering~$\delta$ fixed from now on), we obtain
\begin{equation}\label{CAC-PIV}
\begin{split}&
{\mathcal{I}}
\le C\,
\int_{\Omega\cap\{v\ne0\}} \Big(
|c(x)|\,v^2(x)+ k^2|c(x)|\Big)\,dx +C\,
\|f\|_{L^q(\Omega)}^2{\mathcal{Z}}^{\vartheta}
\end{split}\end{equation}
up to renaming constants.
In this setting, formula~\eqref{CAC-PIV}
will play the role of a pivotal Caccioppoli-type inequality, according to
the following argument. We claim that there exists~$c_\star>0$
such that if~${\mathcal{Z}}\le c_\star$ then
\begin{equation}\label{CAC-PIV-2}
\begin{split}&
{\mathcal{I}}
\le C\,\big(k^2+
\|f\|_{L^q(\Omega)}^2\big)\,{\mathcal{Z}}^{1-\frac1q}
.\end{split}\end{equation}
To check this, we recall~\eqref{KS-dfp-0983w4},
and we use the H\"older inequality and Lemma~\ref{NEWSOB}
to see that
\begin{eqnarray*}
&&\int_{\Omega}
|c(x)|\,v^2(x)\,dx
\le\|c\|_{L^q(\Omega)}\,\| v\|_{L^{\eta}(\Omega)}^{2} \;{\mathcal{Z}}^{
1-\frac1q-\frac2\eta}\\&&\qquad\qquad=
\|c\|_{L^q(\Omega)}\,\| v\|_{L^{\eta}(\Omega)}^{2} \;
{\mathcal{Z}}^{\frac\vartheta2-\frac1\eta}\le C\,{\mathcal{I}} \;
{\mathcal{Z}}^{\frac\vartheta2-\frac1\eta}
\end{eqnarray*}
and
\begin{eqnarray*}
\int_{\Omega\cap\{v\ne0\}} |c(x)|\,dx\le\|c\|_{L^q(\Omega)}
\;{\mathcal{Z}}^{1-\frac1{q}}
\le C\,{\mathcal{Z}}^{1-\frac1{q}}.
\end{eqnarray*}
We stress that here the constants denoted by~$C$ are allowed
to depend also on~$\|c\|_{L^q(\Omega)}$.
Plugging this information into~\eqref{CAC-PIV}, we obtain that
\begin{equation*}
\begin{split}&
{\mathcal{I}}
\le
C\,{\mathcal{I}} \;
{\mathcal{Z}}^{\frac\vartheta2-\frac1\eta} +
C\,k^2\,{\mathcal{Z}}^{1-\frac1{q}}+C\,
\|f\|_{L^q(\Omega)}^2{\mathcal{Z}}^{\vartheta}.
\end{split}\end{equation*}
Noticing that~$\frac\vartheta2-\frac1\eta>0$ and~$1-\frac1{q}\le\vartheta$,
if~$\mathcal{Z}$ is sufficiently small we obtain~\eqref{CAC-PIV-2}, as desired.
We also remark that, by Lemma~\ref{NEWSOB},
$$ \int_{\Omega}v^2(x)\,dx\le\left(
\int_{\Omega}v^\eta(x)\,dx\right)^{\frac2\eta}\,{\mathcal{Z}}^{1-\frac2\eta}\le{\mathcal{I}}\,{\mathcal{Z}}^{1-\frac2\eta}.$$
This and~\eqref{CAC-PIV-2} yield that, if~$\mathcal{Z}\le c_\star$,
\begin{equation}\label{CAC-PIV-3}
\begin{split}&
\int_{\Omega}v^2(x)\,dx
\le C\,\big(k^2+
\|f\|_{L^q(\Omega)}^2\big)\,{\mathcal{Z}}^{2-\frac1q-\frac2\eta}
.\end{split}\end{equation}
We stress that~$2-\frac1q-\frac2\eta>1$,
hence~\eqref{CAC-PIV-3} gives that
\begin{equation}\label{KSD-34ro-1}
\begin{split}&
\int_{\Omega}v^2(x)\,dx
\le C\,\big(k^2+
\|f\|_{L^q(\Omega)}^2\big)\,{\mathcal{Z}}^{1+\epsilon_0}
\end{split}\end{equation}
for some~$\epsilon_0>0$.
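The interpolation bound $\int_\Omega v^2\,dx\le\big(\int_\Omega v^\eta\,dx\big)^{2/\eta}\,\mathcal Z^{1-2/\eta}$ used above is Hölder's inequality with exponents $\eta/2$ and $(\eta/2)'$ on the support of $v$. Its discrete analogue can be checked directly; the sample function below is illustrative, not taken from the paper:

```python
import math

def holder_interpolation(v, dx, eta):
    """Discrete analogue of  int v^2 <= (int v^eta)^(2/eta) * Z^(1-2/eta)
    for eta > 2, where Z is the measure of the support {v != 0}."""
    Z = sum(dx for vi in v if vi != 0.0)                 # measure of the support
    lhs = sum(vi**2 for vi in v) * dx                    # int v^2
    rhs = (sum(abs(vi)**eta for vi in v) * dx) ** (2.0 / eta) * Z ** (1.0 - 2.0 / eta)
    return lhs, rhs

# v vanishes outside the subinterval (0.2, 0.45) of [0, 1]
n = 10_000
dx = 1.0 / n
v = [math.sin(8 * math.pi * i * dx) ** 2 if 0.2 < i * dx < 0.45 else 0.0
     for i in range(n)]
lhs, rhs = holder_interpolation(v, dx, eta=4.0)
assert lhs <= rhs * (1.0 + 1e-9)
```

Since the exponent $1-2/\eta$ is positive, the right-hand side becomes small together with the measure of the support, which is exactly what the iteration below exploits.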
That is, setting~$A(k):=\Omega\cap\{u> k\}$
and
$$\varphi(k):=\int_{A(k)}(u(x)-k)^2\,dx=\int_{\Omega}v^2(x)\,dx,$$
in light of~\eqref{KSD-34ro-1}
we can write that, if~$|A(k)|\le c_\star$, then
\begin{equation}\label{KSD-34ro-2}
\begin{split}&
\varphi(k)
\le C\,\big(k^2+
\|f\|_{L^q(\Omega)}^2\big)\,|A(k)|^{1+\epsilon_0}.
\end{split}\end{equation}
We observe that if~$x\in A(k)$ then~$u(x)>k$ and thus~$u^+(x)>k$.
Therefore,
$$ |A(k)|\le \frac{1}{k}\int_{A(k)} u^+(x)\,dx\le
\frac{\sqrt{|A(k)|}}{k}\,\|u^+\|_{L^2(\Omega)}.$$
Hence, it follows that
\begin{equation}\label{COKAPPA-0} |A(k)|\le\left(\frac{\|u^+\|_{L^2(\Omega)}}{k}\right)^2
\le c_\star,\end{equation}
as long as
\begin{equation}\label{COKAPPA}
k\ge \frac{\|u^+\|_{L^2(\Omega)}}{\sqrt{c_\star}}=:\kappa.
\end{equation}
In particular, in view of~\eqref{COKAPPA-0},
we know that~\eqref{KSD-34ro-2} holds true for all~$k$ satisfying~\eqref{COKAPPA}.
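The estimate $|A(k)|\le k^{-2}\|u^+\|_{L^2(\Omega)}^2$ in~\eqref{COKAPPA-0} is a Chebyshev-type bound. A discretized sanity check (the sample function is made up for illustration):

```python
import math

def chebyshev_superlevel(u, dx, k):
    """Chebyshev-type bound: |{u > k}| <= ||u^+||_{L^2}^2 / k^2, valid for k > 0,
    since u > k > 0 on the superlevel set A(k)."""
    measure = sum(dx for ui in u if ui > k)              # |A(k)|
    u_plus_sq = sum(max(ui, 0.0) ** 2 for ui in u) * dx  # ||u^+||_{L^2}^2
    return measure, u_plus_sq / k**2

n = 20_000
dx = 1.0 / n
u = [math.sin(3 * math.pi * i * dx) + 0.3 * math.cos(11 * math.pi * i * dx)
     for i in range(n)]
for k in (0.5, 0.9, 1.2):
    measure, bound = chebyshev_superlevel(u, dx, k)
    assert measure <= bound + 1e-9
```

In particular, for $k$ at least the threshold $\kappa$ defined in~\eqref{COKAPPA}, the measure of the superlevel set drops below $c_\star$, which legitimizes the use of~\eqref{KSD-34ro-2} in the iteration.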
Now we define, for every~$\ell\in\mathbb{N}$,
\begin{eqnarray*}
&&K:=\kappa+\|f\|_{L^q(\Omega)}
\\{\mbox{and }} &&k_\ell:= \kappa+K\left(1-\frac1{2^\ell}\right).\end{eqnarray*}
We point out that
$$ k_\ell-k_{\ell-1}=\frac{K}{2^\ell},$$
and, as a result, if~$x\in A(k_\ell)$ then~$u(x)- k_{\ell-1}\ge k_\ell-k_{\ell-1}=\frac{K}{2^\ell}$.
For this reason, we have that
$$ |A(k_\ell)|\le\frac{2^{2\ell}}{K^2}\int_{A(k_\ell)} (u(x)-k_{\ell-1})^2\,dx\le
\frac{2^{2\ell}}{K^2}\int_{A(k_{\ell-1})} (u(x)-k_{\ell-1})^2\,dx=\frac{2^{2\ell}}{K^2}\varphi(k_{\ell-1}).$$
Using this information together with~\eqref{KSD-34ro-2} (exploited here with~$k:=k_\ell$;
note that~$k_\ell\ge\kappa$, hence condition~\eqref{COKAPPA}
is satisfied),
we discover that
\begin{equation}\label{KSD-34ro-3}
\begin{split}&
\varphi(k_\ell)
\le \frac{C^\ell\,(k_\ell^2+
\|f\|_{L^q(\Omega)}^2)}{K^2}\,(\varphi(k_{\ell-1}))^{1+\epsilon_0}.
\end{split}\end{equation}
Since~$k_\ell\le \kappa+K$, up to renaming constants we obtain from~\eqref{KSD-34ro-3}
that
\begin{equation*}
\varphi(k_\ell)
\le \frac{C^\ell\,(\kappa^2+K^2)}{K^2}\,(\varphi(k_{\ell-1}))^{1+\epsilon_0},
\end{equation*}
and consequently, if~$c_\star$ is sufficiently small,
$$ 0=\lim_{\ell\to+\infty}\varphi(k_\ell)=\varphi(\kappa+K).
$$
As a result, $u^+(x)\le\kappa+K$ for almost every~$x\in\Omega$, whence the claim in~\eqref{BOU-o1}
plainly follows.
\end{proof}
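The final limit in the proof is an instance of the classical De Giorgi iteration lemma: a recursion of the form $\varphi_\ell\le C^\ell\,\varphi_{\ell-1}^{1+\epsilon_0}$ forces $\varphi_\ell\to0$ as soon as $\varphi_0$ lies below an explicit smallness threshold. The following sketch simulates this mechanism with illustrative constants (not those of the proof):

```python
# De Giorgi-type iteration: phi_l = C**l * phi_{l-1}**(1 + eps).
# The classical lemma gives phi_l -> 0 provided the initial value satisfies
# phi_0 <= C**(-1/eps - 1/eps**2)  (smallness threshold).
def de_giorgi_iteration(phi0, C, eps, steps=60):
    phi = phi0
    traj = [phi]
    for l in range(1, steps + 1):
        phi = C**l * phi ** (1.0 + eps)
        traj.append(phi)
    return traj

C, eps = 4.0, 0.5
threshold = C ** (-1.0 / eps - 1.0 / eps**2)   # = 4**(-6) for these constants
traj = de_giorgi_iteration(0.5 * threshold, C, eps)
assert traj[-1] < 1e-12   # super-exponential decay to zero below the threshold
```

Above the threshold the same recursion can blow up, which is why the smallness of $c_\star$ (and hence of $\varphi$ at the starting level $\kappa$) is essential in the argument.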
\end{document}
\section{Introduction}
\begin{none}\label{0.7}
Throughout this paper, for brevity, put
\[\dmeff=\dmeff({\rm Spec}\,k,{\bf Z}),\quad \dmeffl=\dmeff({\rm Spec}\,k,{\bf Q}),\] whose definitions are in \cite[11.1.1]{CD12}. Here, $k$ is an algebraically closed field, and for most of this paper (from (\ref{0.1}) on), we assume that $k={\bf C}$. Let
\begin{enumerate}[(i)]
\item ${\bf 1}$ denote the object $M({\rm Spec}\,k)$ of $\dmeff$ or $\dmeffl$,
\item ${\bf L}$ denote the object ${\bf 1}(1)[2]$ of $\dmeff$ or $\dmeffl$,
\item $\underline{\rm Hom}$ denote the internal hom of $\dmeff$ or $\dmeffl$.
\end{enumerate}
Note that in \cite[11.1.4]{CD12}, we have the change of coefficients functor
\[\dmeff\rightarrow \dmeffl.\]
\end{none}
\begin{none}\label{0.8}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over an algebraically closed field $k$. The Picard group of $X$ admits the exact sequence
\[0\longrightarrow {\rm Pic}^0(X)\longrightarrow {\rm Pic}(X)\longrightarrow {\rm NS}(X)\longrightarrow 0\]
of abelian groups. The exact sequence is related to a decomposition
\begin{equation}\label{0.8.1}
\underline{\rm Hom}({\bf L}^{n-1},M(X))={\rm NS}_{\bf Q}(X)\oplus {\rm Pic}_{\bf Q}^0(X)\oplus {\bf L}
\end{equation}
in $\dmeff({\rm Spec}\,k,{\bf Q})$, where the subscript ${\bf Q}$ denotes the corresponding groups with ${\bf Q}$-coefficients. This should come from a conjectural Chow-K\"unneth decomposition \cite[Definition 6.1.1]{MNP} $M(X)=M_0(X)\oplus \cdots \oplus M_{2n}(X)$ with
\[\underline{\rm Hom}({\bf L}^{n-1},M_{2n-2}(X))={\rm NS}_{\bf Q}(X),\]
\[\underline{\rm Hom}({\bf L}^{n-1},M_{2n-1}(X))={\rm Pic}_{\bf Q}^0(X),\]
\[\underline{\rm Hom}({\bf L}^{n-1},M_{2n}(X))={\bf L}.\]
What is the generalization of (\ref{0.8.1}) for higher codimensions? The answer will also give a decomposition of each motive in the slice filtration \cite{HK}
\[
{\bf L}^n=\underline{\rm Hom}({\bf L}^{n},M(X))\otimes {\bf L}^n\rightarrow \underline{\rm Hom}({\bf L}^{n-1},M(X))\otimes {\bf L}^{n-1}\rightarrow \cdots \rightarrow \underline{\rm Hom}({\bf 1},M(X))=M(X)
\]
of $M(X)$.
When $k={\bf C}$, we use intermediate Jacobians to study the question as follows.
\end{none}
\begin{thm}\label{0.1}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over $\mathbf{C}$, and let $d\in [1,n]$ be an integer. Then
\begin{enumerate}[{\rm (1)}]
\item ${\rm NS}_{hom,{\bf Q}}^d(X)\oplus {\rm Griff}_{\bf Q}^d(X)$ is a direct summand of $\underline{\rm Hom}(\mathbf{L}^{n-d},M(X))$ in $\dmeffl$,
\item $J_{a,{\bf Q}}^d(X)$ is a direct summand of $\underline{\rm Hom}(\mathbf{L}^{n-d},M(X))$ in $\dmeffl$.
\end{enumerate}
See {\rm (\ref{1.8})} and {\rm (\ref{2.4})} for the definitions of ${\rm NS}_{hom,{\bf Q}}^d(X)$, ${\rm Griff}_{\bf Q}^d$, and $J_{a,{\bf Q}}^d(X)$.
\end{thm}
\begin{none}\label{0.2}
In particular, using this, we obtain the generalization of (\ref{0.8.1}) for codimension 2 as follows.
\end{none}
\begin{thm}\label{0.3}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over $\mathbf{C}$ with $n\geq 2$. Then for some motive $M_2(X)^*$ in $\dmeffl$, there is a decomposition
\[\underline{\rm Hom}({\bf L}^{n-2},M(X))={\rm NS}_{hom,{\bf Q}}^2(X)\oplus {\rm Griff}_{\bf Q}^2(X)\oplus J_{a,{\bf Q}}^2(X)\oplus M_2(X)^*\oplus ( {\rm Pic}_{\bf Q}^0(X)\otimes {\bf L})\oplus {\bf L}^2.\]
\end{thm}
\begin{none}\label{0.4}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over $\mathbf{C}$, and let
\[i_{2n-2}:M_2(X)^*\otimes {\bf L}^{n-2}\rightarrow M(X)\]
denote the morphism induced by the morphism $M_2(X)^*\rightarrow \underline{\rm Hom}({\bf L}^{n-2},M(X))$ obtained from the above decomposition. If $M(X)$ has a Chow-K\"unneth decomposition
\[M(X)=M_0(X)\oplus M_1(X)\oplus \cdots \oplus M_{2n}(X),\]
then the morphism $i_{2n-2}$ is the candidate for a morphism $M_{2n-2}(X)\rightarrow M(X)$ induced by a decomposition. Thus we have the following conjecture.
\end{none}
\begin{conj}\label{0.5}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$. Then there is a morphism $p_{2n-2}:M(X)\rightarrow M_2(X)^*\otimes {\bf L}^{n-2}$ in $\dmeffl$ such that
\begin{enumerate}[(1)]
\item $p_{2n-2} i_{2n-2}={\rm id}$,
\item $i_{2n-2}p_{2n-2}:M(X)\rightarrow M(X)$ induces the K\"unneth projector
\[H^*(X,{\bf Q})\rightarrow H^{2n-2}(X,{\bf Q})\rightarrow H^*(X,{\bf Q}),\]
\item the dual projector $(i_{2n-2}p_{2n-2})^t$ induces the K\"unneth projector
\[H^*(X,{\bf Q})\rightarrow H^2(X,{\bf Q})\rightarrow H^*(X,{\bf Q}).\]
\end{enumerate}
\end{conj}
\begin{none}\label{0.6}
A successful construction of $p_{2n-2}$ with the above properties yields projectors of $M(X)$ defining $M_2(X)$ and $M_{2n-2}(X)$. In particular, since projectors of $M(X)$ defining $M_0(X)$, $M_1(X)$, $M_{2n-1}(X)$, and $M_{2n}(X)$ have already been constructed, the conjecture, together with some vanishing conjectures in \cite[5.8]{Jan}, would prove the K\"unneth type standard conjecture when the dimension of $X$ is $3$.
\end{none}
\begin{none}\label{0.9}
{\it Organization of the paper.} In Section 2, we prove (\ref{0.1}(1)) by constructing a morphism $\underline{\rm Hom}(\mathbf{L}^{n-d},M(X))\rightarrow {\rm NS}_{alg,{\bf Q}}^d(X)$ and its section. In Section 3, we prove (\ref{0.1}(2)) by constructing a morphism $\underline{\rm Hom}(\mathbf{L}^{n-d},M(X))\rightarrow J_{a,{\bf Q}}^d(X)$ and its section. In Section 4, we prove (\ref{0.3}) by constructing the other pieces and using \cite[7.3.10]{KMP}. In Section 5, we discuss some conjectures other than (\ref{0.5}).
\end{none}
\begin{none}\label{0.10}
{\it Conventions and notations.} Alongside (\ref{0.7}), we have the following.
\begin{enumerate}[(1)]
\item Let $T$ be a complex analytic variety or a scheme over an algebraically closed field $k$. We denote by ${\rm cl}(T)$ the set of closed points of $T$.
\item $Sm/{\bf C}$ denotes the category of smooth ${\bf C}$-schemes.
\item For any ${\bf Q}$-vector space $V$, consider the constant Nisnevich sheaf with transfer on $Sm/{\bf C}$ associated to $V$. We denote by $V$ (by abuse of notation) its associated object in $\dmeffl$.
\end{enumerate}
\end{none}
\section{Proof of (\ref{0.1}(1))}
\begin{df}\label{1.8}
Let $X$ be a scheme smooth over ${\bf C}$, and let $d$ be a nonnegative integer. We put
\[CH_{alg}^d(X)=\{Z\in CH^d(X):Z\sim_{alg}0\},\]
\[CH_{hom}^d(X)=\{Z\in CH^d(X):Z\sim_{hom}0\},\]
\[{\rm NS}_{alg}^d(X)=CH^d(X)/CH_{alg}^d(X),\]
\[{\rm NS}_{hom}^d(X)=CH^d(X)/CH_{hom}^d(X),\]
\[{\rm Griff}^d(X)=CH_{hom}^d(X)/CH_{alg}^d(X)\]
where $\sim_{alg}$ (resp.\ $\sim_{hom}$) denotes the algebraic equivalence relation (resp.\ the homological equivalence relation for singular cohomology). We also denote by
\[CH_{alg,{\bf Q}}^d(X),\;\;CH_{hom,{\bf Q}}^d(X),\;\;{\rm NS}_{alg,{\bf Q}}^d(X),\;\;{\rm NS}_{hom,{\bf Q}}^d(X),\;\;{\rm Griff}_{\bf Q}^d(X)\]
the corresponding ones defined for ${\bf Q}$-coefficient.
\end{df}
\begin{df}\label{1.1}
Let $X$ and $Y$ be schemes smooth over ${\bf C}$, and let $d$ be a nonnegative integer. We put
\[CH_{X}^d(Y)=CH^d(Y\times X).\]
When $Y$ is connected, we put
\[CH_{alg,X}^d(Y)=\{Z\in CH^d(Y\times X):i_y^*Z\sim_{alg} 0\},\]
\[CH_{hom,X}^d(Y)=\{Z\in CH^d(Y\times X):i_y^*Z\sim_{hom} 0\},\]
\[{\rm NS}_{alg,X}^d(Y)=CH_X^d(Y)/CH_{alg,X}^d(Y),\]
\[{\rm NS}_{hom,X}^d(Y)=CH_X^d(Y)/CH_{hom,X}^d(Y)\]
where $y$ is a closed point of $Y$ and $i_y$ denotes the closed immersion $y\times X\rightarrow Y\times X$. Note that the above definitions are independent of $y$ since $i_y^*Z$ and $i_{y'}^*Z$ are algebraically equivalent for any two closed points $y$ and $y'$ of $Y$.
When $Y$ is not necessarily connected and has the connected components $\{Y_i\}_{i\in I}$, we put
\[CH_{alg,X}^d(Y)=\bigoplus_{i\in I}CH_{alg,X}^d(Y_i),\]
\[CH_{hom,X}^d(Y)=\bigoplus_{i\in I}CH_{hom,X}^d(Y_i),\]
\[{\rm NS}_{alg,X}^d(Y)=\bigoplus_{i\in I}{\rm NS}_{alg,X}^d(Y_i),\]
\[{\rm NS}_{hom,X}^d(Y)=\bigoplus_{i\in I}{\rm NS}_{hom,X}^d(Y_i).\]
We consider $CH_X^d$, $CH_{alg,X}^d$, $CH_{hom,X}^d$, ${\rm NS}_{alg,X}^d$, and ${\rm NS}_{hom,X}^d$ as presheaves with transfer on $Sm/{\bf C}$.
We also denote by
\[CH_{X,{\bf Q}}^d,\;\;CH_{alg,X,{\bf Q}}^d,\;\;CH_{hom,X,{\bf Q}}^d,\;\;{\rm NS}_{alg,X,{\bf Q}}^d,\;\;{\rm NS}_{hom,X,{\bf Q}}^d\]
the corresponding presheaves defined with ${\bf Q}$-coefficients.
\end{df}
\begin{prop}\label{1.2}
Under the notations and hypotheses of {\rm (\ref{1.1})}, the homomorphisms
\[{\rm NS}_{alg,X}^d(Y)\rightarrow {\rm NS}_{alg,X}^d({\rm Spec}\,{\bf C}),\]
\[{\rm NS}_{hom,X}^d(Y)\rightarrow {\rm NS}_{hom,X}^d({\rm Spec}\,{\bf C})\]
induced by $i_y^*$ are isomorphisms.
\end{prop}
\begin{proof}
The homomorphisms are surjective since $i_y^*:CH^d(Y\times X)\rightarrow CH^d(X)$ is surjective. The homomorphisms are injective since the kernels of the homomorphisms
\[CH^d(Y\times X)\rightarrow {\rm NS}_{alg}^d(X),\]
\[CH^d(Y\times X)\rightarrow {\rm NS}_{hom}^d(X)\]
are
\[\{Z\in CH^d(Y\times X):i_y^*Z\in CH_{alg}^d(X)\},\]
\[\{Z\in CH^d(Y\times X):i_y^*Z\in CH_{hom}^d(X)\}\]
respectively.
\end{proof}
\begin{cor}\label{1.3}
The presheaves ${\rm NS}_{alg,X}^d$ and ${\rm NS}_{hom,X}^d$ on $Sm/{\bf C}$ are constant Nisnevich sheaves with transfer associated to ${\rm NS}_{alg}^d(X)$ and ${\rm NS}_{hom}^d(X)$ respectively.
\end{cor}
\begin{proof}
Let $Y$ be a connected scheme smooth over ${\bf C}$, and let $p:Y\rightarrow {\rm Spec}\,{\bf C}$ denote the structural morphism. Then let $y\in Y$ be a closed point, and let $c_y:y\rightarrow Y$ denote the closed immersion for the point $y$. Consider the homomorphisms
\[{\rm NS}_{alg,X}^d({\rm Spec}\,{\bf C})\rightarrow {\rm NS}_{alg,X}^d(Y)\rightarrow {\rm NS}_{alg,X}^d({\rm Spec}\,{\bf C})\]
induced by $p$ and $c_y$ respectively. The composition is an isomorphism since $pc_y={\rm id}$, and the second arrow is an isomorphism by (\ref{1.2}). Thus the first arrow is an isomorphism. This shows that ${\rm NS}_{alg,X}^d$ is a constant Zariski sheaf associated with ${\rm NS}_{alg,X}^d({\rm Spec}\,{\bf C})= {\rm NS}_{alg}^d(X)$. Thus it is a constant Nisnevich sheaf with transfer.
The proof for ${\rm NS}_{hom,X}^d$ is the same as above.
\end{proof}
\begin{none}\label{1.4}
Note that (\ref{1.3}) also holds for ${\bf Q}$-coefficient. Thus from now, we can use the notations
\[{\rm NS}_{alg,{\bf Q}}^d(X),\;\;{\rm NS}_{hom,{\bf Q}}^d(X)\]
instead of ${\rm NS}_{alg,X,{\bf Q}}^d$ and ${\rm NS}_{hom,X,{\bf Q}}^d$ respectively following the convention in (\ref{0.10}).
\end{none}
\begin{df}\label{1.5}
For $i\in {\bf Z}$, we denote by
\[h_i:\dmeff\rightarrow {\rm Sh}^{tr}(Sm/{\bf C})\]
the homology functor obtained by the homotopy $t$-structure defined in \cite[Definition 3.1]{Ayo}. Here, ${\rm Sh}^{tr}(Sm/{\bf C})$ denotes the category of sheaves with transfer on $Sm/{\bf C}$ with coefficients in ${\bf Q}$.
\end{df}
\begin{none}\label{1.6}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$. As in \cite[Section A.3]{H}, we have that
\[h_i(\underline{\rm Hom}({\bf L}^{n-d},M(X)))=0\qquad (i<0),\]
\[h_0(\underline{\rm Hom}({\bf L}^{n-d},M(X)))\cong CH_{X}^d\]
in $\dmeff$. Then we have the morphisms
\[\underline{\rm Hom}({\bf L}^{n-d},M(X))\rightarrow h_0(\underline{\rm Hom}({\bf L}^{n-d},M(X)))\stackrel{\sim}\rightarrow CH_{X}^d\]
in $\dmeff$. We also have the morphism
\[CH_{X}^d \rightarrow {\rm NS}_{alg,X}^d={\rm NS}_{alg}^d(X)\]
in $\dmeff$ taking the quotient of $CH^d(Y\times X)$ for $Y\in Sm/{\bf C}$. In conclusion, we have the morphism
\begin{equation}\label{1.6.2}
\underline{\rm Hom}({\bf L}^{n-d},M(X))\rightarrow {\rm NS}_{alg}^d(X)
\end{equation}
in $\dmeff$. Thus we get the morphism
\begin{equation}\label{1.6.1}
\underline{\rm Hom}({\bf L}^{n-d},M(X))\rightarrow {\rm NS}_{alg,{\bf Q}}^d(X)
\end{equation}
in $\dmeffl$.
\end{none}
\begin{prop}\label{1.7}
Under the notations and hypotheses of {\rm (\ref{1.6})}, the above morphism has a section in $\dmeffl$.
\end{prop}
\begin{proof}
Since ${\rm NS}_{alg,{\bf Q}}^d(X)$ is a ${\bf Q}$-vector space, it has a basis $\{a_i\}_{i\in I}$ for some set $I$. Then ${\rm NS}_{alg,{\bf Q}}^d(X)$ is isomorphic to $\bigoplus_{i\in I}{\bf Q}$. In $\dmeffl$, we have an isomorphism
\[{\rm NS}_{alg,{\bf Q}}^d(X)\cong\bigoplus_{i\in I} {\bf 1}.\]
Now we have bijections
\begin{equation}\label{1.7.1}\begin{split}
&{\rm Hom}_{\dmeffl}({\rm NS}_{alg,{\bf Q}}^d(X),\underline{\rm Hom}({\bf L}^{n-d},M(X)))\cong{\rm Hom}_{\dmeffl}({\rm NS}_{alg,{\bf Q}}^d(X)\otimes {\bf L}^{n-d},M(X))\\
\cong&\prod_{i\in I} {\rm Hom}_{\dmeffl}( {\bf L}^{n-d},M(X))\cong \prod_{i\in I} CH_{{\bf Q}}^d(X)\cong {\rm Hom}_{\rm Set}(I,CH_{{\bf Q}}^d(X))
\end{split}\end{equation}
where $\rm Set$ denotes the category of sets. Choose $\{b_i\in CH_{\bf Q}^d(X)\}_{i\in I}$ such that the image of $b_i$ in ${\rm NS}_{alg,{\bf Q}}^d(X)$ is $a_i$. Then via (\ref{1.7.1}), the function $I\rightarrow CH_{\bf Q}^d(X)$ given by
\[i\mapsto b_i\]
corresponds to a section of (\ref{1.6.1}).
\end{proof}
\begin{none}\label{1.9}
The quotient homomorphism ${\rm NS}_{alg,{\bf Q}}^d(X)\rightarrow {\rm NS}_{hom,{\bf Q}}^d(X)$ has a section since they are ${\bf Q}$-vector spaces. Thus we have a decomposition
\[{\rm NS}_{alg,{\bf Q}}^d(X)\cong {\rm NS}_{hom,{\bf Q}}^d(X)\oplus {\rm Griff}_{\bf Q}^d(X),\]
and then (\ref{1.7}) completes the proof of (\ref{0.1}(1)).
\end{none}
\section{Proof of (\ref{0.1}(2))}
\begin{lemma}\label{2.5}
Let $X$ and $Y$ be schemes of finite type over an algebraically closed field $k$. Assume that $X$ is integral and that each connected component of $Y$ is integral. If $X$ is quasi-projective over $k$, then for any function $f:{\rm cl}(Y)\rightarrow {\rm cl}(X)$, there is at most one morphism $Y\rightarrow X$ of schemes inducing $f$.
\end{lemma}
\begin{proof}
The question is Zariski local on $Y$, so we reduce to the case when $Y$ is integral and affine. Then the statement follows from the classical fact that the category of varieties quasi-projective over $k$ is a full subcategory of the category of schemes over $k$.
\end{proof}
\begin{none}\label{2.4}
We review here several facts about intermediate Jacobians and Abel-Jacobi maps. Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$, and let $d\in [1,n]$ be an integer.
\begin{enumerate}[(1)]
\item For $x\in {\rm cl}(X)$, we have the Albanese map
\[{\rm Alb}_{X,x}:X\rightarrow {\rm Alb}(X)\]
mapping $x$ to $0$.
\item We have the intermediate Jacobian $J^d(X)$, which is a complex torus. See \cite[Definition 12.2]{Voi} for the definition.
\item We have the Abel-Jacobi map
\[AJ_X^d:CH_{hom}^d(X)\rightarrow {\rm cl}(J^d(X)).\]
See \cite[p.\ 294]{Voi} for the definition.
\item We have $J_a^d(X)$, which is an abelian subvariety of $J^d(X)$. See \cite[2.3.2]{Via} for the definition. We have the commutative diagram
\[\begin{tikzcd}
CH_{alg}^d(X)\arrow[d]\arrow[r,"AJ_X^d"]&{\rm cl}(J_a^d(X))\arrow[d]\\
CH_{hom}^d(X)\arrow[r,"AJ_X^d"]&{\rm cl}(J^d(X))
\end{tikzcd}\]
of abelian groups where the vertical arrows are the obvious inclusions, and the upper horizontal arrow is surjective. When $d=n$, we have that $J^n(X)=J_a^n(X)={\rm Alb}(X)$.
\item We denote by ${\rm Alb}(X)$ (resp.\ $J_a^d(X)$) (by abuse of notation) the element in $\dmeff$ associated to the abelian variety ${\rm Alb}(X)$ (resp.\ $J_a^d(X)$). See \cite{Org} for the definition. We also denote by ${\rm Alb}_{\bf Q}(X)$ (resp.\ $J_{a,{\bf Q}}^d(X)$) the corresponding object in $\dmeffl$.
\item Let $Y$ be a scheme smooth over ${\bf C}$. By \cite[\S 4]{Lie}, there is a homomorphism
\[AJ_{X,Y}^d:CH_{alg,X}^d(Y)\rightarrow {\rm Hom}_{{\rm Sch}_{\bf C}}(Y,J_a^d(X))\]
of abelian groups such that for $y\in {\rm cl}(Y)$ and $Z\in CH_{alg,X}^d(Y)$, $AJ_{X,Y}^d(Z)$ is the morphism $Y\rightarrow J_a^d(X)$ mapping $y$ to $AJ_X^d(i_y^*Z)$. Here, ${\rm Sch}_{\bf C}$ denotes the category of ${\bf C}$-schemes, and $i_y:y\times X\rightarrow Y\times X$ denotes the closed immersion. Note that by (\ref{2.5}), $AJ_{X,Y}^d$ is uniquely determined by the above information.
\item Let $Y$ be an $m$-dimensional connected scheme smooth and projective over ${\bf C}$, and let $Z\in CH_X^d(Y)$ be an element. Consider the homomorphism $\psi_Z:{\rm Alb}(Y)\rightarrow J_a^d(X)$ of abelian varieties induced by the morphism of the Hodge structures
\[H^{2m-1}(Y,{\bf Z})\rightarrow H^{2d-1}(X,{\bf Z})\]
induced by $Z$ (see \cite[Theorem 12.17]{Voi} for detail). Then by \cite[\S 3, \S 4]{Lie}, for $y\in {\rm cl}(Y)$ and $Z\in CH_{X}^d(Y)$, we have the commutative diagram
\begin{equation}\label{2.4.1}\begin{tikzcd}
Y\arrow[d,"{\rm Alb}_{Y,y}"']\arrow[rd,"AJ_{X,Y}^d(Z')"]\\
{\rm Alb}(Y)\arrow[r,"\psi_Z"']&J_a^d(X)
\end{tikzcd}\end{equation}
of schemes where $Z'=Z-Y\times i_y^*Z$. Here, $i_y:y\times X\rightarrow Y\times X$ denotes the closed immersion.
\end{enumerate}
\end{none}
\begin{prop}\label{2.2}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$, and let $d\in [1,n]$ be an integer. Then $AJ_{X,-}^d:CH_{alg,X}^d\rightarrow J_a^d(X)$ is a morphism of presheaves with transfer on $Sm/{\bf C}$.
\end{prop}
\begin{proof}
Let $Y$ and $Y'$ be schemes smooth over ${\bf C}$, and let $V$ be a finite correspondence from $Y'$ to $Y$. The statement is that the diagram
\begin{equation}\label{2.2.1}\begin{tikzcd}
CH_{alg,X}^d(Y)\arrow[r,"AJ_{X,Y}^d"]\arrow[d,"\alpha"]&{\rm Hom}_{{\rm Sch}_{\bf C}}(Y,J_a^d(X))\arrow[d,"\beta"]\\
CH_{alg,X}^d(Y')\arrow[r,"AJ_{X,Y'}^d"]&{\rm Hom}_{{\rm Sch}_{\bf C}}(Y',J_a^d(X))
\end{tikzcd}\end{equation}
of abelian groups commutes where ${\rm Sch}_{\bf C}$ denotes the category of ${\bf C}$-schemes, and $\alpha$ and $\beta$ denote the homomorphisms induced by $V$. To show this, we may assume that $Y$ and $Y'$ are connected and $V$ is an elementary correspondence.
Here, we will review the definition of $\beta$ given in \cite[3.1.2]{Org}. Let $f:Y'\rightarrow J_a^d(X)$ be a morphism of schemes. If $V$ has degree $r$, then we have the morphism $Y'\rightarrow Y^{(r)}$ induced by $V$, and we have the morphisms
\[Y'\rightarrow Y^{(r)}\stackrel{f^{(r)}}\rightarrow (J_a^d(X))^{(r)}\stackrel{\rm sum}\rightarrow J_a^d(X)\]
of schemes. Here, $Y^{(r)}$, $(J_a^d(X))^{(r)}$, and $f^{(r)}$ denote the symmetric powers. The composition is $\beta(f)$.
Let $Z\in CH_{alg,X}^d(Y)$ be an element, and let $y'\in {\rm cl}(Y')$ be a closed point. Then via $V$, $y'$ corresponds to $a_1y_1+\cdots +a_sy_s$ for some $a_1,\ldots,a_s\in {\bf N}^+$ and $y_1,\ldots,y_s\in {\rm cl}(Y)$. By definition, $AJ_{X,Y}^d(Z)$ maps $y\in {\rm cl}(Y)$ to $AJ_{X}^d(i_y^*Z)$ where $i_y:y\times X\rightarrow Y\times X$ denotes the closed immersion. Using the above description of $\beta$, we see that $\beta(AJ_{X,Y}^d(Z))$ maps $y'$ to
\[a_1AJ_X^d (i_{y_1}^*Z)+\cdots +a_s AJ_X^d(i_{y_s}^*Z).\]
Since $i_{y'}^*(\alpha(Z))=a_1i_{y_1}^*Z+\cdots +a_si_{y_s}^*Z$, we see that $AJ_{X,Y'}^d(\alpha (Z))$ maps $y'$ to
\[AJ_X^d(a_1i_{y_1}^*Z+\cdots +a_si_{y_s}^*Z)=a_1AJ_X^d (i_{y_1}^*Z)+\cdots +a_s AJ_X^d(i_{y_s}^*Z).\]
Thus $\beta(AJ_{X,Y}^d(Z))$ and $AJ_{X,Y'}^d(\alpha(Z))$ map $y'$ to the same closed point of $J_a^d(X)$. Then by (\ref{2.5}), (\ref{2.2.1}) commutes.
\end{proof}
\begin{none}\label{2.3}
By (\ref{2.2}), we can consider
\[AJ_{X,-}^d:{\rm CH}_{alg,X}^d\rightarrow J_a^d(X)\]
as a morphism in $\dmeff$. In (\ref{1.6.2}), we have the morphism
\[\underline{\rm Hom}({\bf L}^{n-d},M(X))\rightarrow {\rm NS}_{alg}^d(X)\]
in $\dmeff$. Let $K$ denote its cocone. By (\ref{1.6}), we have that $h_0(K)\cong{\rm CH}_{alg,X}^d$ and $h_i(K)=0$ for $i<0$.
Thus we have the morphisms
\[K\rightarrow h_0(K)\cong {\rm CH}_{alg,X}^d\stackrel{AJ_{X,-}^d}\longrightarrow J_{a}^d(X)\]
in $\dmeff$. Consider the induced morphism
\[\gamma:K_{\bf Q}\rightarrow J_{a,{\bf Q}}^d(X)\]
in $\dmeffl$ where $K_{\bf Q}$ denotes the image of $K$ in $\dmeffl$. Our next goal is to construct its section in $\dmeffl$.
In \cite[2.3.3]{Via}, it is shown that there is a curve $C$ smooth and projective over ${\bf C}$ (not necessarily connected) and an element $Z\in CH^d(C\times X)$ such that the induced homomorphism $\psi_Z:{\rm Alb}(C)\rightarrow J_a^d(X)$ of abelian varieties is surjective. From (\ref{2.4.1}), we have the commutative diagram
\[\begin{tikzcd}
M(C)\arrow[d]\arrow[r]&K_{\bf Q}\arrow[d,"\gamma"]\\
{\rm Alb}_{\bf Q}(C)\arrow[r,"\psi_{Z,{\bf Q}}"]&J_{a,{\bf Q}}^d(X)
\end{tikzcd}\]
in $\dmeffl$ where the left vertical arrow is induced by the Albanese map and the upper horizontal arrow is induced by $Z'=Z-C\times i_y^*Z$ for a closed point $y$ of $C$. Here, $i_y:y\times X\rightarrow C\times X$ denotes the closed immersion.
The category whose objects are abelian varieties and whose set of morphisms from $A$ to $B$ is ${\rm Hom}(A,B)\otimes {\bf Q}$ is semi-simple, so $\psi_{Z,{\bf Q}}$ has a section since $\psi_Z$ is surjective. Since ${\rm Alb}_{\bf Q}(C)$ is a direct summand of $M(C)$ in $\dmeffl$, the composition $M(C)\rightarrow J_{a,{\bf Q}}^d(X)$ has a section. Thus $\gamma$ has a section. This completes the proof of (\ref{0.1}(2)) since $K_{\bf Q}$ is a direct summand of $\underline{\rm Hom}({\bf L}^{n-d},M(X))$ by (\ref{0.1}(1)).
\end{none}
\section{Proof of (\ref{0.3})}
\begin{lemma}\label{3.1}
Let $M$ be an object of $\dmeffl$, and let $\alpha,\beta:M\rightarrow M$ be projectors. We put
\[F={\rm im}\,\alpha,\quad G={\rm im}\,\beta.\]
Assume that ${\rm Hom}_{\dmeffl}(G,F)=0$. Then $F\oplus G$ is a direct summand of $M$.
\end{lemma}
\begin{proof}
The assumption implies that $\alpha\beta=0$. Using this, we have that
\[\alpha(\beta-\beta\alpha)=\alpha\beta-\alpha\beta\alpha=0,\]
\[(\beta-\beta\alpha)\alpha=\beta\alpha-\beta\alpha^2=0,\]
\[(\beta-\beta\alpha)^2=\beta^2-\beta^2\alpha-\beta\alpha\beta+\beta\alpha\beta\alpha=\beta-\beta\alpha.\]
Thus $\beta-\beta\alpha$ is a projector orthogonal to $\alpha$. Since
\[\beta(\beta-\beta\alpha)\beta=\beta^3-\beta\alpha\beta=\beta,\]
\[(\beta-\beta\alpha)\beta(\beta-\beta\alpha)=\beta^3- \beta\alpha\beta^2-\beta^3\alpha+\beta\alpha\beta^2\alpha=\beta-\beta\alpha,\]
we have that ${\rm im}\,\beta\cong{\rm im}(\beta-\beta\alpha)$. Thus $\alpha+\beta-\beta\alpha$ is a projector whose image is isomorphic to $F\oplus G$.
\end{proof}
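Although the lemma is stated in $\dmeffl$, the argument is pure projector calculus, so it can be sanity-checked with matrices. The following sketch (illustrative only; the concrete $2\times 2$ matrices are our choice and play no role in the paper) verifies that when $\alpha\beta=0$, the element $\beta-\beta\alpha$ is a projector orthogonal to $\alpha$ with the same image as $\beta$, so $\alpha+\beta-\beta\alpha$ is again a projector.

```python
import numpy as np

# Two idempotents with alpha @ beta = 0, mimicking the consequence of the
# hypothesis Hom(G, F) = 0 of the lemma (F = im alpha, G = im beta).
alpha = np.array([[1.0, 0.0],
                  [0.0, 0.0]])   # projection onto the first coordinate
beta = np.array([[0.0, 0.0],
                 [3.0, 1.0]])    # an idempotent with alpha @ beta = 0

assert np.allclose(alpha @ alpha, alpha) and np.allclose(beta @ beta, beta)
assert np.allclose(alpha @ beta, 0)      # the orthogonality used in the proof

# beta' = beta - beta @ alpha is a projector orthogonal to alpha on both sides,
beta_p = beta - beta @ alpha
assert np.allclose(beta_p @ beta_p, beta_p)
assert np.allclose(alpha @ beta_p, 0) and np.allclose(beta_p @ alpha, 0)

# and it has the same image as beta: beta beta' beta = beta, beta' beta beta' = beta'.
assert np.allclose(beta @ beta_p @ beta, beta)
assert np.allclose(beta_p @ beta @ beta_p, beta_p)

# Hence gamma = alpha + beta' is a projector, realizing F + G as a direct summand.
gamma = alpha + beta_p
assert np.allclose(gamma @ gamma, gamma)
print("projector identities verified")
```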
\begin{none}\label{3.2}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over $\mathbf{C}$ with $n\geq 2$, and let $x$ be a closed point of $X$. Note that ${\bf 1}$ and ${\rm Alb}_{\bf Q}(X)$ are direct summands of $M(X)$. Then
\[{\bf L}^n\cong \underline{\rm Hom}({\bf 1},{\bf L}^n),\quad {\bf L}^{n-1}\otimes {\rm Pic}_{\bf Q}^0(X)\cong \underline{\rm Hom}({\rm Alb}_{\bf Q}(X),{\bf L}^n)\]
are direct summands of $\underline{\rm Hom}(M(X),{\bf L}^n)$, which is isomorphic to $M(X)$ by \cite[16.24]{MVW}. Thus using \cite[16.25]{MVW}, we see that ${\bf L}^2$ and ${\bf L}\otimes {\rm Pic}_{\bf Q}^0(X)$ are direct summands of $\underline{\rm Hom}({\bf L}^{n-2},M(X))$. We also have that
\[{\rm NS}_{hom,{\bf Q}}^2(X)\oplus {\rm Griff}_{\bf Q}^2(X)\cong {\rm NS}_{alg,{\bf Q}}^2(X)\]
in $\dmeffl$ by (\ref{1.9}). Thus to prove (\ref{0.2}), by (\ref{3.1}), it suffices to show that
\[{\rm Hom}_{\dmeffl}({\bf L}^2,{\rm Pic}_{\bf Q}^0(X)\otimes {\bf L})=0,\quad {\rm Hom}_{\dmeffl}({\bf L}^2,J_{a,{\bf Q}}^2(X))=0,\]
\[ {\rm Hom}_{\dmeffl}({\bf L}^2,{\rm NS}_{alg,{\bf Q}}^2(X))=0,\quad {\rm Hom}_{\dmeffl}({\rm Pic}_{\bf Q}^0(X)\otimes {\bf L},J_{a,{\bf Q}}^2(X))=0,\]
\[{\rm Hom}_{\dmeffl}({\rm Pic}_{\bf Q}^0(X)\otimes {\bf L},{\rm NS}_{alg,{\bf Q}}^2(X))=0,\quad {\rm Hom}_{\dmeffl}(J_{a,{\bf Q}}^2(X),{\rm NS}_{alg,{\bf Q}}^2(X))=0.\]
These follow from \cite[7.3.10]{KMP} for the following reasons.
\begin{enumerate}[(i)]
\item The motive ${\bf L}^2$ is isomorphic to $M_4(S_0)$ for some $S_0$.
\item The motive ${\rm Pic}_{\bf Q}^0(X)\otimes {\bf L}$ is isomorphic to $M_3(S_1)$ for some $S_1$.
\item The motive $J_{a,{\bf Q}}^2(X)$ is isomorphic to $M_1(S_2)$ for some $S_2$.
\item The motive ${\rm NS}_{alg,{\bf Q}}^2(X)$ is isomorphic to $M_0(S_3)$ for some $S_3$.
\end{enumerate}
Here, $S_0$, $S_1$, $S_2$, and $S_3$ are (not necessarily connected) surfaces smooth and projective over ${\bf C}$. This completes the proof of (\ref{0.2}).
\end{none}
\section{Conjectures}
\begin{df}\label{4.5}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$, and let $d\in [1,n]$ be an integer. Consider the homomorphism
\[AJ_{X,{\bf Q}}^d:CH_{hom,{\bf Q}}^d(X)\rightarrow {\rm cl}(J_a^d(X))\otimes_{\bf Z}{\bf Q}\]
of ${\bf Q}$-vector spaces induced by $AJ_X^d$. We put
\[CH_{Jac,{\bf Q}}^d(X)={\rm ker}\,AJ_{X,{\bf Q}}^d.\]
\end{df}
\begin{none}\label{4.1}
Here, we give two conjectures other than (\ref{0.5}).
\end{none}
\begin{conj}\label{4.3}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$ with $n\geq 2$. Then
\[CH_{Jac,{\bf Q}}^2(X)\subset CH_{alg,{\bf Q}}^2(X).\]
\end{conj}
\begin{none}\label{4.4}
Let us conjecturally prove (\ref{4.3}). The statement is that any element in the kernel of \[AJ_{X,{\bf Q}}^2:CH_{hom,{\bf Q}}^2(X)\rightarrow {\rm cl}(J_a^2(X))\otimes_{\bf Z}{\bf Q}\]
is algebraically equivalent to $0$. Assume that $M(X)$ has a Chow-K\"unneth decomposition $M_0(X)\oplus \cdots \oplus M_{2n}(X)$ in $\dmeffl$. The conjectural Bloch-Beilinson filtration on $CH^2(X)$ expects that
\begin{equation}\label{4.4.1}
{\rm ker}\,AJ_{X,{\bf Q}}^2 \cong {\rm Hom}_{\dmeffl}({\bf L}^{n-2},M_{2n-2}(X)), \quad 0={\rm Hom}_{\dmeffl}({\bf L}^{n-2},M_r(X))
\end{equation}
for $r<2n-4$ and $r>2n-1$.
If some nonzero element in the kernel of $AJ_{X,{\bf Q}}^2$ is not algebraically equivalent to $0$, then it gives a direct summand ${\bf 1}$ of ${\rm NS}_{alg,{\bf Q}}^2(X)$, which is also a direct summand of $\underline{\rm Hom}({\bf L}^{n-2},M(X))$ in $\dmeffl$ by (\ref{0.1}(1)). The induced morphism
\[{\bf 1}\rightarrow \underline{\rm Hom}({\bf L}^{n-2},M_0(X)\oplus M_1(X)\oplus \cdots \oplus M_{2n-3}(X)\oplus M_{2n-1}(X)\oplus M_{2n}(X))\]
in $\dmeffl$ is $0$ by (\ref{4.4.1}) and the assumption that the element is in the kernel of $AJ_{X,{\bf Q}}^2$. Thus we see that ${\bf 1}$ is a direct summand of $\underline{\rm Hom}({\bf L}^{n-2},M_{2n-2}(X))$. Conjecturally, we have that $M_{2n-2}(X)\cong {\bf L}^{n-2}\otimes M_2(X)$ in $\dmeffl$. Then by the cancellation law \cite[16.25]{MVW}, we see that ${\bf 1}$ is a direct summand of
\[\underline{\rm Hom}({\bf L}^{n-2},M_{2n-2}(X))\cong \underline{\rm Hom}({\bf L}^{n-2},{\bf L}^{n-2}\otimes M_2(X))\cong \underline{\rm Hom}({\bf 1},M_2(X))\cong M_2(X).\]
In particular, we have a nonzero morphism $M_2(X)\rightarrow {\bf 1}$ in $\dmeff$. This contradicts the conjecture \cite[5.8]{Jan}.
\end{none}
\begin{conj}\label{4.2}
Let $X$ be an $n$-dimensional connected scheme smooth and projective over ${\bf C}$, and let
\[M(X)=M_0(X)\oplus \cdots\oplus M_{2n}(X)\]
be a conjectural Chow-K\"unneth decomposition. Then
\[\underline{\rm Hom}({\bf L}^{n-d},M_{2n-2d}(X))={\rm NS}_{hom,{\bf Q}}^d(X),\]
\[\underline{\rm Hom}({\bf L}^{n-d},M_{2n-2d+1}(X))=(CH_{hom,\bf Q}^d(X)/(CH_{Jac,{\bf Q}}^d(X)+CH_{alg,{\bf Q}}^d(X)))\oplus J_{a,{\bf Q}}^d(X)\]
for any integer $d\in [1,n]$.
\end{conj}
\begin{none}\label{4.6}
The meaning of the second equation is that the motive $\underline{\rm Hom}({\bf L}^{n-d},M_{2n-2d+1}(X))$ is the direct sum of $J_{a,{\bf Q}}^d(X)$ and the image of the homomorphism
\[CH_{hom,{\bf Q}}^d(X)/CH_{alg,{\bf Q}}^d(X)\rightarrow {\rm cl}(J^d(X))/{\rm cl}(J_a^d(X))\]
of abelian groups. In particular, this implies that to study the motive, we do not need the whole complex torus $J^d(X)$.
\end{none}
This paper concerns the spectral measure of the Almost Mathieu operator:
\begin{equation*}
(H_{\lambda,\alpha,\theta} u)_n= u_{n+1}+u_{n-1} +2\lambda \cos 2
\pi (n\alpha + \theta) u_n,
\end{equation*}
where $\theta\in \mathbb{R}$ is the phase, $\alpha\in {\mathbb R}\backslash
{\mathbb Q}$ is the frequency and $\lambda\in {\mathbb R}$ is the coupling constant,
which has been extensively studied because of its strong backgrounds in
physics and also because it provides interesting examples in spectral theory
\cite{L1}. We will find the exact transition point from singular
continuous spectrum to pure point spectrum of the almost Mathieu operator, thus solving Jitomirskaya's 1995 conjecture \cite{Ji95} (see also Problem 8 in \cite{J07}).
More precisely, let $\frac{p_n}{q_n}$ be the $n$-th convergent of $\alpha,$ and
define
\begin{equation}\label{defbeta}\beta(\alpha):=\limsup_{n\rightarrow \infty}\frac{\ln
q_{n+1}}{q_n}.\end{equation} Our main results are the following:
\begin{Theorem}\label{main}
Let $\alpha\in {\mathbb R}\backslash {\mathbb Q}$ with $0<\beta(\alpha)<\infty$. Then
we have the following:
\begin{enumerate}
\item If $|\lambda|<1,$ then $H_{\lambda,\alpha,\theta}$ has
purely absolutely continuous spectrum for all $\theta$.
\item If $1\leq |\lambda|<e^\beta,$ then $H_{\lambda,\alpha,\theta}$ has
purely singular continuous spectrum for all $\theta$.
\item If $|\lambda|>e^\beta,$ then $H_{\lambda,\alpha,\theta}$ has
pure point spectrum with exponentially decaying eigenfunctions
for a.e. $\theta$.
\end{enumerate}
\end{Theorem}
\begin{Remark}
Part (1) was proved by Avila \cite{Aab}; we state it here just for
completeness.\end{Remark}
\begin{Remark} The cases $\beta=0, \infty$ have been settled in previous works \cite{Aab,AJ05}. Together with Theorem \ref{main},
this gives the sharp phase transition scenario between the three types of spectral measure. Moreover, the type of the spectral measure
is known for all $(\lambda, \beta)$ except on the line $\lambda=e^\beta$. See Figure 1 and Figure 2 below.
\end{Remark}
\begin{figure}[th]
\centering
\begin{tikzpicture}[scale = 2.5,x={(1cm,0cm)},y={(0cm,0.5cm)}]
\tikzset{mypoints/.style={fill=white,draw=black,thick}}
\coordinate (origin) at (0,0);
\coordinate (beta) at (2.5,0);
\coordinate (lambda) at (0,5);
\draw [draw,->, thick] (origin) -- (beta);
\draw [draw,->, thick] (origin) -- (lambda);
\coordinate (ac_1) at (0,1);
\coordinate (ac_2) at (2,1);
\coordinate (ac_3) at (2,0);
\draw [thick,dashed] (ac_1) -- (ac_2);
\fill[gray!60,opacity=0.6] (origin)--(ac_1)--(ac_2)--(ac_3)--cycle;
\coordinate[label = left: {$\lambda = 1$}] (A) at (0*0.17,{(e)^(0*0.17)});
\coordinate (B) at (0.5*0.17,{(e)^(0.5*0.17)});
\coordinate (C) at (1*0.17,{(e)^(1*0.17)});
\coordinate (D) at (1.5*0.17,{(e)^(1.5*0.17)});
\coordinate (E) at (2*0.17,{(e)^(2*0.17)});
\coordinate (F) at (2.5*0.17,{(e)^(2.5*0.17)});
\coordinate (G) at (3*0.17,{(e)^(3*0.17)});
\coordinate (H) at (3.5*0.17,{(e)^(3.5*0.17)});
\coordinate (I) at (3.9*0.17,{(e)^(3.9*0.17)});
\coordinate (J) at (4.35*0.17,{(e)^(4.35*0.17)});
\coordinate (K) at (4.8*0.17,{(e)^(4.8*0.17)});
\coordinate (L) at (5.2*0.17,{(e)^(5.2*0.17)});
\coordinate (M) at (5.6*0.17,{(e)^(5.6*0.17)});
\coordinate (N) at (5.95*0.17,{(e)^(5.95*0.17)});
\coordinate (O) at (6.3*0.17,{(e)^(6.3*0.17)});
\coordinate (P) at (6.65*0.17,{(e)^(6.65*0.17)});
\coordinate (Q) at (7*0.17,{(e)^(7*0.17)});
\coordinate (R) at (7.3*0.17,{(e)^(7.3*0.17)});
\coordinate (S) at (7.6*0.17,{(e)^(7.6*0.17)});
\coordinate (T) at (7.9*0.17,{(e)^(7.9*0.17)});
\coordinate (U) at (8.2*0.17,{(e)^(8.2*0.17)});
\coordinate[label = right: {$e^{\beta}$}] (V) at (8.45*0.17,{(e)^(8.45*0.17)});
\coordinate (OO) at (0,{(e)^(8.45*0.17)});
\fill[gray!60,opacity=0.6] (A)--(B)--(C)--(D)--(E)--(F)--(G)--(H)--(L)--(M)--(N)--(O)--(P)--(Q)--(R)--(S)--(T)--(U)--(V)--(OO)--cycle;
\foreach \p in {A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V}
\fill[mypoints] (\p) circle (0.3pt);
\draw(2.7,0) node{$\beta(\alpha)$};
\draw(0,5.25) node{$\lambda$};
\draw(1,0.5) node{$ac$};
\draw(1.5,1.8) node{$sc$};
\draw(0.5,3) node{$pp$};
\end{tikzpicture}
\caption{Phase transition diagram}
\label{fig:1}
\end{figure}
\begin{figure}[th]
\centering
\begin{tikzpicture}[scale = 2.5,x={(1cm,0cm)},y={(0cm,0.5cm)}]
\coordinate (origin) at (0,0);
\coordinate (A) at (1,0);
\coordinate (B) at (2.5,0);
\coordinate[label = right: {$\lambda$}] (C) at (4.2,0);
\draw(0.5,0.2) node{$ac$};
\draw(1.75,0.2) node{$sc$};
\draw(3.25,0.2) node{$pp$};
\draw [draw,->, thick] (origin) -- (C);
\foreach \x/\xtext in {0,1}
\draw[xshift=\x cm] (0pt,1pt) -- (0pt,-1pt) node[below,fill=white]
{$\xtext$};
\draw[xshift=2.5 cm] (0pt,1pt) -- (0pt,-1pt) node[below,fill=white]
{$e^{\beta}$};
\end{tikzpicture}
\caption{Phase transition for fixed $\alpha$.}
\label{fig:2}
\end{figure}
\begin{Remark}
Theorem \ref{main}(3), also called Anderson localization (AL), is optimal in the sense
that the result cannot hold for a dense $G_{\delta}$ set of $\theta$ \cite{JS}. The
arithmetic properties of $\theta$ influence the spectral measure. \end{Remark}
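None of the arguments in this paper is numerical, but the operator itself is easy to experiment with. The following sketch (purely illustrative; the window size, coupling, frequency, and phase are our choices) assembles a finite truncation of $H_{\lambda,\alpha,\theta}$ and computes its eigenvalues, which must lie within the a priori bound $|E|\le 2+2|\lambda|$.

```python
import numpy as np

def almost_mathieu(N, lam, alpha, theta):
    """N x N truncation of (H u)_n = u_{n+1} + u_{n-1}
    + 2*lam*cos(2*pi*(n*alpha + theta)) u_n, on the lattice sites
    n = 0, ..., N-1 with Dirichlet conditions at the window's ends."""
    n = np.arange(N)
    diag = 2.0 * lam * np.cos(2.0 * np.pi * (n * alpha + theta))
    off = np.ones(N - 1)
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

lam = 2.0
alpha = (np.sqrt(5) - 1) / 2      # golden mean, so beta(alpha) = 0
H = almost_mathieu(512, lam, alpha, theta=0.3)
evals = np.linalg.eigvalsh(H)     # H is real symmetric

# By the triangle inequality (or Gershgorin), every eigenvalue
# satisfies |E| <= 2 + 2|lam|.
assert np.all(np.abs(evals) <= 2 + 2 * abs(lam) + 1e-9)
print(f"{evals.size} eigenvalues in [{evals.min():.3f}, {evals.max():.3f}]")
```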
Now we
briefly recall the history of this problem. By symmetry, we just need to consider the case $\lambda>0$. In 1980, Aubry-Andr\'e
\cite{AA80} conjectured that the spectral measure of
$H_{\lambda,\alpha,\theta}$ depends on $\lambda$ in the following
way:
\begin{enumerate}
\item If $\lambda<1,$ then $H_{\lambda,\alpha,\theta}$ has
purely absolutely continuous spectrum for all $\alpha\in
{\mathbb R}\backslash {\mathbb Q}$, and all $\theta\in{\mathbb R}$.
\item If $\lambda >1,$ then $H_{\lambda,\alpha,\theta}$ has
pure point spectrum for all $\alpha\in {\mathbb R}\backslash {\mathbb Q}$, and all
$\theta\in {\mathbb R}$.
\end{enumerate}
However, Aubry and Andr\'e overlooked the role of the arithmetic
property of $\alpha$. Avron-Simon \cite{AS} soon found that by
Gordon's lemma \cite{G}, $H_{\lambda,\alpha,\theta}$ has no
eigenvalues for any $\lambda\in{\mathbb R}$, $\theta\in{\mathbb R}$ if $\beta(\alpha)=\infty$. Since then,
people pondered how the arithmetic property of $\alpha$ influences
the spectral type and under which condition Aubry-Andr\'e's
conjecture \cite{AA80} is true.
When $\alpha$ is Diophantine (i.e. there exist $\gamma,\tau>0$ such that
$\|k\alpha\|_{{\mathbb T}} \geq
\frac{\gamma^{-1}}{|k|^{\tau}},$ for all $0
\neq k \in {\mathbb Z}$), and $\lambda$ is large enough, the operator has
pure point spectrum \cite{E97,FSW,Sin}, and when $\lambda$ is
small enough, the operator has absolutely continuous spectrum
\cite{CD,DS75,E92}. The common feature of the above results is that
they both rely on KAM-type arguments; thus the required largeness or
smallness of $\lambda$ depends on the Diophantine constants
$\gamma,\tau$. We therefore call such results perturbative.
A non-perturbative approach to the localization problem was
developed by Jitomirskaya; building on partial advances \cite{J94,J95}, she
finally proved that if $\alpha$ is Diophantine,
$H_{\lambda,\alpha,\theta}$ has AL for all $\lambda>1$ and a.e. $\theta\in{\mathbb R}$. It
then follows from the strong version of Aubry duality \cite{GJLS} that
$H_{\lambda^{-1},\alpha,\theta}$ has purely absolutely continuous
spectrum for a.e. $\theta\in{\mathbb R}$. Therefore, Jitomirskaya \cite{J99}
proved Aubry-Andr\'e's conjecture in the measure setting, i.e. the
conjecture holds for almost every $\alpha\in {\mathbb R}\backslash {\mathbb Q},$
$\theta\in \mathbb{R}$.
Before Jitomirskaya's result, Last \cite{L93}, Gesztesy-Simon \cite{GS}, and
Last-Simon \cite{LS}
had already shown that $H_{\lambda,\alpha,\theta}$ has an absolutely
continuous component for every $\lambda<1$, $\alpha\in {\mathbb R}\backslash
{\mathbb Q},$ $\theta\in \mathbb{R}$, so the conjecture in the subcritical regime
still had some hope to be true, as was also conjectured by Simon
\cite{Si00}. Recently, Avila-Jitomirskaya \cite{AJ08} showed that if
$\alpha$ is Diophantine, then $H_{\lambda,\alpha,\theta}$ is purely
absolutely continuous for every $\theta\in \mathbb{R}$. For
$\beta>0$, Avila-Damanik \cite{AD} proved conjecture (1)
for almost every $\theta$. The complete answer to
Aubry-Andr\'e's conjecture (1) was provided by Avila \cite{Aab}. One thus sees
that $\lambda=1$ is the phase transition point from absolutely continuous spectrum to singular spectrum.
The remaining issue is Aubry-Andr\'e's conjecture (2) when
$\alpha$ is Liouvillean. It was already known that the spectral measure is pure point for Diophantine $\alpha$ and almost every phase, while it is purely singular continuous for $\beta(\alpha)=\infty$ and every phase. So there must be a phase transition as $\beta(\alpha)$ goes from zero to infinity. In 1995, Jitomirskaya \cite{Ji95}
modified the second part of the Aubry-Andr\'e conjecture and conjectured the following:\begin{enumerate}
\item If $1<\lambda<e^\beta$, the spectrum is purely singular continuous for all $\theta$.
\item If $\lambda>e^\beta$, the spectrum is pure
point with exponentially decaying eigenfunctions for a.e. $\theta$.
\end{enumerate}
Thus $\lambda=e^\beta$ is conjectured to be
the exact phase transition point from continuous spectrum to pure point spectrum. There are some partial results on Jitomirskaya's conjecture. By Gordon's lemma \cite{G} and the
exact formula for the Lyapunov exponent \cite{BJ}, one can prove that
$H_{\lambda, \alpha,\theta}$ has purely singular continuous spectrum
for any $\theta\in{\mathbb R}$ if $1<\lambda<e^{\frac{\beta}{2}}$, see also
Remark \ref{gordon} for more discussions. For the pure point part,
Avila-Jitomirskaya \cite{AJ05} showed that if
$\lambda>e^{\frac{16\beta}{9}}$, then $H_{\lambda,\alpha,\theta}$
has AL for a.e. $\theta\in{\mathbb R}$. You-Zhou \cite{YZ} proved that if $\lambda>Ce^{\beta}$ with $C$ large enough \footnote{If one checks the proof carefully, it already gives $C=1$.}, then the eigenvalues of $H_{\lambda,\alpha,\theta}$ with exponentially decaying eigenfunctions are dense in the spectrum. Readers can find more discussion of these two results in Section \ref{ander}.
The main contribution of this paper is to give a full proof of Jitomirskaya's conjecture.
We remark that the spectral type at the transition points $\lambda=1$ and $\lambda=
e^\beta$ has not been completely understood so far. Partial results include the
following: in case $\lambda=1$, since the Lebesgue measure of the
spectrum is zero for every $\alpha\in {\mathbb R}\backslash {\mathbb Q}$
\cite{AK06,L94}, by Aubry duality \cite{GJLS}, we know $H_{\lambda,
\alpha,\theta}$ is purely singular continuous for a.e.
$\theta\in{\mathbb R}$. In fact, Avila \cite{App} has proved more: if
$\theta$ is not rational w.r.t $\alpha$, then $H_{\lambda,
\alpha,\theta}$ is purely singular continuous. We remark that, by
Gordon's lemma \cite{G}, if $\beta > 0$, then $H_{\lambda,
\alpha,\theta}$ is purely singular continuous for $\lambda=1$ and
every $\theta\in{\mathbb R}$, we include this in Theorem \ref{main}(2).
Excluding or proving the existence of point spectrum in the case that
$\alpha $ is Diophantine is one of the major open problems for
the critical almost Mathieu operator. For the second transition
point $\lambda=e^\beta$, one knows almost nothing but purely
singular continuous spectrum for a $G_{\delta}$ set of $\theta$
\cite{JS}. The spectral type possibly depends on the finer
properties of approximation of $\alpha$, as conjectured by
Jitomirskaya in \cite{J07}.
\section{preliminaries}
For a bounded
analytic (possibly matrix-valued) function $F$ defined on $ \{ \theta : | \Im \theta |< h \}$, let
$
\|F\|_h= \sup_{ | \Im \theta |< h } \| F(\theta)\| $ and denote by $C^\omega_{h}({\mathbb T},*)$ the
set of all these $*$-valued functions ($*$ will usually denote ${\mathbb R}$,
$SL(2,{\mathbb R})$).
\subsection{Continued Fraction Expansion}\label{sec:2.1}
Let $\alpha \in (0,1)$ be irrational. Define $ a_0=0,
\alpha_{0}=\alpha,$ and inductively for $k\geq 1$,
$$a_k=[\alpha_{k-1}^{-1}],\qquad \alpha_k=\alpha_{k-1}^{-1}-a_k=G(\alpha_{k-1})=\{{1\over \alpha_{k-1}}\}.$$
Let $p_0=0, p_1=1, q_0=1, q_1=a_1$; then we define inductively
$p_k=a_kp_{k-1}+p_{k-2}$, $q_k=a_kq_{k-1}+q_{k-2}.$
The sequence $(q_n)$ gives the denominators of the best rational
approximations of $\alpha$, since we have \begin{equation} \forall 1
\leq k < q_n,\quad \|k\alpha\|_{{\mathbb T}} \geq \|q_{n-1}\alpha\|_{{\mathbb T}},
\end{equation}
and
\begin{equation}
\|q_n \alpha \|_{{\mathbb T}} \leq {1 \over q_{n+1}}.
\end{equation}
Note that $(\ref{defbeta})$ is equivalent to
\begin{equation}\label{equibeta}
\limsup_{k\rightarrow \infty} \frac{1}{|k|} \ln \frac{1}{ \|k\alpha\|_{{\mathbb T}}}=\beta.
\end{equation}
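As a concrete illustration (our own sketch, not part of the original argument; the golden mean is chosen only as an example), the recursions above are directly computable, and one can check the bound $\|q_n\alpha\|_{{\mathbb T}}\le 1/q_{n+1}$ numerically:

```python
import math

def convergents(alpha, kmax):
    """Compute p_k, q_k via a_k = [1/alpha_{k-1}], alpha_k = {1/alpha_{k-1}},
    p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}."""
    ps, qs = [0, 1], [1]          # p_0, p_1 and q_0
    x = alpha
    a1 = int(1.0 / x)             # a_1
    qs.append(a1)                 # q_1 = a_1
    x = 1.0 / x - a1              # alpha_1
    for _ in range(2, kmax + 1):
        a = int(1.0 / x)
        x = 1.0 / x - a
        ps.append(a * ps[-1] + ps[-2])
        qs.append(a * qs[-1] + qs[-2])
    return ps, qs

def dist_T(t):
    """||t||_T: distance from t to the nearest integer."""
    t = t % 1.0
    return min(t, 1.0 - t)

alpha = (math.sqrt(5) - 1) / 2    # golden mean, a_k = 1 for all k >= 1
ps, qs = convergents(alpha, 12)
# The q_n are Fibonacci numbers, and ||q_n alpha|| <= 1/q_{n+1}:
for n in range(1, 11):
    assert dist_T(qs[n] * alpha) <= 1.0 / qs[n + 1] + 1e-9
print(qs[:8])   # [1, 1, 2, 3, 5, 8, 13, 21]
```

For the golden mean every $a_k=1$, so the denominators grow as slowly as possible and $\beta(\alpha)=0$ in $(\ref{equibeta})$; Liouville-type frequencies with $\beta(\alpha)>0$ arise when the partial quotients $a_k$ grow sufficiently fast.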
\subsection{Cocycles} A cocycle $(\alpha, A)\in {\mathbb R}\backslash
{\mathbb Q}\times C^\omega({\mathbb T}, SL(2,{\mathbb R}))$ is a linear skew product:
\begin{eqnarray*}\label{cocycle}
(\alpha,A):&{\mathbb T}^{1} \times {\mathbb R}^2 \to {\mathbb T}^{1} \times {\mathbb R}^2\\
\nonumber &(\theta,v) \mapsto (\theta+\alpha,A(\theta) \cdot v),
\end{eqnarray*}
and for $n \geq 1$ the products are defined as
$$A_n(\theta)=A(\theta+(n-1)\alpha) \cdots
A(\theta),$$ and $A_{-n}(\theta)=A_n(\theta-n\alpha)^{-1}.$ For this kind of cocycles, the Lyapunov exponent
$$ L(\alpha, A)=\lim_{n\rightarrow \infty} \frac {1} {n}
\int \ln \|A_n(\theta)\| d\theta,
$$
is well defined.
Assume now $A\in C^0({\mathbb T}, SL(2,{\mathbb R}))$ is homotopic to the
identity. Then there exist $\psi:{\mathbb T} \times {\mathbb T} \to {\mathbb R}$ and $u:{\mathbb T}
\times
{\mathbb T} \to {\mathbb R}^+$ such that $$ A(x) \cdot \left (\begin{matrix} \cos 2 \pi y \\
\sin 2 \pi y \end{matrix} \right )=u(x,y) \left (\begin{matrix} \cos 2 \pi (y+\psi(x,y))
\\ \sin 2 \pi (y+\psi(x,y)) \end{matrix} \right ). $$ The function $\psi$ is
called a {\it lift} of $A$. Let $\mu$ be any probability measure on
${\mathbb T} \times {\mathbb T}$ which is invariant by the continuous map $T:(x,y)
\mapsto (x+\alpha,y+\psi(x,y))$, projecting over Lebesgue measure on
the first coordinate (for instance, take $\mu$ as any accumulation
point of $\frac {1} {n} \sum_{k=0}^{n-1} T_*^k \nu$ where $\nu$ is
Lebesgue measure on ${\mathbb T} \times {\mathbb T}$). Then the number $$
\mathrm{rot}_f(\alpha,A)=\int \psi d\mu \mod {\mathbb Z} $$ does not depend on the
choices of $\psi$ and $\mu$, and is called the {\it fibered rotation
number} of $(\alpha,A)$, see \cite {JM82} and \cite {H}.
Let $$R_{\phi}=\left (\begin{matrix} \cos 2\pi \phi&-\sin 2\pi \phi\\
\sin 2 \pi \phi &\cos 2\pi \phi \end{matrix} \right ),$$ then any $A\in
C^0({\mathbb T}, SL(2,{\mathbb R}))$ is homotopic to $\theta \mapsto R_{n\theta}$ for
some $n\in{\mathbb Z}$; we call $n$ the degree of $A$ and denote $\deg A =n$.
The fibered rotation number is invariant under conjugation in the
following sense: For cocycles $(\alpha,A_1)$ and $(\alpha,A_2)$, if there exists $B \in
C^0({\mathbb T},$ $ PSL(2,{\mathbb R}))$, such that
$B(\theta+\alpha)^{-1}A_1(\theta)B(\theta)=A_2(\theta),$ then we say $(\alpha,A_1)$ is conjugated to $(\alpha,A_2)$. If $B$ has degree $n$, then we have
\begin{equation}\label{rot-conj}
\mathrm{rot}_f(\alpha, A_1)= \mathrm{rot}_f(\alpha, A_2)+\frac{1}{2} n \alpha.
\end{equation}
If furthermore $B \in C^0({\mathbb T},$ $SL(2,{\mathbb R}))$ with $\deg B=n \in{\mathbb Z}$, then we
have
\begin{equation}\label{rot-conj'}
\mathrm{rot}_f(\alpha, A_1)= \mathrm{rot}_f(\alpha, A_2)+ n \alpha.
\end{equation}
The cocycle $(\alpha,A)$ is $C^\omega$ reducible, if it can be $C^\omega$ conjugated to a constant cocycle.
The cocycle $(\alpha,A)$ is called $C^\omega$ rotations reducible if there exists $B \in
C^\omega({\mathbb T},SL(2,{\mathbb R}))$ such that
$B(\theta+\alpha)^{-1}A(\theta)B(\theta)\in SO(2,{\mathbb R}).$
The crucial reducibility result for us is the following:
\begin{Theorem}\cite{AFK,HoY}\label{hy1} Let
$(\alpha, A) \in {\mathbb R}\backslash {\mathbb Q} \times C^\omega_{h}({\mathbb T},SL(2,{\mathbb R}))$
with $h>\tilde{h}>0$ and $R\in SL(2,{\mathbb R})$. For every $\tau>1,$
$\gamma>0,$ if $\mathrm{rot}_f(\alpha, A)\in DC_\alpha(\tau,\gamma),$ where
$$DC_\alpha(\tau,\gamma)=\{ \phi\in {\mathbb R}\,|\, \|2\phi-m\alpha\|_{{\mathbb R}/{\mathbb Z}}\geq \frac{\gamma}{(|m|+1)^\tau},\ \forall m\in{\mathbb Z}\},$$
then there exist $T=T(\tau),$ $\kappa=\kappa(\tau)$, such that if $$\|
A(\theta)-R\|_h<T(\tau)\gamma^\kappa(h-\tilde{h})^\kappa,$$ then there exist $B \in C^\omega_{\tilde{h}}({\mathbb T},SL(2,{\mathbb R}))$,
$\varphi \in C^\omega_{\tilde{h}}({\mathbb T},{\mathbb R})$,
such that
$$B(\theta+\alpha) A(\theta) B(\theta)^{-1}= R_{\varphi(\theta)},$$
with estimates
$\|B-\operatorname{id}\|_{\tilde{h}}\leq \|
A(\theta)-R\|_h^{\frac{1}{2}}$,
$\|\varphi(\theta)-\hat\varphi(0)\|_{\widetilde{h}}\leq 2\|
A(\theta)-R\|_h.$
\end{Theorem}
\subsection{Almost Mathieu cocycle}
Note that a sequence $(u_n)_{n \in {\mathbb Z}}$ is a formal solution of the
eigenvalue equation $H_{\lambda,\alpha,\theta} u=Eu$ if and only if
it satisfies $\begin{pmatrix}
u_{n+1}\\ u_n\end{pmatrix}=S_{E}^{\lambda}(\theta+n\alpha) \cdot
\begin{pmatrix} u_n\\ u_{n-1} \end{pmatrix},$ where
\begin{eqnarray*}
S_{E}^{\lambda}(\theta)=\left( \begin{array}{ccc}
E-2\lambda\cos2\pi(\theta) & -1\cr
1 & 0\end{array} \right)\in SL(2,\mathbb{R}).
\end{eqnarray*}
We call $(\alpha,S_{E}^{\lambda} )$ an almost Mathieu cocycle.
Denote the spectrum of $H_{\lambda,\alpha,\theta}$ by
$\Sigma_{\lambda,\alpha}$, which is independent of $\theta$ when
$\alpha\in {\mathbb R}\backslash {\mathbb Q}$. If $E \in \Sigma_{\lambda,\alpha}$,
then the Lyapunov exponent of almost Mathieu cocycle can be computed
directly.
\begin{Theorem}\cite{BJ}\label{bj-formula} If $\alpha \in {\mathbb R}
\setminus {\mathbb Q}$, $E \in \Sigma_{\lambda,\alpha}$, then we have
$$L(\alpha,S_E^{\lambda})=\max
\{0,\ln |\lambda|\}.$$
\end{Theorem}
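As an illustrative numerical sanity check (our own sketch, not part of the original argument; the parameter choices $\lambda=4$, $E=0$, $\theta=0.123$, $n=5000$ are arbitrary), one can approximate the Lyapunov exponent by iterating the transfer matrices $S_E^\lambda(\theta+k\alpha)$, renormalizing at each step to avoid floating-point overflow. By Herman's classical estimate the exponent is bounded below by $\ln\lambda$ for every $E$ when $\lambda\geq 1$, and for $E$ in the spectrum Theorem \ref{bj-formula} gives exactly $\ln\lambda$:

```python
import math

def lyapunov_estimate(lmbda, E, alpha, theta, n):
    """Estimate (1/n) log ||A_n(theta)|| for the almost Mathieu cocycle
    A = S_E^lambda, renormalizing the running product to avoid overflow."""
    # current product A_k(theta), stored entrywise as [[a, b], [c, d]]
    a, b, c, d = 1.0, 0.0, 0.0, 1.0
    log_norm = 0.0
    for k in range(n):
        t = 2.0 * lmbda * math.cos(2.0 * math.pi * (theta + k * alpha))
        # left-multiply by S_E^lambda(theta + k*alpha) = [[E - t, -1], [1, 0]]
        a, b, c, d = (E - t) * a - c, (E - t) * b - d, a, b
        s = max(abs(a), abs(b), abs(c), abs(d))
        if s > 1e100:
            a, b, c, d = a / s, b / s, c / s, d / s
            log_norm += math.log(s)
    s = max(abs(a), abs(b), abs(c), abs(d))
    return (log_norm + math.log(s)) / n

alpha = (math.sqrt(5) - 1) / 2   # golden mean frequency
est = lyapunov_estimate(lmbda=4.0, E=0.0, alpha=alpha, theta=0.123, n=5000)
print(round(est, 3))   # should be close to ln(4) ~ 1.386
```

Renormalizing by the largest entry keeps the computation within floating-point range; the accumulated logarithms recover $\frac{1}{n}\ln\|A_n(\theta)\|$ up to a bounded error that vanishes after division by $n$.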
\subsection{Global theory of one-frequency quasi-periodic $SL(2,{\mathbb R})$ cocycles}
We give a short review of Avila's global theory of one-frequency quasi-periodic $SL(2,{\mathbb R})$ cocycles
\cite{Aglobal}. Suppose that $A\in$ $C^\omega({\mathbb R}/{\mathbb Z},{\mathrm{SL}}(2,{\mathbb R}))$ admits a
holomorphic extension to $|\Im \theta|<\delta$, then for
$|\epsilon|<\delta$ we can define $A_\epsilon \in
C^\omega({\mathbb R}/{\mathbb Z},{\mathrm{SL}}(2,{\mathbb C}))$ by $A_\epsilon(\theta)=A(\theta+i \epsilon)$.
The cocycles which are not uniformly hyperbolic are classified
into three regimes: subcritical, critical, and supercritical. In
particular, $(\alpha, A)$ is said to be
subcritical if there exists $\delta>0$ such that
$L(\alpha,A_{\epsilon})=0$ for $|\epsilon|<\delta.$
The heart of Avila's global theory is his \textquotedblleft Almost Reducibility Conjecture\textquotedblright (ARC), which says that subcriticality
implies almost reducibility.
Recall that the cocycle $(\alpha,A)$ is called almost reducible if
there exist $h_*>0$ and a sequence $B_n \in C^\omega_{h_*}({\mathbb T},PSL(2,{\mathbb R}))$ such that $
B_n(\theta+\alpha)^{-1}A(\theta)B_n(\theta)$ converges to a constant
uniformly in $|\Im \theta|<h_*.$
For our purpose, we need this \textit{strong} version of almost
reducibility, where $h_*$ can be chosen to be $\delta- \epsilon$ with $\epsilon$ arbitrarily small.
The full solution of ARC was recently given by Avila in \cite{Aac,A2}. In the case $\beta(\alpha)>0$, it is the following:
\begin{Theorem}\cite{Aac}\label{arc}
Let $\alpha\in{\mathbb R}\backslash {\mathbb Q}$ with $\beta(\alpha)>0$, $h>0$, $A
\in C^\omega_h({\mathbb T},SL(2,{\mathbb R}))$. If $(\alpha, A)$ is subcritical, then for
any $0<h_*<h$ there exists $C>0$ such that if $\delta>0$ is small
enough, then there exist $B \in C^\omega_{h_*}({\mathbb T},PSL(2,{\mathbb R}))$ and
$R_*\in SO(2,{\mathbb R})$ such that $\|B\|_{h_*}\leq e^{C\delta q}$ and
$$ \| B(\theta+\alpha)^{-1}A(\theta)B(\theta)- R_*\|_{h_*}\leq e^{-\delta q}.$$
\end{Theorem}
\subsection{Aubry duality}
Suppose that the quasi-periodic Schr\"odinger operator
\begin{equation}
(H_{V,\alpha,\theta} x)_n= x_{n+1}+x_{n-1} +V( n\alpha + \theta)
x_n=Ex_n,
\end{equation}
has an
analytic quasi-periodic Bloch wave $x_n = e^{2\pi i n\varphi} \overline{\psi}\left(n\alpha + \phi \right)$ for some $\overline{\psi}\in C^\omega({\mathbb T}, {\mathbb C})$ and $\varphi \in [0,1)$.
It is easy to see that the Fourier coefficients of $\overline{\psi}(\theta)$ satisfy the eigenvalue equation of the following long-range operator:
\begin{equation}
(\widehat{L}_{V,\alpha, \varphi}u)_n=\sum _{k\in{\mathbb Z}}
V_ku_{n-k}+2\cos2\pi (\varphi+n\alpha)u_n=Eu_n.
\end{equation}
The almost Mathieu operator is the only operator
which is invariant under Aubry duality, and the dual of
$H_{\lambda,\alpha,\theta}$ is $H_{\lambda^{-1},\alpha,\varphi}$.
Rigorous spectral Aubry duality was established by
Gordon-Jitomirskaya-Last-Simon in \cite{GJLS}, where they proved that
if $H_{\lambda,\alpha,\theta}$ has pure point spectrum for a.e.
$\theta\in{\mathbb R}$, then $H_{\lambda^{-1},\alpha,\varphi}$ has purely
absolutely continuous spectrum for a.e. $\varphi\in{\mathbb R}$. Readers can find more discussion of dynamical Aubry duality in Section 4.
\section{Singular continuous spectrum}
In this section, we prove Theorem \ref{main} (2). We restate it as follows:
\begin{Theorem}\label{singular}
Let $\alpha\in {\mathbb R}\backslash {\mathbb Q}$ with $0<\beta(\alpha)\le\infty$. If
$1\leq \lambda<e^{\beta}$, then $H_{\lambda,\alpha,\theta}$ has
purely singular continuous spectrum for any $\theta\in{\mathbb T}$.
\end{Theorem}
\begin{Remark}\label{gordon}
We stress again that by the classical Gordon argument \cite{G}, one can only
obtain the result in the regime $1\leq \lambda<e^{\frac{\beta}{2}}$. The
reason one can only reach $e^{\frac{\beta}{2}}$ is that, in the
classical Gordon lemma, one has to approximate the
solution by periodic ones along double periods.
\end{Remark}
\begin{pf}
If $1<\lambda<e^{\beta}$ and $E \in \Sigma_{\lambda,\alpha}$, then by
Theorem \ref{bj-formula} one always has $L(\alpha,S_E^{\lambda})=\ln\lambda>0$.
By Kotani's theory \cite{Ko84}, the operator
$H_{\lambda,\alpha,\theta}$ does not support any absolutely
continuous spectrum, thus one only needs to exclude the point
spectrum. In the case $\lambda=1$, since the Lebesgue measure of
$\Sigma_{1,\alpha}$ is zero for any $\alpha\in {\mathbb R}\backslash {\mathbb Q}$
\cite{AK06,L94}, the operator $H_{1,\alpha,\theta}$ also does not support any
absolutely continuous spectrum; thus to prove Theorem
\ref{singular} it is again enough to exclude the point spectrum.
As in classical Gordon's lemma, we approximate the quasi-periodic
cocycles by periodic ones. Denote
$A(\theta)=S_{E}^{\lambda}(\theta)$ and
\begin{eqnarray}A_m(\theta)&=&A(\theta+(m-1)\alpha)\cdots A(\theta+\alpha)A(\theta),\\
\nonumber &=&A^m(\theta)\cdots A^2(\theta)A^1(\theta) \end{eqnarray}
\begin{eqnarray}\widetilde{A}_m(\theta)&=&A(\theta+(m-1)\frac{p_n}{q_n})\cdots
A(\theta+\frac{p_n}{q_n})A(\theta),\\
\nonumber &=&\widetilde{A}^m(\theta)\cdots
\widetilde{A}^2(\theta)\widetilde{A}^1(\theta) , \end{eqnarray} for $m\geq1.$
We also denote $A_{-m}(\theta)=A_m(\theta-m\alpha)^{-1},$
$\widetilde{A}_{-m}(\theta)=\widetilde{A}_m(\theta-m\frac{p_n}{q_n})^{-1}.$
Our proof is based on the following
\begin{Proposition}\label{appro}
Let $\alpha\in {\mathbb R}\backslash {\mathbb Q}$. If
$\lambda \geq 1$ and $E\in \Sigma_{\lambda,\alpha}$, then
for any $\epsilon>0$, there exists $N=N(E,\lambda, \epsilon)>0$ such
that if $q_n>N$, then we have
\begin{eqnarray}
\label{appro-2}\sup_{\theta\in{\mathbb T}}\|\widetilde{A}_{ \pm
q_n}(\theta)-A_{\pm q_n}(\theta)\| & \leq& \frac{1}{q_{n+1}}
e^{(\ln\lambda+ \epsilon)q_n},\\
\label{appro-7}\sup_{\theta\in{\mathbb T}}\| A_{q_n}(\theta+q_n\alpha)-
A_{q_n}(\theta)\|& \leq& \frac{1}{q_{n+1}}
e^{(\ln\lambda+ \epsilon)q_n}.
\end{eqnarray}
\end{Proposition}
\begin{pf}
Furman's result \cite{F} gives
\begin{eqnarray}\label{appro-3}\lim_{m \rightarrow
\pm \infty} \sup_{\theta \in {\mathbb T}} \frac{1}{|m|}\log
\|A_m(\theta)\|\le L(\alpha,S_E^{\lambda}).\end{eqnarray}
Then by Theorem
\ref{bj-formula}, we know for any $\epsilon>0$, there exists
$K=K(E,\lambda,\epsilon)>0$, such that for any $|m| \geq K$, we have
\begin{eqnarray}\label{appro-6}
\sup_{\theta\in{\mathbb T}}\|A_{m}(\theta)\| & \leq&
e^{|m|(\ln\lambda+\epsilon/2)}.
\end{eqnarray}
In the following we only consider positive $m$; the proof for negative $m$ is similar. In order to prove $(\ref{appro-2})$,
we need the following:
\begin{Lemma}\label{appro-lemma}
Let $\alpha\in {\mathbb R}\backslash {\mathbb Q}$. If $
\lambda \geq 1$ and $E\in \Sigma_{\lambda,\alpha}$, then for any
$\epsilon>0$, there exists $N_-=N_-(E,\lambda, \epsilon)>2K$, such
that
\begin{eqnarray} \label {3.7}
\sup_{\theta\in{\mathbb T}}\|\widetilde{A}_{m}(\theta)\| & \leq&
e^{m(\ln\lambda+2 \epsilon/3)}
\end{eqnarray}for any $q_n \geq N_-$, $m \geq K$.
\end{Lemma}
\begin{pf}
Clearly, for fixed $m \in {\mathbb Z}$ and $\delta>0$, if $q_n$ is
sufficiently large we have
$$\sup_{\theta\in{\mathbb T}}\big|\frac{1}{m} \ln \|\widetilde{A}_m(\theta)\|- \frac{1}{m} \ln \|A_m(\theta)\| \big|<\delta.$$
Thus, there exists $N_-=N_-(E,\lambda,\epsilon)>0$ such that if $q_n
\geq N_-$ then $(\ref {3.7})$ holds for $K \leq m \leq 2K-1$. Since
any $m \geq K$ can be written as a sum of integers $m_i$ satisfying
$K \leq m_i \leq 2K-1$, this implies that $(\ref {3.7})$ holds for
all $m \geq K$.
\end{pf}
Once we have Lemma \ref{appro-lemma}, $(\ref{appro-2})$ can be
proved directly by telescoping arguments. In fact, if $q_n \geq N_-$
we can write
\begin{eqnarray*}
A_{q_n} - \widetilde{A}_{q_n} &=& \sum_{i=1}^{q_n}A^{q_n}\cdots
A^{i+1}\Big(A^{i}-\widetilde{A}^{i}\Big)\widetilde{A}^{i-1}\cdots
\widetilde{A}^{1}\\ &=& \Big(\sum_{i=1}^{K}+ \sum_{i=K+1}^{q_n-K}
+\sum_{i=q_n-K+1}^{q_n}\Big)A^{q_n}\cdots
A^{i+1}\Big(A^{i}-\widetilde{A}^{i}\Big)\widetilde{A}^{i-1}\cdots
\widetilde{A}^{1}\\
&=& (I)+(II)+(III).
\end{eqnarray*}
Since for $i\leq q_n$ we have $\|A^{i}-\widetilde{A}^{i}\|\leq
\frac{4 \pi \lambda (i-1)}{q_{n}q_{n+1}}\leq \frac{4 \pi
\lambda}{q_{n+1}},$ by $(\ref{appro-6})$ and Lemma
\ref{appro-lemma} we can estimate
\begin{eqnarray*}
(I)&\leq& \frac{4 \pi \lambda}{q_{n+1}} \sum_{i=1}^{K}
(4\lambda+3)^{i-1}e^{(q_n-i)(\ln\lambda+2\epsilon/3)}, \\
(II)&\leq& \frac{4 \pi \lambda}{q_{n+1}}
\sum_{i=K+1}^{q_n-K}e^{(q_n-1)(\ln\lambda+2\epsilon/3)},\\
(III)&\leq & \frac{4 \pi \lambda}{q_{n+1}} \sum_{i=q_n-K+1}^{q_n}
(4\lambda+3)^{q_n-i}e^{(i-1)(\ln\lambda+2\epsilon/3)}.
\end{eqnarray*}
If $q_n$ is sufficiently large, then $(\ref{appro-2})$ follows
directly. Using a similar argument, we can prove $(\ref{appro-7})$.
\end{pf}
Now we finish the proof of Theorem \ref{singular} by contradiction.
For any fixed $\theta$, we suppose that $E$ is an eigenvalue of
$H_{\lambda,\alpha,\theta}$, then there exists $\overline{v}= \begin{pmatrix} v_0\\v_{-1}
\end{pmatrix} $ with $\|\overline{v}\|=1,$ and for any
$\varepsilon>0$, there exists
$\overline{N}=\overline{N}(E,\lambda,\varepsilon)$, such that if
$|m|> \overline{N}(E,\lambda,\varepsilon)$, then
$\|A_m(\theta)\overline{v}\|\leq \varepsilon.$
In particular, for any $0<2\epsilon< \ln\lambda-\beta$, we can
select $q_n>\max\{ N(E,\lambda,\epsilon),$ $
\overline{N}(E,\lambda,\varepsilon)\}$, and
$q_{n+1}>e^{(\beta-\epsilon)q_n}$, such that
\begin{equation}\label{initial} \|A_{q_n}(\theta)\overline{v}\|\leq
\varepsilon,
\qquad \|A_{-{q_n}}(\theta)\overline{v}\|\leq \varepsilon,
\end{equation} where
$N(E,\lambda,\epsilon)$ is defined in Proposition \ref{appro}.
The key observation is the following:
\begin{Lemma}\label{trace}The following estimate holds:
\begin{eqnarray*}
\|A_{q_n}(\theta+q_n\alpha)+A_{-{q_n}}(\theta+q_n\alpha)\| \leq 2\varepsilon+ 10 e^{-(\beta-\ln\lambda-2
\epsilon)q_n}.
\end{eqnarray*}
\end{Lemma}
\begin{pf}
By $(\ref{appro-2})$, it is sufficient for us to prove
\begin{eqnarray}\label{midesti}
\|\widetilde{A}_{q_n}(\theta+q_n\alpha)+\widetilde{A}_{-{q_n}}(\theta+q_n\alpha)\| \leq 2\varepsilon+ 8 e^{-(\beta-\ln\lambda-2
\epsilon)q_n}.
\end{eqnarray}
By the Cayley-Hamilton theorem, for any $M\in SL(2,{\mathbb R})$, one has
\begin{eqnarray}\label{hamicaly}
M+M^{-1}={\text{tr}} M\cdot Id.
\end{eqnarray}
Taking
$M=\widetilde{A}_{{q_n}}(\theta')$ for every $\theta' \in {\mathbb T}$ yields
\begin{eqnarray}\label{hami-caly}
\widetilde{A}_{{q_n}}(\theta')+\widetilde{A}_{-{q_n}}(\theta')={\text{tr}}
\widetilde{A}_{q_n}(\theta')\cdot Id.\end{eqnarray}
By assumptions $(\ref{initial})$ and $(\ref{appro-2})$, we have
\begin{eqnarray*}&&| {\text{tr}}
\widetilde{A}_{q_n}(\theta)|\\
\nonumber &\leq& \| A_{q_n}(\theta)\overline{v}+ A_{-{q_n}}(\theta)
\overline{v}\|+\|\widetilde{A}_{q_n}(\theta)-A_{q_n}(\theta)\|+\|\widetilde{A}_{-{q_n}}(\theta)-A_{-{q_n}}(\theta)\|\\
\nonumber &\leq& 2\varepsilon+2 e^{-(\beta-\ln\lambda-2
\epsilon)q_n}.
\end{eqnarray*}
As a result of $(\ref{appro-2})$ and $(\ref{appro-7})$, we have
\begin{eqnarray*}
&&| {\text{tr}} \widetilde{A}_{q_n}(\theta+q_n\alpha)|\\ &\leq& | {\text{tr}}
\widetilde{A}_{q_n}(\theta+q_n\alpha)- {\text{tr}}
A_{q_n}(\theta+q_n\alpha)|+|{\text{tr}} A_{q_n}(\theta+q_n\alpha)- {\text{tr}}
A_{q_n}(\theta)|\\
&& +|{\text{tr}} A_{q_n}(\theta)-{\text{tr}} \widetilde{A}_{q_n}(\theta)|+|{\text{tr}}
\widetilde{A}_{q_n}(\theta)|\\
&\leq & 2\varepsilon+8 e^{-(\beta-\ln\lambda-2 \epsilon)q_n},
\end{eqnarray*}
then $(\ref{midesti})$ follows from $(\ref{hami-caly})$.
\end{pf}
However by Lemma \ref{trace}, we have
\begin{eqnarray*}
&&\|A_{2q_n}(\theta)\overline{v}\|=
\|A_{q_n}(\theta+q_n\alpha) A_{q_n}(\theta)\overline{v} \|\\
&\geq& \|A_{-{q_n}}(\theta+q_n\alpha) A_{q_n}(\theta)\overline{v} \|- \|\widetilde{A}_{q_n}(\theta+q_n\alpha)+\widetilde{A}_{-{q_n}}(\theta+q_n\alpha)\| \|A_{q_n}(\theta)\overline{v}\| \\
&\geq&1- 2\varepsilon^2-10 \varepsilon e^{-(\beta-\ln\lambda-2
\epsilon)q_n}> \frac{1}{2},\end{eqnarray*} which contradicts
the assumption that $E$ is an eigenvalue.
\end{pf}
\section{Anderson localization}\label{ander}
In this section, we prove Theorem \ref{main} (3). We restate it as follows:
\begin{Theorem}\label{anderson-transition}
Let $\alpha\in{\mathbb R}\backslash {\mathbb Q}$ be such that $0<\beta(\alpha)<\infty.$
If $\lambda>e^{\beta},$ then the almost Mathieu operator
$H_{\lambda,\alpha,\phi}$ has Anderson Localization for a.e. $\phi$.
\end{Theorem}
The traditional method for proving Anderson localization is to establish exponential decay of the Green's function \cite{AJ05,J94,J95,J99}. Due to the limitations of this method, Anderson localization has so far been proved for Liouvillean frequencies only when $\lambda>e^{ \frac{16\beta}{9}}$ \cite{AJ05}, so there is still a gap between $e^{\beta}$ and $e^{ \frac{16\beta}{9}}$.
In this paper, we develop a new approach based on reducibility and Aubry duality.
We will show that Theorem \ref{anderson-transition} can be obtained by
dynamical Aubry duality and the following full measure reducibility result:
\begin{Theorem}\label{full}
Let $\alpha \in {\mathbb R} \setminus {\mathbb Q}$ with $\beta(\alpha)>0$, if
$\lambda>e^\beta$, $\mathrm{rot}_f(\alpha, S_E^{\lambda^{-1}})$ is
Diophantine w.r.t. $\alpha$, then $(\alpha, S_E^{\lambda^{-1}})$ is
reducible.
\end{Theorem}
The dynamical Aubry duality was established by Puig \cite{Pui06},
who proved that Anderson
localization of the long-range operator $\widehat{L}_{V,\alpha, \varphi}$ for almost every $\varphi\in{\mathbb T}$ implies reducibility of $(\alpha,S_E^V)$ for almost every
energy. Conversely, the idea of approaching the localization problem through reducibility was first realized by
You-Zhou in \cite{YZ}. However, in \cite{YZ} they could only prove that the eigenvalues of $\widehat{L}_{V,\alpha, \varphi}$ with exponentially decaying eigenfunctions are dense in the spectrum. The main remaining issue is to prove that those eigenfunctions form a complete basis. The key point of this paper is that the quantitative estimates in the proof of Theorem \ref{full} actually provide
an asymptotic
distribution of the eigenvalues and eigenfunctions, which ultimately implies pure point spectrum for almost every phase.
Compared with the traditional localization argument, the price we have to pay is that we lose precise arithmetic control of the localization phases.
However, by this approach one can indeed establish a kind of equivalence
between quantitative full measure reducibility of the Schr\"odinger
operator (or Schr\"odinger cocycle) and Anderson
localization of its dual long-range operator.\\
\noindent
\textbf{Proof of Theorem \ref{anderson-transition}:} We
need the following definition:
\begin{Definition}
For any fixed $N\in{\mathbb N},C>0,\varepsilon>0$, a normalized eigenfunction
$u(n)$ is said to be $(N,C,\varepsilon)$-good if $|u(n)|\leq
e^{-C\varepsilon|n|}$ for $|n|\geq (1-\varepsilon)N$.
\end{Definition}
We label the $(N,C,\varepsilon)$-good eigenfunctions of $H_{\lambda,
\alpha,\phi}$ by $u_j^\phi(n)$ and denote the corresponding
eigenvalues by $E_j^\phi$; we also denote
$$ \mathcal {E}_{N,C,\varepsilon}^{\phi}=\{E_j^\phi\,|\, u_j^\phi(n) \text{ is an
$(N,C,\varepsilon)$-good normalized eigenfunction}\}$$
and denote $\mathcal {E}(\phi)= \bigcup_{N>0} \mathcal {E}_{N,C,\varepsilon}^{\phi}.$
Let $\mu_{\delta_0,\phi}^{pp}$ be the spectral measure supported on
$\mathcal {E}(\phi)$ with respect to $\delta_0$.\\
The following spectral analysis is completely new and will be crucial for our proof.
\begin{Proposition}\label{distribution}
Suppose that there exists $C>0$, such that for any $\delta>0,$ there
exists $\varepsilon>0$, and for a.e. $\phi$,
\begin{equation}\label{good} \#\{\text{linearly independent
$(N,C,\varepsilon)$-good eigenfunctions}\}\geq
(1-\delta)2N,\end{equation} for $N$ large enough, then for a.e.
$\phi$, we have $\mu_\phi=\mu_{\delta_0,\phi}=
\mu_{\delta_0,\phi}^{pp}$.
\end{Proposition}
\begin{pf}
Fix $\phi\in{\mathbb T}^1$ such that (\ref{good}) is satisfied. Denote
$$ K_{N,C,\varepsilon}^{\phi}=\{ j\in{\mathbb N}\,|\, u_j^\phi(n) \text{ is an
$(N,C,\varepsilon)$-good eigenfunction}\}.$$ Notice that for any fixed
${N,C,\varepsilon}$, $\# K_{N,C,\varepsilon}^{\phi}$ is finite, and also \begin{equation}\label{eigen}
\sum_{|n|\leq (1-\varepsilon)N}|u_j^\phi(n)|^2 >1- e^{-C\varepsilon
N},
\end{equation}
for any $(N,C,\varepsilon)$-good
eigenfunction $u_j^\phi(n)$.
Let $\widetilde{\mu}^{pp}_{\delta_n,\phi}=
\widetilde{\mu}^{pp}_{\delta_n,\phi}(N,C,\varepsilon)$ be the
truncated spectral measure supported on $\mathcal
{E}_{N,C,\varepsilon}^{\phi}$. Then by spectral theorem and
$(\ref{eigen})$, we have
\begin{eqnarray*}
\frac{1}{2N}\sum_{|n|\leq N}|\mu^{pp}_{\delta_n,\phi}|&>&
\frac{1}{2N}\sum_{|n|\leq N}|\widetilde{\mu}^{pp}_{\delta_n,\phi}|\\
&=&\frac{1}{2N}\sum_{|n|\leq N}\langle P_{\mathcal
{E}_{N,C,\varepsilon}^{\phi}}\delta_n,
\delta_n\rangle\\
&=&\frac{1}{2N}\sum_{|n|\leq N}\sum_{j\in
K_{N,C,\varepsilon}^{\phi}}\langle P_{E_j^\phi}\delta_n,
\delta_n\rangle\\
&>&\frac{1}{2N}\sum_{|n|\leq(1-\varepsilon)N}\sum_{j\in
K_{N,C,\varepsilon}^{\phi}}|u_j^\phi(n)|^2\\
&>&\frac{1}{2N} \# K_{N,C,\varepsilon}^{\phi}(1- e^{-C\varepsilon N})\\
&>& (1-\delta)(1-e^{-C\varepsilon N}).
\end{eqnarray*}
Since $ \mathcal {E}(\phi) =\mathcal {E}(\phi+\alpha)$,
we can rewrite the above inequalities as $$ \frac{1}{2N}\sum_{|n|\leq N}|\mu^{pp}_{\delta_0,\phi+n\alpha}|> (1-\delta)(1-e^{-C\varepsilon N}).$$
Letting $N$ go to $\infty$, since $\delta$ is arbitrarily small, we have
$$ \int_{{\mathbb T}^1}
| \mu^{pp}_{\delta_0,\phi}|d\phi=1$$ by Birkhoff's ergodic theorem.
Thus for a.e. $\phi\in{\mathbb T}^1$,
$\mu_\phi=\mu_{\delta_0,\phi}= \mu_{\delta_0,\phi}^{pp}.$
\end{pf}
Let $\Theta_\gamma=\{\phi\,|\, \phi\in DC_\alpha(\tau,\gamma)
\}$. We have $ |\bigcup_{\gamma>0}\Theta_\gamma|=1$, which
implies that for any $\delta>0$, there exists $
\widetilde{\varepsilon}>0$, such that if
$0<\gamma<\widetilde{\varepsilon},$ then $|\Theta_\gamma |>
1-\frac{\delta}{3}.$ By Birkhoff's ergodic theorem again, we have
$$\lim_{\widetilde{N} \rightarrow \infty} \frac{1}{2\widetilde{N}} \sum_{|k|\leq \widetilde{N}} \chi_{\Theta_\gamma }(\phi+k\alpha)= \int_{{\mathbb T}^1}\chi_{\Theta_\gamma }(\phi)d\phi. $$
Thus for $N$ large enough (we take
$\widetilde{N}=N(1-\frac{\delta}{3})$), we have
\begin{equation}\label{number}
\#\{k| \phi+k\alpha \in \Theta_\gamma, |k|\leq
2N(1-\frac{\delta}{3}) \}\geq (1-\delta)2N.\end{equation}
For any $\phi \in \Theta_\gamma$, we choose $\bar N$ sufficiently
large such that (\ref{number}) holds for $N>\bar N$. We will prove
that $H_{\lambda, \alpha,\phi}$ has at least $(1-\delta)2N$ different
eigenvalues $E_k^\phi$ whose eigenfunctions $u_k^\phi(n)$ are $(N,\ln
\lambda -\beta-\epsilon, \varepsilon)$-good for any $\epsilon$. To
prove this, we need the following \textit{quantitative} version of Theorem \ref{full}:
\begin{Proposition}\label{prop}
Let $\alpha \in {\mathbb R} \setminus {\mathbb Q}$ with $\beta(\alpha)>0$ and
$\lambda>e^\beta$. Suppose that $\mathrm{rot}_f(\alpha,
S_{\lambda^{-1}E_k}^{\lambda^{-1}})=\phi+k\alpha \in
DC_\alpha(\tau,\gamma)$. Then for any fixed $\gamma>0$, $\tau>0$
and small enough $\epsilon>0$, there exist
$c_1(\lambda, \gamma,\tau,\epsilon,\alpha), c_2(\lambda,\gamma,\tau,\epsilon)$ and $B_k \in
C^\omega_{\ln\lambda-\beta-\epsilon}({\mathbb T},SL(2,{\mathbb R}))$, such that
\begin{equation}\label{prop-1} B_k(\theta+\alpha)^{-1}
S_{\lambda^{-1}E_k}^{\lambda^{-1}}(\theta)B_k(\theta)=R_{\phi+k^{'}\alpha},\end{equation}
with estimates:
\begin{eqnarray}
\label{esti-1} \| B_k\|_{\ln\lambda-\beta-\epsilon} &\leq&
c_1(\lambda,\gamma,\tau,\epsilon,\alpha),\\
\label{esti-2} |k-k^{'}|&\leq& c_2(\lambda,\gamma,\tau,\epsilon).
\end{eqnarray}
\end{Proposition}
\begin{pf}
If $\lambda>e^{\beta}>1$ and $\lambda^{-1} E_k \in \Sigma_{\lambda^{-1},\alpha}$, then the almost Mathieu cocycle $(\alpha,
S_{\lambda^{-1}E_k}^{\lambda^{-1}})$ is subcritical in the regime
$|\Im\theta|<\ln\lambda$. To prove Proposition \ref{prop}, we need
Theorem \ref{bj-formula} and the following:
\begin{Lemma}
If $\alpha \in {\mathbb R} \setminus {\mathbb Q}$, $\lambda>1$, $E \in {\mathbb R}$, then for
$\epsilon \geq 0$,
$$L(\alpha,(S_E^{\lambda^{-1}})_\epsilon)=\max
\{L(\alpha,S_E^{\lambda^{-1}}),(\epsilon-\ln \lambda)\}.$$
\end{Lemma}
\begin{pf} The proof can be found in Appendix A of \cite{Aglobal}.
\end{pf}
Now by Theorem \ref{arc}, for $0<2\epsilon<\ln \lambda-\beta$, there exists a sequence of
$\widetilde{B}_n \in
C^\omega_{\ln\lambda-\epsilon/2}({\mathbb T},PSL(2,{\mathbb R}))$ such that
$$\widetilde{B}_n(\theta+\alpha)^{-1}S_{\lambda^{-1}E_k}^{\lambda^{-1}}(\theta)\widetilde{B}_n(\theta)= R_{\varphi_n}+F_n(\theta),$$
with estimate
\begin{eqnarray}
\label{esti-1'}\|\widetilde{B}_n\|_{\ln\lambda-\epsilon/2}&\leq&
e^{C\delta^{'} q_n},\\
\nonumber \|F_n\|_{\ln\lambda-\epsilon/2}&\leq&
e^{-\delta^{'} q_n},
\end{eqnarray}
which implies
\begin{equation}\label{deg1}
|\deg \widetilde{B}_n| \leq c(\lambda,\epsilon) q_n.
\end{equation}
One may consult footnote 5 of \cite{Aac} for a proof of this.
If $\phi+k\alpha \in DC_\alpha(\tau,\gamma),$ we have
\begin{eqnarray*}
&&\|2(\phi+k\alpha)-m\alpha-k'\alpha\|_{{\mathbb R}/{\mathbb Z}}\\
&\geq& \frac{\gamma}{(|m+k^{'}|+1)^\tau} \geq
\frac{(1+|k^{'}|)^{-\tau}\gamma}{(|m|+1)^\tau}.
\end{eqnarray*}
By $(\ref{rot-conj})$, this formula implies that $ \mathrm{rot}_f(\alpha, R_{\varphi_n}+F_n(\theta)) \in
DC_\alpha(\tau,(1+|\deg \widetilde{B}_n|)^{-\tau}\gamma)$. Let $q_s$ be the smallest denominator such that
\begin{eqnarray*}
q_{s+1}&> &e^{(\beta-o(1))q_s},\\
e^{-q_s \delta^{'}} &<& T(\tau)(\frac{\gamma}{(1+c(\lambda,\epsilon)|q_s|)^{\tau}})^\kappa(\frac{\epsilon}{2})^\kappa,
\end{eqnarray*}
where $T=T(\tau),$ $\kappa=\kappa(\tau)$ are defined in Theorem
\ref{hy1}. By Theorem \ref{hy1}, there
exist $\overline{B}_k(\theta) \in
C^\omega_{\ln\lambda-\epsilon}({\mathbb T},SL(2,{\mathbb R})),$ $\eta_k(\theta) \in
C^\omega_{\ln\lambda-\epsilon}({\mathbb T},{\mathbb R}),$ such that
$$\overline{B}_k(\theta+\alpha)^{-1}( R_{\varphi_s}+F_s(\theta))\overline{B}_k(\theta)=R_{\eta_k(\theta)},$$
with estimates $ \| \eta_k \|_{\ln\lambda-\epsilon} \leq e^{-q_s \delta^{'}} $ and
\begin{equation} \label{esti-9}
\| \overline{B}_k-id \|_{\ln\lambda-\epsilon} \leq e^{-q_s \delta^{'}/2} .
\end{equation} Let $\psi_k(\theta)$ satisfy
\begin{equation}\label{homo}
\psi_k(\theta+\alpha)-\psi_k(\theta)=\eta_k(\theta)-\hat{\eta}_k(0).
\end{equation}
Since $\ln \lambda>\beta$, by $(\ref{equibeta})$ we know that there exists $c=c(\alpha,\epsilon)$ such that $(\ref{homo})$ has an analytic solution $\psi_k(\theta) \in
C^\omega_{\ln\lambda-\beta-\epsilon}({\mathbb T},{\mathbb R})$ with estimate
\begin{equation}\label{esti-10}
\|\psi_k\|_{\ln\lambda-\beta-\epsilon} \leq c(\alpha,\epsilon) \| \eta_k \|_{\ln\lambda-\epsilon} \leq c(\alpha,\epsilon) e^{-q_s \delta^{'}} .
\end{equation}
Let $B_k(\theta)= \widetilde{B}_s(\theta)\overline{B}_k(\theta)R_{\psi_k(\theta)}$, then there exists $k^{'}\in {\mathbb Z}$, such that
$$B_k(\theta+\alpha)^{-1}
S_{\lambda^{-1}E_k}^{\lambda^{-1}}(\theta)B_k(\theta)=R_{\hat{\eta}_k(0)}= R_{\phi+k^{'}\alpha}.$$
Since $\mathrm{rot}_f(\alpha,
S_{\lambda^{-1}E_k}^{\lambda^{-1}})$ is irrational w.r.t $\alpha$, we have $B_k(\theta) \in
C^\omega_{\ln\lambda-\beta-\epsilon}({\mathbb T},SL(2,{\mathbb R}))$; one can consult Remark 1.5 of \cite{AK06} for the proof. Notice that $\deg R_{\psi_k(\theta)}=0$ and by $(\ref{esti-9})$, we have $\deg \overline{B}_k =0$. Consequently, by $(\ref{rot-conj'})$,
we have
\begin{eqnarray}
\label{deg2}k^{'}=k- \deg \widetilde{B}_s.\end{eqnarray}
$(\ref{esti-2})$ then follows from $(\ref{deg1})$ and $(\ref{deg2})$, and
$(\ref{esti-1})$ follows from $(\ref{esti-1'})$, $(\ref{esti-9})$ and $(\ref{esti-10})$.
\end{pf}
Rewrite $(\ref{prop-1})$ as
\begin{equation}\label{a-1} B_k(\theta+\alpha)^{-1}
S_{\lambda^{-1}E_k}^{\lambda^{-1}}(\theta)B_k(\theta)= \left(
\begin{array}{ccc}
e^{2\pi i(\phi+k^{'}\alpha)}& 0\cr
0 & e^{-2\pi i(\phi+k^{'}\alpha)}\end{array} \right),\end{equation}
and write $B_k(\theta)=\left(
\begin{array}{ccc}
z_{11}(\theta) & z_{12}(\theta) \cr z_{21}(\theta) &z_{22}(\theta)
\end{array} \right),$ then
we have
\begin{eqnarray}\label{block-red}&& (\lambda^{-1}E_k-2\lambda^{-1}
\cos2\pi\theta)z_{11}(\theta)\\ \nonumber&=&
z_{11}(\theta-\alpha)e^{-2\pi
i(\phi+k^{'}\alpha)}+z_{11}(\theta+\alpha)e^{2\pi
i(\phi+k^{'}\alpha)}.\end{eqnarray} Taking the Fourier
transform of $(\ref{block-red})$, we have
\begin{eqnarray*}
\widehat{z}_{11}(n+1)+\widehat{z}_{11}(n-1)+2\lambda\cos2\pi
(\phi+k^{'}\alpha+n\alpha)\widehat{z}_{11}(n)=
E_k\widehat{z}_{11}(n),
\end{eqnarray*}
then $\widehat{z}_{11}(n)$ is an eigenfunction, since $z_{11}\in
C^\omega_{\ln\lambda-\beta-\epsilon}({\mathbb T},{\mathbb C})$. To normalize
$\widehat{z}_{11}(n)$, we need the following observation:
\begin{Lemma}\label{z1-estimate}
We have the following:
$$\|\widehat{z}_{11}\|_{l^2}\geq (2\|B_k\|_{C^0})^{-1}.$$
\end{Lemma}
\begin{pf}
Write $$u=\left(\begin{array}{c} z_{11}(\theta) \\
z_{21}(\theta)
\end{array}\right), \qquad v=\left(\begin{array}{c} z_{12}(\theta) \\
z_{22}(\theta)
\end{array}\right),$$ then
$\|u\|_{L^2}\|v\|_{L^2}\geq1$ since $\det B_k(\theta)=1.$ This implies that
$$ \|z_{11}\|_{L^2}+ \|z_{21}\|_{L^2}\geq \|u\|_{L^2}\geq \|v\|_{L^2}^{-1}\geq(\|B_k\|_{C^0})^{-1}.$$
By $(\ref{a-1})$, we have
$z_{21}(\theta+\alpha)=e^{-2\pi i(\phi+k^{'}\alpha)}z_{11}(\theta),$
and therefore
$$\|\widehat{z}_{11}\|_{l^2}=\|z_{11}\|_{L^2} \geq (2\|B_k\|_{C^0})^{-1}.$$
\end{pf}
We normalize $\widehat{z}_{11}(n)$ by setting $u_k^{\phi}(n)=\frac{\widehat{z}_{11}(n+k^{'})}{
\|\widehat{z}_{11}\|_{l^2}}$.
Now we prove it is in fact $(N,\ln \lambda -\beta-\epsilon,
\varepsilon)$-good. Let
$$ 2\varepsilon< \frac{\delta}{3}- \frac{c_3(\lambda,\gamma,\tau,\epsilon,\alpha)}{N}- \frac{c_2(\lambda,\gamma,\tau,\epsilon)}{N},
$$ where $c_3(\lambda, \gamma,\tau,\epsilon,\alpha)=\frac{\ln 2c_1(\lambda,\gamma,\tau,\epsilon,\alpha)}{ \ln\lambda-\beta-\epsilon }.$ Since
$u_k^{\phi}(n)=u_k^{\phi+k^{'}\alpha}(n-k^{'})$, then by Proposition \ref{prop} and Lemma \ref{z1-estimate},
we
have
\begin{eqnarray*}
|u_k^{\phi}(n) | &=&|u_k^{\phi+k^{'}\alpha}(n-k^{'})|\\
&\leq& \| B_k\|_{\ln\lambda-\beta-\epsilon}^2
e^{-|n-k^{'}| (\ln \lambda -\beta-\epsilon)}\\
&\leq & e^{ ( c_3(\lambda,\gamma,\tau,\epsilon,\alpha)+
|k|+c_2(\lambda,\gamma,\tau,\epsilon))(\ln \lambda
-\beta-\epsilon)} e^{-|n| (\ln \lambda -\beta-\epsilon)}\\
&\leq&
e^{ (N(1-\frac{\delta}{3})+c_2(\lambda,\gamma,\tau,\epsilon)+c_3(\lambda,\gamma,\tau,\epsilon,\alpha))(\ln
\lambda -\beta-\epsilon)}e^{-|n|(\ln \lambda -\beta-\epsilon) }\\
&\leq& e^{-|n|(\ln \lambda -\beta-\epsilon)\varepsilon},
\end{eqnarray*}
for $|n|\geq N(1-\varepsilon)$, which means $ (u_k^{\phi}(n))$ is
$(N,\ln \lambda -\beta-\epsilon, \varepsilon)$-good.
By Proposition \ref{distribution} and the above estimate, we conclude that for a.e.\
$\phi\in{\mathbb T}^1$,
$H_{\lambda,\alpha,\phi}$ has Anderson localization.
\section*{Acknowledgements}
A.A was partially supported by the ERC Starting Grant \textquotedblleft Quasiperiodic\textquotedblright\ and
by the Balzan project of Jacob Palis. J. Y was partially supported by NNSF of China (11471155) and
973 projects of China (2014CB340701). Q. Z was partially supported by the
Fondation Sciences Math\'{e}matiques de Paris (FSMP) and the ERC
Starting Grant \textquotedblleft Quasiperiodic\textquotedblright.
\section*{Introduction}
Can one describe isomorphism of two number fields $\bK$ and $\bL$ from associated analytic or topological objects? Here are some attempts (``no''-answers indexed by \textbf{N}; ``yes''-answers by \textbf{Y}):
\begin{enumerate}
\item[\textbf{(N1)}] An \textbf{equality of their Dedekind zeta functions} (so-called \emph{arithmetic equivalence}) does not imply that $\bK$ and $\bL$ are isomorphic, as was shown by Ga{\ss}mann (\cite{G}, cf.\ also Perlis \cite{Perlis1}, or \cite{Klingen}). An example is provided by $$\bK=\Q(\sqrt[8]{3}) \mbox{ and } \bL=\Q(\sqrt[8]{3 \cdot 2^4})$$ (\cite{Perlis1}, \cite{Komatsu}). However, the implication is true \emph{if} $\bK$ and $\bL$ are Galois over $\bQ$ (Theorem of Bauer \cite{Bauer1} \cite{Bauer2}, nowadays a corollary of Chebotarev's density theorem, see, e.g., Neukirch \cite{Neukirch} 13.9).
\item[\textbf{(N2)}] An \textbf{isomorphism of their adele rings} $\A_{\bK}$ and $\A_{\bL}$ as topological rings does not imply that $\bK$ and $\bL$ are isomorphic, cf.\ Komatsu (\cite{Komatsu2}). An example is $$\bK=\Q(\sqrt[8]{2 \cdot 9}) \mbox{ and } \bL=\Q(\sqrt[8]{2^5 \cdot 9}).$$ An adelic isomorphism does imply in particular an equality of the zeta functions of $\bK$ and $\bL$, but is not equivalent to it --- the example in (\textbf{N1}) has non-isomorphic adele rings, cf.\ \cite{Komatsu}. However, for a global function field adelic isomorphism and arithmetic equivalence is the same, cf.\ Turner \cite{Turner}.
\item[\textbf{(N3)}] \label{abab} An \textbf{isomorphism of the Galois groups of the maximal abelian extensions} $G^{\mbox{{\tiny \textup{ab}}}}_{\bK}$ and $G^{\mbox{{\tiny \textup{ab}}}}_{\bL}$ as topological groups does not imply an isomorphism of the fields $\bK$ and $\bL$. For example, $$\bK=\Q(\sqrt{-2}) \mbox{ and } \bL=\Q(\sqrt{-3})$$ have isomorphic abelianized absolute Galois groups (see Onabe \cite{Onabe}).
\end{enumerate}
However \dots
\begin{enumerate}
\item[\textbf{(Y1)}] An \textbf{isomorphism of their absolute Galois groups} $G_{\bK}$ and $G_{\bL}$ as topological groups implies isomorphism of the fields $\bK$ and $\bL$: this is the celebrated theorem of Neukirch and Uchida (In \cite{NeukirchInv}, Neukirch proved this for fields that are Galois over $\Q$; in \cite{U3}, Uchida proved the general case, cf.\ also \cite{NBook} 12.2, Ikeda \cite{Ikeda} and unpublished work of Iwasawa). It can be considered the first manifestation (zero-dimensional case) of the so-called ``anabelian'' philosophy of Grothendieck (\cite{Gro}, esp.\ footnote (3)): the neologism ``anabelian'' seems to have been coined by Grothendieck by contrast with statement \textbf{(N3)} above.
\item[\textbf{(Y2)}] In an unpublished work, Richard Groenewegen \cite{Groen} proved a \textbf{Torelli theorem} for number fields: if two number fields have ``strongly monomially equivalent'' $h^0$-function in Arakelov theory (in the sense of van der Geer and Schoof, cf.\ \cite{GS}), then they are isomorphic.
\end{enumerate}
The starting point for this study is the observation that the zeta function of a number field $\bK$ can be realized as the partition function of a quantum statistical mechanical (QSM) system in the style of Bost and Connes (cf.\ \cite{BC} for $\bK=\bQ$). The QSM-systems for general number fields that we consider are those that were constructed by Ha and Paugam (see section 8 of \cite{HP}, which is a specialization of their more general class of QSM-systems associated to Shimura varieties), and further studied by Laca, Larsen and Neshveyev in \cite{LLN}.
This quantum statistical mechanical system consists of a $C^*$-algebra $A_{\bK}$ (the noncommutative analogue of a topological space) with a time evolution $\sigma_{\bK}$ (i.e., a continuous group homomorphism $\R \rightarrow \Aut{A_{\bK}}$) --- for the exact definition, see Section \ref{s2} below, but the structure of the algebra is
$$
A_{\bK}:=C(X_{\bK})\rtimes J^+_{\bK}, \mbox{ with } X_{\bK}:=G^{\mbox{{\tiny \textup{ab}}}}_{\bK}\times_{\hat\cO_{\bK}^*} \hat\cO_{\bK},
$$
where $\hat\cO_{\bK}$ is the ring of finite integral adeles and $J^+_{\bK}$ is the semigroup of ideals, which acts on the space $X_{\bK}$ by Artin reciprocity. The time evolution is only non-trivial on elements $\mu_{\fn} \in A_{\bK}$ corresponding to ideals $\fn \in J^+_{\bK}$, where it acts by multiplication with the norm $N(\fn)^{it}$. We also need the (non-involutive) dagger-subalgebra $A_{\bK}^\dagger$ generated algebraically by functions in $C(X_{\bK})$ and the partial isometries $\mu_{\fn}$ for $\fn \in J_{\bK}^+$ (but \emph{not} $\mu_{\fn}^*$; such non-self adjoint algebras and their closures have been considered before in connection with the reconstruction of dynamical systems up to (piecewise) conjugacy, see e.g.\ \cite{Davidson}).
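
To make the connection with \textbf{(N1)} explicit, let us sketch (see \cite{HP} and \cite{LLN} for the precise statements) how the Dedekind zeta function arises as a partition function: in a natural representation of $A_{\bK}$ on $\ell^2(J_{\bK}^+)$ with orthonormal basis $\{\delta_{\fn}\}_{\fn \in J_{\bK}^+}$, the time evolution is generated by a Hamiltonian acting diagonally by
$$ H \delta_{\fn} = \log N(\fn) \, \delta_{\fn}, $$
so that
$$ \tr(e^{-\beta H}) = \sum_{\fn \in J_{\bK}^+} N(\fn)^{-\beta} = \zeta_{\bK}(\beta), $$
convergent for $\beta>1$.
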
For now, it is important to notice that the structure involves the abelianized Galois group and the adeles, but not the absolute Galois group. In this sense, it is ``not anabelian''; but of course, it is ``noncommutative'' (in noncommutative topology, the crossed product construction is an analog of taking quotients). In light of the previous discussion, it is now natural to ask whether the QSM-system (which contains simultaneously the zeta function from \textbf{(N1)}, a topological space built out of the adeles from \textbf{(N2)} and the abelianized Galois group from \textbf{(N3)}) does characterize the number field.
We call two general QSM-systems \emph{isomorphic} if there is a $C^*$-algebra isomorphism between the algebras that intertwines the time evolutions. Our main result is that the QSM-system cancels out the defects of \textbf{(N1)---(N3)} in exactly the right way:
\begin{introtheorem} \label{main}
Let $\bK$ and $\bL$ denote arbitrary number fields. Then the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(}i{)}] \textup{[Field isomorphism]} $\bK$ and $\bL$ are isomorphic as fields;
\item[\textup{(}ii{)}] \textup{[QSM isomorphism]} there is an isomorphism $\varphi$ of QSM systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$ that respects the dagger subalgebras: $\varphi(A_{\bK}^\dagger)=A_{\bL}^\dagger$.
\end{enumerate}
\end{introtheorem}
One may now ask whether the ``topological'' isomorphism from (ii) can somehow be captured by an analytic invariant, such as the Dedekind zeta function, which in itself doesn't suffice. Our second main theorem says that this is indeed the case:
\begin{introtheorem} \label{main2}
Let $\bK$ and $\bL$ denote arbitrary number fields. Then the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(}i{)}] \textup{[Field isomorphism]} $\bK$ and $\bL$ are isomorphic as fields;
\item[\textup{(}iii{)}] \textup{[L-isomorphism]} there is a group isomorphism between (the Pontrjagin duals of) the abelianized Galois groups $$\psi \, : \, \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} \widehat{G}_{\bL}^{\mbox{{\tiny \textup{ab}}}}$$ such that for every character $\chi \in \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, we have an identification of $L$-series {for these generalized Dirichlet characters}
$$ L_{\bK}(\chi,s) = L_{\bL}(\psi(\chi),s). $$
\end{enumerate}
\end{introtheorem}
Condition (iii) can be considered as the correct generalization of arithmetic equivalence (which is (iii) for the trivial character only) to an analytic equivalence that \emph{does} capture isomorphism. It should also be observed at this point that (Hecke) $L$-series occur naturally in the description of generalized equilibrium states (KMS-states) of the QSM-system, and this is how we originally discovered the statement of the theorem.
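
In particular, specializing (iii) to the trivial character $\chi_0 \in \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ recovers arithmetic equivalence, since
$$ L_{\bK}(\chi_0,s) = \sum_{\fn \in J_{\bK}^+} N(\fn)^{-s} = \zeta_{\bK}(s), $$
and similarly for $\bL$; condition (iii) strengthens this by imposing a compatible matching across all characters simultaneously.
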
{Finally, there is the following purely algebraic reformulation, which upgrades \textbf{(N3)} by adding a certain compatibility of the isomorphism of abelianized Galois groups with ramification:
\begin{introtheorem} \label{main3}
Let $\bK$ and $\bL$ denote arbitrary number fields. Then the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(}i{)}] \textup{[Field isomorphism]} $\bK$ and $\bL$ are isomorphic as fields;
\item[\textup{(}iv{)}] \textup{[Reciprocity isomorphism]} there is a topological group isomorphism $$\hat{\psi} \, : \, {G}_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} {G}_{\bL}^{\mbox{{\tiny \textup{ab}}}}$$ and an isomorphism $$\Psi \, : \, J_{\bK}^+ \overset{\sim}{\rightarrow} J_{\bL}^+$$ of semigroups of ideals such that the following two compatibility conditions are satisfied:
\begin{enumerate}
\item[\textup{(a)}] compatibility of $\Psi$ with norms: $N_{\bL}(\Psi(\fn))=N_{\bK}(\fn)$ for all ideals $\fn \in J_{\bK}^+$; and
\item[\textup{(b)}] compatibility with the Artin map: for every finite abelian extension $$\bK'=\left(\bK^{\mbox{{\tiny \textup{ab}}}}\right)^N/\bK$$ (with $N$ a subgroup in $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$) and every prime $\p$ of $\bK$ unramified in $\bK'$, the prime $\Psi(\p)$ is unramified in the corresponding field extension $$\bL':=\left(\bL^{\mbox{{\tiny \textup{ab}}}}\right)^{\hat\psi(N)}/\bL,$$ and we have
$$ \hat\psi \left( \mathrm{Frob}_{\p} \right) = \mathrm{Frob}_{\Psi(\p)}. $$
\end{enumerate}
\end{enumerate}
\end{introtheorem}
}
We first say a few words about the proofs. We start by proving that QSM-isomorphism (ii) implies field isomorphism (i). For this, we first prove that the fields are arithmetically equivalent (by interpreting the zeta functions as partition functions and studying the relation between the Hamiltonians of the two systems), and then we use some results from the reconstruction of dynamical systems from non-involutive algebras to deduce an identification of the semigroups of integral ideals of $\bK$ and $\bL$ and a compatible homeomorphism of $X_{\bK}$ with $X_{\bL}$. We use this to prove that $\varphi$ preserves a layered structure in the algebra corresponding to ramification in the field, and this allows us to construct an isomorphism of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ with $G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ ``compatible with the Artin map'', an isomorphism of unit ideles (built up locally from a matching of inertia groups), and finally an isomorphism of the multiplicative semigroups of totally positive elements (viz., positive in every real embedding of the number field) of the rings of integers, which occur as inner endomorphisms of the dagger-subalgebra. We then prove that the map is additive modulo large enough inert primes, using the Teichm\"uller lift. From there, it is easy to pass from an isomorphism of semirings of totally positive elements to an isomorphism of the fields.
Then we prove that L-isomorphism (iii) implies QSM-isomorphism (ii): from the matching of $L$-series, we get a matching of semigroups of ideals, compatible with the Artin map, by doing some character theory with the $L$-series of the number fields as counting functions of ideals that have a given norm and a given image under the Artin map in the maximal abelian extension where they remain unramified. We then extend these maps to the whole algebra by a topological argument.
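
The counting step can be illustrated as follows (a heuristic sketch, stated only for ideals coprime to the conductors involved): writing $L_{\bK}(\chi,s)=\sum_{\fn} \chi(\vartheta_{\bK}(\fn)) N(\fn)^{-s}$ and letting $\chi$ run through the characters of a finite quotient $G$ of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, orthogonality of characters gives, for fixed $\gamma \in G$,
$$ \frac{1}{|G|} \sum_{\chi \in \widehat{G}} \overline{\chi(\gamma)} \, L_{\bK}(\chi,s) = \sum_{\fn \,:\, \vartheta_{\bK}(\fn)=\gamma} N(\fn)^{-s}, $$
so an L-isomorphism matches, for every norm and every Artin class, the number of ideals with that norm and that class.
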
In this context, one may try to rewrite the main theorems in a functorial way, as a bijection of certain Hom-sets. It would be interesting to understand the relation to the functor from number fields to QSM-systems in \cite{LNT}.
It is easy to see that reciprocity isomorphism (iv) implies L-isomorphism (iii), and of course, field isomorphism (i) implies the rest.
The proof seems to indicate that a mere isomorphism of the $C^*$-algebras $A_{\bK}$ and $A_{\bL}$ does not suffice to conclude that $\bK$ and $\bL$ are isomorphic; we make heavy use of the compatibility with time evolution given by the norms. It would be interesting to know whether the condition of preserving the dagger subalgebra can be dropped from QSM-isomorphism. Neshveyev has shown us an example of a (non-dagger) inner endomorphism of $(A_{\bK},\sigma_{\bK})$ that doesn't respect $C(X_{\bK})$. On the other hand, QSM-isomorphism does imply arithmetic equivalence, so by Ga{\ss}mann's results, QSM-isomorphism (without requiring dagger isomorphism) for Galois extensions of $\Q$ already implies field isomorphism.
Finally, we remark that our proof is constructive: we exhibit, from the various other isomorphisms, an explicit field isomorphism.
\begin{remark*} We make a few remarks about the condition of L-isomorphism in the theorem. First of all, the equivalence of field isomorphism and L-isomorphism/reciprocity isomorphism is a purely number theoretical statement, without reference to QSM-systems. It is a number theoretical challenge to provide a direct proof of this equivalence (of course, one can clear the current proof of QSM-lingo).
Secondly, one may wonder whether the L-isomorphism condition (iii) can be replaced by something weaker. As we already observed, requiring (iii) for the trivial character only is not enough, but what about, for example, this condition:
\begin{quote}
\textup{(}iii{)}$_2$ \emph{All rational quadratic $L$-series of $\bK$ and $\bL$ are equal, i.e.\ for all integers $d$ that are not squares in $\bK$ and $\bL$, we have $L_{\bK}(\chi_d,s)=L_{\bL}(\chi_d,s)$.}
\end{quote}
By considering only rational characters, one does not need to introduce a bijection of abelianized Galois groups, since there is an automatic matching of conductors. One can also consider a similar statement (iii)$_n$ for all $n$-th order rational $L$-series.
It turns out that (iii)$_2$ is \emph{not} equivalent to (ii). We prove that as soon as $\bK$ and $\bL$ have the same zeta functions, condition (iii)$_2$ holds (the proof uses \emph{Ga{\ss}mann-equivalence}, and was discovered independently by Lotte van der Zalm in her undergraduate thesis \cite{Lotte}). Another number theoretical challenge is to give a purely analytical proof of this statement (i.e., not using group theory).
Finally, we note that the condition of L-isomorphism is motivic: it gives an identification of $L$-series of rank one motives over both number fields (in the sense of \cite{Deligne}, \S 8).
\end{remark*}
\begin{remark*} After announcing our result at the GTEM conference in Barcelona (September 2010), Bart de Smit rose to the first number theoretical challenge (to prove the equivalence of field isomorphism and L-isomorphism), by using Galois theory, cf.\ \cite{BDS}. The method of de Smit allowed him to prove that if for two number fields $\bK$ and $\bL$, the sets of zeta functions of all their abelian extension are equal, then the fields are isomorphic. He can also prove that it suffices for this conclusion to hold that there is a bijection between the $2$-torsion subgroups of $\widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and $\widehat G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ (so the sets of all quadratic or trivial characters) such that the corresponding $L$-series are equal, and for given fields, one can construct a finite list of quadratic characters which it suffices to check. Also, with Hendrik Lenstra, he has proven that every number field has an abelian $L$-series that does not occur for any other number field.
\end{remark*}
\begin{remark*}[Anabelian vs.\ noncommutative]
The anabelian philosophy is, in the words of Gro\-then\-dieck (\emph{Esquisse d'un programme}, \cite{Gro}, footnote (3)) ``a construction which pretends to ignore [\dots] the algebraic equations which traditionally serve to describe schemes, [\dots] to be able to hope to reconstitute a scheme [\dots] from [\dots] a purely topological invariant [\dots]''.
In the zero-dimensional case, the fundamental group plays no r\^ole, only the absolute Galois group, and we arrive at the theorem of Neukirch and Uchida (greatly generalized in recent years, notably by Bogomolov-Tschinkel \cite{BoTs}, Mochizuki \cite{Moch} and Pop \cite{Pop}, compare \cite{Sz}).
Our main result indicates that QSM-systems for number fields can be considered as some kind of substitute for the absolute Galois group. The link to Grothendieck's proposal arises via a philosophy from noncommutative geometry that ``topology = $C^*$-algebra'' and ``time evolution = Frobenius''. This would become a genuine analogy if one could unearth a ``Galois theory'' that describes a categorical equivalence between number fields on the one hand, and their QSM-systems on the other hand. Anyhow, it seems Theorem \ref{main} indicates that one may, in some sense, substitute ``noncommutative'' for ``anabelian''.\footnote{Interestingly, the Wikipedia entry for ``Anabelian geometry'' starts with ``Not to be confused with Noncommutative Geometry'' (retrieved 16 Aug 2010).} This substitution has an interesting side effect: in the spirit of Kronecker's programme, one wants to characterize a number field by structure that is ``internal'' to it (i.e., not using extensions): this is the case for the QSM-system, since class field theory realizes Kronecker's programme for abelian extensions. On the other hand, anabelian geometry characterizes a number field by its absolute Galois group, an object whose ``internal'' understanding remains largely elusive and belongs to the Langlands programme.
In the style of Mochizuki's \emph{absolute} version of anabelian geometry (cf.\ \cite{Mo}), one may ask how to reconstruct a number field from its associated QSM-system (or $L$-series), rather than to reconstruct an isomorphism of fields from an isomorphism of QSM-systems (or an L-isomorphism).
It would be interesting to study the analogue of our results for the case of function fields, and higher dimensional schemes. Jacob \cite{Jacob} and Consani-Marcolli \cite{ConsM} have constructed function field analogues of QSM systems that respectively have the Weil and the Goss zeta function as partition function. The paper \cite{CKZ} studies arithmetic equivalence of function fields using the Goss zeta function.
\end{remark*}
\begin{remark*}[Link with hyperring theory]
Connes and Consani have studied the adele class space as a hyperring in the sense of Krasner (\cite{Krasner}). They prove in \cite{CC} (Theorem 3.13) that
\begin{quote} (v) [Hyperring isomorphism] \ \emph{the two adele class spaces $\A_{\bK}\! /\! \bK^* \cong \A_{\bL}\! /\! \bL^*$ are isomorphic as hyperrings over the Krasner hyperfield;}
\end{quote} is equivalent to field isomorphism. The proof is very interesting: it uses classification results from incidence geometry. One may try to prove that QSM-isomorphism implies hyperring isomorphism directly (thus providing a new proof of the equivalence of field isomorphism with QSM-isomorphism; this is especially tempting, since Krasner developed his theory of hyperrings for applications to class field theory).
Observe that the equivalence of hyperring isomorphism with field isomorphism is rather far from the anabelian philosophy (which would be to describe algebra by topology), since it uses (algebraic) isomorphism of hyperrings to deduce isomorphism of fields. But it might be true that the \emph{topology/geometry} of the hyperring can be used instead. As a hint, we refer to Theorem 7.12 in \cite{CC}: over a global function field, the groupoid of prime elements of the hyperring of adele classes \emph{is} the abelianized loop groupoid of the curve, cf.\ also \cite{CC2}, Section 9.
\end{remark*}
\begin{remark*}[Analogues in Riemannian geometry]
There is a well-known (limited) analogy between the theory of $L$-series in number theory and the theory of spectral zeta functions in Riemannian geometry. For example, the ideas of Ga{\ss}mann were used by Sunada to construct isospectral, non-isometric manifolds (cf.\ \cite{Sunada}): the spectral zeta function does not determine a Riemannian manifold up to isometry (actually, not even up to homeomorphism).
In \cite{Riem}, it was proven that the isometry type of a closed Riemannian manifold is determined by a \emph{family} of Dirichlet series associated to the Laplace-Beltrami operator on the manifold. In \cite{CMRiem}, it was proven that one can reconstruct a compact hyperbolic Riemann surface from a suitable \emph{family} of Dirichlet series associated to a spectral triple. These can be considered as analogues in manifold theory of the equivalence of (i) and (iii).
One might consider as another analogy of (iii) the matching of all $L$-series of Riemannian coverings of two Riemannian manifolds, but this appears not to be entirely satisfactory; for example, there exist simply connected isospectral, non-isometric Riemannian manifolds (cf.\ Sch\"uth \cite{Schueth}).
One may consider Mostow rigidity (a hyperbolic manifold of dimension at least three is determined by its fundamental group) as an analogue of the anabelian theorem. Again, this is very \emph{an}abelian, since the homology rarely determines a manifold.
There is a further occurence of $L$-series in geometry (as was remarked to us by Atiyah): the Riemann zeta function is the only Dedekind zeta function that occurs as spectral zeta function of a manifold (namely, the circle); but more general $L$-series can be found in the geometry of the resolution of the cusps of a Hilbert modular variety (\cite{Atiyah}, compare \cite{Solv}), a kind of ``virtual manifold'' that also has a ``quotient structure'', just like the QSM-system algebra is a noncommutative quotient space.
\end{remark*}
\section*{Disambiguation of notations}
There will be one notational sloppiness throughout: we will denote maps that are induced by a given isomorphism $\varphi$ by the same letter $\varphi$.
Since the number theory and QSM literature have conflicting standard notations, we include a table of notations for the convenience of the reader:
\medskip
\begin{footnotesize}
\noindent $R^*$ \dotfill invertible elements of a ring $R$\\
\noindent $R^\times$ \dotfill non-zero elements of a ring $R$\\
\noindent $\widehat{G}$ \dotfill Pontrjagin dual: continuous $\Hom(G,S^1)$ of a topological abelian group $G$ \\
\noindent $G^0$ \dotfill connected component of identity \\
\noindent $\bK, \bL, \bM, \bN$ (blackboard bold capitals) \dotfill number fields\\
\noindent $L_{\bK}(\chi,-)$ \dotfill $L$-series of field $\bK$ for generalized Dirichlet character $\chi\in \widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$\\
\noindent $J_{\bK}^+$ \dotfill semigroup of integral ideals of a number field $\bK$ \\
\noindent $N=N_{\bK}=N_{\Q}^{\bK}$ \dotfill the norm map on ideals of the number field $\bK$ \\
\noindent $\fn,\p,\q$ (fraktur letters) \dotfill integral ideals of a number field \\
\noindent $\cO_{\bK}$ \dotfill ring of integers of a number field $\bK$\\
\noindent $\cO_{\bK,+}$ \dotfill semiring of totally positive integers of a number field $\bK$\\
\noindent $\hat\cO_{\bK,\p}$ \dotfill completed local ring of $\p$-adic integers in $\bK$\\
\noindent $\hat\cO_{\bK}$ \dotfill ring of finite integral adeles of a number field $\bK$\\
\noindent $\bar{\bK}_{\p}$ \dotfill residue field of $\bK$ at $\p$\\
\noindent ${\bK}_{\fn}$ \dotfill maximal abelian extension of $\bK$ unramified outside prime divisors of $\fn$\\
\noindent $f(\p|p)=f(\p|{\bK})$ \dotfill inertia degree of $\p$ over $p$, in $\bK$ \\
\noindent $\f_\chi$ \dotfill conductor of $\chi$ \\
\noindent $f_\chi$ \dotfill element of $A_{\bK}$ that implements the character $\chi$ \\
\noindent $f_{\chi,\fm}$ \dotfill generator of $C(X_{\bK})$ as in Lemma \ref{gengen} \\
\noindent $G_{\bK}$ \dotfill absolute Galois group of $\bK$ \\
\noindent $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ \dotfill Galois group of maximal abelian extension of $\bK$ \\
\noindent $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ \dotfill Galois group of maximal abelian extension of $\bK$ unramified at divisors of $\fn$ \\
\noindent $\mathring{G}_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ \dotfill Galois group of maximal abelian extension of $\bK$ unramified outside divisors of $\fn$ \\
\noindent $\A_{\bK}$ \dotfill adele ring of a number field $\bK$\\
\noindent $\A_{\bK,f}$ \dotfill finite (non-archimedean) part of the adele ring of a number field $\bK$\\
\noindent $A_{\bK}$ \dotfill the $C^*$ algebra of the QSM-system of the number field $\bK$\\
\noindent $\vartheta_{\bK}$ \dotfill Artin reciprocity map $\A_{\bK}^* \rightarrow G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ \\
\noindent $\beta$ \dotfill positive real number representing ``inverse temperature''\\
\noindent $X_{\bK}$ \dotfill topological space of $[(\gamma,\rho)] \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat\cO_{\bK}^*} \hat\cO_{\bK}$ underlying part of the algebra $A_{\bK}$ \\
\noindent $X^1_{\bK}$ \dotfill dense subspace of $[(\gamma,\rho)] \in X_{\bK}$ on which none of components of $\rho$ is zero \\
\noindent $\mu_{\fn}$ \dotfill element of the $C^*$-algebra $A_{\bK}$ corresponding to the ideal $\fn \in J_{\bK}^+$\\
\noindent $e_{\fn}$ \dotfill $=\mu_{\fn} \mu_{\fn}^*$, projector\\
\noindent $\epsilon_\gamma$ \dotfill symmetry of $A_{\bK}$ induced by multiplication, for $\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, with $[(\gamma,1)]$ on $X_{\bK}$\\
\noindent $\varepsilon_s$ \dotfill endomorphism of $A_{\bK}$ given by $\varepsilon_s(f)(\gamma,\rho)=f(\gamma,s^{-1}\rho)e_{s\hat\cO_{\bK} \cap \bK}$ \\
\noindent $\rho$ \dotfill finite integral adele $\in \hat\cO_{\bK}$\\
\noindent $\rho_{\p}$ \dotfill $\p$-component of an adele $\rho$\\
\noindent $\rho_{\fn}(f)$ \dotfill action of ideal $\fn$ on $f \in C(X_{\bK})$ : $\rho_{\fn}(f)=f(\vartheta_{\bK}(\fn) \gamma, \fn^{-1} \rho)e_{\fn}$\\
\noindent $\fn \ast x$ \dotfill action of ideal $\fn$ on $x \in X_{\bK}$ : $\fn \ast [(\gamma,\rho)]=[(\vartheta_{\bK}(\fn)^{-1} \gamma, \fn \rho)]$\\
\noindent $\sigma_{\fn}(f)$ \dotfill partial inverse to $\rho_{\fn}$ : $\sigma_{\fn}(f)=f(\fn \ast \rho)$ \\
\noindent $\sigma_{\bK}=\sigma_t=\sigma_{\bK,t}$ \dotfill the time evolution (in time $t$) of the QSM-system of the number field $\bK$\\
\noindent $\rtimes$ \dotfill crossed product construction of $C^*$-algebras (not semidirect product of groups) \\
\noindent $\omega$ \dotfill a state of a $C^*$-algebra \\
\noindent $\omega_\beta$ \dotfill a $\KMS_\beta$ state of a $C^*$-algebra \\
\noindent $\pi_\omega$ \dotfill GNS-representation corresponding to $\omega$ \\
\noindent $\cM_\omega$ \dotfill weak closure of algebra in GNS-representation \\
\noindent $H$ \dotfill Hamiltonian\\
\noindent $\cH$ \dotfill Hilbert space \\
\noindent $\KMS_\beta(A,\sigma)$ \dotfill the set of $\KMS_\beta$-states of the QSM-system $(A,\sigma)$ \\
\noindent $\KMS_\beta(\bK)$ \dotfill $\KMS_\beta(A_{\bK},\sigma_{\bK})$ \\
\end{footnotesize}
\part{QSM-ISOMORPHISM OF NUMBER FIELDS}
\section{Isomorphism of QSM systems} \label{isom}
We recall some definitions and refer to \cite{BR}, \cite{CM2}, and Chapter 3 of \cite{CM} for more information and for some physics background. After that, we introduce isomorphism of QSM-systems, and prove it preserves $\KMS$-states (cf.\ infra).
\begin{df}
A \emph{quantum statistical mechanical system} (QSM-system) $(A,\sigma)$ is a (unital) $C^*$-algebra $A$ together with a so-called \emph{time evolution} $\sigma$, which is a continuous group homomorphism $$\sigma \, : \, \R \rightarrow \Aut A \, : \, t \mapsto \sigma_t.$$
A \emph{state} on $A$ is a continuous positive unital linear functional $ \omega \, : \, A \rightarrow \C$. We say $\omega$ is a \emph{KMS$_\beta$ state} for some $\beta \in \R_{>0}$ if for all $a,b \in A$, there exists a function $F_{a,b}$, holomorphic in the strip $0<\Im\, z<\beta$ and bounded continuous on its boundary, such that
$$ F_{a,b}(t)=\omega(a\sigma_t(b)) \mbox{ and } F_{a,b}(t+i\beta)=\omega(\sigma_t(b)a) \ \ \ \ (\forall t \in \R). $$
Equivalently, $\omega$ is a $\sigma$-invariant state with $\omega(ab)=\omega(b\sigma_{i\beta}(a))$ for $a,b$ in a dense set of $\sigma$-analytic elements. The set $\KMS_\beta(A,\sigma)$ of $\KMS_\beta$ states is topologized as a subspace of the convex set of states, a weak* closed subset of the unit ball in the operator norm of bounded linear functionals on the algebra.
A $\KMS_\beta$ state is called \emph{extremal} if it is an extremal point in the (compact convex) set of $\KMS_\beta$ states for the weak (i.e., pointwise convergence) topology.
\end{df}
\begin{remark}[Physical origins]
This notion of QSM-system is one of the possible physical theories of quantum statistical mechanics; one should think of $A$ as the algebra of observables, represented on some Hilbert space $\cH$ with orthonormal basis $\{\Psi_i\}$; the time evolution, in
the given representation, is generated by a Hamiltonian $H$ by \begin{equation} \label{imp} \sigma_t(a) = e^{itH}ae^{-itH},\end{equation} and (mixed) states of the system are combinations $$a \mapsto \sum \lambda_i \langle \Psi_i | a \Psi_i \rangle$$ which will mostly be of the form $$a \mapsto \tr(\rho a)$$ for some density matrix $\rho$. A typical equilibrium state (here, this means stable by time evolution) is a Gibbs state $$a \mapsto \tr(a e^{-\beta H})/\tr(e^{-\beta H})$$ at temperature $1/\beta$, where we have normalized by the partition function $$\tr(e^{-\beta H}).$$ The KMS-condition (named after Kubo, Martin and Schwinger) is a correct generalization of the notion of equilibrium state to more general situations, for example when the trace class condition $$\tr(e^{-\beta H})< \infty,$$
needed to
define Gibbs states, no longer holds (cf.\ Haag, Hugenholtz and Winnink \cite{HHW}).
\end{remark}
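
As a consistency check (a standard computation, included for convenience), a Gibbs state $\omega(a)=\tr(a e^{-\beta H})/\tr(e^{-\beta H})$ satisfies the algebraic form of the KMS condition: by $(\ref{imp})$ we have $\sigma_{i\beta}(a)=e^{-\beta H} a e^{\beta H}$, and cyclicity of the trace gives
$$ \omega(b\,\sigma_{i\beta}(a)) = \frac{\tr(b\, e^{-\beta H} a\, e^{\beta H} e^{-\beta H})}{\tr(e^{-\beta H})} = \frac{\tr(a\, b\, e^{-\beta H})}{\tr(e^{-\beta H})} = \omega(ab). $$
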
\begin{remark}[Semigroup crossed product]
We recall the construction of a \emph{semigroup crossed product algebra}. A semigroup $C^*$-dynamical system is a triple
$(A,S,\rho)$ of a $C^*$-algebra $A$, a semigroup $S$ and
an action $\rho$ of $S$ by endomorphisms of $A$. A
covariant representation $(\pi,\mu)$ is a pair of a
representation $\pi$ of the $C^*$-algebra $A$ as
bounded operators on a Hilbert space $\cH$ and a
representation $\mu$ of the semigroup $S$ on $\cH$
by isometries, with the property that
$$ \pi(\rho_s(a)) = \mu_s \pi(a) \mu_s^* $$
for all $a\in A$ and $s\in S$. Then the crossed product
$C^*$-algebra $A\rtimes_\rho S$ is the universal
$C^*$-algebra such that each covariant representation
$(\pi,\mu)$ factors through a representation of $A\rtimes_\rho S$.
The existence of $A\rtimes_\rho S$, with an embedding of $A$
in $A\rtimes_\rho S$, is guaranteed when
the semigroup $S$ is an Ore semigroup, namely it is
cancellative ($as=bs$ or $sa=sb$ implies $a=b$ in $S$)
and right-reversible ($Ss\cap St\neq \emptyset$ for all $s,t\in S$),
and the action $\rho$ is by injective endomorphisms that extend
continuously to the multiplier algebra $M(A)$, mapping the identity
to a projection.
Under these same hypotheses on the semigroup $S$ and the action
$\rho$, the algebra $A\rtimes_\rho S$ is the closure of the linear span
of all monomials of the form $\mu_s^* a \mu_t$, with $s,t \in S$ and
$a\in A$, where the $\mu_s$ here denote the isometries in
$A\rtimes_\rho S$ associated to elements $s\in S$. In particular,
the isometries $\mu_s$ satisfy $\mu_s \mu_t = \mu_{st}$ and
$\mu_s^* \mu_s=1$, while $\mu_s \mu_s^*$ is a projector.
One also has the relations $a \mu_s^* = \mu_s^* \rho_s(a)$ and
$\mu_s a = \rho_s(a) \mu_s$.
See \cite{Larsen} for a more detailed discussion of semigroup
crossed product algebras and their relation to partially defined
actions of the associated enveloping group $G =S^{-1} S$
(which exists and is unique up to canonical isomorphism in
the Ore case).
\end{remark}
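
As a quick illustration of these relations (a routine verification), the fact that $e_s=\mu_s\mu_s^*$ is a projector follows from $\mu_s^*\mu_s=1$:
$$ e_s^2 = \mu_s(\mu_s^*\mu_s)\mu_s^* = \mu_s\mu_s^* = e_s, \qquad e_s^*=e_s, $$
and in any covariant representation $(\pi,\mu)$ one checks
$$ \pi(\rho_s(a))\,\pi(\rho_s(b)) = \mu_s\pi(a)\mu_s^*\mu_s\pi(b)\mu_s^* = \mu_s\pi(ab)\mu_s^* = \pi(\rho_s(ab)), $$
consistent with $\rho_s$ being an endomorphism.
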
\begin{df}
The \emph{dagger subalgebra} $B^\dagger$ of the semigroup crossed product $ B=A\rtimes_\rho S$ is the (non-involutive) subalgebra generated algebraically by $A$ and the $\mu_t$ for $t \in S$ (but not including the $\mu_t^*$).
\end{df}
What we call the ``dagger subalgebra'' (and its closure) can be seen as a noncommutative analogue of the disc algebra; its study was initiated by Arveson and Josephson; for references see, e.g., \cite{Davidson}, \cite{Power}.
\medskip
We now introduce the following equivalence relation for QSM-systems:
\begin{df} \label{dfQSMiso}
An \emph{isomorphism} of two QSM-systems $(A,\sigma)$ and $(B,\tau)$ is a $C^*$-algebra isomorphism $\varphi: A \overset{\sim}{\rightarrow} B$ that intertwines time evolutions, i.e., such that the following diagram commutes:
$$\xymatrix{ A \ar@{->}[r]^{\varphi}_{\sim} \ar@{->}[d]_{\sigma} & B \ar@{->}[d]^{\tau} \\
A \ar@{->}[r]^{\varphi}_{\sim} & B
}$$
\end{df}
\begin{df} \label{dfdaggeriso}
If $(A,\sigma)$ and $(B,\tau)$ are two QSM-systems with given dagger-subalgebras $A^\dagger \subseteq A$ and $B^\dagger \subseteq B$ that are preserved by the respective time evolutions (i.e., $\sigma(A^\dagger) \subseteq A^\dagger$ and $\tau(B^\dagger) \subseteq B^\dagger$), then we call an isomorphism $\varphi$ of the two systems a \emph{dagger-isomorphism} if $\varphi(A^\dagger)=B^\dagger$. \end{df}
\begin{lem} \label{basic} Let $\varphi : (A,\sigma) \overset{\sim}{\rightarrow} (B,\tau)$ denote an isomorphism of QSM-systems. Then for any $\beta>0$, \begin{enumerate}
\item[\textup{(}i{)}] the pullback $$\varphi^\ast \, : \, \KMS_\beta(B,\tau) \overset{\sim}{\rightarrow} \KMS_\beta(A,\sigma) \, : \, \omega \mapsto \omega \circ \varphi $$
is a homeomorphism between the spaces of $\KMS_\beta$
states on $B$ and $A$;
\item[\textup{(}ii{)}]$\varphi^\ast$ induces a homeomorphism between extremal $\KMS_\beta$ states on $B$ and $A$.
\end{enumerate}
\end{lem}
\begin{proof}
The map $\varphi$ obviously induces a bijection between states on $B$ and states on $A$.
For (i), let $F_{a,b}$ be the holomorphic function that implements the $\KMS_\beta$-condition for the state $\omega$ on $(B,\tau)$ at $a,b \in B$, so $$F_{a,b}(t)=\omega(a \tau_t(b)) \mbox{ and } F_{a,b}(t+i\beta)=\omega(\tau_t(b)a).$$ The following direct computation then shows that the function $F_{\varphi(c),\varphi(d)}$ implements the $\KMS_\beta$-condition for the state $\varphi^\ast \omega$ on $(A,\sigma)$ at $c,d \in A$:
$$(\omega \circ \varphi)(c \sigma_t(d)) = \omega ( \varphi(c) \tau_t (\varphi(d))) = F_{\varphi(c),\varphi(d)}(t),$$ and similarly at $t+i\beta$. Also, note that the pullback is continuous, since a $C^*$-algebra isomorphism is compatible with the topology on the set of $\KMS$-states.
For (ii), if a $\KMS_\beta$ state $\omega$ on $B$ is not extremal, then the GNS-representation $\pi_\omega$ of $\omega$ is not factorial.
As in Prop.\ 3.8 of \cite{CM2}, there exists a positive linear functional $\omega_1$ dominated by $\omega$, namely $\omega_1 \leq \omega$, which extends from $B$ to the von Neumann algebra given by the weak closure $\cM_\omega$ of $B$ in the GNS representation. The functional $\omega_1$ is of the
form $\omega_1 (b)=\omega(hb)$ for some positive element $h$ in the center of the von Neumann algebra $\cM_\omega$. Consider then the pull back $$\varphi^*(\omega)(a)=\omega(\varphi(a))$$ and
$$\varphi^*(\omega_1)(a)=\omega_1(\varphi(a)) =\omega(h \varphi(a))$$ for $a \in A$. The continuous linear
functional $\varphi^*(\omega_1)$ has norm $\| \varphi^*(\omega_1)\| \leq 1$. In fact, since we are dealing with unital algebras, $$\| \varphi^*(\omega_1)\| =\varphi^*(\omega_1)(1)=\omega(h).$$
The linear functional $\omega_2(b)=\omega((1-h)b)$ also satisfies the positivity property
$\omega_2(b^* b)\geq 0$, since $\omega_1 \leq \omega$. The decomposition
$$\varphi^*(\omega) = \lambda \eta_1 + (1-\lambda) \eta_2,$$
with $\lambda = \omega(h)$, $$\eta_1=\varphi^*(\omega_1)/\omega(h)\mbox{ and }\eta_2= \varphi^*(\omega_2)/\omega(1-h)$$ shows that the state $\varphi^*(\omega)$ is
not extremal. Notice that $\eta_1$ and $\eta_2$ are both $\KMS$ states. To see this, it suffices
to check that the state $\omega_1(b)/\omega(h)$ is $\KMS$. In fact, one has for all analytic elements $a,b \in B$:
$$\omega_1(ab) = \omega(hab) =
\omega(a h b) = \omega(hb\tau_{i\beta}(a)).$$
\end{proof}
\begin{df} An \emph{automorphism} of a QSM-system $(A,\sigma)$ is an isomorphism to itself. The group of such automorphisms is denoted by $\Aut(A,\sigma)$.
An \emph{endomorphism} of a QSM-system $(A,\sigma)$ is a $\ast$-homomorphism $A \rightarrow A$ that commutes with $\sigma_t$ for all $t$. We denote them by $\End(A,\sigma)$.
An \emph{inner endomorphism} is defined by $a \mapsto uau^*$ for some isometry $u \in A$ which is an eigenvector of the time evolution, i.e., $u^*u=1$ and there exists an eigenvalue $\lambda$ such that $\sigma_t(u)=\lambda^{it} u$ for all $t$. We denote them by $\mathrm{Inn}(A,\sigma)$. (Inner endomorphisms act trivially on $\KMS$-states, cf.\ \cite{CM}, Ch.\ 3, Section 2.3.)
If $A^\dagger \subset A$ is a dagger-subalgebra preserved by the time
evolution, we denote by $\mathrm{Inn}^{\dagger}(A,\sigma)$ the set of \emph{dagger inner endomorphisms}: the inner endomorphisms
of $(A,\sigma)$ defined by isometries in $A^{\dagger}$ that are eigenvectors of the time evolution.
\end{df}
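For instance, the last assertion can be verified directly: if $\omega$ is a $\KMS_\beta$ state and $u$ is an isometry with $\sigma_t(u)=\lambda^{it} u$, then $u$ is analytic for the time evolution with $\sigma_{i\beta}(u) = \lambda^{-\beta} u$, and the $\KMS_\beta$ condition gives
$$ \omega(u a u^*) = \omega\big(a u^*\, \sigma_{i\beta}(u)\big) = \lambda^{-\beta}\, \omega(a u^* u) = \lambda^{-\beta}\, \omega(a). $$
Taking $a=1$ shows that $\omega(u u^*) = \lambda^{-\beta}$, so the normalized pullback $\omega(u \,\cdot\, u^*)/\omega(u u^*)$ of $\omega$ along the inner endomorphism is $\omega$ itself; this is the sense in which inner endomorphisms act trivially on $\KMS$-states.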
\section{A QSM-system for number fields} \label{s2}
Bost and Connes (\cite{BC}) introduced a QSM-system for the field of rational numbers, and \cite{CMR},
\cite{CMR2} did so for imaginary quadratic fields. More general QSM-systems associated to arbitrary number fields were constructed by Ha and Paugam in \cite{HP} as a special case of their more
general class of systems for Shimura varieties, which in turn generalize the $\GL(2)$-system
of \cite{CM2}. We recall here briefly the construction of the systems for number fields in an
equivalent formulation (cf.\ also \cite{LLN}).
\begin{se}We denote by $J^+_{\bK}$ the semigroup of integral ideals, with the norm function
$$N \, : \, J_{\bK}^+ \rightarrow \Z \, : \, \fn \mapsto N(\fn)=N^{\bK}_{\Q}(\fn)=N_{\bK}(\fn).$$ Denote by $G^{\mbox{{\tiny \textup{ab}}}}_{\bK}$ the Galois group of the
maximal abelian extension of $\bK$. The Artin reciprocity map is denoted by
$$ \vartheta_{\bK} \, : \, \A_{\bK}^* \rightarrow G_{\bK}^{\mbox{{\tiny \textup{ab}}}}.$$
By abuse of notation, we will also write $\vartheta_{\bK}(\fn)$ for the image under this map of an ideal $\fn$, which is seen as an idele by choosing a non-canonical section $s$ of
$$\xymatrix@R=0pt{ \A_{\bK,f}^* \ar@{->>}[r] & J_{\bK} \ar@/^1.5pc/[l]_{s} & : & (x_{\p})_{\p} \mapsto \displaystyle\prod\limits_{\p \mbox{\footnotesize{ finite }}} \p^{v_{\p}(x_{\p})} }$$
The abuse lies in the fact that the image depends on this choice of section (thus, up to a unit in the finite ideles), but it is canonically defined in (every quotient of) the Galois group $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ of the maximal abelian extension unramified at prime divisors of $\fn$: on every finite quotient of this, it is the ``Frobenius element'' of $\fn$. The notation $\vartheta_{\bK}(\fn)$ will only occur in situations where this ambiguity plays no role; for example, we evaluate characters $\chi$ on $\vartheta_{\bK}(\fn)$ only if the conductor $\f_\chi$ of $\chi$ is coprime to $\fn$ (so $\chi$ factors over $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$). If $\fn=\p$ is a prime ideal with a chosen uniformizer $\pi_{\p}$, then we get a diagram
$$\xymatrix@R=0pt{ J_{\bK}^+ \ar@{->}[r]^{s} \ar@/^1.5pc/[rrr] & \A_{\bK}^* \ar@{->>}[r]^{\vartheta_{\bK}} & G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \ar@{->>}[r] & G_{\bK,\p}^{\mbox{{\tiny \textup{ab}}}} \\
\p \ar@{->}[r] & (1,\dots,1,\pi_{\p},1,\dots,1)\ar@{->}[rr] & & \vartheta_{\bK}(\p) }$$
in which the arrow $\vartheta_{\bK} \circ s$ depends on $s$, but the curved arrow doesn't depend on $s$.
We consider the fibered product $$ X_{\bK}:=G^{\mbox{{\tiny \textup{ab}}}}_{\bK}\times_{\hat\cO_{\bK}^*} \hat\cO_{\bK},$$
where $\hat\cO_{\bK}$ is the ring of finite integral adeles, and where the balancing is defined for $\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and $\rho \in \hat{\cO}_{\bK}$ by the equivalence $$(\gamma,\rho) \sim (\vartheta_{\bK}({u}^{-1}) \cdot \gamma, u \rho) \mbox{ for all } u \in \hat\cO_{\bK}^*.$$
\end{se}
\begin{df} \label{dfsystem}
The \emph{QSM-system $(A_{\bK}, \sigma_{\bK})$ associated to a number field $\bK$} is defined by
\begin{equation}\label{QSMnf}
A_{\bK}:=C(X_{\bK}) \rtimes J^+_{\bK} = C(G^{\mbox{{\tiny \textup{ab}}}}_{\bK}\times_{\hat\cO_{\bK}^*} \hat\cO_{\bK})\rtimes J^+_{\bK},
\end{equation}
where the crossed product structure is given by $\fn \in J_{\bK}^+$ acting on $f \in C(X_{\bK})$ as
$$ \rho_{\fn}(f)(\gamma,\rho)=f(\vartheta_{\bK}(\fn) \gamma, s(\fn)^{-1} \rho)e_{\fn}, $$
with $e_{\fn}=\mu_{\fn} \mu_{\fn}^*$ the projector onto the space of $[(\gamma,\rho)]$ where $s(\fn)^{-1}\rho \in \hat\cO_{\bK}$. Here $\mu_{\fn}$ is the isometry that implements the action of $J_{\bK}^+$.
Note that, because of the balancing over the finite idelic units $\hat \cO^*_{\bK}$, the dependence of $\vartheta_{\bK}(\fn)$ on $s$ is again of no influence. By further slight abuse of notation, we will leave out the section $s$ from the notation, and write the action as $f \mapsto f(\vartheta_{\bK}(\fn) \gamma, \fn^{-1} \rho)e_{\fn}$.
Of further use to us will be the partial inverse to this action defined by
$$ \sigma_{\fn}(f)(x) =f(\fn \ast x)$$
where we have defined the action $\fn \ast x$ of an ideal $\fn \in J_{\bK}^+$ on an element $x \in X_{\bK}$ as
$$ \fn \ast [(\gamma,\rho)]=[(\vartheta_{\bK}(\fn)^{-1} \gamma, \fn \rho)].$$
Then indeed,
$$ \mu_{\fn} \mu^*_{\fn} =e_{\fn}; \ \mu_{\fn}^* \mu_{\fn} = 1; \ \rho_{\fn}(f) = \mu_{\fn} f \mu_{\fn}^*; $$ $$ \sigma_{\fn}(f)=\mu_{\fn}^* f \mu_{\fn};\ \sigma_{\fn}(\rho_{\fn}(f)) = f; \ \rho_{\fn}(\sigma_{\fn}(f))=fe_{\fn}. $$
The dagger subalgebra $A_{\bK}^{\dagger}$ is the algebraic crossed product
generated by functions $f\in C(X_{\bK})$ and isometries $\mu_{\fn}$ with
the relations
\begin{equation}\label{relsmunuialg}
\mu_{\fn} f = \rho_{\fn}(f) \mu_{\fn}, \ \ \ \ f \mu_{\fn} = \mu_{\fn} \sigma_{\fn}(f) e_{\fn},
\end{equation}
where $\rho_{\fn}$ and $\sigma_{\fn}$ are as defined above.
This is not an involutive subalgebra because it does not contain the adjoints
$\mu_{\fn}^*$, but $A_{\bK}$ is the $C^*$-algebra generated by $A_{\bK}^{\dagger}$.
Finally, the time evolution
is given by
\begin{equation}\label{sigmaK}
\left\{ \begin{array}{ll} \sigma_{\bK,t}(f) =f, & \forall f \in C(G^{\mbox{{\tiny \textup{ab}}}}_{\bK}\times_{\hat\cO_{\bK}^*} \hat\cO_{\bK}); \\ \sigma_{\bK,t}(\mu_{\fn}) = N(\fn)^{it}\, \mu_{\fn}, & \forall \fn \in J^+_{\bK}. \end{array} \right.
\end{equation}
where $\mu_{\fn}$ are the isometries that implement the semigroup action of $J^+_{\bK}$.
The time evolution preserves the dagger subalgebra $A_{\bK}^{\dagger}$.
\end{df}
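For $\bK = \Q$, this construction recovers the original Bost--Connes system of \cite{BC}: by the Kronecker--Weber theorem, the Artin map identifies $G_{\Q}^{\mbox{{\tiny \textup{ab}}}}$ with $\hat\Z^*$, and the balancing collapses the fibered product (via $[(\gamma,\rho)] \mapsto \gamma\rho$), so that
$$ X_{\Q} = \hat\Z^* \times_{\hat\Z^*} \hat\Z \,\cong\, \hat\Z, \qquad A_{\Q} \cong C(\hat\Z) \rtimes \N^{\times}, $$
with $J_{\Q}^+ \cong \N^{\times}$ the multiplicative semigroup of positive integers, and time evolution determined by $\sigma_{\Q,t}(\mu_n) = n^{it}\, \mu_n$.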
\section{Hilbert space representation, partition function, KMS-states}
\begin{se}\label{kmschar} A complete classification of the $\KMS$ states for the systems $(A_{\bK},\sigma_{\bK})$ was obtained in \cite{LLN}, Thm.\ 2.1.
In particular, in the low temperature range $\beta > 1$, the extremal $\KMS_\beta$ states are parameterized
by elements $\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, and are in Gibbs form, given by normalized $L$-series
\begin{equation}\label{KMSlow}
\omega_{\beta,\gamma} (f) = \frac{1}{\zeta_{\bK}(\beta)} \sum_{\fn \in J_{\bK}^+} \frac{f(\fn \ast \gamma)}{
N(\fn)^{\beta}}.
\end{equation}
Let $\chi$ denote a character of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ (extended as usual by $0$ on ideals not coprime to its conductor $\f_\chi$). We define a function $f_\chi \in C(X_{\bK})$ by
\begin{equation} \label{fchi1} f_\chi(\gamma,\rho):= \left\{ \begin{array}{ll} \chi^{-1}(\gamma \vartheta_{\bK}(\rho')) & \mbox{ if } \forall v \mid \f_\chi, \rho_v \in \hat\cO_{\bK,v}^*\\ 0 & \mbox{ otherwise, } \end{array} \right. \end{equation}
with $\rho' \in \hat\cO_{\bK}^*$ any invertible integral idele such that $\rho'_v=\rho_v$ for all $v \mid \f_\chi$ (the value is independent of this choice).
Then from the definition we get \begin{equation} \label{fchi2} f_{\chi}(\fn \ast \gamma) = \left\{ \begin{array}{ll} \chi(\vartheta_{\bK}(\fn)) \chi^{-1}(\gamma) & \mbox{ if } (\fn;\f_\chi)=1, \\ 0 & \mbox{ otherwise, } \end{array} \right.\end{equation}
so that
\begin{equation}\label{KMSlowa}
\omega_{\beta,\gamma} (f_\chi) = \frac{1}{\zeta_{\bK}(\beta)\chi(\gamma)} \cdot L_{\bK}(\chi,\beta), \end{equation}
which is, up to normalization, the usual $L$-series of $\chi$ (defined with the convention of summing only over ideals coprime to the conductor of $\chi$).
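Explicitly, substituting \eqref{fchi2} into \eqref{KMSlow} gives
$$ \omega_{\beta,\gamma}(f_\chi) = \frac{1}{\zeta_{\bK}(\beta)} \sum_{(\fn,\f_\chi)=1} \frac{\chi(\vartheta_{\bK}(\fn))\, \chi^{-1}(\gamma)}{N(\fn)^{\beta}} = \frac{\chi^{-1}(\gamma)}{\zeta_{\bK}(\beta)}\, L_{\bK}(\chi,\beta), $$
which is \eqref{KMSlowa}.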
\end{se}
\begin{se}\label{hil}
Associated to any element
$\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ is a natural representation $\pi_{\gamma}$ of the algebra $A_{\bK}$ on the Hilbert space $\ell^2(J_{\bK}^+)$. Namely, let $\varepsilon_{\fm}$ denote the canonical basis of $\ell^2(J_{\bK}^+)$.
Then the action on $\ell^2(J_{\bK}^+)$ of an element
$ f_{\fn} \mu_{\fn} \in A_{\bK}$ with $\fn \in J_{\bK}^+$ and $f_{\fn} \in C(X_{\bK})$ is given by
$$ \pi_\gamma(f_{\fn} \mu_{\fn}) \,\, \varepsilon_{\fm} = f_{\fn}(\fn \fm \ast \gamma)
\, \varepsilon_{\fn \fm}. $$
In this picture, the time evolution is implemented (in the sense of formula (\ref{imp})) by a Hamiltonian
\begin{equation}\label{HamK}
H_{\sigma_{\bK}} \varepsilon_{\fn} = \log N(\fn) \,\, \varepsilon_{\fn}.
\end{equation}
\end{se}
\begin{se}
In this representation,
$$ \tr( \pi_\gamma(f) e^{-\beta H_{\sigma_{\bK}}}) = \sum_{\fn \in J_{\bK}^+} \frac{f(\fn \ast \gamma)}{
N(\fn)^{\beta}}. $$
Setting $f=1$, the Dedekind zeta function $$\zeta_{\bK}(\beta) =\sum\limits_{\fn\in J_{\bK}^+} N(\fn)^{-\beta}$$ appears as the partition function $$\zeta_{\bK}(\beta) =\tr( e^{-\beta H_{\sigma_{\bK}}})$$ of the system (convergent for $\beta>1$).
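Note that, by \eqref{HamK}, the eigenvalue $\log n$ of $H_{\sigma_{\bK}}$ occurs with multiplicity $a_n = \#\{\fn \in J_{\bK}^+ \,\colon\, N(\fn)=n\}$, so the spectrum of the Hamiltonian, counted with multiplicities, encodes exactly the Dirichlet coefficients of the partition function:
$$ \zeta_{\bK}(\beta) = \tr(e^{-\beta H_{\sigma_{\bK}}}) = \sum_{n \geq 1} \frac{a_n}{n^{\beta}}. $$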
\end{se}
\begin{remark}[Formulation in terms of $\bK$-lattices] As shown in \cite{CM2} and \cite{CM}, the original Bost--Connes system admits a geometric reformulation
in terms of commensurability classes of one-di\-men\-si\-o\-nal $\Q$-lattices, which in Section 3 of \cite{LLN} was generalized to number fields. More specifically, the moduli space of $\bK$-lattices up to scaling is the abelian part $C(X_{\bK})$ of the algebra (a classical quotient), and the moduli space up to scaling \emph{and} commensurability exhibits the complete algebra (a genuinely noncommutative space). We recall the definitions for convenience.
Denote by $\bK_\infty=\prod_{v|\infty} \hat{\bK}_v$ the product
of the completions at the archimedean places, and by $(\bK_\infty^*)^0$ the connected
component of the identity in $\bK^*_\infty$. A \emph{one-dimensional $\bK$-lattice} is a pair $(\Lambda, \phi)$, where $\Lambda\subset \bK_\infty$
is a lattice with $\cO_{\bK}\Lambda = \Lambda$ and $\phi: \bK/\cO_{\bK} \to \bK \Lambda/\Lambda$
is an $\cO_{\bK}$-module homomorphism. The set of one-di\-men\-si\-o\-nal $\bK$-lattices can be identified
with
\begin{equation}\label{modspaceKlatt}
\cM_{\bK,1} = \A_{\bK}^* / \bK^* \times_{\hat\cO^*_{\bK}} \hat\cO_{\bK},
\end{equation}
as in \cite{CMR} and \cite{ConsM}, cf.\ \cite{LLN} Lemma 3.3. Two $\bK$-lattices are \emph{commensurable}, denoted by $$(\Lambda_1,\phi_1)\sim (\Lambda_2,\phi_2),$$
if $\bK \Lambda_1=\bK \Lambda_2$ and $\phi_1 =\phi_2$ modulo $\Lambda_1+\Lambda_2$.
The \emph{scaling equivalence} corresponds to identifying one-dimensional
$\bK$-lattices $(\Lambda,\phi)$ and $(k\Lambda, k\psi)$, where $k\in (\bK_{\infty}^*)^0$
and $\psi$ is a pointwise limit
of elements $r \phi$ with $r \in \cO_{\bK}^* \cap (\bK_\infty^*)^0$. The resulting convolution
algebra corresponds to the action of $\A_{\bK,f}^*/\hat \cO^*_{\bK} \simeq J_{\bK}$ on the \emph{moduli space of one-dimensional $\bK$-lattices up to scaling}
$$ \bar{\cM}_{\bK,1} = \A_{\bK}^*/\overline{\bK^* (\bK_\infty^*)^0} \times_{\hat\cO^*_{\bK}} \hat\cO_{\bK} \simeq
G^{ab}_{\bK} \times_{\hat\cO^*_{\bK}} \hat\cO_{\bK}. $$
The algebra $A_{\bK}$ can be interpreted as the quotient of the groupoid of the commensurability relation by the scaling action. The Hilbert space construction can be fit into the general framework of groupoid algebra representations.
In the lattice picture, the low temperature KMS states are parameterized by the \emph{invertible} one-dimensional
$\bK$-lattices, namely those for which the $\cO_{\bK}$-module homomorphism $\phi$ is actually an isomorphism, see \cite{CM}, \cite{CMR}, \cite{LLN}, and Chapter 3 of \cite{CM2}.
\end{remark}
\section{Hamiltonians and arithmetic equivalence}
We first show that the existence of an isomorphism of the quantum statistical mechanical
systems implies arithmetic equivalence; this is basically because the zeta functions of $\bK$ and $\bL$ are the partition functions of the respective systems. Some care has to be taken since the systems are not represented on the same Hilbert space.
\begin{prop}\label{isotoareq}
Let $\varphi: (A_{\bK},\sigma_{\bK}) \to (A_{\bL},\sigma_{\bL})$ be an isomorphism of QSM-systems of number fields $\bK$ and $\bL$. Then $\bK$ and $\bL$ are arithmetically equivalent, i.e., they have the same Dedekind zeta function.
\end{prop}
\begin{proof} The isomorphism $\varphi: (A_{\bK},\sigma_{\bK}) \to (A_{\bL}, \sigma_{\bL})$
induces an identification of the sets of extremal $\KMS$-states of the two systems, via pullback
$\varphi^*: \KMS_\beta(\bL) \to \KMS_\beta(\bK)$.
Consider the GNS representations associated to regular low temperature
$\KMS$ states $\omega=\omega_\beta$ and $\varphi^*(\omega)$. We denote the
respective Hilbert spaces by ${\mathcal H}_\omega$ and
${\mathcal H}_{\varphi^*\omega}$. As in Lemma 4.3 of \cite{CCM}, we observe
that the factor ${\mathcal M}_\omega$ obtained as the weak closure of $A_{\bL}$
in the GNS representation is of type I$_\infty$, since we are only considering the
low temperature KMS states that are of Gibbs form. Thus, the space
${\mathcal H}_\omega$ decomposes as
$${\mathcal H}_\omega = {\mathcal H}(\omega)\otimes {\mathcal H}',$$
with an irreducible representation $\pi_\omega$ of $A_{\bL}$ on ${\mathcal H}(\omega)$
and $${\mathcal M}_\omega =\{ T \otimes 1\,|\, T\in \mathcal{B}( {\mathcal H}(\omega)) \}$$
($\mathcal{B}$ denotes the algebra of bounded operators). Moreover, we have
$$ \langle (T\otimes 1) 1_\omega, 1_\omega\rangle = {\rm Tr}(T \rho) $$
for a density matrix $\rho$ (positive, of trace class, of unit trace).
We know that the low temperature extremal KMS states for the system
$(A_{\bL}, \sigma_{\bL})$ are of Gibbs form and given by the explicit expression
in equation (\ref{KMSlow})
for some $\gamma \in G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$; and similarly for the system $(A_{\bK},\sigma_{\bK})$. Thus, we can identify
${\mathcal H}(\omega)$ with $\ell^2(J_{\bL}^+)$ and the
density $\rho$ correspondingly with
$$\rho=e^{-\beta H_{\sigma_{\bL}}}/{\rm Tr}(e^{-\beta H_{\sigma_{\bL}}});$$ this is the representation considered in Section \ref{hil}. As in
Lemma 4.3 of \cite{CCM}, the evolution group $e^{it H_\omega}$ generated by the
Hamiltonian $H_\omega$ that implements
the time evolution $\sigma_{\bL}$ in the GNS representation on ${\mathcal H}_\omega$
agrees with $e^{it H_{\sigma_{\bL}}}$ on the factor ${\mathcal M}_\omega$. We find
$$ e^{it H_\omega} \pi_\omega(f) e^{-it H_\omega} = \pi_\omega(\sigma_{\bL}(f)) =
e^{it H_{\sigma_{\bL}}} \pi_\omega(f) e^{-it H_{\sigma_{\bL}}}. $$
As observed in \S 4.2 of \cite{CCM}, this gives us that the Hamiltonians differ by a constant,
$$H_\omega = H_{\sigma_{\bL}} + \log \lambda_1,$$ for some $\lambda_1\in \R^*_+$.
The argument for the GNS representation
for $\pi_{\varphi^*(\omega)}$ is similar and it gives an identification
of the Hamiltonians $$H_{\varphi^*(\omega)} = H_{\sigma_{\bK}} + \log \lambda_2$$ for
some constant $\lambda_2\in \R^*_+$.
The algebra isomorphism $\varphi$ induces a unitary equivalence $\Phi$
of the Hilbert spaces of the GNS representations of the corresponding states,
and the Hamiltonians that implement the time evolution in these representations
are therefore related by $$H_{\varphi^*(\omega)} = \Phi H_{\omega} \Phi^*.$$
In particular the Hamiltonians $H_{\varphi^*(\omega)}$ and $H_\omega$ then
have the same spectrum.
Thus, we know from the discussion above that $$H_{\sigma_{\bK}} = \Phi H_{\sigma_{\bL}} \Phi^* + \log \lambda,$$
for a unitary operator $\Phi$ and the constant $\lambda=\lambda_1/\lambda_2 \in \R^*_+$.
This gives at the level of zeta functions
\begin{equation} \label{grom} \zeta_{\bL} (\beta) = \lambda^{-\beta} \zeta_{\bK}(\beta). \end{equation}
Now consider the left hand side and right hand side as classical Dirichlet series of the form $$\sum\limits_{n \geq 1} \frac{a_n}{n^{\beta}} \mbox{ \ and \ }\sum\limits_{n \geq 1} \frac{b_n}{(\lambda n)^{\beta}},$$ respectively. Observe that $$a_1=b_1=1.$$ Taking the limit as $\beta \rightarrow + \infty$ in \eqref{grom}, we find
$$ a_1 = \lim_{\beta \rightarrow +\infty} b_1 \lambda^{-\beta}, $$ from which we conclude that $\lambda=1$.
Thus, we obtain
$\zeta_{\bK}(\beta) = \zeta_{\bL}(\beta)$, which gives arithmetic equivalence
of the number fields.
\end{proof}
By expanding the zeta functions as Euler products, we deduce
\begin{cor}
If the QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$ of two number fields $\bK$ and $\bL$ are isomorphic, then for every rational prime $p$ there is a bijection between the primes $\p$ of $\bK$ above $p$ and the primes $\q$ of $\bL$ above $p$ that preserves the inertia degree: $f(\p|{\bK})=f(\q|{\bL})$. \qed
\end{cor}
Using some other known consequences of arithmetical equivalence, we get the following (\cite{Perlis1}, Theorem 1):
\begin{cor}
If the QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$ of two number fields $\bK$ and $\bL$ are isomorphic, then the number fields have the same degree over $\Q$, the same discriminant, the same normal closure, isomorphic unit groups, and the same number of real and complex embeddings. \qed
\end{cor}
However, it does not follow from arithmetical equivalence that $\bK$ and $\bL$ have the same class group (or even class number), cf.\ \cite{PdS}.
\section{Layers of the QSM-system}
\begin{se} \label{hairy} The group $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ has quotient groups $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ defined as the Galois group of the maximal abelian extension of $\bK$ which is unramified at primes dividing $\fn$. This structure is also reflected in the algebra of the QSM-system, cf.\ also \cite{LLN}, proof of Thm.\ 2.1, or section 3 of \cite{CMR} (including a description in terms of $\bK$-lattices).
Let $\mu_{\bK}$ denote the measure on $$X_{\bK}=G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat\cO^*_{\bK}}\hat\cO_{\bK}$$ given as the product of the normalized Haar measure on $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and the normalized Haar measures on the factors $\hat{\cO}_{\bK,\p}$ of $\hat\cO_{\bK}$ (so that $\hat\cO_{\bK,\p}^*$ has measure $1-1/N_{\bK}(\p)$).
Fix an ideal $\fn$ and consider the space
$$ X_{\bK,\fn} := G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \hat \cO_{\bK,\fn}, $$
where $\hat \cO_{\bK,\fn} = \prod_{\p \mid \fn} \hat \cO_{\bK,\p}.$
Then $$ X_{\bK} = \lim_{{\longleftarrow}\atop{\fn}} X_{\bK,\fn}, $$ the inverse limit being taken along the natural projections $X_{\bK,\fm} \rightarrow X_{\bK,\fn}$ for $\fn \mid \fm$.
Let $J_{\bK,\fn}^+$ denote the subsemigroup of $J_{\bK}^+$ generated by the prime ideals dividing $\fn$. Consider the subspace $$X_{\bK,\fn}^*:=G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \hat \cO^*_{\bK,\fn}$$ of $X_{\bK,\fn}$. It is isomorphic as a topological group to \begin{equation} \label{ster} X_{\bK,\fn}^* \cong G_{\bK}^{\mbox{{\tiny \textup{ab}}}}/ \vartheta_{\bK}(\hat \cO_{\bK,\fn}^*) = G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}},\end{equation}
the Galois group of the maximal abelian extension of $\bK$ that is unramified at the primes dividing $\fn$.
We can decompose
$$ X_{\bK,\fn} = X^1_{\bK,\fn} \coprod X_{\bK,\fn}^2$$ with $$ X^1_{\bK,\fn}:=\coprod_{\fm' \in J_{\bK,\fn}^+} \fm'\ast X^*_{\bK,\fn}\quad \mbox{ and }\quad X^2_{\bK,\fn}:=\bigcup_{\p \mid \fn} Y_{\bK, \p}, $$
where $$Y_{\bK,\p} = \{ (\gamma, \rho) \in X_{\bK,\fn} \, : \, \rho_{\p}=0 \} . $$
The decomposition of $X^1_{\bK,\fn}$ is into disjoint subsets, because $[(\gamma,\rho)] \in \fm' \ast X_{\bK,\fn}^*$ precisely if $\rho$ is exactly ``divisible'' by $\fm'$.
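As an example, take $\bK=\Q$ and $\fn=(p)$ for a rational prime $p$. Identifying $G_{\Q}^{\mbox{{\tiny \textup{ab}}}} \cong \hat\Z^*$ via the Artin map, a direct check with the balancing relation gives $X_{\Q,(p)} \cong \prod_{\ell \neq p} \Z_\ell^* \times \Z_p$, and the decomposition above mirrors the partition
$$ \Z_p = \Big( \coprod_{k \geq 0} p^k\, \Z_p^* \Big) \,\sqcup\, \{0\}, $$
with $X^1_{\Q,(p)}$ corresponding to the locus $\rho_p \neq 0$ and $X^2_{\Q,(p)} = Y_{\Q,(p)}$ to the slice $\rho_p = 0$.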
We observe that by Equation (\ref{ster}), we have a homeomorphism \begin{equation} \label{homx1} X^1_{\bK,\fn} \cong \coprod_{\fm' \in J_{\bK,\fn}^+} G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}. \end{equation}
Now by Fourier analysis, the linear span of the characters of $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ (i.e., of the characters of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ whose conductor is coprime to $\fn$) is dense in the algebra of continuous functions on $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$. The algebra of continuous functions on the coproduct, $C(\coprod_{\fm' \in J_{\bK,\fn}^+} G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}})$, is then generated by linear combinations of such characters supported in just one of the components. By pulling this back via the homeomorphism in \eqref{homx1}, we find a set of generators for the algebra of continuous functions on $X^1_{\bK,\fn}$:
\end{se}
\begin{df} Write an element $x \in X^1_{\bK,\fn} $ as $x=\fm' \ast [(\gamma,\rho)],$ for some $\rho \in \hat\cO_{\bK,\fn}^*$ (so it is in the $\fm'$-component of the decomposition $X^1_{\bK,\fn} = \coprod_{\fm' \in J_{\bK,\fn}^+} \fm'\ast X^*_{\bK,\fn}$). Let $\chi$ denote a character of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ whose conductor is coprime to $\fn$, and let $\fm \in J_{\bK,\fn}^+$. Then we define the function
$$ f_{\chi,\fm} \colon \fm' \ast [(\gamma,\rho)] \mapsto \delta_{\fm,\fm'} \chi(\vartheta_{\bK}(\fm^{-1}) \gamma), $$
where $\delta_{\fm,\fm'}$ is the Kronecker delta.
This is the pullback by the homeomorphism in \eqref{homx1} of the function which is the character $\chi$ precisely in the $\fm$-component of the space.
\end{df}
The above results imply that these functions generate the algebra $C(X_{\bK,\fn}^1)$. We can now prove:
\begin{lem} \label{gengen} The algebra of functions $C(X_{\bK,\fn})$ is generated by the functions $f_{\chi,\fm}$ in $ C(X^1_{\bK,\fn} ),$ for all $\chi \in \widehat G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ and ideals $\fm \in J_{\bK,\fn}^+$.
\end{lem}
\begin{proof} Observe that $X_{\bK,\fn}^2$ is a set of $\mu_{\bK}$-measure zero.
By total disconnectedness, the algebra $C(X_{\bK,\fn})$ is generated by the characteristic functions of clopen sets. We claim:
\begin{lem} \label{regular}
The space $X_{\bK,\fn}$ has no non-empty open sets of $\mu_{\bK}$-measure zero.
\end{lem}
\begin{proof}[Proof of Lemma \ref{regular}] A $\p$-adic ring of integers $\hat \cO_{\p}$ does not have non-empty open sets $U$ of measure zero, since $U$ contains a ball of sufficiently small radius around any point in it, and this will have Haar measure the $\p$-adic absolute value of the radius; the same argument applies to $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, by considering it as the idele class group modulo connected component of the identity and using the idele norm.
\end{proof}
It follows that $X_{\bK,\fn}^1$ is dense in $X_{\bK,\fn}$, as the complement
cannot contain any open set. It therefore suffices to give generators for $C(X^1_{\bK,\fn} )$, which we have already done.
\end{proof}
\begin{cor} \label{dude}
The set $$ X_{\bK}^1 := \bigcup_{\fm \in J^+_{\bK}} \fm \ast (G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \hat \cO_{\bK}^\ast) = \{ [(\gamma,\rho)] \colon \rho_{\p} \neq 0 \ \forall \p \} $$ is dense in $X_{\bK}$.
\end{cor}
\begin{proof}
It follows from the above proof that $$ \bigcup_{\fm \in J^+_{\bK,\fn}} \fm \ast (G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \hat \cO_{\bK,\fn}^\ast)$$ is dense in $X_{\bK,\fn}$, so by taking the union over all $\fn$, we find the result (recall that the closure of a union contains the union of the closures). \end{proof}
\begin{remark}[$\bK$-lattices] Let $\bar \cM_{\bK,1}$ denote the space of one-dimensional $\bK$-lattices up to scaling; recall that $C(X_{\bK})=C(\bar \cM_{\bK,1})$. The preceding theory organizes this space into an inductive system of the spaces $C(\bar \cM_{\bK,1,\fn})$ of functions that depend on the datum $\phi$ of a $\bK$-lattice $(\Lambda,\phi)$ only through its projection to $\hat\cO_{\bK,\fn}$.
\end{remark}
\section{Crossed product structure and QSM-isomorphism}
In this section, we deduce from the dagger-isomorphism of the QSM-systems the conjugacy of the corresponding ``dynamical systems'' $(X_{\bK},J_{\bK}^+)$ and $(X_{\bL},J_{\bL}^+)$. There is a large literature on recovering such systems from (non-involutive) operator algebras, starting with Arveson--Josephson. We refer to \cite{Davidson} for a recent overview and theorems with minimal conditions, leading to ``piecewise conjugacy''. Here, we will present as simple a proof as possible for our case, where we can exploit our assumption that the \emph{algebraically generated} dagger-subalgebra is preserved, as well as the ergodicity of the action and some strong density assumptions on fixed point sets.
\begin{notation}
Fix a rational prime number $p$ and a positive integer $f$. Let $J_{\bK,p^f}^+$ denote the sub-semigroup of $J_{\bK}^+$ generated by the primes $\p_1^{\bK}=\p_1,\dots,\p_N^{\bK}=\p_N$ of norm $N_{\bK}(\p_i)=p^f$. Let $A^{\dagger}_{\bK,p^f}$ denote the (non-involutive)
subalgebra of $A^{\dagger}_{\bK}$ generated algebraically by the functions $C(X_{\bK})$ and the
isometries $\mu_{\p}$ with $\p=\p_i$ a prime in $J_{\bK,p^f}^+$.
We will use multi-index notation: for $\alpha=(\alpha_1,\dots,\alpha_N) \in \Z_{\geq 0}^N$, we let $\mu_\alpha=\mu_{\p_1}^{\alpha_1} \dots \mu_{\p_N}^{\alpha_N}$, and let $|\alpha|=N$ denote the length of $\alpha$. Similarly, we let $\sigma_{\alpha}=\sigma_{\mu_{\alpha}}$, etc.\ (beware not to confuse the partial inverse $\sigma_\alpha$ with the time evolution $\sigma_t$). Any element $a \in A_{\bK,p^f}^\dagger$ can be uniquely written in the form $$a=\sum_{\alpha} \mu_\alpha E_\alpha (a)$$ for ``generalized Fourier coefficients'' $E_{\alpha}(a) \in C(X_{\bK})$.
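For example, the relations \eqref{relsmunuialg} show that the product of two such monomials is again a monomial:
$$ (\mu_\alpha f)(\mu_{\alpha'} g) = \mu_\alpha\, \mu_{\alpha'}\, \sigma_{\alpha'}(f)\, e_{\alpha'}\, g = \mu_{\alpha+\alpha'}\, \sigma_{\alpha'}(f)\, e_{\alpha'}\, g, $$
so that $E_{\alpha''}\big((\mu_\alpha f)(\mu_{\alpha'} g)\big)$ vanishes unless $\alpha''=\alpha+\alpha'$, in which case it equals $\sigma_{\alpha'}(f)\, e_{\alpha'}\, g$; in particular, sums of such monomials are closed under multiplication.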
\end{notation}
\begin{prop}\label{isoJs}
A dagger-isomorphism of QSM-systems $\varphi: (A_{\bK},\sigma_{\bK}) \overset{\sim}{\rightarrow} (A_{\bL},\sigma_{\bL})$ induces a homeomorphism $\Phi: X_{\bK}\overset{\sim}{\rightarrow} X_{\bL}$ and a norm-preserving semigroup isomorphism
$\varphi : J_{\bK}^+ \overset{\sim}{\rightarrow} J_{\bL}^+ $ \textup{(}viz., such that $N_{\bL}(\varphi(\fn)) = N_{\bK}(\fn)$\textup{)}, satisfying the compatibility condition
$$ \Phi(\fn \ast x) = \varphi(\fn) \ast \Phi(x).$$
\end{prop}
\begin{proof}
First of all, $\varphi$ maps the $\sigma_t$-eigenspace for eigenvalue $1$ in the dagger subalgebra $A_{\bK}^\dagger$ to that in $A_{\bL}^\dagger$: from the representation through generalized Fourier series, it is easy to see that these eigenspaces consist exactly of the functions $C(X_{\bK})$, respectively $C(X_{\bL})$. Hence $\varphi$ induces an isomorphism between these algebras, and therefore a homeomorphism $$\Phi \colon X_{\bK} \rightarrow X_{\bL}.$$
Now fix a rational prime $p$ and a positive integer $f$. We claim that $\varphi$ induces an isomorphism $$\varphi: A^{\dagger}_{\bK,p^f}\overset{\sim}{\rightarrow} A^{\dagger}_{\bL,p^f}.$$ Indeed, we have by assumption that $\varphi$ maps the dagger subalgebra $A_{\bK}^\dagger$ to $A_{\bL}^\dagger$. Now $A_{\bK,p^f}^\dagger$ is precisely the subalgebra generated by $C(X_{\bK})$ and the $p^{fit}$-eigenspace of $\sigma_t$ acting on $A_{\bK}^\dagger$. Since $\varphi$ is compatible with time evolution, it maps the $p^{fit}$-eigenspace of $\sigma_{\bK,t}$ to that of $\sigma_{\bL,t}$, so the claim holds.
We now interject a topological lemma which will be used in the proof:
\begin{lem} \label{denseopen}
\mbox{ }
\begin{enumerate}
\item[\textup{(i)}] Let $x=[(\gamma,\rho)] \in X_{\bK}$ and assume that there exist two distinct ideals $\fm$ and $\fn$ with
$\fm \ast x = \fn \ast x. $ Then the $\p$-component $x_{\p}$ of $x$ is zero for some $\p$ dividing the least common multiple of $\fm$ and $\fn$.
\item[\textup{(ii)}] The set $$ X_{\bK}^0:=\{ x \in X_{\bK} \colon \fm \ast x \neq \fn \ast x \mbox{\ for all } \fm \neq \fn \in J_{\bK,p^f}^+ \} $$
contains a dense open set in $X_{\bK}$.
\item[\textup{(iii)}] The set $X_{\bK}^{00}:= X_{\bK}^0 \cap \Phi^{-1}(X_{\bL}^0)$ is dense in $X_{\bK}$.
\end{enumerate}
\end{lem}
\begin{proof}
The equality $\fm \ast x = \fn \ast x$ means the existence of an idelic unit $u \in \hat\cO_{\bK}^*$ with $\vartheta_{\bK}(\fm) = \vartheta_{\bK}(u) \vartheta_{\bK}(\fn)$ and $\rho s(\fm) = u s(\fn) \rho$. Thus, if $\rho$ has non-zero component at all divisors of $\fm$ and $\fn$, then it follows from the second equality that $\fm=\fn$.
Now consider the set consisting of $x \in X_{\bK}$ such that $x_{\p} \neq 0$ for \emph{all} $\p=\p_1,\dots,\p_N$ in $J_{\bK,p^f}^+$. By the above, it is contained in $X_{\bK}^0$. The set is also open, being the complement of the union of finitely many closed sets (namely, the ones on which $x_{\p}=0$ for the finitely many primes $\p$ of norm $p^f$). Finally, it is dense, since it contains the set $X_{\bK}^1$ (the subset where \emph{no} component of $\rho$ is zero), which we have already shown to be dense in $X_{\bK}$ in Lemma \ref{dude}.
Since $\Phi$ is a homeomorphism, $\Phi^{-1}(X_{\bL}^0)$ is dense open in $X_{\bK}$, and it suffices to notice that the intersection of dense open sets is dense. \end{proof}
We now show that one can algebraically describe the set of images of $x \in X^0_{\bK}$ under the generators $\p_i$:
\begin{lem}
Let $\cC$ denote the commutator ideal in $A_{\bK,p^f}^\dagger$ and $\cC^2$ the span of products of elements in $\cC$. For $x_0 \in X_{\bK}$, let $I_{x_0}$ denote the ideal of functions $f \in C(X_{\bK})$ that vanish at $x_0$.
Then for $x_0, y_0 \in X^0_{\bK}$, we have that $$y_0 \in \{ \p_1 \ast x_0,\dots, \p_N \ast x_0 \}$$ if and only if
$$M_{x_0,y_0}:= I_{y_0} \cC + \cC I_{x_0} +\cC^2$$ has codimension one as a linear subspace of $\cC$.
\end{lem}
\begin{proof}
We claim that
$$ \cC = \{ a \in A_{\bK,p^f}^\dagger \colon E_0(a)=0 \mbox{ and } E_\alpha(a) \in \cE_\alpha \ \forall \alpha \neq 0 \}, $$
where $\cE_\alpha$ is the $C(X_{\bK})$-ideal generated by
the ``coboundaries'' $$h=f-\sigma_\alpha(f), \quad f \in C(X_{\bK}).$$ Indeed, this follows from computing the commutators $[\mu_\alpha,f] = \mu_\alpha(f-\sigma_{\alpha}(f))$ for $f \in C(X_{\bK})$.
Similarly, one finds
$$ \cC^2 = \{ a \in A_{\bK,p^f}^\dagger \colon E_\alpha(a)=0 \mbox{ for all } |\alpha| \leq 1 \mbox{ and } E_\alpha(a) \in \cE^2_\alpha \ \forall |\alpha|>1 \}, $$
where $\cE^2_\alpha$ is the ideal in $C(X_{\bK})$ generated by products of coboundaries. Since the action of $J_{\bK,p^f}^+$ is continuous, the ideals $\cE_\alpha$ are closed in $C(X_{\bK})$, so $\cE^2_\alpha=\cE_\alpha$.
Hence the space $\cC/\cC^2$ is isomorphic to $\bigoplus\limits_{|\alpha|=1} \mu_{\alpha} \cE_{\alpha}$.
Now $M_{x_0,y_0}=I_{y_0} \cC + \cC I_{x_0} +\cC^2$ is described as
$$ M_{x_0,y_0} = \{ a \in \cC \colon E_{\alpha}(a) \in \left(\sigma_{\alpha}(I_{y_0}) +I_{x_0} \right)\cE_{\alpha} \ \forall |\alpha|=1 \}. $$
Fix an index $|\beta|=1$, corresponding to $\p_k$. Since $I_{x_0}$ is a closed maximal ideal in $C(X_{\bK})$, either $\sigma_{\beta}(I_{y_0}) \subseteq I_{x_0}$, or $\sigma_\beta(I_{y_0}) + I_{x_0} = C(X_{\bK})$. The first case occurs exactly if $y_0 = \p_k \ast x_0$. Moreover, this case occurs for at most one such $k$, since we assume that $x_0 \in X_{\bK}^0$. Hence either
$ M_{x_0,y_0} = \cC$, or there exists a unique $k$ (so a unique corresponding $\beta$) such that $y_0 = \p_k \ast x_0$ and $M_{x_0,y_0} = \{ a \in \cC \colon E_{\beta}(a) \in I_{x_0}\cE_{\beta}\} $, which has codimension $1$ in $\cC$.
\end{proof}
Recall that $\varphi$ is induced from a homeomorphism $\Phi: X_{\bK} \rightarrow X_{\bL}$. Since $\varphi$ is an algebra homomorphism, we find that
$$ \varphi(M^{\bK}_{x_0,y_0}) = M^{\bL}_{\Phi(x_0),\Phi(y_0)} $$
(where we use superscript $\bK$ and $\bL$ to refer to the different fields).
Now suppose that $x \in X^{00}_{\bK}$. Then the sets $\{\p^{\bK}_i \ast x \}_{i=1}^N$ and $\{\p^{\bL}_i \ast \Phi(x)\}_{i=1}^N$ each contain $N$ distinct elements, and the above reasoning shows that they are mapped to each other by $\Phi$.
This gives, for each
$x\in X_{\bK}^{00}$, a permutation of the $\p^{\bL}_i$, and hence a locally constant function $\alpha: X_{\bK}^{00} \times J_{\bK,p^f}^+ \to J_{\bL,p^f}^+$ with
\begin{equation} \label{dodo} \Phi(\fn \ast x) = \alpha_x(\fn) \ast \Phi(x). \end{equation}
Since $X_{\bK}^{00}$ is dense in $X_{\bK}$, we can extend $\alpha$ by continuity to $X_{\bK} \times J_{\bK,p^f}^+$, such that identity \eqref{dodo} still holds.
Gluing back together the algebras $A_{\bK,p^f}^\dagger$ for various $p$ and $f$, we finally find a homeomorphism
$\Phi \colon X_{\bK} \overset{\sim}{\rightarrow} X_{\bL}$ (which is by construction independent of $p^f$), and a locally constant map
$$ \alpha \colon X_{\bK} \times J_{\bK}^+ \rightarrow J_{\bL}^+ $$
such that $N_{\bL}(\alpha_x(\fn)) = N_{\bK}(\fn)$, and \eqref{dodo} holds for all $x$ and $\fn \in J_{\bK}^+$ (this is known as \emph{piecewise conjugacy} of the dynamical systems $(X_{\bK},J_{\bK}^+)$ and $(X_{\bL},J_{\bL}^+)$ in the terminology of Davidson and Katsoulis \cite{Davidson}).
We now proceed to showing that $\alpha_x$ is actually constant. For this, consider the level set
$$ \tilde{X}_{\bK} := \{ x \in X_{\bK} \colon \Phi(x) \in X_{\bL}^1 \mbox{ and } \alpha_x(\fn)=\alpha_1(\fn) \ \forall \fn \in J_{\bK}^+ \}. $$
Observe that we only consider $x$ for which $\Phi(x)$ is in $X_{\bL}^1$, the dense subspace of $X_{\bL}$ in which none of the idele components is zero (cf.\ Lemma \ref{dude}).
We claim that the set $\tilde{X}_{\bK}$ is invariant under the action of $J_{\bK}^+$. We will verify that for all $\fm \in J_{\bK}^+$, we have that $\alpha_x=\alpha_1$ if and only if $\alpha_{\fm \ast x} = \alpha_1$.
We compute that for $\fn \in J_{\bK}^+$ one has
\begin{eqnarray} \label{boho} \alpha_{\fm \ast x}(\fn) \ast \Phi(\fm \ast x) &=& \Phi(\fn \ast (\fm \ast x)) = \Phi(\fm \fn \ast x) \\ &=& \alpha_x(\fm \fn) \ast \Phi(x) = \alpha_x(\fn) \ast (\alpha_x(\fm) \ast \Phi(x)) \nonumber \\ &=& \alpha_x(\fn) \ast \Phi(\fm \ast x). \nonumber\end{eqnarray}
We now claim that if $\Phi(x) \in X_{\bL}^1$, then also $\Phi(\fm \ast x) \in X_{\bL}^1$ for all $\fm \in J_{\bK}^+$; this follows from the compatibility $\Phi(\fm \ast x) = \alpha_x(\fm) \ast \Phi(x)$ and the fact that $\Phi(x)=[(\gamma,\rho)] \in X_{\bL}^1$ if and only if none of the local components $\rho_{\p}$ of $\rho$ is zero, which is preserved under the action of $\alpha_x(\fm)$. Hence in the above formula, $\Phi(\fm \ast x) \in X_{\bL}^1$.
Now if $y \in X_{\bL}^1$, then by Lemma \ref{denseopen}, for any ideals $\fm', \fn' \in J_{\bL}^+$, we have an equivalence \begin{equation} \label{mum} \fm' \ast y = \fn' \ast y \ \iff \ \fm' = \fn'. \end{equation}
Thus, we conclude from \eqref{boho} that we have an equality of ideals $\alpha_{\fm \ast x}(\fn) = \alpha_x(\fn)$ for all $\fn \in J_{\bK}^+$. Hence $\alpha_x=\alpha_1$ if and only if $\alpha_{\fm \ast x} = \alpha_1$, which shows that $\tilde{X}_{\bK}$ is an invariant set for the action of $J_{\bK}^+$ on $X_{\bK}$.
Now recall from \cite{LLN} (Proof of Theorem 2.1 on p.\ 332) that the action of $J_{\bK}^+$ on $X_{\bK}$ is ergodic for the measure $\mu_{\bK}$ (cf.\ Section \ref{hairy}). Thus, the invariant set $\tilde{X}_{\bK}$ has measure zero or one. It cannot have measure zero: it contains the element $x=1$, and since $\alpha_x$ is locally constant, it contains an open neighbourhood of $1$, and non-empty open sets in $X_{\bK}$ have strictly positive measure (by Lemma \ref{regular}). We conclude that $\tilde{X}_{\bK}$ is of full measure; hence so is its superset
$$ \tilde{X}'_{\bK}= \{ x \in X_{\bK} \colon \alpha_x = \alpha_1 \}$$
is of full measure and closed. Its complement is therefore an open set of measure zero, hence empty (Lemma \ref{regular}). We conclude that $\tilde{X}'_{\bK}=X_{\bK}$, so indeed $\alpha_x=\alpha_1$ for all $x \in X_{\bK}$.
\end{proof}
\section{QSM-isomorphism and isomorphism of abelianized Galois groups}
In this section, we prove that QSM-isomorphism implies an isomorphism of abelianized Galois groups.
\begin{remark} \label{Ulm}
The isomorphism type of the infinite abelian group $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ is determined by its so-called \emph{Ulm invariants}. For $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, these were computed abstractly by Kubota (\cite{Kubota}), and Onabe (\cite{Onabe}) computed them explicitly for imaginary quadratic fields. For example, $G_{\Q(i)}^{\mbox{{\tiny \textup{ab}}}}$ is not isomorphic to the abelianized absolute Galois group of any other imaginary quadratic field, but $\Q(\sqrt{-2})$ and $\Q(\sqrt{-3})$ have isomorphic abelianized absolute Galois groups (although they are not isomorphic as fields).
\end{remark}
\begin{lem}
Consider the projector
$e_{\bK,\fn}=\mu_{\fn} \mu_{\fn}^*$. Then the range of $e_{\bK,\fn}$ is mapped by $\Phi$ to the range of $e_{\bL,\varphi(\fn)}$:
$$ \Phi(\mathrm{Range}(e_{\bK,\fn})) = \mathrm{Range}(e_{\bL,\varphi(\fn)}). $$
\end{lem}
\begin{proof}
By definition, we have that $x=[(\gamma,\rho)]$ is in the range of $e_{\bK,\fn}$ if and only if $x=[(\gamma',\fn \rho')]$ for some $\gamma' \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and $\rho' \in \hat \cO_{\bK}$. This is equivalent to
$$ x = [(\vartheta_{\bK}(\fn)^{-1} \gamma'',\fn \rho')] = \fn \ast x'$$ for some $x'=[(\gamma'',\rho')] \in X_{\bK}$.
If we now apply $\Phi$, we get that the statement is equivalent to
$$ \Phi(x) = \Phi(\fn \ast x') = \varphi(\fn) \ast \Phi(x')$$
for some $\Phi(x') \in X_{\bL}$ --- here, we have used Proposition \ref{isoJs}. The latter statement is equivalent to
$\Phi(x)$ belonging to the range of $e_{\bL,\varphi(\fn)}$. \end{proof}
\begin{prop} \label{gab}
An isomorphism $\varphi$ of QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$ induces a topological group isomorphism
$$ \tilde{\Phi} := \Phi\cdot \Phi(1)^{-1} \, : \, G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}. $$
\end{prop}
\begin{proof}
Fix an ideal $\fm \in J_{\bK}^+$, and consider the subspace of $X_{\bK}$ given by
$$ V_{\bK,\fm}:=\bigcap_{(\fm,\fn)=1} \mathrm{Range}(e_{\bK,\fn}) = G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \{(0,\dots,0,\hat\cO_{\bK,\fm},0,\dots,0)\}, $$
with $\hat\cO_{\bK,\fm} = \prod_{\p \mid \fm} \hat\cO_{\bK,\p}$.
This is mapped by $\Phi$ to
\begin{eqnarray*} \Phi(V_{\bK,\fm}) &=& \bigcap_{(\fm,\fn)=1} \Phi(\mathrm{Range}(e_{\bK,\fn})) = \bigcap_{(\varphi(\fm),\varphi(\fn))=1} \mathrm{Range}(e_{\bL,\varphi(\fn)})\\ &=& G_{\bL}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bL}^*} \{(0,\dots,0,\hat\cO_{\bL,\varphi(\fm)},0,\dots,0)\} = V_{\bL,\varphi(\fm)}. \end{eqnarray*}
Now define $1_{\fm}$ to be the integral adele which is $1$ at the prime divisors of $\fm$ and zero elsewhere, and consider the subgroup
$$H_{\bK,\fm}:=G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \{1_{\fm}\} \subseteq X_{\bK}.$$
By the above, $\Phi(H_{\bK,\fm})$ is a subset of $V_{\bL,\varphi(\fm)}$.
The group $H_{\bK,\fm}$ consists of classes $[(\gamma,1_{\fm})]$, where $[(\gamma,1_{\fm})] \sim [(\gamma',1_{\fm})]$ if and only if there exists $u \in \hat\cO_{\bK}^*$ with $\gamma'=\vartheta_{\bK}(u)^{-1} \gamma$ and $1_{\fm}=u1_{\fm}$. This last equation means that $u_{\q}=1$ at the divisors $\q$ of $\fm$, with no further restrictions, i.e., $u \in \prod_{\q \nmid \fm} \hat\cO_{\q}^*$, so that by class field theory
$$ H_{\bK,\fm} \cong G_{\bK}^{\mbox{{\tiny \textup{ab}}}} / \vartheta_{\bK}\left(\prod_{\q \nmid \fm} \hat\cO_{\q}^*\right) \cong \mathring{G}_{\bK,\fm}^{\mbox{{\tiny \textup{ab}}}}, $$
where $\mathring{G}_{\bK,\fm}^{\mbox{{\tiny \textup{ab}}}}$ is the Galois group of the maximal abelian extension of $\bK$ that is unramified \emph{outside} prime divisors of $\fm$. Class field theory implies that $\mathring{G}_{\bK,\fm}^{\mbox{{\tiny \textup{ab}}}}$ has a dense subgroup generated by $\vartheta_{\bK}(\fn)$ for $\fn$ running through the ideals $\fn$ that are coprime to $\fm$. Said differently, $H_{\bK,\fm}$ is generated by $\gamma_{\fn}:=[(\vartheta_{\bK}(\fn)^{-1},1_{\fm})]$ for $\fn$ running through the ideals coprime to $\fm$. Write $\one_{\fm}=[(1,1_{\fm})]$, and $\Phi(\one_{\fm}) = [(x_{\fm},y_{\fm})]$. Since $\fm$ and $\fn$ are coprime, we have $[(\vartheta_{\bK}(\fn)^{-1},1_{\fm})] = [(\vartheta_{\bK}(\fn)^{-1},\fn 1_{\fm})]$, and hence we can write $\gamma_{\fn} = \fn \ast \one_{\fm}$.
Now for two ideals $\fn_1$ and $\fn_2$ coprime to $\fm$, we can perform the following computation: \begin{eqnarray*}
\Phi(\one_{\fm}) \cdot \Phi(\gamma_{\fn_1} \cdot \gamma_{\fn_2}) &=& \Phi(\one_{\fm}) \cdot \left( \varphi(\fn_1) \varphi(\fn_2) \ast \Phi(\one_{\fm}) \right) \\ &=& [(\vartheta_{\bL}(\varphi(\fn_1) \varphi(\fn_2))^{-1} x_{\fm}^2, \varphi(\fn_1) \varphi(\fn_2) y_{\fm}^2)] \\ &=&
[(\vartheta_{\bL}(\varphi(\fn_1))^{-1} x_{\fm}, \varphi(\fn_1) y_{\fm})] \cdot [(\vartheta_{\bL}(\varphi(\fn_2))^{-1} x_{\fm}, \varphi(\fn_2) y_{\fm})] \\ &=& \left( \varphi(\fn_1) \ast \Phi(\one_{\fm}) \right) \cdot \left( \varphi(\fn_2) \ast \Phi(\one_{\fm}) \right) \\ &=& \Phi(\gamma_{\fn_1}) \cdot \Phi(\gamma_{\fn_2}).
\end{eqnarray*}
By density, we find that for all $\gamma_1, \gamma_2 \in H_{\bK,\fm}$, we have
$$ \Phi(\one_{\fm}) \Phi(\gamma_{1} \gamma_{2}) = \Phi(\gamma_{1}) \Phi(\gamma_{2}). $$
We now consider the image $\Phi(H_{\bK,\fm})$. Recall from the computation with ranges at the beginning of the proof that $\Phi(H_{\bK,\fm}) \subseteq V_{\bL,\varphi(\fm)}$. Choose $\fn$ coprime to $\fm$; then $\varphi(\fn)$ is coprime to $\varphi(\fm)$, so $y_{\fm}$ is zero on the support of $\varphi(\fn)$. Hence
$$ \Phi(\gamma_{\fn}) = [(\vartheta_{\bL}(\varphi(\fn))^{-1} x_{\fm}, \varphi(\fn) y_{\fm})] = [(\vartheta_{\bL}(\varphi(\fn))^{-1} x_{\fm}, y_{\fm})] \in G_{\bL}^{\mbox{{\tiny \textup{ab}}}} \times \{ \Phi(\one_{\fm}) \}. $$
By density, we conclude that $$\Phi(H_{\bK,\fm}) = G_{\bL}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat\cO_{\bL}^*} \{ \Phi(\one_{\fm}) \}. $$
By enlarging $\fm$, we find that the groups $H_{\bK,\fm}\cong \mathring{G}_{\bK,\fm}^{\mbox{{\tiny \textup{ab}}}}$ form an exhausting system of quotient groups of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$.
Now observe that $\lim\limits_{N(\fm) \rightarrow +\infty} \one_{\fm} = 1$, so that the continuity of $\Phi$ implies that $\lim\limits_{N({\fm}) \rightarrow +\infty} \Phi(\one_{\fm}) = \Phi(1)$.
We conclude that $\Phi$ induces a bijective map
$$ \Phi \colon G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times \{ 1 \} \rightarrow G_{\bL}^{\mbox{{\tiny \textup{ab}}}} \times \{ \Phi(1) \} $$ with the property that
$$ \Phi(1) \Phi(\gamma_{1} \gamma_{2}) = \Phi(\gamma_{1}) \Phi(\gamma_{2}). $$
If we set $\tilde{\Phi}(\gamma):=\Phi(\gamma) \cdot \Phi(1)^{-1},$ we find
$$ \tilde{\Phi}(\gamma_1 \gamma_2) = \Phi(\gamma_1 \cdot \gamma_2) \Phi(1)^{-1} = \Phi(\gamma_1) \Phi(\gamma_2) \Phi(1)^{-2} = \tilde{\Phi}(\gamma_1) \cdot \tilde{\Phi}(\gamma_2), $$
so $\tilde{\Phi}$ is indeed a group isomorphism.
\end{proof}
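As an illustration of the groups appearing in this proof (the example plays no role in the arguments), take $\bK=\Q$ and $\fm=(m)$ for a positive integer $m$. By the Kronecker--Weber theorem, the maximal abelian extension of $\Q$ unramified outside the prime divisors of $m$ is generated by the roots of unity of order divisible only by primes dividing $m$, so that
$$ \mathring{G}_{\Q,\fm}^{\mbox{{\tiny \textup{ab}}}} \cong \hat\Z^* \Big/ \prod_{p \nmid m} \Z_p^* \cong \prod_{p \mid m} \Z_p^*, $$
and the Artin symbols of the positive integers coprime to $m$ indeed generate a dense subgroup of this product, since such integers hit every residue class in each finite quotient $(\Z/m^k\Z)^*$.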
\begin{quote}
\textbf{Convention.} \emph{To simplify notation, we replace the original isomorphism of QSM-systems $\varphi$ (which is induced by the homeomorphism $\Phi^{-1}$ and the semigroup isomorphism $\varphi=\alpha_1$) by the QSM-isomorphism induced instead by the homeomorphism $\tilde{\Phi}^{-1}$ and the same $\varphi=\alpha_1$. From now on, we denote this new QSM-isomorphism by the same letter $\varphi$, so that for the associated map $\tilde{\Phi}$ we have $\tilde{\Phi}=\Phi$. }
\end{quote}
\begin{cor} \label{imageonem}
For all $\fm \in J_{\bK}^+$, it holds true that $\Phi(\one_{\fm}) = \one_{\varphi(\fm)}$.
\end{cor}
\begin{proof}
Set $\Phi(\one_{\fm}) = [(x_{\fm},y_{\fm})]$. Since $\one_{\fm}$ is idempotent and $\Phi$ is a group isomorphism $H_{\bK} \rightarrow \Phi(H_{\bK})$, we find that $\Phi(\one_{\fm})^2=\Phi(\one_{\fm}^2)=\Phi(\one_{\fm})$; whence $$[(x^2_{\fm},y^2_{\fm})]=[(x_{\fm},y_{\fm} )], $$ i.e., there exists a unit $u \in \hat\cO_{\bL}^*$ with \begin{equation} \label{cancel} x_{\fm}^2 = \vartheta_{\bL}(u)^{-1} x_{\fm} \mbox{ and } y^2_{\fm} = u y_{\fm}.\end{equation}
Now $y_{\fm}$ is zero outside the prime divisors of $\varphi(\fm)$. We claim that $y_{\fm}$ is a local unit at the primes dividing $\varphi(\fm)$. If not, then
$\Phi(\one_{\fm}) \in \mathrm{Range}(e_{\bL,\fr})$ for some prime ideal $\fr \in J_{\bL}^+$ which divides $\varphi(\fm)$. This is equivalent to the existence of $x\in X_{\bL}$ such that
$\Phi(\one_{\fm})= \fr \ast x$. This implies that $$\one_{\fm} = \Phi^{-1}(\fr \ast x) = \varphi^{-1}(\fr) \ast \Phi^{-1}(x).$$
We conclude from this that $\one_{\fm} \in \mathrm{Range}(e_{\bK,\varphi^{-1}(\fr)})$. Now observe that $\varphi^{-1}(\fr)$ is a prime ideal dividing $\fm$, so this membership forces the adelic component of $\one_{\fm}$ at some prime divisor of $\fm$ to be a non-unit. But this contradicts the fact that all non-zero adelic components of $\one_{\fm}$ are units. We conclude that $y_{\fm} \in \hat\cO_{\bL,\varphi(\fm)}^*$ is a unit.
Hence in \eqref{cancel}, we can cancel $x_{\fm}$ (which lies in the group $G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$) and $y_{\fm}$ locally at divisors of $\varphi(\fm)$, to find that
$$ x_{\fm} = \vartheta_{\bL}(u)^{-1} \mbox{ and } y_{\fm} = u 1_{\varphi(\fm)}, $$
hence $$\Phi(\one_{\fm}) = [(x_{\fm},y_{\fm})] = [(\vartheta_{\bL}(u)^{-1}, u 1_{\varphi(\fm)})]=[(1,1_{\varphi(\fm)})] = \one_{\varphi(\fm)}. $$
\end{proof}
\section{Layers, ramification and $L$-series} \label{respect}
In this section, we conclude from the previous section that $\varphi$ ``preserves ramification'', and we deduce from this that $\varphi$ induces an $L$-isomorphism (viz., an identification of abelian $L$-series). We will also use the symbol $\Phi$ for the group isomorphism that $\Phi \colon G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ induces on quotient groups, i.e., if $N$ is a subgroup of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, then we let $\Phi$ also denote the isomorphism $$ G_{\bK}^{\mbox{{\tiny \textup{ab}}}} / N \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}/\Phi(N)$$ induced by $\Phi$.
\begin{prop} \label{respectramification}
The group isomorphisms $ \Phi \, : \, G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ and $\varphi \, : \, J_{\bK}^+ \overset{\sim}{\rightarrow} J_{\bL}^+$ respect ramification in the sense that if $\bK'=(\bK^{\mbox{{\tiny \textup{ab}}}})^N/\bK$ is a finite extension, and we set $\bL':=(\bL^{\mbox{{\tiny \textup{ab}}}})^{\Phi(N)}$ the corresponding extension of $\bL$, then
$$ \p \mbox{ ramifies in } \bK'/\bK \iff \varphi(\p) \mbox{ ramifies in } \bL'/\bL $$
for every prime $\p \in J_{\bK}^+$.
Hence
$$ \Phi(G_{\bK,\p}^{\mbox{{\tiny \textup{ab}}}}) = G_{\bL,\varphi(\p)}^{\mbox{{\tiny \textup{ab}}}} $$
for every prime $\p \in J_{\bK}^+$.
\end{prop}
\begin{proof}
In the previous section, we saw that $\Phi$ induces an isomorphism
$$ \Phi \, : \, \mathring{G}_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} \mathring{G}_{\bL,\varphi(\fn)}^{\mbox{{\tiny \textup{ab}}}},$$
of Galois groups of the maximal abelian extension $\bK_{\fn}$ that is unramified outside the prime divisors of $\fn$ and $\bL_{\varphi(\fn)}$ that is unramified outside $\varphi(\fn)$, respectively.
Now let $\bK'=(\bK^{\mbox{{\tiny \textup{ab}}}})^N$ be a finite extension of $\bK$ ramified precisely above $$\p_1,\dots,\p_r \in J_{\bK}^+,$$ so $\bK' \subseteq \bK_{\p_1\cdots \p_r}$ and $$ \left\{ \begin{array}{ll} N \supseteq \mathrm{Gal}(\bK^{\mbox{{\tiny \textup{ab}}}}/\bK_{\p_1\dots \p_r}) & \\ N \not \supseteq \mathrm{Gal}(\bK^{\mbox{{\tiny \textup{ab}}}}/\bK_{\p_1\dots \widehat{\p_i} \dots \p_r}) & (i=1,\dots,r) \end{array} \right.$$ (where $\widehat\p$ means to leave out $\p$ from the product).
Applying $\Phi$ and using the above result, we find that this is equivalent to
$$ \left\{ \begin{array}{ll} \Phi(N) \supseteq \mathrm{Gal}(\bL^{\mbox{{\tiny \textup{ab}}}}/\bL_{\varphi(\p_1)\dots \varphi(\p_r)}) & \\ \Phi(N) \not \supseteq \mathrm{Gal}(\bL^{\mbox{{\tiny \textup{ab}}}}/\bL_{\varphi(\p_1)\dots \widehat{\varphi(\p_i)} \dots \varphi(\p_r)}) & (i=1,\dots,r) \end{array} \right.$$
Thus, $\bL':=(\bL^{\mbox{{\tiny \textup{ab}}}})^{\Phi(N)}$ is contained in $\bL_{\varphi(\p_1) \cdots \varphi(\p_r)}$ but not in any $\bL_{\varphi(\p_1) \cdots \widehat{\varphi(\p_i)}\cdots \varphi(\p_r)}$, and this means that $\bL'/\bL$ is ramified precisely above $\varphi(\p_1),\dots,\varphi(\p_r)$.
\end{proof}
We now give a direct proof of the fact that (ii) implies (iii) in Theorem \ref{main2}.
\begin{prop}\label{idLfunct}
An isomorphism $\varphi: (A_{\bK},\sigma_{\bK}) \to (A_{\bL}, \sigma_{\bL})$
induces an identification of $L$-series with characters, i.e., there is a group isomorphism of character groups $$\psi \, : \, \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} \widehat{G}_{\bL}^{\mbox{{\tiny \textup{ab}}}}$$ such that $$L_{\bK}(\chi,s)=L_{\bL}(\psi(\chi),s)$$ for all $\chi \in \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}}$.
\end{prop}
\begin{proof}
By Proposition \ref{gab}, we have an isomorphism $\Phi \, : \, G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$, hence by Pontrjagin duality, an identification of character groups $$ \psi \, : \, \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} \widehat{G}_{\bL}^{\mbox{{\tiny \textup{ab}}}}.$$
A character $\chi \in \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ extends to a function $f_\chi$ as in Section \ref{kmschar}. We claim that the function corresponding to $\psi(\chi)$ is $\varphi(f_\chi)=f_{\psi(\chi)}$. To prove this, it suffices to check that the divisors of the conductor $\f_{\psi(\chi)}$ of $\psi(\chi)$ are the same as the divisors of $\varphi(\f_{\chi})$. But $\p$ is coprime to $\f_{\chi}$ precisely if $\chi$ factors over $G_{\bK,\p}^{\mbox{{\tiny \textup{ab}}}}$, and by the previous proposition, this is equivalent to $\psi(\chi)=\Phi^*(\chi)$ factoring over $\Phi(G_{\bK,\p}^{\mbox{{\tiny \textup{ab}}}})=G_{\bL,\varphi(\p)}^{\mbox{{\tiny \textup{ab}}}}$, which in turn means that $\varphi(\p)$ is coprime to the conductor $\f_{\psi(\chi)}$ of $\psi(\chi)$:
$$ (\p,\f_{\chi})=1 \iff (\varphi(\p),\f_{\psi(\chi)})=1. $$
The fact that $\varphi(f_\chi)=f_{\psi(\chi)}$ now implies that
$$ \chi(\vartheta_{\bK}(\fn)) = \psi(\chi)(\vartheta_{\bL}(\varphi(\fn))) $$
for all $\chi \in \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and $\fn \in J_{\bK}^+$ such that $\fn$ is coprime to the conductor of $\chi$. By the intertwining of time evolution, we also have compatibility with norms $$ N_{\bK}(\fn) = N_{\bL}(\varphi(\fn)) $$
for all $\fn \in J_{\bK}^+$. Hence we can compute
\begin{eqnarray*} L_{\bK}(\chi,s) &=& \sum_{\substack{\fn \in J_{\bK}^+\\(\fn,\f_{\chi})=1}} \frac{\chi(\vartheta_{\bK}(\fn))}{N_{\bK}(\fn)^s} = \sum_{\substack{\varphi(\fn) \in J_{\bL}^+\\(\fn,\f_{\chi})=1}} \frac{\psi(\chi)(\vartheta_{\bL}(\varphi(\fn)))}{N_{\bL}(\varphi(\fn))^s}\\ &=& \sum_{\substack{\fm \in J_{\bL}^+\\(\fm,\f_{\psi(\chi)})=1}} \frac{\psi(\chi)(\vartheta_{\bL}(\fm))}{N_{\bL}(\fm)^s} = L_{\bL}(\psi(\chi),s). \end{eqnarray*}
\end{proof}
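For orientation (again, merely an illustration and not part of the argument), when $\bK=\Q$ we have $G_{\Q}^{\mbox{{\tiny \textup{ab}}}} \cong \hat\Z^*$, its continuous characters correspond to Dirichlet characters, and, identifying $\chi(\vartheta_{\Q}((n)))$ with the Dirichlet value $\chi(n)$ (up to the chosen normalization of the Artin map), the $L$-series appearing above are the classical Dirichlet $L$-functions
$$ L_{\Q}(\chi,s) = \sum_{\substack{n \geq 1 \\ (n,\f_\chi)=1}} \frac{\chi(n)}{n^s} = \prod_{p \nmid \f_\chi} \left(1-\frac{\chi(p)}{p^s}\right)^{-1}, $$
so the proposition specializes to a matching of Dirichlet $L$-functions.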
\begin{remark}
The above result is a manifestation of the matching of $\KMS_\beta$ states. Namely, our isomorphism of QSM-systems gives $\zeta_{\bK}(\beta)=\zeta_{\bL}(\beta)$ (Proposition \ref{isotoareq}), and an isomorphism of character groups $\psi$ as in the previous proof. Lemma \ref{basic} implies that pullback is an isomorphism of $\KMS_\beta$-states. Now for $\beta>1$, such a state $\omega_{\gamma,\beta}^{\bL}$ on $A_{\bL}$ (corresponding to $\gamma \in G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$) is pulled back to a similar state
$$ \omega_{\gamma,\beta}^{\bL}(\varphi(f)) = \omega_{\tilde{\gamma},\beta}^{\bK}(f), $$
for some $\tilde{\gamma} \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and every $f \in A_{\bK}$. We can choose in particular $f=f_\chi$ for a character $\chi \in \widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, and then the above identity becomes
$$ \frac{1}{\zeta_{\bL}(\beta)\psi(\chi)(\gamma)}\ L_{\bL}(\psi(\chi),\beta) = \frac{1}{\zeta_{\bK}(\beta)\chi(\tilde \gamma)}\ L_{\bK}(\chi,\beta). $$
If we now compare the constant coefficients and use arithmetic equivalence, we find $ \psi(\chi)(\gamma)=\chi(\tilde \gamma)$, and so finally the identity of these particular $\KMS$-states indeed reads
$$ L_{\bL}(\psi(\chi),\beta) = L_{\bK}(\chi,\beta). $$
\end{remark}
\section{From QSM-isomorphisms to isomorphism of unit ideles and ideles}
\begin{prop} \label{unitideles}
Let $\bK$ and $\bL$ denote two number fields admitting an isomorphism $\varphi$ of their QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$. Let $\p \in J_{\bK}^+$ denote a prime ideal. Then $\varphi$ induces a group isomorphism of local units $$ \varphi \, : \, \hat\cO_{\bK,\p}^* \overset{\sim}{\rightarrow} \hat\cO_{\bL, \varphi(\p)}^* $$ and of unit ideles $$\varphi \, : \, \hat{\cO}^*_{\bK} \overset{\sim}{\rightarrow} \hat{\cO}^*_{\bL}.$$
\end{prop}
\begin{proof}
Consider the maximal abelian extension of $\bK$ in which $\p$ is unramified. It is the fixed field of the inertia group $I_{\bK,\p}^{\mbox{{\tiny \textup{ab}}}}$ of $\p$ in $\bK^{\mbox{{\tiny \textup{ab}}}}$. From the fact that $\Phi$ respects ramification, it follows that $$ \Phi(I_{\bK,\p}^{\mbox{{\tiny \textup{ab}}}}) = I_{\bL, \varphi(\p)}^{\mbox{{\tiny \textup{ab}}}}. $$
But now by local class field theory, we have a canonical isomorphism $$I^{\mbox{{\tiny \textup{ab}}}}_{\bK, \p} \overset{\sim}{\rightarrow} \hat\cO_{\bK, \p}^*.$$ Hence $\varphi$ induces isomorphisms
$$ \varphi \, : \, \hat\cO_{\bK, \p}^* \overset{\sim}{\rightarrow} \hat\cO_{\bL, \varphi(\p)}^* $$
between the topological groups of local units (compare with the discussion in Section 1.2 of \cite{Mo}). Since the group of invertible integral ideles is, as a topological group, the direct product of the groups of local units, the second claim follows.
\end{proof}
\begin{prop} \label{finiteideles}
Let $\bK$ and $\bL$ denote two number fields admitting an isomorphism $\varphi$ of their QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$. Then $\varphi$ induces a semigroup isomorphism:
$$ \varphi \, : \, (\A_{\bK,f}^* \cap \hat \cO_{\bK},\times) \overset{\sim}{\rightarrow} (\A_{\bL,f}^* \cap \hat \cO_{\bL},\times). $$
\end{prop}
\begin{proof}
We have an exact sequence \begin{equation} \label{sequence} 0 \rightarrow \hat{\cO}^*_{\bK} \rightarrow \A_{\bK,f}^* \cap \hat{\cO}_{\bK} \rightarrow J^+_{\bK} \rightarrow 0,\end{equation} which is (non-canonically) split by choosing a uniformizer $\pi_{\p}$ at every place $\p$ of the field:
$$ \A_{\bK,f}^* \cap \hat{\cO}_{\bK} \overset{\sim}{\rightarrow} J_{\bK}^+ \times \hat{\cO}^*_{\bK} \, : \, (x_{\p})_{\p} \mapsto \left( \prod \p^{\mathrm{ord}_{\p}(x_{\p})}, (x_{\p} \cdot \pi_{\p}^{-{\mathrm{ord}_{\p}(x_{\p})}})_{\p} \right). $$
Hence as a semigroup, $\A_{\bK,f}^* \cap \hat \cO_{\bK}= J^+_{\bK} \times \hat{\cO}^*_{\bK}. $ The result follows from Propositions \ref{gab} and \ref{unitideles}.
\end{proof}
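To make the splitting concrete (an illustration only), take $\bK=\Q$ and the uniformizers $\pi_p=p$. A finite integral idele $x=(x_p)_p \in \A_{\Q,f}^* \cap \hat\Z$ then decomposes as
$$ x \mapsto \left( \prod_p p^{\mathrm{ord}_p(x_p)},\ (x_p \cdot p^{-\mathrm{ord}_p(x_p)})_p \right) \in J_{\Q}^+ \times \hat\Z^*, $$
exhibiting the semigroup isomorphism $\A_{\Q,f}^* \cap \hat\Z \cong (\Z_{>0},\times) \times \hat\Z^*$.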
\begin{remark}
Using fractional ideals, one may prove in a similar way that $\varphi$ induces a multiplicative group isomorphism of the finite ideles of $\bK$ and $\bL$.
\end{remark}
\section{From QSM to field isomorphism: multiplicative structure}
In this section, we prove that QSM-isomorphism induces an isomorphism of multiplicative semigroups of rings of (totally positive) integers. The idea is to use certain symmetries of the system to encode this structure.
We first establish some facts on the symmetries of QSM-systems of number fields.
The statement is analogous to Proposition 2.14 of \cite{CMR} and Proposition 3.124 of \cite{CM}, where it was formulated for the case of imaginary quadratic fields, and to Theorem 2.14 of \cite{ConsM}, formulated in the function field case.
\begin{prop} \label{bloeb}
Let $\bK$ denote any number field. An element $s$ of the semigroup $\hat{\cO}_{\bK} \cap \A^*_{\bK,f}$ induces an endomorphism $\varepsilon_{\bK,s}=\varepsilon_s$ of $(A_{\bK},\sigma_{\bK})$ given by $$ \varepsilon_s(f)(\gamma,\rho)= f(\gamma, s^{-1}\rho)e_{\fr} \quad \mbox{ and } \quad \varepsilon_s(\mu_{\fn})= e_{\fr} \mu_{\fn}, $$ where $e_{\fr}$ projects onto the space where
$s^{-1}\rho \in \hat\cO_{\bK}$, for $\fr = s
\hat\cO_{\bK} \cap \bK.$ These endomorphisms preserve the dagger subalgebra $A_{\bK}^{\dagger}$ by construction.
Furthermore,
\begin{enumerate}
\item[\textup{(}i\textup{)}] The subgroup of invertible integral ideles $\hat{\cO}^*_{\bK}$ is exactly the one that acts by automorphisms of the system.
\item[\textup{(}ii\textup{)}] The elements of the closure $\bar{\cO_{\bK,+}^*}$ of the totally positive units are precisely those that give rise to the trivial endomorphism.
\item[\textup{(}iii\textup{)}]
The sub-semiring ${\cO}^\times_{\bK,+} = \cO_{\bK,+}-\{0\}$ of non-zero totally positive elements of the ring of integers is exactly the one that acts by dagger inner endomorphisms.
\end{enumerate}
This is summarized by following commutative diagram:
$$ \xymatrix{ \mathrm{Inn}^\dagger(A_{\bK},\sigma_{\bK}) \ar@{^{(}->}[r] & \mathrm{End}(A_{\bK},\sigma_{\bK}) & \mathrm{Aut}(A_{\bK},\sigma_{\bK}) \ar@{_{(}->}[l] \\ \cO_{\bK,+}^\times \ar@{^{(}->}[r] \ar@{->}[u] & \hat{\cO}_{\bK} \cap \A^*_{\bK,f} \ar@{->}[u]_{\varepsilon_{\bK}} & \hat{\cO}_{\bK}^* \ar@{_{(}->}[l] \ar@{->}[u] \\ \cO_{\bK,+}^* \ar@{^{(}->}[r] \ar@{^{(}->}[u] & \bar{\cO_{\bK,+}^*} \ar@{=}[r] \ar@{^{(}->}[u] & \bar{\cO_{\bK,+}^*} \ar@{^{(}->}[u]
}
$$
\end{prop}
\begin{proof}
The maps $\varepsilon_s$ are indeed endomorphisms, since they are compatible by construction with
the time evolution,
$$ \varepsilon_s \sigma_t = \sigma_t \varepsilon_s , \ \ \forall s\in \hat\cO_{\bK} \cap \A_{\bK,f}^*, \ \
\forall t\in \R. $$ We also see immediately that $\varepsilon_{\bK} \colon s \mapsto \varepsilon_s$ is a semigroup homomorphism.
It is clear from the definition that exactly the elements of $\hat\cO_{\bK}^*$ act by automorphisms.
An element $s$ acts trivially precisely when $(\gamma,\rho) \sim (\gamma,s^{-1}\rho)$ for all $\gamma,\rho$. This means that there exists an idelic unit $u \in \hat\cO_{\bK}^*$ such that $\vartheta_{\bK}(u)=1$ and $s=u$. Now class field theory says that $$\ker(\vartheta_{\bK}) \cap \hat\cO_{\bK}^*=\bar\cO_{\bK,+}^*,$$ the closure of the totally positive units of the ring of integers $\cO_{\bK}$ (compare Prop.\ 1.1 in \cite{LNT}).
To finish the proof, we now study when $\varepsilon_s$ is an inner endomorphism that preserves the dagger subalgebra,
that is, an inner endomorphism implemented by an isometry $u \in A^{\dagger}_{\bK}$, which
is an eigenvector of the time evolution.
We claim the following:
\begin{quote}
\emph{If $\varepsilon_s(f)=u f u^*$ is a non-trivial dagger inner endomorphism for some eigenvector $u \in A^{\dagger}_{\bK}$ of the time evolution with $u^*u=1$, then $u= a \mu_{\fr}$ for some phase factor $a \in C(X_{\bK})$ with $|a|^2=1$, and
for some totally positive principal ideal $\fr \in J_{\bK}^+$. We then have
$s \in \cO_{\bK,+}^\times$ with $\fr= s \hat\cO_{\bK} \cap{\bK}$.}
\end{quote}
Indeed, suppose $u\in A^\dagger_{\bK}$
with $\sigma_t(u) =\lambda^{it} u$, for some
$\lambda=n/m$ with $m,n$ coprime integers, and with $u^* u=1$.
As an element in $A^{\dagger}_{\bK}$ the isometry $u$
can be written as a sum of monomials
$$ u = \sum_{\fn} \mu_{\fn} f_{\fn} $$
with no $\mu_{\fn}^*$. Since each such monomial is an eigenvector of the time evolution with eigenvalue $N_{\bK}(\fn)^{it}$, we must have $m=1$ and $\lambda=n$ for some positive integer $n$, so that
\begin{equation}\label{smuf1}
u=\sum_{N_{\bK}(\fn)=n} \mu_{\fn} f_{\fn},
\end{equation}
with $f_{\fn}\in C(X_{\bK})$.
First observe the following: we can express all elements in the algebraic crossed product of $C(X_{\bK})$
by $J_{\bK}^+$ as sums of monomials of the form
$\mu_{\fn} f \mu^*_{\fm}$, with $\fn$ and $\fm$ in $J_{\bK}^+$ and $f\in C(X_{\bK})$.
For any pair of elements $\fn$ and $\fm$ in $J_{\bK}^+$ that have
no factor in common in their decomposition into primes of ${\bK}$,
let $V_{\fn,\fm}$ denote the linear span of the elements $\mu_{\fn} f \mu^*_{\fm}$
with $f\in C(X_{\bK})$. Then $V_{\fn,\fm}\cap
V_{\fn',\fm'}=\{0\}$, whenever either $\fn \neq \fn'$ or $\fm\neq \fm'$.
The condition $u^* u=1$ then gives
$$ \sum \bar f_{\fn} \mu_{\fn}^* \mu_{\fn'} f_{\fn'} = 1, $$
which we write equivalently as
\begin{equation}\label{twosums}
\underbrace{\sum_{\fn} | f_{\fn}|^2}_{S_1} +\underbrace{\sum_{\substack{\fn \neq \fn'}}
\bar f_{\fn} \mu_{\fn}^* \mu_{\fn'} f_{\fn'}}_{S_2} =1,
\end{equation}
where the first sum $S_1$ corresponds to the case where $\fn=\fn'$.
We now check that the second sum $S_2$ vanishes. To see this, let $\fu$ be the greatest common factor of $\fn$ and $\fn'$ in their prime decompositions, so that $\fn=\fu\fa$ and $\fn'=\fu\fb$ for $\fa$ and $\fb$ coprime.
Then we get
$$ \bar f_{\fn} \mu_{\fn}^* \mu_{\fn'} f_{\fn'} =
\bar f_{\fn} \mu_{\fa}^* \mu_{\fb} f_{\fn'}, $$
since $\mu_{\fu}^* \mu_{\fu}=1$. Since $\fa$ and $\fb$ have no common factor,
$\mu_{\fa}^* \mu_{\fb}= \mu_{\fb}\mu_{\fa}^*$ and we have that the above expression further equals
$$ = \mu_{\fb} \sigma_{\fb}(\bar f_{\fn}) \sigma_{\fa}( f_{\fn'}) \mu_{\fa}^*. $$
Next notice that, since $\fa$ and $\fb$ have no common factor, this is an element of $V_{\fa,\fb}$.
Thus, in relation \eqref{twosums} the subsum $S_1$ and the constant $1$ on the right hand side are both in the subspace $V_{1,1}$, while all the terms in the second sum $S_2$ are in other
subspaces $V_{\fa,\fb}$ for $\fa \neq \fb$.
We conclude that the second sum $S_2$ in \eqref{twosums} vanishes and thus, the condition that $u^*u=1$ is equivalent to the functions $f_{\fn}$ satisfying
\begin{equation}\label{relm11}
\sum_{\fn} |f_{\fn}|^2 =1.
\end{equation}
Consider then the inner endomorphism $f\mapsto u f u^*$, with $u$ as above.
Substituting the above representation of $u$, we find
\begin{equation}\label{sumsufu}
u f u^* = \sum_{\fn',\fn} \mu_{\fn} f_{\fn} f \bar f_{\fn'} \mu_{\fn'}^* = \sum_{\fn', \fn} \rho_{\fn}( f_{\fn} f \bar f_{\fn'}) \mu_{\fn} \mu_{\fn'}^*
\end{equation}
As above, one verifies that the part of the above sum with $\fn' \neq \fn$
is in a space $V_{\fa,\fb}$ for $(\fa,\fb) \neq (1,1)$, while $u f u^* = \varepsilon_s(f)$ is in $V_{1,1}$, as is the part of the sum where $\fn'=\fn$, which equals
$$ \sum_{\fn} \rho_{\fn}(|f_{\fn}|^2f)e_{\fn}. $$
We conclude that
\begin{equation}\label{esen}
\varepsilon_s(f)(\gamma,\rho) = f(\gamma,s^{-1}\rho)e_{\fr} = \sum_{\fn} a_{\fn} \rho_{\fn} ( f ) e_{\fn},
\end{equation}
with $a_{\fn} = \rho_{\fn}( |f_{\fn}|^2)$ positive and supported in the range of $e_{\fn}$.
Fix an ideal $\fn$ of norm $n$ with $\fn \neq \fr$. We shall prove that $a_{\fn}=0$. For this, write $\fr=\fa \fb$ and $\fn = \fa \fc$ with $\fb$ and $\fc$ coprime, and assume that $a_{\fn}(x)\neq 0$ for some $x$. By the above, we may assume that $x$ belongs to the range of $e_{\fn}=e_{\fa \fb^0 \fc}$. Assume by induction that $x$ belongs to the range of $e_{\fa \fb^k \fc}$. We now show that $x$ also belongs to the range of $e_{\fa \fb^{k+1} \fc}$. For this, apply equation \eqref{esen} to the function $f=e_{\fb^k}$. We find
$$ e_{\fr \fb^k}(x) = a_{\fn}(x) e_{\fn \fb^k}(x) + \mbox{ positive terms}. $$
We rewrite this as
$$ e_{\fa \fb^{k+1}}(x) = a_{\fn}(x) e_{\fa \fb^k \fc} (x) + \mbox{ positive terms}. $$
Since by assumption $a_{\fn}(x)>0$ and $e_{\fa \fb^k \fc}(x)=1$, we find from this identity that $e_{\fa \fb^{k+1}}(x) \neq 0$. Hence $x$ belongs to the range of $e_{\fa \fb^k \fc}$ and of $e_{\fa \fb^{k+1}}$, hence of $e_{\fa \fb^{k+1} \fc}$, for all $k$, as claimed. If $\fb \neq 1$, no such $x$ exists. We conclude that $\fb=1$, so that $\fr = \fa$ divides $\fn$; since $\fr$ and $\fn$ have the same norm, this would force $\fn=\fr$. Therefore $a_{\fn}=0$ for all $\fn \neq \fr$, so that in the sum on the right hand side only one
term is non-zero, and relation \eqref{esen} becomes $$ \varepsilon_s(f) (\gamma,\rho) = \rho_{\fr}(|f_{\fr}|^2)\rho_{\fr}(f)(\gamma,\rho) e_{\fr}.$$ Working out both sides, we find
\begin{equation}\label{epsilonsmun}
f(\gamma, \fr^{-1} \rho) e_{\fr} = \rho_{\fr}(|f_{\fr}|^2)
f(\theta_{\bK}(\fr)\gamma, \fr^{-1} \rho) e_{\fr}.
\end{equation}
First of all, setting $f=1$, we get that $\rho_{\fr}(|f_{\fr}|^2)=1$. If we apply the partial inverse $\sigma_{\fr}$ to this, we find $|f_{\fr}|^2 = \sigma_{\fr} \rho_{\fr} (|f_{\fr}|^2)=1$. We then conclude from \eqref{relm11} that all other $f_{\fn}=0$ ($\fn \neq \fr$), so that we indeed get $$u= a \mu_{\fr}$$ for the phase factor $a=f_{\fr}$. Now equality \eqref{epsilonsmun} implies that $\theta_{\bK}(\fr)$ equals $\theta_{\bK}(w)$
for some unit id\`ele $w \in \hat\cO_{\bK}^*$. This means precisely that $\fr$ is trivial in
$G_{\bK}^{\mbox{{\tiny \textup{ab}}}}/\vartheta_{\bK}(\hat\cO_{\bK}^*)$, which is the narrow ideal class group of ${\bK}$. Hence
$\fr$ is a totally positive principal ideal corresponding to a generator $s \in \cO^\times_{{\bK},+}$.
\end{proof}
\begin{remark}
As we have already observed,
the group $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ (which contains an image of $\hat\cO_{\bK}\cap
\A^*_{\bK,f}$) also acts on the QSM system by symmetries, cf. \cite{LLN},
Remark 2.2(i).
This gives two slightly different actions of
$\hat\cO_{\bK}\cap \A^*_{\bK,f}$ on the QSM system, which induce the
same action on the low temperature KMS states. As was remarked to us by
Bora Yalkinoglu, when viewing the algebra $A_{\bK}$ as an endomotive
in the sense of \cite{CCM} and \cite{Mar-endo}, the two actions correspond,
respectively, to the one coming from the $\Lambda$-ring structure in
the sense of Borger \cite{Bor} and to the Galois action coming from
the endomotive construction as in \cite{CCM}.
\end{remark}
\begin{remark}[$\bK$-lattices]
In terms of $\bK$-lattices $(\Lambda,\phi)$, the divisibility condition above corresponds to
the condition that the homomorphism $\phi$ factors through
$$\phi: \bK/\cO_{\bK} \to \bK\Lambda/\fn\Lambda \to \bK\Lambda/\Lambda.$$ The action of the endomorphisms
is then given by
$$ \varepsilon_s(f)((\Lambda,\phi),(\Lambda',\phi'))= f((\Lambda,s^{-1}\phi),(\Lambda',s^{-1}\phi')) $$
when both $(\Lambda,\phi)$ and $(\Lambda',\phi')$ are divisible by $s$ and zero otherwise.
When $s\in \cO_{\bK}^\times$, we can consider
the function
$$ \mu_s ((\Lambda,\phi),(\Lambda',\phi'))= \left\{ \begin{array}{ll}
1 & \Lambda = s^{-1}\Lambda' \ \ \text{ and } \ \ \phi'=\phi; \\
0 & \text{otherwise.}
\end{array}\right. $$
These are eigenvectors of the time evolution, with
$ \sigma_t(\mu_s) = N_{\bK} ((s))^{it} \mu_s, $
and $ \varepsilon_s(f) = \mu_s \star f \star \mu_s^*, $
for the convolution product of the algebra $A_{\bK}$. For a discussion in this language of why, in the case of totally imaginary fields, only \emph{principal} (in this case, the same as totally positive principal) ideals give inner endomorphisms, see \cite{CM}, p.\ 562.
\end{remark}
\begin{prop} \label{integers}
Let $\bK$ and $\bL$ denote two number fields admitting a dagger isomorphism $\varphi$ of their QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$. Then $\varphi$ induces a semigroup isomorphism between the multiplicative semigroups of totally positive non-zero elements of the rings of integers of $\bK$ and $\bL$:
$$ \varphi \, : \, (\cO^\times_{\bK,+},\times) \overset{\sim}{\rightarrow} (\cO^\times_{\bL,+},\times). $$
\end{prop}
\begin{proof} Proposition \ref{finiteideles} says that $\varphi$ induces an isomorphism
$$ \varphi \, : \, \A_{\bK,f}^* \cap \hat{\cO}_{\bK} \overset{\sim}{\rightarrow} \A_{\bL,f}^* \cap \hat{\cO}_{\bL}. $$
From Proposition \ref{bloeb}, we have a map
$$\varepsilon_{\bK} \, : \, \A_{\bK,f}^* \cap \hat{\cO}_{\bK} \rightarrow \mathrm{End}(A_{\bK},\sigma_{\bK}) \colon s \mapsto \varepsilon_s $$ with kernel $\bar{\cO_{\bK,+}^*}$, and $\varphi$ induces a map
$$ \mathrm{End}(A_{\bK},\sigma_{\bK}) \overset{\sim}{\rightarrow} \mathrm{End}(A_{\bL},\sigma_{\bL}). $$
Now $\varphi$, as an isomorphism of QSM-systems, also preserves the \emph{inner} endomorphisms:
$$ \varphi \, : \, \mathrm{Inn}(A_{\bK},\sigma_{\bK}) \overset{\sim}{\rightarrow} \mathrm{Inn}(A_{\bL},\sigma_{\bL}). $$
Moreover, because the $C^*$-algebra isomorphism $\varphi$ also induces an isomorphism
of the dagger subalgebras $\varphi: A^{\dagger}_{\bK} \overset{\sim}{\rightarrow} A^{\dagger}_{\bL}$, it also preserves
the {\em dagger} inner endomorphisms,
$$ \varphi \, : \, \mathrm{Inn}^{\dagger}(A_{\bK},\sigma_{\bK}) \overset{\sim}{\rightarrow} \mathrm{Inn}^{\dagger}(A_{\bL},\sigma_{\bL}), $$
but we know that $$\varepsilon_{\bK}^{-1}\left(\mathrm{Inn}^{\dagger}(A_{\bK},\sigma_{\bK})\right) = \cO_{\bK,+}^\times, $$ and similarly for $\bL$. Hence to prove that $\varphi$ gives an isomorphism
\begin{equation} \label{PI} \varphi \, : \, \cO_{\bK,+}^\times \overset{\sim}{\rightarrow} \cO_{\bL,+}^\times, \end{equation} it suffices to prove that $\varphi$ maps $\varepsilon_{\bK}^{-1}\left(\mathrm{Inn}^\dagger(A_{\bK},\sigma_{\bK})\right)$ to $\varepsilon_{\bL}^{-1}\left(\mathrm{Inn}^\dagger(A_{\bL},\sigma_{\bL})\right).$ To prove this, we will verify that $\varphi \circ \varepsilon_{\bL} = \varepsilon_{\bK} \circ \varphi$, i.e., the commuting of the right square in the following diagram:
$$ \xymatrix{
& \mathrm{Inn}^\dagger(A_{\bK},\sigma_{\bK}) \ar@{^{(}->}[rr] \ar@{->}[ldd]^{\varphi} & & \mathrm{End}(A_{\bK},\sigma_{\bK}) \ar@{->}[ldd]^{\varphi} \\
& \cO_{\bK,+}^\times \ar@{^{(}->}[rr] \ar@{->}[u] \ar@{-->}[ldd]^{\varphi ?} & & \hat{\cO}_{\bK} \cap \A^*_{\bK,f} \ar@{->}[u]_{\varepsilon_{\bK}} \ar@{->}[ldd]^{\varphi} \\
\mathrm{Inn}^\dagger(A_{\bL},\sigma_{\bL}) \ar@{^{(}->}[rr] & & \mathrm{End}(A_{\bL},\sigma_{\bL}) & \\
\cO_{\bL,+}^\times \ar@{^{(}->}[rr] \ar@{->}[u] & & \hat{\cO}_{\bL} \cap \A^*_{\bL,f} \ar@{->}[u]_{\varepsilon_{\bL}} & \\
}
$$
This is equivalent to the following statement:
\begin{lem} For every $s \in \A_{\bK,f}^* \cap \hat{\cO}_{\bK}$, we have that
$\varphi(\varepsilon_s) = \varepsilon_{\varphi(s)}.$
\end{lem}
\begin{proof}
Since $\A_{\bK,f}^* \cap \hat{\cO}_{\bK}$ is isomorphic to the direct product of $\hat\cO_{\bK}^*$ and $J_{\bK}^+$, it suffices to prove this for these subgroups individually. Since the map $J_{\bK}^+ \rightarrow \mathrm{End}(A_{\bK},\sigma_{\bK})$ is injective, it is automatic that $\varepsilon_{\bK}$ and $\varphi$ intertwine with $\varepsilon_{\bL}$ on elements of this subgroup. Now suppose on the other hand that $s \in \hat\cO_{\bK}^*$. For a function $g \in C(X_{\bL})$, we have by definition
\begin{eqnarray*} \varphi(\varepsilon_s)(g)(x)&=&(\varphi \circ \varepsilon_s \circ \varphi^{-1})g(x)\\ &=&g(\Phi((1,s^{-1})\cdot \Phi^{-1}(x))) \\ &=& g(\Phi((\vartheta_{\bK}(s),1)\cdot y)),\end{eqnarray*}
where we have written $\Phi(y)=x$ for $y \in X_{\bK}$.
By the density statement in Corollary \ref{dude}, it suffices to compute this action on functions that are supported on $y=\fn \ast \gamma'$ for some $\gamma' \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ and $\fn \in J_{\bK}^+$. But for such values, and any $\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, we have that
$$\Phi(\gamma \cdot y) = \Phi(\gamma \cdot (\fn \ast \gamma')) = \Phi(\fn \ast (\gamma \gamma')) = \varphi(\fn) \ast \Phi(\gamma\gamma'), $$
by Proposition \ref{gab}, and since $\Phi$ is multiplicative on elements in $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ (Proposition \ref{gab}), we find that this is further equal to $$ \varphi(\fn) \ast \left(\Phi(\gamma) \Phi(\gamma')\right) = \Phi(\gamma)\Phi(\fn \ast \gamma') = \Phi(\gamma)\Phi(y) . $$
We apply this with $\gamma=\vartheta_{\bK}(s)$ and $y=\Phi^{-1}(x)$, to find
\begin{eqnarray*} \varphi(\varepsilon_s)(g)(x) &=& g(\Phi((\vartheta_{\bK}(s),1)) \cdot x) \\ &=& g((1,\varphi(s)^{-1})\cdot x) \\ &=& \varepsilon_{\varphi(s)}(g)(x),
\end{eqnarray*} which proves the statement.
\end{proof}
With the proof of this lemma, we have reached the end of the proof of Proposition \ref{integers}.
\end{proof}
\section{Recovering the additive structure}
\begin{se} In this section, we show that the map $\varphi$ is additive. For this, we prove it is additive (or, what is the same, the identity map) modulo totally split primes. We do this by lifting elements of the residue field of a totally split prime to integers, which we show are fixed by the map $\varphi$.
For the rest of this section, we assume that $\varphi \colon (A_{\bK},\sigma_{\bK}) \overset{\sim}{\rightarrow} (A_{\bL},\sigma_{\bL}) $ is a dagger isomorphism of QSM-systems of two number fields $\bK$ and $\bL$. Since $\bK$ and $\bL$ are arithmetically equivalent, they have the same discriminant, which we denote by $\Delta$. We choose a prime ideal $\p$ of $\bK$ of norm $p$, and let $\varphi(\p)$ denote the corresponding prime of $\bL$.
\end{se}
\begin{notation} For an integer $N$, we let $\Z_{(N)}$ denote the set of integers coprime to $N$. We recall the following notations: $1_{\p}=(0,\dots,0,1,0,\dots,0)$ denotes the adele with a $1$ at the $\p$-th place and $0$ everywhere else, and $\one_{\p}:=[(1,1_{\p})] \in X_{\bK}$. Finally, when $u \in \hat\cO_{\bK,\p}$, we let $u_{\p}$ denote the integral idele $u_{\p}:=(1,\dots,1,u,1,\dots,1)$, with $u$ in the $\p$-th place and $1$ everywhere else.
\end{notation}
\begin{se}
Recall that the map $\varphi \colon \hat\cO_{\bK,\p}^* \overset{\sim}{\rightarrow} \hat\cO_{\bL,\varphi(\p)}^*$ is constructed by canonically identifying both unit groups with the corresponding inertia groups in the maximal abelian extension, which are mapped to each other by the homomorphism $\Phi$. Said otherwise, for $u \in \hat\cO_{\bK,\p}^*$, the element $\varphi(u)$ is defined by $$ [(1,u_{\p})] = [(\vartheta_{\bK}(u_{\p})^{-1},1)] \mapsto \Phi([(\vartheta_{\bK}(u_{\p})^{-1}),1)]) = [(\Phi(\vartheta_{\bK}(u_{\p})^{-1}),1)] =:[(1,\varphi(u)_{\varphi(\p)})].$$
\end{se}
\begin{se}
We consider the composite map
$$ \lambda_{\bK,\p} \colon \hat\cO_{\bK, \p}^* \rightarrow X_{\bK} \xrightarrow{[\cdot \one_{\p}]} X_{\bK} \colon u \mapsto [(1,u_{\p})] \mapsto [(1,u_{\p} \cdot 1_{\p})] = [(1,(0,\dots,0,u,0,\dots,0))]. $$
This is obviously a group isomorphism onto the image, which we denote by $Z_{\bK,\p}$. \end{se}
\begin{lem} The following diagram commutes:
$$ \xymatrix{ \hat\cO_{\bK,\p}^* \ar@{->>}[r]^{\lambda_{\bK,\p}} & Z_{\bK,\p} \\
\hat\cO_{\bL,\varphi(\p)}^* \ar@{->>}[r]_{\lambda_{\bL,\varphi(\p)}} \ar@{<-}[u]_{\varphi} & Z_{\bL,\varphi(\p)} \ar@{<-}[u]_{\Phi}
}$$
\end{lem}
\begin{proof} We need to verify that for any $u \in \hat\cO_{\bK,\p}^*$, it holds true that
$$ \Phi([(1,u_{\p})] \cdot \one_{\p})=[(1,\varphi(u)_{\varphi(\p)})] \cdot \one_{\varphi(\p)}. $$
We compute that
$$ [(1,u_{\p})] \cdot \one_{\p}=[(\vartheta_{\bK}(u_{\p})^{-1},1_{\p})], $$ which belongs to the group
$$ H_{\bK} = G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \times_{\hat \cO_{\bK}^*} \{ 1_{\p} \}, $$
which, as was shown in the proof of Proposition \ref{gab} and in Corollary \ref{imageonem}, is mapped by $\Phi$ to an element of the form
$$ [(\Phi(\vartheta_{\bK}(u_{\p})^{-1}),1_{\varphi(\p)})] = [(1,\varphi(u)_{\varphi(\p)})] \cdot [(1,1_{\varphi(\p)})].$$
This proves the commutativity of the diagram.
\end{proof}
\begin{lem} Consider the map
$$ \varpi_{\bK,\p} \colon \Z_{(p\Delta)} \hookrightarrow \hat\cO_{\bK,\p}^* \rightarrow Z_{\bK,\p} \colon a \mapsto [(1,a\cdot 1_{\p})]$$
(and similarly for $\bL$).
Then the map $\varpi_{\bK,\p}$ is injective, and the homeomorphism $\Phi$ acts as the identity on $\Z_{(p\Delta)}$ when restricted to the image of $\varpi_{\bK,\p}$: we have a commutative diagram
$$ \xymatrix{
\Z_{(p\Delta)} \ar@{=}[d] \ar@/^12pt/[rr]^{\varpi_{\bK,\p}} \ar@{->}[r] &\hat\cO^*_{\bK,\p} \ar@{->}[d]^{\varphi} \ar@{->>}[r]_{\lambda_{\bK,\p}} & Z_{\bK,\p} \ar@{->}[d]^{\Phi} \\
\Z_{(p\Delta)} \ar@/_12pt/[rr]_{\varpi_{\bL,\varphi(\p)}} \ar@{->}[r] &\hat\cO^*_{\bL,\varphi(\p)} \ar@{->>}[r]^{\lambda_{\bL,\varphi(\p)}} & Z_{\bL,\varphi(\p)} \\
}$$
where the curved arrows are injective. In particular, $\varphi \colon \hat\cO_{\bK,\p}^* \overset{\sim}{\rightarrow} \hat\cO_{\bL,\varphi(\p)}^*$ restricts to the identity on $\Z_{(p\Delta)}$.
\end{lem}
\begin{proof}
To prove the injectivity of $ \varpi_{\bK,\p}$, suppose that $(1,a\cdot 1_{\p}) \sim (1,b\cdot 1_{\p})$; then there exists a unit $w \in \bar{\cO_{\bK,+}^*}$ with $a\cdot 1_{\p}=wb \cdot 1_{\p}$; hence $w \in \bQ \cap \bar{\cO_{\bK,+}^*} = \{1 \}$, so $a=b$.
To prove the commutativity of the diagram, observe that for $a \in \Z_{(p\Delta)}$, we have
$$ \varpi_{\bK,\p}(a) = (a) \ast [(\vartheta_{\bK}(a),1_{\p})] = (a) \ast [(1,1_{\p})] = (a)*\one_{\p}, $$
since $a \in \Z \subseteq \bK^*$ has trivial image under the reciprocity map. We compute the image by $\Phi$:
\begin{eqnarray*}
\Phi(\varpi_{\bK,\p} (a)) &=& \Phi((a) \ast \one_{\p})\\
&=& \varphi((a)) \ast \Phi(\one_{\p})\\
&=& (a) \ast \one_{\varphi(\p)}\\
&=& \varpi_{\bL,\varphi(\p)}(a)
\end{eqnarray*}
In this proof, we have used that $\varphi$ fixes the ideal $(a) \in J_{\bQ}^+$ for $a \in \Z_{(p\Delta)} $; by multiplicativity of $\varphi$, it suffices to prove this for $a$ a rational prime that is unramified in $\bK$ (viz., coprime to $\Delta$). Decompose such $(a)$ in $\bK$ as $(a)=\p_1\dots \p_r$ (with all $\p_i$ distinct, since $a$ is unramified). Since $\varphi=\alpha_1$ is a permutation of the distinct primes above the given rational prime $a$, we find that $\varphi((a))=\p_{\sigma(1)} \dots \p_{\sigma(r)}$ for some permutation $\sigma$ of the indices. Hence $\varphi((a))=(a)$, as desired.
In the computation, we also used that $\Phi(\one_{\p})=\one_{\varphi(\p)}$, which was shown in the previous lemma.
Finally, the previous lemma (commutativity of the right square in the diagram) and the injectivity of the maps $\varpi$ on $\Z_{(p\Delta)}$ shows that the map $\varphi$ is the identity on $\Z_{(p\Delta)}$.
\end{proof}
\begin{thm}
The map $\varphi \colon \cO^\times_{\bK,+} \overset{\sim}{\rightarrow} \cO^\times_{\bL,+}$, extended by $\varphi(0)=0$, is additive.
\end{thm}
\begin{proof}
Choose a rational prime $p$ that is totally split in $\bK$ (in particular, unramified). Then, since $\bK$ and $\bL$ are arithmetically equivalent, we have in particular that $p$ is also totally split in $\bL$. Choose a prime $\p \in J_{\bK}^+$ above $p$, so $f(\p|\bK)=1$; then $f(\varphi(\p)|\bL)=1$, too.
From the map of localisations $ \varphi \colon \hat\cO^*_{\bK,\p} \overset{\sim}{\rightarrow} \hat\cO^*_{\bL,\varphi(\p)}$, we now construct a multiplicative map $\tilde{\varphi}$ of residue fields, using the Teichm\"uller lift $$\tau_{\bK,p} \colon \bar\bK^*_{\p}\cong \F^*_p \hookrightarrow \hat\cO^*_{\bK,\p} \cong \Z^*_p$$ in the following diagram:
$$ \xymatrix{ \hat\cO^*_{\bK,\p}\ar@{->}[r]^{\varphi} & \hat\cO^*_{\bL,\varphi(\p)}\ar@{->}[d]^{\mathrm{mod}\, \varphi(\p)} \\ \bar\bK^*_{\p}\ar@{-->}[r]^{\tilde{\varphi}} \ar@{^{(}->}[u]^{\tau_{\bK,p}} & \bar\bL^*_{\varphi(\p)}}$$
The map $\tilde{\varphi}$ is multiplicative by construction. We will now prove that its extension by $\tilde{\varphi}(0)=0$ is additive (or, equivalently, $\tilde{\varphi} \colon \F^*_p \rightarrow \F^*_p$ is the identity map).
We extend the Teichm\"uller character in the usual way to
$$ \tau_{\bK,p} \colon \hat\cO^*_{\bK,\p} \rightarrow \hat\cO^*_{\bK,\p} \colon x \mapsto \lim_{n \rightarrow +\infty} x^{p^n}. $$
Now let $\tilde{a}$ denote any residue class in $\bar{\bK}_{\p}^* \cong \F_p^*$. Choose an integer $a$ that is congruent to $\tilde{a}$ mod $\p$ and coprime to the discriminant $\Delta$ (which is possible by the Chinese remainder theorem, since $p$ and $\Delta$ are coprime). Then $\tau_{\bK,p}(\tilde{a})=\tau_{\bK,p}(a)$ for the extended Teichm\"uller map.
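The defining limit $\tau(x)=\lim_{n\to\infty} x^{p^n}$ can be computed explicitly in $\Z_p$ (the relevant case here, since $\p$ is totally split). The following Python sketch is illustrative only and not part of the proof (the function name \texttt{teichmuller} is ours); it computes the stabilized value modulo $p^k$ and checks the two properties used in this argument: the lift is fixed by $x\mapsto x^p$ and reduces to $a$ modulo $p$.

```python
# Illustrative sketch (not part of the proof): the extended Teichmuller map
# tau(x) = lim_{n -> oo} x^{p^n}, computed in Z_p modulo p^k.  The sequence
# a^(p^n) mod p^k stabilizes as soon as n >= k - 1, because
# a^(p^(n+1)) == a^(p^n) (mod p^(n+1)) by Fermat's little theorem and induction.

def teichmuller(a: int, p: int, k: int) -> int:
    """Teichmuller representative of a modulo p^k (assumes p does not divide a)."""
    return pow(a, p ** (k - 1), p ** k)

p, k, a = 7, 5, 3
t = teichmuller(a, p, k)
assert pow(t, p, p ** k) == t   # fixed by x -> x^p: a (p-1)-st root of unity mod p^k
assert t % p == a               # lifts the residue class of a mod p
```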
Since $\varphi$ is continuous in the $p$-adic topology and multiplicative, we find that
\begin{eqnarray*} \varphi(\tau_{\bK,p}({a})) &=& \varphi \left( \lim_{n \rightarrow +\infty} a^{p^n} \right) \\ &=& \lim_{n \rightarrow +\infty} \varphi({a})^{p^n} \\ &=& \tau_{\bL,p} ( \varphi({a} )) \\ &=& \tau_{\bL,p}({a}) \end{eqnarray*}
(the last equality follows from the lemma above),
so that we find
\begin{eqnarray*} \tilde{\varphi}(\tilde{a}) &=& \varphi(\tau_{\bK,p}(a)) \, \mathrm{mod}\, \varphi(\p)\\ &=& \tau_{\bL,p}(a)\, \mathrm{mod}\, \varphi(\p)\\ &=& \tilde{a} \, \mathrm{mod}\, \varphi(\p). \end{eqnarray*}
Hence $\varphi$ is the identity map modulo any totally split prime, so for any such prime $\p \in J_{\bK}^+$ and any $x,y \in \cO_{\bK,+}$, we have
$$ \varphi(x+y) = \varphi(x) + \varphi(y)\, \mathrm{mod}\, \varphi(\p).$$ Since there are totally split primes of arbitrarily large norm (by Chebotarev), we find that $\varphi$ itself is additive.
\end{proof}
\begin{thm}
Let $\bK$ and $\bL$ denote two number fields whose QSM-systems $(A_{\bK},\sigma_{\bK})$ and $(A_{\bL},\sigma_{\bL})$ are isomorphic. Then $\bK$ and $\bL$ are isomorphic as fields.
\end{thm}
\begin{proof}
We have just seen that $\varphi$ induces an isomorphism of semigroups of totally positive integers (Proposition \ref{integers}). Now $\cO_{\bK}$ always has a free $\Z$-basis consisting of totally positive elements; indeed, if $y_1=1,y_2,\dots,y_n$ is any basis, replace it by $x_1=1,x_2=y_2+k_2,\dots,x_n=y_n+k_n$ where $k_i$ are integers with $k_i > - \sigma(y_i)$ for all real embeddings $\sigma$ of $\bK$. Then we can extend $\varphi \, : \cO_{\bK} \overset{\sim}{\rightarrow} \cO_{\bL}$ by
$$ \varphi\Big(\sum_i n_i x_i\Big) = \sum_i n_i \varphi(x_i); $$
by the above this is well-defined, additive and multiplicative, and hence it extends further to an isomorphism of the quotient fields.
\end{proof}
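The shift $y_i \mapsto y_i + k_i$ used in this proof is easy to carry out concretely. The sketch below is a hypothetical illustration, assuming the simplest situation $\cO_{\bK}=\Z[\sqrt d]$ for $d=2,3$ (the helper names are ours): it produces the $\Z$-basis $1,\ \sqrt d + k$ with $k>\sqrt d$, so that both real embeddings of each basis element are positive.

```python
# Illustrative sketch: a totally positive Z-basis of O_K for K = Q(sqrt(d)),
# in the cases d = 2, 3 where O_K = Z[sqrt(d)].  The basis 1, sqrt(d) is
# shifted to 1, sqrt(d) + k with k the smallest integer exceeding sqrt(d).
import math

def totally_positive_basis(d: int):
    k = math.isqrt(d) + 1          # smallest integer k with k > sqrt(d)
    # elements a + b*sqrt(d) are represented as pairs (a, b)
    return [(1, 0), (k, 1)]

def embeddings(elt, d):
    a, b = elt
    return (a + b * math.sqrt(d), a - b * math.sqrt(d))

for d in (2, 3):
    for x in totally_positive_basis(d):
        assert all(s > 0 for s in embeddings(x, d))  # totally positive
```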
\part{$L$-SERIES AND QSM-ISOMORPHISM} \label{part2}
{Let $\chi$ denote a character in the Pontrjagin dual of $G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$. We set
$$ L_{\bK}(\chi,s) := \sum_{\fn \in J_{\bK}^+} \frac{\chi(\vartheta_{\bK}(\fn))}{N_{\bK}(\fn)^s}, $$
where it is always understood that we set $\chi(\vartheta_{\bK}(\fn))=0$ if $\fn$ is not coprime to the conductor $\f_{\chi}$ of $\chi$.
This is also the Artin $L$-series for $\chi$ considered as a representation of the Galois group of the finite extension $\bK_\chi / \bK$ through which $\chi$ factors injectively (\cite{Neukirch}, VII.10.6).}
In the next few sections, we first show that (iii) $\Rightarrow$ (ii) in Theorem \ref{main2},
namely the identity of the $L$-functions implies the existence of a dagger isomorphism
of the quantum statistical mechanical systems, that is, a $C^*$-algebra isomorphism
$\varphi: A_{\bK} \overset{\sim}{\rightarrow} A_{\bL}$ intertwining the time evolutions, $\varphi\circ \sigma_{\bK}=
\sigma_{\bL}\circ \varphi$ and preserving the dagger subalgebras $\varphi: A^{\dagger}_{\bK}\overset{\sim}{\rightarrow}
A^{\dagger}_{\bL}$.
\section{QSM-isomorphism from matching $L$-series: compatible isomorphism of ideals}
\begin{prop}
Let $\bK$ and $\bL$ denote two number fields. Suppose $\psi$ is an isomorphism
$$ \psi \, : \, \widehat{G}^{\mbox{{\tiny \textup{ab}}}}_{\bK} \overset{\sim}{\rightarrow} \widehat{G}^{\mbox{{\tiny \textup{ab}}}}_{\bL}$$
that induces an identity of the respective $L$-functions
$$ L_{\bK}(\chi,s) = L_{\bL}(\psi(\chi),s). $$
Then there exists a norm preserving semigroup isomorphism
$$\Psi \, : \, J_{\bK}^+ \rightarrow J_{\bL}^+, $$
which is compatible with the Artin reciprocity map under $\psi$ in the sense that
\begin{equation} \label{NNN} \psi(\chi)(\vartheta_{\bL}(\Psi(\fn))) = \chi(\vartheta_{\bK}(\fn)) \end{equation}
for all characters $\chi$ and ideals $\fn$ such that the conductor of $\chi$ is coprime to $N_{\bK}(\fn)$ (which is also equivalent to (iv) in Theorem \ref{main3} for $\hat{\psi}:=(\psi^{-1})^*$).
\end{prop}
\begin{proof}
Since $\psi(1)=1$, the zeta functions ($L$-series for the trivial character) match on both sides:
$$ \zeta_{\bK}(s)=\zeta_{\bL}(s).$$
This is arithmetic equivalence, and it shows in particular that there is a bijection between the sets of primes of $\bK$ and $\bL$ above a given rational prime $p$ and with a given inertia degree $f$. We need to match these primes in such a way that they are compatible with Artin reciprocity. We want to do this by mapping a prime $\p$ of $\bK$ to a prime $\q$ of $\bL$ above the same $p$, with the same inertia degree, and such that
\begin{equation} \label{poeh} \psi(\chi)(\vartheta_{\bL}(\q)) = \chi(\vartheta_{\bK}(\p)) \end{equation} for all characters $\chi$ whose conductor is coprime to $p$. The main point is to show that it is always possible to find such $\q$, and that one may do so in a way that is bijective on primes. We prove this by using a combination of $L$-series as a counting function for the number of such ideals $\q$.
The identification of $L$-series means that for any character $\chi$, we have
\begin{equation} \label{tate} \sum_{\fn \in J_{\bK}^+} \frac{\chi(\vartheta_{\bK}(\fn))}{N_{\bK}(\fn)^s} = \sum_{\fm \in J_{\bL}^+} \frac{\psi(\chi)(\vartheta_{\bL}(\fm))}{N_{\bL}(\fm)^s}. \end{equation}
We fix an integer $n$ and consider the norm-$n$ part of this identity:
\begin{equation} \label{tate2} \sum_{{\fn \in J_{\bK}^+}\atop{N_{\bK}(\fn)=n}} {\chi(\vartheta_{\bK}(\fn))}= \sum_{{\fm \in J_{\bL}^+}\atop{N_{\bL}(\fm)=n}} \psi(\chi)(\vartheta_{\bL}(\fm)). \end{equation}
In this notation, remember that we have set $\chi$ equal to zero on ideals not coprime to its conductor.
Recall our notation $G_{\bK,\fn}^{\mbox{{\tiny \textup{ab}}}}$ for the Galois group of the maximal abelian extension of $\bK$ that is unramified above the prime divisors of an ideal $\fn$. We will apply this with $\fn=(n)$ for the given integer $n$, writing $G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}$ accordingly.
We fix a finite quotient group $G$ of $$G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\pi_G}{\twoheadrightarrow} G,$$ and consider only characters that factor over $G$, i.e., that are of the form $\chi \circ \pi_G$ for $\chi$ in the finite group $\widehat{G}$ (which we consider as a subgroup of $\widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ by precomposing with $\pi_G$). We consider only $n$ that are coprime to the conductor of any character in $\widehat{G}$, so actually $\pi_G$ factors over $G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}$, and for such $n$, we sum the identity (\ref{tate2}) over this group $\widehat{G}$,
times the function $\chi(\pi_G(\gamma^{-1}))$ for a fixed element $\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ --- interchanging the order of summation, we find
\begin{equation} \label{integratedtate} \sum_{{\fn \in J_{\bK}^+}\atop{N_{\bK}(\fn)=n}} \left(\sum_{\widehat{G}} \chi(\pi_G(\gamma)^{-1})\chi(\vartheta_{\bK}(\fn)) \right) = \sum_{{\fm \in J_{\bL}^+}\atop{N_{\bL}(\fm)=n}} \left(\sum_{\widehat{G}} \chi(\pi_G(\gamma)^{-1})\psi(\chi)(\vartheta_{\bL}(\fm)) \right).
\end{equation}
Let us introduce the following set of ideals for $n \in \Z_{ \geq 1}$ and $\gamma \in G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$:
$$ B_{G,n}(\gamma) = \{ \fn \in J_{\bK}^+ \, : \, N_{\bK}(\fn)=n \, \mbox{ and } \, \pi_G(\vartheta_{\bK}(\fn))=\pi_G(\gamma) \} $$
and denote the cardinality of this set by $$b_{G,n}(\gamma):=\#B_{G,n}(\gamma)$$ (or $b_{\bK,G,n}(\gamma)$ if we want to indicate the dependence on the ground field $\bK$). As is well known, the value of the left hand side of Equation (\ref{integratedtate}) is
$$\mbox{LHS}(\ref{integratedtate}) = {|G|} \cdot b_{\bK,G,n}(\gamma). $$
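The evaluation of the left hand side rests on the orthogonality relations for characters of the finite group $G$: summing $\chi(\pi_G(\gamma)^{-1})\chi(\vartheta_{\bK}(\fn))$ over $\widehat{G}$ gives $|G|$ when the two arguments agree and $0$ otherwise. A minimal numerical check of this relation for the cyclic group $G=\Z/N$ (illustrative only; the function name is ours):

```python
# Character orthogonality for G = Z/N: the characters are
# chi_j(k) = exp(2*pi*i*j*k/N), and sum_j chi_j(g)^{-1} chi_j(h)
# equals |G| = N when g == h in Z/N, and 0 otherwise.
import cmath

def char_sum(N: int, g: int, h: int) -> complex:
    return sum(cmath.exp(2j * cmath.pi * j * (h - g) / N) for j in range(N))

N = 12
assert abs(char_sum(N, 5, 5) - N) < 1e-9  # equal arguments: the sum is |G|
assert abs(char_sum(N, 5, 7)) < 1e-9      # distinct arguments: the sum vanishes
```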
We now perform a base change in the bracketed sum on the right hand side of (\ref{integratedtate}), using the homomorphism $$\psi \, : \, \widehat{G}_{\bK}^{\mbox{{\tiny \textup{ab}}}} \rightarrow
\widehat{G}_{\bL}^{\mbox{{\tiny \textup{ab}}}},$$ which we can do since $\psi$ preserves the subgroups indexed by $n$:
$$ \psi(\widehat G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}) = \widehat G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}. $$
Indeed, if $\f_\chi$ is not coprime to $n$, $L_{\bK}(\chi,s)$ has a missing Euler factor at a prime number $p$ dividing $n$. Hence, by the equality of $L$-series, also $L_{\bL}(\psi(\chi),s)$ has such a missing Euler factor, so $\f_{\psi(\chi)}$ is not coprime to $p$ (hence $n$).
To ease notation we write $(\psi^{-1})^*(G)=G'$. We also write $\eta=\psi(\chi)$. Then the bracketed expression on the right hand side of (\ref{integratedtate}) becomes
\begin{equation} \label{RHS2} \sum_{\widehat{G'}} \psi^{-1}(\eta)(\pi_G(\gamma)^{-1})\eta(\pi_{G'}(\vartheta_{\bL}(\fm)))
\end{equation}
Observe that for fixed $\fm$ coprime to $\f_\eta$, $$\Xi_{\fm} \, : \, \eta \mapsto \psi^{-1}(\eta)(\pi_G(\gamma)^{-1})\eta(\pi_{G'}(\vartheta_{\bL}(\fm)))$$ is a character on $\widehat{G'}$. Thus,
$$ \sum_{\widehat{G'}} \psi^{-1}(\eta)(\pi_G(\gamma)^{-1})\eta(\pi_{G'}(\vartheta_{\bL}(\fm))) = \left\{ \begin{array}{ll} {|G'|} & \mbox{ if }\ \Xi_{\fm} \equiv 1; \\ 0 & \mbox{ otherwise.}\end{array} \right.$$
Now $\Xi_{\fm}\equiv 1$ means that
$$ \eta(\pi_{G'}(\vartheta_{\bL}(\fm))) = \psi^{-1}(\eta)(\pi_{G}(\gamma)) \mbox{ for all } \eta \in \widehat{G'}. $$
Since the right expression is equal to $\eta(\pi_{G'}((\psi^{-1})^* \gamma))$, we find that $\Xi_{\fm} \equiv 1$ means that
$$\pi_{G'}(\vartheta_{\bL}(\fm)) = \pi_{G'}((\psi^{-1})^*(\gamma)).$$
Plugging everything back in, we find that the right hand side of Equation (\ref{integratedtate}) becomes
\begin{eqnarray*} \mbox{RHS}(\ref{integratedtate}) &=& {|G'|} \cdot \# \{\fm \in J_{\bL}^+ \mbox{ with } N_{\bL}(\fm)=n \mbox{ and }\pi_{G'}(\vartheta_{\bL}(\fm))=\pi_{G'}((\psi^{-1})^*(\gamma)) \} \\ &=& {|G'|} \cdot b_{\bL,G',n}((\psi^{-1})^*(\gamma)). \end{eqnarray*}
Since $\psi$ is a group isomorphism of finite abelian groups, $|G'|=|\widehat{G'}|=|\psi(\widehat{G})|=|G|$, so we conclude that for all finite quotient groups $G$ of $G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}$
\begin{equation} \label{count} b_{\bK,G,n}(\gamma) = b_{\bL,(\psi^{-1})^*G,n}((\psi^{-1})^*(\gamma)). \end{equation}
Now as a profinite group, $G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}$ can be written as the inverse limit over all its finite quotients, and since all constructions are compatible with these limits, we conclude that the sets
\begin{equation} S_1(n,\gamma):=\{ \fn \in J_{\bK}^+ \, : \, N_{\bK}(\fn)=n \, \mbox{ and } \, \pi_{G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}}(\vartheta_{\bK}(\fn))=\pi_{G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}}(\gamma) \}
\end{equation}
and
\begin{equation} S_2(n,\gamma):=\{\fm \in J_{\bL}^+ \mbox{ with } N_{\bL}(\fm)=n \mbox{ and }\pi_{G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}}(\vartheta_{\bL}(\fm))=\pi_{G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}}((\psi^{-1})^*(\gamma)) \}
\end{equation}
have the same number of elements. We now set $\gamma=\vartheta_{\bK}(\tilde\fn)$ for a given ideal $\tilde\fn \in J_{\bK}^+$ of norm $n$. Since the Artin map $\vartheta_{\bK} \, : \, J_{\bK}^+ \rightarrow G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}$ is injective on ideals that divide $n$, we find that the set $S_1(N_{\bK}(\tilde\fn),\vartheta_{\bK}(\tilde\fn))$ has a unique element. Hence there is also a unique ideal $\fm \in J_{\bL}^+$ with
$$ N_{\bL}(\fm) = N_{\bK}(\tilde\fn) $$
and
\begin{equation} \label{MMM} \pi_{G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}}(\vartheta_{\bL}(\fm)) = \pi_{G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}}((\psi^{-1})^* ( \vartheta_{\bK}(\tilde\fn))). \end{equation} After applying Pontrjagin duality, this becomes exactly statement (\ref{NNN}).
We set $\Psi(\tilde\fn):=\fm$, and this is our desired map. It is multiplicative, since $(\psi^{-1})^*$ and the Artin maps are so.
Finally, \eqref{NNN} is equivalent to (iv) in Theorem \ref{main3} (``Reciprocity isomorphism'') for $\hat \psi = (\psi^{-1})^*$, since the latter statement is clearly equivalent to \eqref{MMM}. \end{proof}
\section{QSM-isomorphism from matching $L$-series: homeomorphism on $X_{\bK}$}
We now proceed to show that $\psi$ also induces a natural map on the whole abelian part $C(X_{\bK}) \rightarrow C(X_{\bL})$, not just on the part $ \psi \, : \, C(G_{\bK}^{\mbox{{\tiny \textup{ab}}}}) \overset{\sim}{\rightarrow} C(G_{\bL}^{\mbox{{\tiny \textup{ab}}}})$ where it is automatically defined (by continuity of $\psi$). We check this on ``finite'' parts of these algebras that exhaust the whole algebra, as in Section \ref{respect}.
\begin{lem} The map $\psi$ extends to an algebra isomorphism
$$ \psi \, : \, C(G_{\bK}^{ab}\times_{\hat\cO_{\bK}^*} \hat \cO_{\bK}) \rightarrow C(G_{\bL}^{ab}\times_{\hat\cO_{\bL}^*} \hat \cO_{\bL}) .$$
\end{lem}
\begin{proof} Recall that the map $\psi \, : \, \widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} \widehat G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ induces by duality a group isomorphism
$$ (\psi^{-1})^* \, : \, G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}, $$ and let $\Psi \, : \, J_{\bK}^+ \overset{\sim}{\rightarrow} J_{\bL}^+$ denote the compatible isomorphism of semigroups of ideals introduced in the previous section.
Recall from Section \ref{respect} how we have decomposed the algebra $C(X_{\bK})$ into pieces $C(X_{\bK,n})$, where we now assume $n$ is an integer. We can then define a map
$$ \psi_n \, : \, C(X_{\bK,n}) \rightarrow C(X_{\bL,n}) $$
as the closure of the map given by $$f_{\chi,\fm}\mapsto f_{\psi(\chi),\Psi(\fm)},$$ where $f_{\chi,\fm}$ are the generators of the algebra $C(X_{\bK,n})$ given in Lemma \ref{gengen}. Recall from the previous section that if $\chi \in \widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$ has conductor coprime to $n$, so has $\psi(\chi)$, so the map $\psi_n$ is well-defined. The map is a vector space isomorphism by construction, since both $\psi$ and $\Psi$ are bijective.
By taking direct limits (the maps of algebras are compatible with the divisibility relation), we arrive at a topological vector space isomorphism
$$ \psi = \lim_{{\longrightarrow}\atop{n}} \psi_n \, : \, C(X_{\bK}) \overset{\sim}{\rightarrow} C(X_{\bL}). $$
To see that the map $\psi$ is an algebra homomorphism, we need to check it is compatible with multiplication: this will follow from the compatibility of $\Psi$ with the Artin map, which implies that the function $\psi(f_{\chi,\fm})$ is given by a pullback. Indeed, for $x = \Psi(\fm') \ast [(\gamma',\rho')] \in X_{\bL,n}^1$ with $\gamma' \in G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}, \rho' \in \hat\cO_{\bL,n}^*$ and $\fm' \in J_{\bK,n}^+$, we find that
\begin{eqnarray*} \psi(f_{\chi,\fm})(x) &=& f_{\psi(\chi),\Psi(\fm)}(\Psi(\fm') \ast [(\gamma',\rho')]) \\ &=& \delta_{\Psi(\fm),\Psi(\fm')} \psi(\chi)(\vartheta_{\bL}(\Psi(\fm))^{-1}) \psi(\chi)(\gamma'), \end{eqnarray*}
which, by the compatibility of $\Psi$ with the reciprocity map (Equation \eqref{NNN}), is
$$ = \delta_{\fm,\fm'} \chi(\vartheta_{\bK}(\fm)^{-1}) \chi(\psi^*(\gamma')) = (\psi^{-1})^* f_{\chi,\fm}(x). $$
Hence if $\chi$ and $\chi'$ are two characters in $\widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$, and $\fm, \fm'$ are two ideals in $J_{\bK,n}^+$ for $n $ sufficiently large, we find
$$ \psi(f_{\chi,\fm} \cdot f_{\chi',\fm'}) = (\psi^{-1})^* \left( f_{\chi,\fm} \cdot f_{\chi',\fm'} \right)= (\psi^{-1})^* \left(f_{\chi,\fm}\right) \cdot (\psi^{-1})^* \left( f_{\chi',\fm'} \right) = \psi(f_{\chi,\fm}) \cdot \psi(f_{\chi',\fm'}), $$
which implies that $\psi$ is multiplicative.
\end{proof}
\section{QSM-isomorphism from matching $L$-series: end of proof}
\begin{thm}\label{matchLiso}
Let $\bK$ and $\bL$ denote two number fields. Suppose $\psi$ is a group isomorphism
$$ \psi \, : \, \widehat{G}^{\mbox{{\tiny \textup{ab}}}}_{\bK} \overset{\sim}{\rightarrow} \widehat{G}^{\mbox{{\tiny \textup{ab}}}}_{\bL}$$
that induces an identity of the respective $L$-functions
$$ L_{\bK}(\chi,s) = L_{\bL}(\psi(\chi),s). $$
Then there is a dagger isomorphism of QSM-systems
$\varphi: (A_{\bK},\sigma_{\bK}) \to (A_{\bL}, \sigma_{\bL})$.
\end{thm}
\begin{proof}
The maps $\psi \colon X_{\bK} \rightarrow X_{\bL}$ and $\mu_{\fn} \mapsto \mu_{\Psi(\fn)}$ induce an isomorphism
$$ \varphi: A^{\dagger}_{\bK} \rightarrow A^{\dagger}_{\bL}, $$
which extends to a $C^*$-algebra isomorphism between $A_{\bK}$ and $A_{\bL}$.
It remains to verify that this map is indeed a QSM-isomorphism, i.e., that it commutes with time evolution. On the abelian part, there is nothing to verify, since it is stable under the time evolution. On the semigroup part, it is a simple consequence of the fact that $\Psi$ preserves norms:
$$ N_{\bL}(\Psi(\fn)) = N_{\bK}(\fn), $$
so that, on the one hand
$$\sigma_{\bL,t}(\varphi(\mu_{\fn})) = N_{\bL}(\Psi(\fn))^{it} \mu_{\Psi(\fn)}, $$ and on the other hand,
$$\varphi(\sigma_{\bK,t}(\mu_{\fn})) = \varphi( N_{\bK}(\fn)^{it} \mu_{\fn}) = N_{\bK}(\fn)^{it} \mu_{\Psi(\fn)}.$$
This finishes the proof that $$\sigma_{\bL,t} \circ \varphi = \varphi \circ \sigma_{{\bK},t}.$$
\end{proof}
\begin{remark}
As quoted in the introduction, in \cite{Riem}, it was shown that an equality of infinitely many Dirichlet series associated to a map between closed Riemannian manifolds is equivalent to this map being an isometry. In the same reference, it is then shown how to use this theorem to define a distance between closed Riemannian manifolds, as the infimum of a usual distance between complex functions. With number fields, we are now in a very analogous situation, in that we characterize number fields by an equality of Dirichlet series. One might use this to define a distance on the set of all number fields up to isomorphism. It then remains to investigate whether this (necessarily discrete) distance on a countable set has an interesting completion (much like passing from $\Q$ to $\R$): are there interesting `limits' of number fields? Also, metrizing abstract number fields might be useful to our understanding of ``arithmetic statistics''--- the distribution of invariants over number fields (compare \cite{Venk}). \end{remark}
\section{Proof of Theorem \ref{main3}}
We now show that reciprocity isomorphism (iv) implies L-isomorphism (iii). Since, obviously, field isomorphism (i) implies reciprocity isomorphism (iv), this will finish the proof of all main theorems from the introduction.
The condition of compatibility with Artin maps at finite level can be rephrased as follows: for any $\fn$ dividing an integer $n$, we have that $$\pi_{G_{\bK,n}^{\mbox{{\tiny \textup{ab}}}}} (\hat \psi(\vartheta_{\bK}(\fn))) = \pi_{G_{\bL,n}^{\mbox{{\tiny \textup{ab}}}}}(\vartheta_{\bL}(\Psi(\fn))), $$ to which we can apply Pontrjagin duality to find that
$$ \chi(\vartheta_{\bK}(\fn)) = \psi(\chi)(\vartheta_{\bL}(\Psi(\fn))), $$
for all characters $\chi$ whose conductor is coprime to $n$. Here, we define the map $\psi$ by
$$ \psi=(\hat\psi^{-1})^* \colon \widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} \widehat G_{\bL}^{\mbox{{\tiny \textup{ab}}}}.
$$
Let $\chi \in \widehat G_{\bK}^{\mbox{{\tiny \textup{ab}}}}$. We prove the theorem by performing a change in summation $\fm = \Psi(\fn)$ in the $L$-series as follows (using that norms are preserved, and Artin maps intertwined):
$$ L_{\bK}(\chi,s) = \sum_{\fn \in J_{\bK}^+} \frac{\chi(\vartheta_{\bK}(\fn))}{N_{\bK}(\fn)^s} = \sum_{\fm \in J_{\bL}^+} \frac{\psi(\chi)(\vartheta_{\bL}(\fm))}{N_{\bL}(\fm)^s} = L_{\bL} (\psi(\chi),s). $$
(Recall that in this definition, we have set $\chi(\vartheta_{\bK}(\fn))=0$ as soon as $\fn$ is not coprime to the conductor of $\chi$.)
\begin{remark}
In Uchida's proof of the function field case of the Neukirch-Uchida theorem (\cite{U}), the construction of a multiplicative map of global function fields $(\bK^*,\times) \overset{\sim}{\rightarrow} (\bL^*,\times)$ is based on the existence of topological group isomorphisms of the ideles $ \Psi \, : \, \A_{\bK}^* \overset{\sim}{\rightarrow} \A_{\bL}^*$ and of the abelianized Galois groups $ \hat\psi \, : \, G_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} G_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ which are compatible with the Artin maps, using that in a function field $\bK$, the group $\bK^*$ is the kernel of the Artin map (which is not surjective in this case). The conditions that go into this proof are a bit similar to the ones in Theorem \ref{main3}. Our theorem shows that similar conditions imply the same result for number fields as for function fields, albeit with a rather different proof.
\end{remark}
\begin{remark} Around 1992, Dinakar Ramakrishnan asked whether isomorphism between two number fields $\bK$ and $\bL$ is equivalent to the existence of an isomorphism $\alpha \colon \A_{\bK} \overset{\sim}{\rightarrow} \A_{\bL}$ of their respective adele rings and an isomorphism $\omega \colon W_{\bK}^{\mbox{{\tiny \textup{ab}}}} \overset{\sim}{\rightarrow} W_{\bL}^{\mbox{{\tiny \textup{ab}}}}$ of the abelianizations of their Weil groups. If these two isomorphisms are compatible with reciprocity in the sense that the following diagram commutes
$$ \xymatrix{ \A_{\bK}^* \ar@{->}[r]^{\alpha} & \A_{\bL}^* \\ W_{\bK}^{\mbox{{\tiny \textup{ab}}}} \ar@{->}[r]^{\omega} \ar@{->}[u] & W_{\bL}^{\mbox{{\tiny \textup{ab}}}} \ar@{->}[u] },$$
then their kernels are isomorphic, so $\alpha$ restricts to an isomorphism $\bK^* \overset{\sim}{\rightarrow} \bL^*$, which, extended by $0 \mapsto 0$, gives a field isomorphism of $\bK$ and $\bL$ (the additivity is automatic from the embedding into the adele rings). The question remains whether the same holds without assuming compatibility of the maps via reciprocity.
\end{remark}
\section{Relaxing the conditions on $L$-series}
\begin{se}
One may now wonder whether condition (iii) (L-isomorphism) of Theorem \ref{main2} can be weakened. For example, is it possible to restrict to characters of fixed type? At least for rational characters of order two (i.e., arising from quadratic extensions by the square root of a rational number), this is not the case, as the following proposition shows.
\end{se}
\begin{prop}
Suppose $\bK$ and $\bL$ are number fields with the same Dedekind zeta function. Then for any quadratic character $\chi$ whose conductor is a rational number that is a non-square in both $\bK$ and $\bL$, we have an equality of $L$-series $L_{\bK}(\chi,s)=L_{\bL}(\chi,s)$. \end{prop}
\begin{proof}
We have \begin{equation} \label{1} \zeta_{\bK}(s) = \zeta_{\bL}(s) \end{equation}
This says that $\bK$ and $\bL$ are arithmetically equivalent, which we can express in group theoretical terms by Ga{\ss}mann's criterion (\cite{Perlis1}) as follows: let $\bN$ be Galois over $\Q$ containing $\bK$ and $\bL$; then $\Gal(\bN/\bK)$ and $\Gal(\bN/\bL)$ intersect all conjugacy classes in $\Gal(\bN/\Q)$ in the same number of elements.
Let $\bM=\Q(\sqrt{d})$ for a rational non-square $d$. It is easy to see from Ga{\ss}mann's criterion for arithmetic equivalence that then, the composita $\bK \bM$ and $\bL \bM$ are also arithmetically equivalent (cf.\ e.g.\ Uchida \cite{U2}, Lemma 1): choose $\bN$ so it also contains $\bM$, and verify that $\Gal(\bN/\bK\bM)$ and $\Gal(\bN/\bL\bM)$ intersect all conjugacy classes in $\Gal(\bN/\Q)$ in the same number of elements.
We conclude that the zeta functions of $\bK \bM = \bK(\sqrt{d})$ and $\bL\bM=\bL(\sqrt{d})$ are equal:
\begin{equation} \label{2} \zeta_{\bK\bM}(s) = \zeta_{\bL\bM}(s) \end{equation}
Let $\chi$ be the quadratic character that belongs to $d$. By Artin factorization, we can write \begin{equation} \label{3} \zeta_{\bK \bM}(s) = \zeta_{\bK}(s) \cdot L_{\bK}(\chi,s) \mbox{ and } \zeta_{\bL \bM}(s) = \zeta_{\bL}(s) \cdot L_{\bL}(\chi,s). \end{equation}
We find the conclusion by combining (\ref{1}), (\ref{2}) and (\ref{3}).
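Explicitly, by (\ref{3}), (\ref{2}), and (\ref{1}),
$$ L_{\bK}(\chi,s) = \frac{\zeta_{\bK\bM}(s)}{\zeta_{\bK}(s)} = \frac{\zeta_{\bL\bM}(s)}{\zeta_{\bL}(s)} = L_{\bL}(\chi,s). $$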
\end{proof}
\begin{remark}
We do not know a direct ``analytic'' proof that equality of zeta functions implies equality of all rational quadratic twist $L$-series. As a matter of fact, looked at in a purely analytic way, the result does not appear to be so obvious at all.
\end{remark}
\begin{remark}
Bart de Smit \cite{BDS} has proven that for $\bK$ and $\bL$ to be isomorphic, it suffices to have an equality between the sets of all zeta functions of abelian extensions of $\bK$ and $\bL$, or between the sets of all $L$-series for characters of order $\leq 2$. His method is constructive in the sense that, for given arithmetically equivalent $\bK$ and $\bL$, one may construct a finite set of quadratic characters whose $L$-series have to match for $\bK$ and $\bL$ to be isomorphic.
\end{remark}
\begin{remark}
One may wonder how much information an equality of \emph{sets} of $L$-series with characters encodes about the characters themselves (so not assuming the identification of $L$-series to arise from an isomorphism of abelianized Galois groups). Bart de Smit has constructed an example of two number fields and two characters of \emph{different} order whose $L$-series coincide. Multiplicative relations (more general than equality) between $L$-series on the same number field are discussed in \cite{Artin} (around \emph{Satz} 5).
\end{remark}
\bibliographystyle{amsplain}
\section{Introduction} \label{sec:Introduction}
The InSight spacecraft landed on Elysium Planitia ($4.5^\circ$ N, $135.6^\circ$ E -- \citealp{2020E&SS....701248G}) on 2018 Nov 28, carrying suites both of geophysical \citep{2020NatGe..13..183B} and meteorological instruments \citep{Banfield2019,2020NatGe..13..190B}. Since wind gusts and other atmospheric boundary layer phenomena can perturb the geophysical measurements, particularly the seismic signals, these meteorological instruments provide crucial information for the mission's foci, including an exploration of Mars' interior structure and thermal state and constraining its present-day seismicity. As an added benefit, the meteorological instrumentation, the Auxiliary Payload Sensor Suite (APSS), characterizes active boundary layer processes. \citet{2018SSRv..214..109S} discusses the wide range of meteorological applications and insights that the mission may yield. For instance, wind speeds measured by InSight's Temperature and Winds for InSight (TWINS) instrument may be combined with measurements of the surface temperature (from the RADiometer instrument) and near-surface temperatures (from TWINS) to assess the accuracy and applicability of theoretical predictions of surface layer heat and momentum transport developed for Earth. InSight's cameras \citep{2018SSRv..214..105M}, the Instrument Deployment Camera (IDC) and Instrument Context Camera (ICC), may also reveal active sediment transport, observations of which, when combined with wind speeds from TWINS, can constrain threshold conditions for aeolian activity on Mars \citep{2012Natur.485..339B}.
InSight also promises to help elucidate one of the most dramatic and significant aeolian interactions on Mars, dust devils. These atmospheric apparitions arise when a convective cell draws in surrounding air which, as it conserves vorticity, spins up into a small-scale (10s to 100s of meters) whirlwind that then collects and lifts surficial dust into the atmosphere. Dust devils also frequently rearrange sediment on the martian surface, leaving long and narrow bright or dark tracks in their wakes \citep{2016SSRv..203..143R}. Observations of dust devils on Mars extend back to the Viking missions \citep{1985Sci...230..175T}, and they have since been observed in imagery from almost all landed missions to Mars. Some dust devils have even been large enough to be seen from orbiting spacecraft \citep[\emph{e.g.},][]{2011GeoRL..3824206C}.
As convective vortices, dust devils may register not just in imagery but also in meteorological time-series if a vortex passes over or near the lander -- the convective cells produce short-lived (few seconds), negative pressure excursions, accompanied by perturbations to the observed wind speed and direction \citep{2016Icar..271..326L}. Consequently, studies also going back to Viking have analyzed such meteorological data, identifying hundreds and even thousands of such encounters \citep{2003Icar..163...78R, 1999GeoRL..26.2781M, 1983JGR....8811005R, 2003JGRE..108.5133F, 2002GeoRL..29.2103M, 2016SSRv..203..277L}.
One significant drawback of such studies, though, is that, without simultaneous imagery or pyranometry, judging whether the encountered convective vortex actually carries dust is difficult. Indeed, such dustless vortices are common on both Mars and Earth, and there is no requirement that a vortex lifts dust, even when dust is available. In a terrestrial field experiment, \citet{LORENZ20151} found about 20\% of encountered vortices exhibited signatures consistent with dust loading. \citet{2016Icar..278..180S} reported 245 vortices encountered by the Mars Science Laboratory and found only two with clear signatures of dust-loading.
Moreover, the exact conditions that allow a vortex to lift dust are unclear. Boundary layer fluid mechanics suggests inter-particle cohesion conspires with gravity to give a threshold wind velocity for dust lifting that falls with particle diameter before rising again \citep{1985wagp.book.....G}. However, seminal laboratory simulations described in \citet{2010Icar..206..306N} show instead that the smallest particles are lofted even with very small velocities.
Worse still, exactly how much dust devils contribute to the lifting of dust into the martian atmosphere remains highly uncertain. According to \citet{2016SSRv..203...89F}, dust devils may contribute between 25\% and 75\% of the total dust flux in the martian atmosphere. Since Mars' atmosphere is so thin and provides so little greenhouse warming compared to Earth's, the aerosols suspended in Mars' atmosphere absorb and scatter significant amounts of radiation, contributing perhaps tens of degrees K of warming \citep{2002Icar..157..259S, 2004JGRE..10911006B}. Thus, accurately assessing both the amount of dust lifted by a dust devil and the frequency with which dust devils occur on Mars are critical to understanding Mars' climate and dust cycle.
In this study, we analyze the pressure and wind speed data collected by InSight's APSS both to probe the structures of individual vortices and to estimate their occurrence rates. To assess how frequently the vortices actually carry dust, we also survey the available InSight imagery. We compare our results to other recent studies of the InSight meteorological data \citep{2021Icar..35514119L, Spiga2021}. Our study differs from these previous ones in several ways: more data were made available since those studies were completed (almost 100 sols more), and we employ several novel detection and time-series analysis schemes. These differences produced some results that agree with those previous studies and some that differ. We also compare our occurrence rates to analyses of space-based observations of tracks left in the region around InSight \citep{2016Icar..266..315R, 2020GeoRL..4787234P}, which allows us to assess how frequently vortices leave tracks on the martian surface.
We start by describing our detailed analysis of the meteorological time-series (Section \ref{sec:Meteorological Time-Series Analysis}). This section includes a description of the data themselves (Section \ref{sec:Time-Series Data}), our procedures for detecting the vortices and analyzing their pressure profiles (Section \ref{sec:Searching for Vortex Encounters and Fitting Pressure Profiles}), our procedures for analyzing the wind profiles (Section \ref{sec:Fitting Wind Profiles}), and the resulting vortex statistics (Section \ref{sec:Vortex Statistics}). Finally, we discuss our estimates for the intrinsic vortex occurrence rates (Section \ref{sec:Inferring Areal Occurrence Rate from the Time-Series Analysis}). In the next section, we discuss our analysis of the ICC imagery to infer the intrinsic occurrence rates for dust devils themselves (Section \ref{sec:Image Analysis}). A detailed comparison of our results to previous studies comes next (Section \ref{sec:Discussion}), followed by a discussion of caveats and future work (Section \ref{sec:Conclusions}). Two appendices follow that provide details on the statistics of our vortex detection scheme and on our model for determining the geometry and uncertainties for each encounter between the InSight lander and a vortex from the observed vortex parameters.
\section{Meteorological Time-Series Analysis}
\label{sec:Meteorological Time-Series Analysis}
In this section, we describe our analysis of the pressure and wind speed time-series. We first describe the data themselves. Next, we discuss our search for vortex signals using the pressure time-series and explore its biases and completeness, which \citet{2015JGRE..120..401J} showed are important for inferring the underlying occurrence rates from the observations. Next, we describe how we modeled the pressure and wind speed profiles of individual vortex encounters, which, as we show, allows us to determine not just the observed pressure and wind excursions but also to infer the encounter geometries and intrinsic vortex parameters \citep[\emph{cf.}][]{2016Icar..271..326L}. We also discuss the statistical distributions and correlations of vortex parameters.
\subsection{Time-Series Data}
\label{sec:Time-Series Data}
The pressure measurements from the APSS are taken at $10\, {\rm Hz}$ with a nominal precision of $50\,{\rm mPa\ Hz^{-1/2}}$ or better, much higher frequency and precision than available from some previous Mars landers \citep[\emph{e.g.},][]{2010JGRE..115.0E16E}. As we discuss below, turbulent excursions give an effective scatter in the pressure data between $0.2$ and $0.5\,{\rm Pa}$, depending on ambient conditions. In any case, such specs make APSS ideal for studying turbulent signals in the martian boundary layer \citep{2018SSRv..214..109S}. APSS has measured pressures nearly non-stop since sol 14 of the mission, and for our study, we considered data up through sol 477 of the mission, amounting to almost 82 GB. The data are available from NASA's PDS Atmospheres Node. PDS provides several sets of data files for APSS, and we used the CSV files in the ``data\_calibrated'' folder. These data files are different from the raw data files because they include a temperature-dependent calibration -- see https://atmos.nmsu.edu/PDS/data/PDS4/InSight/ps\_bundle/document/pressure\_processing.pdf for details.
Wind data come from the TWINS instrument, the sensors for which sit on booms located on the InSight platform and facing opposite directions about a meter above the solar panels \citep{2020NatGe..13..190B}. TWINS acquires data at $0.1\, {\rm Hz}$ and $1\, {\rm Hz}$ with an accuracy of $1\, {\rm m\ s^{-1}}$ for wind speed and $22.5^\circ$ for wind direction. In fitting the vortex wind profiles, we do not consider the TWINS wind direction data. Although these data could help us to reconstruct the encounter geometries and determine cyclonicity (clockwise or counter-clockwise rotation), the precision of $22.5^\circ$ is insufficient to provide robust constraints (though the directional data are useful for studying other boundary layer processes). Moreover, our analysis requires only the magnitude of the vortex wind speeds, not the direction. TWINS has also provided a dataset spanning nearly the whole time from sol 14 to 477, amounting to more than 18 GB. For our work here, we focus on the higher time resolution ($1\, {\rm Hz}$) wind data, which are somewhat more limited in extent, often only spanning the mid- to late afternoon for a given sol. The wind measurements involve modeled reconstructions, as described in \citet{Banfield2019}, and we used the CSV files in the ``data\_derived'' folder. A higher resolution dataset ($20\,{\rm Hz}$) is labeled as ``modelevent'' on PDS, but it is more limited in extent. So we opted to use the lower-resolution data. ``Derived'' data involve modeling out instrumental effects to achieve a (presumably) more accurate representation of the wind field; ``Calibrated'' data involve converting the raw instrument measurements to physical quantities. See https://atmos.nmsu.edu/PDS/data/PDS4/InSight/twins\_bundle/document/twinsps\_dp\_sis\_issue10.pdf for more details.
Vortex encounters can also produce excursions in ambient temperature as the warm core passes over the sensor \citep{2016SSRv..203...39M}, and APSS does return air temperature data. However, we do not model these time-series since the temperature data show small or negligible excursions during the encounters \citep{2021Icar..35514119L}. In any case, the temperatures would be expected to simply mirror the pressure excursions \citep{2016Icar..271..326L}.
\subsection{Searching for Vortex Encounters and Fitting Pressure Profiles}
\label{sec:Searching for Vortex Encounters and Fitting Pressure Profiles}
Both the pressure and wind speed data exhibit turbulent excursions that constitute a source of non-Gaussian noise and complicate our search for vortex encounters. However, the pressure data are both more plentiful and less affected by these excursions, so we search for encounters using the pressure time-series. This approach resembles many prior studies, including previous analyses of InSight data \citep{Spiga2021, 2021Icar..35514119L}. Figure \ref{fig:data_conditioning_and_fit} depicts our search process in graphical form, and panel (a) shows the raw pressure time-series for sol 395, a representative sol. The vertical dashed orange lines show the vortex signals, whose detection we describe next. Any time-series analysis scheme will unavoidably involve selection effects that can skew the recovered population of signals \citep{2018Icar..299..166J}, and we explore biases of our detection scheme and how those influence the final recovered population of vortices in the Appendix \ref{sec:Vortex Recovery Statistics}.
To suppress longer-term signals and facilitate detection of the vortices, we apply a mean boxcar filter with a window size $W$ before sifting the data for vortices. Figure \ref{fig:data_conditioning_and_fit}(b) shows the resulting detrended time-series $\Delta P$. Based on the analysis described in the Appendix \ref{sec:Vortex Recovery Statistics}, we chose $W = 3000\,{\rm s}$.
Next, we employ a matched filter approach \citep[][ch.~13]{Press2007} using a normalized Lorentzian profile with a known FWHM, $\Gamma$; that is, we march a Lorentzian profile, point-by-point, across the time series, convolving it with the time-series. Based on our analysis (Appendix \ref{sec:Vortex Recovery Statistics}), we chose $\Gamma = 1\,{\rm s}$. This process produces the equivalent of a spectrum, with large positive spikes when the filter encounters other Lorentzian-like signals. We subtract the median value from this raw spectrum and then normalize it by the standard deviation (as estimated by $1.4826\ \times$ the median absolute deviation -- \citealp{doi:10.1080/01621459.1993.10476408}). We consider peaks rising above the detection threshold of 5 to be possible vortices. Figure \ref{fig:data_conditioning_and_fit}(c) shows this normalized spectrum for the time-series in panel (b), along with the peaks rising above $F \ast P = 5$ (vertical dashed orange lines).
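To make the detection pipeline concrete, the steps above (boxcar detrending, Lorentzian matched filter, MAD-based normalization, thresholding) can be sketched in numpy as follows. The window, FWHM, and threshold defaults follow the text; the edge handling and matched-filter kernel length are our own illustrative choices, not necessarily those of the actual pipeline:

```python
import numpy as np

def detect_vortices(t, p, window=3000.0, fwhm=1.0, thresh=5.0):
    """Return indices of candidate vortex dips in a uniformly sampled pressure series."""
    dt = t[1] - t[0]
    # (1) detrend with a mean boxcar of width `window` seconds (edge-padded,
    # so the smoothed series has the same length as the input)
    w = min(int(round(window / dt)), p.size)
    w -= 1 - w % 2                                    # force odd kernel length
    smooth = np.convolve(np.pad(p, w // 2, mode="edge"),
                         np.ones(w) / w, mode="valid")
    dp = p - smooth
    # (2) matched filter: normalized Lorentzian of FWHM `fwhm` seconds,
    # sign-flipped so that negative pressure excursions give a positive response
    nk = int(round(20.0 * fwhm / dt)) | 1             # odd kernel, ~20 FWHMs wide
    tt = (np.arange(nk) - nk // 2) * dt
    lor = 1.0 / (1.0 + (tt / (fwhm / 2.0)) ** 2)
    spec = np.convolve(-dp, lor / lor.sum(), mode="same")
    # (3) subtract the median, normalize by 1.4826 x MAD, cut at `thresh`
    spec -= np.median(spec)
    spec /= 1.4826 * np.median(np.abs(spec))
    return np.flatnonzero(spec > thresh)
```

Contiguous indices above the threshold belong to a single candidate and would be merged before profile fitting.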
Finally, considering the original, undetrended time-series (\emph{e.g.}, Figure \ref{fig:data_conditioning_and_fit}a), we use the Levenberg-Marquardt algorithm \citep[\emph{cf.}][]{Press2007} to fit the time-series in a window 30-FWHMs wide around each vortex signal. As in previous work \citep[\emph{e.g.},][]{2016JGRE..121.1514K}, we assume the pressure structures of vortices are accurately represented by a steady-state modified Lorentzian profile,
\begin{equation}
\Delta P(r) = -\frac{\Delta P_{\rm act}}{1 + \left( \frac{r}{D_{\rm act}/2} \right)^2}\label{eqn:radial_lorentzian_profile}
\end{equation}
where $r$ is the radial distance of the InSight sensors from the vortex center, $\Delta P_{\rm act}$ is the pressure excursion at the vortex center, and $D_{\rm act}$ is taken as the vortex diameter. As a function of time $t$, $r(t)$ is given by
\begin{equation}
r(t) = \sqrt{b^2 + U^2 \left( t - t_0 \right)^2}\label{eqn:radial_distance}
\end{equation}
where $t_0$ is the time of closest approach. This scheme assumes the vortex travels past the sensor on a linear trajectory with a unidirectional and constant velocity $U$ (at least during the course of a single encounter). The closest approach distance $b$ between the vortex center and the InSight sensors is usually greater than zero. As a result, the minimum observed in the pressure time-series $\Delta P_{\rm obs}$ is not usually as deep as the actual pressure excursion at the vortex center \citep{2018Icar..299..166J, 2019Icar..317..209K}. The pressure time-series for a vortex encounter can therefore be represented by
\begin{equation}
\Delta P(t) = \frac{-\Delta P_{\rm obs}}{1 + \left( \frac{ t - t_0 }{\Gamma_{\rm obs}/2} \right)^2}\label{eqn:Lorentzian_profile}
\end{equation}
where $\Gamma_{\rm obs}$ is the observed profile full-width/half-max (FWHM). With these definitions, $\Gamma_{\rm obs} = U^{-1} \sqrt{D_{\rm act}^2 + \left( 2 b \right)^2}$.
We fit this profile combined with a linear trend, returning best-fit parameters $t_0$, $\Delta P_{\rm obs}$, and $\Gamma_{\rm obs}$, as well as the background slope and intercept. Fitting such a model to the original, undetrended data instead of the detrended time-series avoids the distorting effect of the boxcar filter on the vortex signal while taking into account any background trend. Figure \ref{fig:data_conditioning_and_fit}(d) shows such a fit.
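This profile-plus-trend fit can be sketched as follows, using scipy.optimize.curve\_fit (whose default method is Levenberg-Marquardt) as a stand-in for our implementation; the initial guesses below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def vortex_model(t, t0, dp_obs, gamma_obs, slope, intercept):
    """Lorentzian pressure dip (Eq. 3) superposed on a linear background trend."""
    dip = -dp_obs / (1.0 + ((t - t0) / (gamma_obs / 2.0)) ** 2)
    return dip + slope * t + intercept

def fit_vortex(t, p, t0_guess, gamma_guess=5.0):
    """Fit the dip-plus-trend model; returns best-fit values and 1-sigma errors."""
    p0 = [t0_guess, p.max() - p.min(), gamma_guess, 0.0, np.median(p)]
    popt, pcov = curve_fit(vortex_model, t, p, p0=p0)
    return popt, np.sqrt(np.diag(pcov))
```

Note that $\Gamma_{\rm obs}$ enters only through its square, so a fit may return it with either sign; its absolute value is the physically meaningful width.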
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/data_conditioning_and_fit.png}
\caption{(a) The pressure time-series for sol 395, as blue dots. The vertical orange lines highlight the detected vortex signals. (b) The time-series after application of the mean boxcar filter. Apparent by eye, the scatter in the time-series increases around mid-day. (c) Convolution of the matched filter with the time-series in (b). The horizontal dashed orange line shows the detection threshold of 5. (d) A model fit (solid orange line) to the deepest vortex discovered on sol 395, along with the model fit parameters -- each point's uncertainty is calculated via $1.4826\ \times$ the median absolute deviation in the window centered on that point.}
\label{fig:data_conditioning_and_fit}
\end{figure}
\subsection{Fitting Wind Profiles}
\label{sec:Fitting Wind Profiles}
To assess the intrinsic vortex properties and encounter geometries, we also fit wind speed profiles to the wind speed time-series that coincide in time with the peaks found by our search. The wind speed signal consists both of an ambient wind $U$ and the vortex wind $V(r)$, which is a function of radial distance (and therefore time). For the Vatistas vortex model \citep{1991ExFl...11...73V}, $V(r)$ is given by
\begin{equation}
V(r) = V_{\rm act} \frac{2 \left( \frac{r}{D_{\rm act}/2} \right) }{1 + \left( \frac{r}{D_{\rm act}/2} \right)^2}
\end{equation}
where $V_{\rm act}$ is the tangential wind speed at the vortex diameter. Similar to the pressure signal, a non-zero $b$ means the maximum wind speed encountered $V_{\rm obs}$ is less than the actual maximum at the vortex diameter. We model the observed vortex wind profile as
\begin{equation}
V(t) = V_{\rm obs} \frac{\sqrt{1 + \left( U_1/b \right)^2 \left( t - t_0 \right)^2}}{1 + \left(\frac{t - t_0}{\Gamma_{\rm obs}/2}\right)^2}\label{eqn:wind_profile}
\end{equation}
where $U_1$ is the ambient wind speed before the encounter and which we take as the advective speed for the vortex. Strictly, this model breaks down for $b = 0$, but such an encounter is statistically unlikely \citep{2018Icar..299..166J}. Many of the wind signals exhibit a different ambient wind speed before the encounter ($U_1$) than after the encounter ($U_2$). Thus, the total wind speed observed $W(t)$ involves the vector sum of the ambient wind and vortex wind and is given by
\begin{equation}
W(t) =
\begin{cases}
\sqrt{V^2 + 2 U_1 V \cos \theta + U_1^2}, & \left( t - t_0 \right) \leq 0\\
\sqrt{V^2 + 2 U_2 V \cos \theta + U_2^2}, & \left( t - t_0 \right) > 0
\end{cases} \label{eqn:total_wind_speed}
\end{equation}
where $\cos \theta = b/\sqrt{b^2 + \left( U_{1/2} \right)^2\left( t - t_0 \right)^2}$ and $U_{1/2}$ denotes $U_1$ for $t - t_0 \leq 0$ and $U_2$ otherwise.
We fit the pressure and wind speed profiles for each encounter in two separate steps -- first, the pressure, then the wind speed. In so doing, we hold the $\Gamma_{\rm obs}$- and $t_0$-values fixed from the pressure profile fit. To fit the wind profiles, we estimate $U_1$ and $U_2$ by finding the median wind speed $W(t)$ between $3$ and $5\times\Gamma_{\rm obs}$ before and after the encounter and then hold these values fixed as we fit $V$. Experimentation showed this approach most frequently gave reasonable results, and Figure \ref{fig:vortices_and_windspeed} shows several examples of the profile fits.
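For reference, Equations (\ref{eqn:wind_profile}) and (\ref{eqn:total_wind_speed}) combine into the following wind speed model. This is a sketch with our own variable names; it assumes $b > 0$, since the model breaks down at $b = 0$:

```python
import numpy as np

def total_wind(t, t0, v_obs, gamma_obs, b, u1, u2):
    """Observed wind magnitude W(t): ambient flow plus a Vatistas vortex (b > 0)."""
    dt = t - t0
    u = np.where(dt <= 0.0, u1, u2)       # ambient speed before/after closest approach
    # vortex wind along the chord offset by b (Eq. 5), advected at speed u1
    v = v_obs * np.sqrt(1.0 + (u1 / b) ** 2 * dt ** 2) \
        / (1.0 + (dt / (gamma_obs / 2.0)) ** 2)
    cos_theta = b / np.sqrt(b ** 2 + u ** 2 * dt ** 2)
    # vector sum of ambient and vortex winds (Eq. 6)
    return np.sqrt(v ** 2 + 2.0 * u * v * cos_theta + u ** 2)
```

At closest approach ($t = t_0$) this reduces to $V_{\rm obs} + U_1$, and far from the encounter it tends to the ambient speed.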
As we show in Appendix \ref{sec:Inferring Encounter Geometries from the Pressure and Velocity Profiles}, fitting both $\Delta P_{\rm obs}$ and $V_{\rm obs}$ and assuming a balance between centrifugal and pressure gradient accelerations (i.e., cyclostrophic balance) allows us to estimate $\Delta P_{\rm act}$ and $V_{\rm act}$, along with the encounter distance $b$. To check this approach, we applied these models to many synthetic vortex encounters for a range of encounter geometries, vortex parameters, and time-series noise representative of the observed values. We found that, for encounters with $b \lesssim D_{\rm act}$, we were able to recover the assumed parameters to within 50\% for the majority of cases. For encounters farther than that, the signals were often lost in the noise. We also required that the estimated $V_{\rm obs}$ exceed the scatter in the wind speed data $\sigma$ by a factor of three to ensure a robust detection. Future work should explore more robust approaches. In what follows, we initially retain all vortices detected, regardless of their best-fit $b$-values, since the best-fit $\Delta P_{\rm obs}$- and $\Gamma_{\rm obs}$ are independent of $b$, but for results later on that depend on $b$, we dropped those vortices with $b > D_{\rm act}$ and $V_{\rm obs}/\sigma < 3$, leaving 88\ vortices. That transition is clearly indicated in the narrative below.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{figures/vortices_and_windspeed.png}
\caption{Pressure data (blue dots) and models (blue lines) and horizontal windspeed data (orange dots) and models (orange lines). \label{fig:vortices_and_windspeed}}
\end{figure}
\subsection{Vortex Statistics}
\label{sec:Vortex Statistics}
Figure \ref{fig:DeltaPobs_vs_Gammaobs}(a) shows our collection of best-fit $\Gamma_{\rm obs}$- and $\Delta P_{\rm obs}$-values, along with their respective cumulative histograms. Inspecting an initial tranche of detections by hand, we found that vortices with best-fit $\Gamma_{\rm obs} > 100\,{\rm s}$ and/or $\Delta P_{\rm obs} < 0.1\,{\rm Pa}$ tended not to resemble true vortices but instead appeared to be incoherent pressure excursions, so we dropped 44\ of these initial detections, leaving the 990\ vortex signals depicted in Figure \ref{fig:DeltaPobs_vs_Gammaobs}. The largest $\Delta P_{\rm obs}$-value we found was $\left( 8.9 \pm 0.2 \right)\,{\rm Pa}$\ on sol 65, which seems to correspond to the deepest vortex reported in \citet{Spiga2021}. The longest-duration vortex occurred on sol 20\ and lasted $\left( 99 \pm 3 \right)\,{\rm s}$. The median values are $\Gamma_{\rm obs} = \left( 9.3 \pm 0.2 \right)\,{\rm s}$ and $\Delta P_{\rm obs} = \left( 1.13 \pm 0.03 \right)\,{\rm Pa}$ (indicated by the dashed, orange lines in Figure \ref{fig:DeltaPobs_vs_Gammaobs}). As evident in previous analyses of Mars lander pressure time-series \citep[\emph{e.g.},][]{2010JGRE..115.0E16E}, there is a marked absence of long-duration/deep (i.e., large $\Delta P_{\rm obs}$) vortices. This absence simply reflects the miss distance effect: most encounters between the barometer and vortex occur some distance from the vortex center ($b > 0$ as in Equation \ref{eqn:radial_distance}), where the pressure profile is more shallow and of longer duration \citep{2018Icar..299..166J}.
The flattening of the cumulative histogram for $\Delta P_{\rm obs}$ (Figure \ref{fig:DeltaPobs_vs_Gammaobs}c) near $1.1\,{\rm Pa}$ indicates a decline in the number of detected vortices below that value. This decline occurs, at least in part, because of difficulty detecting these more shallow signals against noise \citep{2018Icar..299..166J}. Given the possible strong dependence of dust-lifting on $\Delta P$, the exact form of the histogram of $\Delta P_{\rm obs}$-values is critical for evaluating the population's atmospheric influence. We fit a power-law to the cumulative histogram with an exponent $\gamma = -2.39\pm0.02$, which indicates the differential histogram has an exponent $\approx -3.39$. This exponent is reasonably consistent with the values reported in \citet{Spiga2021} and \citet{2021Icar..35514119L} and with the value reported in \citet{2018Icar..299..166J} for the vortices detected by the Phoenix Mission. (On a related note, since the best-fit exponent for a differential histogram can depend on the chosen binning, we suggest such analyses use cumulative histograms or a data-informed scheme for binning -- \citealp{2016SSRv..203..277L}.)
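The cumulative power-law fit can be sketched as a simple log-log regression (function name is ours; note that a cumulative slope of $\gamma$ corresponds to a differential exponent of $\gamma - 1$):

```python
import numpy as np

def cumulative_powerlaw_slope(values, xmin):
    """Least-squares slope of the cumulative histogram N(>=x) on a
    log-log scale, fit over values >= xmin."""
    x = np.sort(np.asarray(values, dtype=float))
    x = x[x >= xmin]
    n = np.arange(x.size, 0, -1)  # N(>= x_i) for ascending x
    slope, _ = np.polyfit(np.log10(x), np.log10(n), 1)
    return slope
```

Because it avoids binning altogether, this cumulative fit sidesteps the binning sensitivity noted above for differential histograms.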
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/DeltaPobs_vs_Gammaobs.png}
\caption{(a) The best-fit $\Delta P_{\rm obs}$- and $\Gamma_{\rm obs}$-values (blue dots). Representative error bars are shown as orange crosses. (b) Cumulative histogram of $\Gamma_{\rm obs}$-values, along with the median value ($\Gamma_{\rm obs} = \left( 9.3 \pm 0.2 \right)\,{\rm s}$) shown by the dashed, orange line. (c) Cumulative histogram of $\Delta P_{\rm obs}$-values, along with the median value ($\Delta P_{\rm obs} = \left( 1.13 \pm 0.03 \right)\,{\rm Pa}$) shown by the dashed, orange line. The dash-dotted green line shows a power-law fit to the histogram, with $N \propto \Delta P_{\rm obs}^{-2.39\pm0.02}$.}
\label{fig:DeltaPobs_vs_Gammaobs}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/sol_and_t0_histograms.png}
\caption{(a) The number of vortices during each sol $N$ (blue dots) along with the values for $L_{\rm s}$ (dashed, grey lines). (Northern spring starts at $L_{\rm s} = 90^\circ$.) The orange histogram shows the number of vortices per sol for bins 38.5 sols wide. The grey, dashed box shows the bin affected by solar conjunction when data were not available between sols 270 and 283. (b) The number of vortices per hour of local true solar time (LTST). Orange lines in both panels show Poisson error bars.}
\label{fig:sol_and_t0_histograms}
\end{figure}
The vortices also exhibit sol-to-sol and hour-to-hour variation, as illustrated in Figure \ref{fig:sol_and_t0_histograms}. Panel (a) shows that the maximum daily number of vortices occurred on sol 204 of the mission, in the middle of northern spring, while the next maximum occurs on sol 300, near the beginning of northern summer. The dip around sol 270 (shown with a grey, dashed box) is artificial because it corresponds to solar conjunction, when data are not available. Although \citet{Spiga2021} used a different procedure, that study found a broadly similar pattern -- a dip in the occurrence rate centered on sol 100, with an increase going into sol 200 and beyond. Regarding the hour-to-hour variations, panel (b) shows that the maximum occurs between 12:00 and 13:00 LTST, with a nearly symmetric decline to either side. For this calculation, we totaled the number of hours (and fractions thereof) during which pressure time-series were reported and then divided the number of vortices encountered during each of those hours over all sols by that total. These results contrast somewhat with those of \citet{Spiga2021}, which found a peak in vortex occurrence between 11:00 and 12:00 LTST.
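The per-hour normalization described above amounts to the following sketch (names are ours):

```python
import numpy as np

def hourly_encounter_rate(encounter_ltst, coverage_hours):
    """Encounters per observed hour in each 1-hr LTST bin.

    encounter_ltst : LTST of each encounter (hours, 0-24)
    coverage_hours : total hours of pressure coverage per bin (24 values)
    """
    counts, _ = np.histogram(encounter_ltst, bins=np.arange(25))
    return counts / np.asarray(coverage_hours, dtype=float)
```

Normalizing by coverage rather than raw counts avoids mistaking gaps in data collection for lulls in vortex activity.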
The $\Gamma_{\rm obs}$- and $\Delta P_{\rm obs}$-values exhibit trends as well. Figure \ref{fig:Gammaobs_DeltaPobs_vs_TOD_and_sol}(c) shows that, binned by the hour, the median value of $\Gamma_{\rm obs}$ steadily increases from early morning to late afternoon. Figure \ref{fig:Gammaobs_DeltaPobs_vs_TOD_and_sol}(d) shows that the median value of $\Delta P_{\rm obs}$ peaks around 12:00 LTST at $\left( 1.3\pm0.05 \right)\,{\rm Pa}$, with minimum values at either end of about $\left( 0.7\pm 0.1 \right)\,{\rm Pa}$ (where the uncertainties come from the error of the median in the corresponding bin). However, the distributions of both quantities also involve considerable scatter. The values binned by sol (Figures \ref{fig:Gammaobs_DeltaPobs_vs_TOD_and_sol}a and b) show no obvious trends.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/Gammaobs_DeltaPobs_vs_TOD_and_sol.png}
\caption{(a)/(b) Distributions of $\Gamma_{\rm obs}$ and $\Delta P_{\rm obs}$ by sol. (c)/(d) Distributions of $\Gamma_{\rm obs}$ and $\Delta P_{\rm obs}$ by time-of-day $t_0$. The orange lines show the median values from binning by hour.}
\label{fig:Gammaobs_DeltaPobs_vs_TOD_and_sol}
\end{figure}
Presumably, these putative hour-to-hour trends reflect the influence of variable ambient conditions, but these data alone are not sufficient to identify which influence dominates: an increase in $\Gamma_{\rm obs}$ could result either from intrinsically larger vortices late in the day or from lower wind speeds $U$ advecting same-size vortices past the sensor. Fortunately, the TWINS wind speed data can shed light on this issue. We estimated the advection speed by taking the median value $U$ between 5- and 3-$\Gamma_{\rm obs}$ before the encounter time $t_0$. This approach returns a wind speed close enough in time to be a plausible estimate of the advection speed but early enough that the vortex did not significantly influence the measurement.
Figure \ref{fig:U1_vs_Gamma_hist} shows the distribution of advective wind speeds associated with the observed vortices; the maximum, median, and minimum values are $19.1\pm2.3$, $7.6\pm1.0$, and $0.5\pm0.2 \,{\rm m\ s^{-1}}$, respectively. Panel (a) suggests an anti-correlation between $U_1$ and $\Gamma_{\rm obs}$, as we might expect if trends in $\Gamma_{\rm obs}$ reflect a change in advection speed rather than simply variations in vortex diameter. A Pearson-r test confirms an anti-correlation, albeit a weak one with $r = -0.07$, at a significance $p < 0.05$. Of course, the fact that $\Gamma_{\rm obs}$ spans two orders of magnitude, while $U_1$ spans only a factor of about three, indicates that variations in vortex diameter play an important role.
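The Pearson-r statistic itself is straightforward to compute (a minimal numpy version; scipy.stats.pearsonr would additionally return the p-value):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    # Covariance normalized by the product of standard deviations
    return np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2))
```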
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/U1_vs_Gamma_hist.png}
\caption{(a) Distribution of vortex-associated wind speeds $U_1$ as a function of the vortex duration $\Gamma_{\rm obs}$. Panel (b) shows a differential (as opposed to cumulative) histogram of $U_1$-values.}
\label{fig:U1_vs_Gamma_hist}
\end{figure}
Next, we couple our pressure profile and wind profile fits (Equations \ref{eqn:Lorentzian_profile} and \ref{eqn:wind_profile}) to infer the actual central pressure excursions $\Delta P_{\rm act}$ and vortex diameters $D_{\rm act}$, as well as the eyewall wind velocities $V_{\rm act}$. The wind speed data showed considerable turbulent excursions not captured by our model, so as we fit the wind speed models, we inflated the data point and model parameter uncertainties by multiplying by the square root of the reduced $\chi^2$-values, effectively imposing $\chi^2 = 1$ \citep[\emph{cf.}][]{Press2007}. We also propagated the uncertainties from the pressure and wind profile fit parameters into uncertainties on the actual parameters, as described in Appendix \ref{sec:Inferring Encounter Geometries from the Pressure and Velocity Profiles}. For these results, we only included encounters for which the inferred $b/D_{\rm act} \leq 1$, $V_{\rm obs}/\sigma \geq 3$, and with well-defined uncertainties on the actual values -- we found 88\ such vortex encounters. As described in Section \ref{sec:Fitting Wind Profiles} above, numerical experimentation with synthetic datasets bolstered this approach.
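The uncertainty inflation described above amounts to a one-line rescaling (a sketch; names are ours):

```python
import numpy as np

def inflate_uncertainties(param_sigmas, chi2, n_data, n_params):
    """Scale parameter uncertainties by sqrt(reduced chi^2), which
    effectively forces the reduced chi^2 of the fit to unity
    (cf. Press et al. 2007)."""
    dof = n_data - n_params           # degrees of freedom of the fit
    reduced_chi2 = chi2 / dof
    return np.asarray(param_sigmas, dtype=float) * np.sqrt(reduced_chi2)
```

This is a common pragmatic remedy when the formal uncertainties underestimate un-modeled scatter, such as the turbulent excursions in the wind data.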
The best-fit (\emph{i.e.}, minimum $\chi^2$) wind profile models are illustrated for the three representative vortex encounters in Figure \ref{fig:vortices_and_windspeed}. As discussed in Appendix \ref{sec:Inferring Encounter Geometries from the Pressure and Velocity Profiles}, $\Delta P_{\rm obs}$ and $V_{\rm obs}$, along with the assumption of cyclostrophic balance, give $\Delta P_{\rm act}$ and $V_{\rm act}$. Figure \ref{fig:Dact_vs_Pact} shows the distribution of inferred $\Delta P_{\rm act}$- and $D_{\rm act}$-values. The minimum, median, and maximum values for $\Delta P_{\rm act}$ are $1.20\pm0.12$, $3.33\pm0.18$, and $16.6\pm4.5\,{\rm Pa}$, respectively, and for $D_{\rm act}$ are $7.70\pm2.19$, $59.1\pm17.6$, and $517\pm181\,{\rm m}$. Panels (b) and (c) show the cumulative histograms for $\Delta P_{\rm act}$ and $D_{\rm act}$, along with power-law fits. The fit for $\Delta P_{\rm act}$ appears to be significantly better than that for $D_{\rm act}$ and corresponds to a differential histogram with a power-law index $\gamma = -2.28$, significantly shallower than the differential histogram for $\Delta P_{\rm obs}$ (with $\gamma = -3.39$). This result is qualitatively inconsistent with theoretical expectations that the distribution of actual pressure drops is steeper than (i.e., the magnitude of the power-law exponent is larger than) or the same as the distribution of observed drops \citep{2014JAtS...71.4461L, 2018Icar..299..166J, 2019Icar..317..209K}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/Dact_vs_Pact.png}
\caption{(a) Distribution of estimated actual vortex diameters $D_{\rm act}$ vs. the pressure excursions $\Delta P_{\rm act}$ (blue dots). (b) Cumulative histogram of $\Delta P_{\rm act}$-values. (c) Cumulative histogram of $D_{\rm act}$-values.}
\label{fig:Dact_vs_Pact}
\end{figure}
This inconsistency likely arises from our choice to filter out distant encounters ($b/D_{\rm act} > 1$) and low-signal vortices ($V_{\rm obs}/\sigma < 3$). All else being equal, vortices with small $\Delta P_{\rm act}$ are also likely to have small $V_{\rm act}$ and therefore to register with small $V_{\rm obs}$ in an encounter. \citet{2020Icar..33813523J} also suggested that $D_{\rm act} \propto \Delta P_{\rm act}^{1/2}$, meaning vortices with small $\Delta P_{\rm act}$ are more likely to have $b/D_{\rm act} > 1$. The distribution of $D_{\rm act}$ vs.~$\Delta P_{\rm act}$ is statistically consistent with no correlation; however, a strict power-law fit gives $D_{\rm act} \propto \Delta P_{\rm act}^{-0.34}$. This unexpected (and possibly unrealistic) power-law index may arise from small-number statistics or our admittedly conservative choice to filter out distant and low-windspeed encounters, rather than a true anti-correlation between the two parameters. (\citet{2019Icar..317..209K} discusses the expected relationship between $D_{\rm act}$ and $\Delta P_{\rm act}$.)
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/all_actual_values_vs_t0.png}
\caption{Distributions of $\Delta P_{\rm act}$ (a), $D_{\rm act}$ (b), and $V_{\rm act}$ (c) with time-of-day $t_0$ (local true solar time, LTST).}
\label{fig:all_actual_values_vs_t0}
\end{figure}
In spite of these observational biases, we can still infer eyewall velocities $V_{\rm act}$ and look at trends with time-of-day $t_0$. Figure \ref{fig:all_actual_values_vs_t0} shows how $\Delta P_{\rm act}$, $D_{\rm act}$, and $V_{\rm act}$ vary as functions of time-of-day $t_0$. The minimum, median, and maximum values for $V_{\rm act}$ are $9.17\pm0.48$, $15.6\pm1.0$, and $37.4\pm5.1\,{\rm m\ s^{-1}}$. The trends with $t_0$ seem reasonable: many of the largest values of each parameter occur in the late afternoon, although the very largest values do not always occur late in the day.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/cum_hist_Vact.png}
\caption{The fractional cumulative histogram of inferred $V_{\rm act}$-values. For example, about 20\% of the vortices we analyzed have $V_{\rm act} \geq 20\,{\rm m\ s^{-1}}$.}
\label{fig:cum_hist_Vact}
\end{figure}
\subsection{Inferring Areal Occurrence Rate from the Time-Series Analysis}
\label{sec:Inferring Areal Occurrence Rate from the Time-Series Analysis}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/advective_speeds.png}
\caption{The hour-by-hour average wind speeds $\langle U \rangle$ for each sol. Brighter (yellower) colors represent higher speeds.}
\label{fig:advective_speeds}
\end{figure}
If all encountered vortices had the same advective speed $U$, diameter $D_{\rm act}$, and central pressure excursion $\Delta P_{\rm act}$, the areal density of vortices $n$ could be estimated from the number of vortex detections per unit time, $\nu$, via
\begin{equation}
\nu = k/T = n U \left( 2 b_{\rm max} \right) \label{eqn:simple_number_of_encounters}
\end{equation}
where $k$ is the total number of encounters during the observing period $T$ and $b_{\rm max} = \left( \frac{D_{\rm act}}{2} \right) \sqrt{ \frac{\Delta P_{\rm act}}{\Delta P_{\rm min}} - 1 }$ is the maximum radial encounter distance for which a pressure signal will register. $\Delta P_{\rm min}$ is the minimum observed pressure excursion for which a vortex encounter would register \citep{2021Icar..35814200K} and is taken as $0.1\,{\rm Pa}$ (see Section \ref{sec:Vortex Statistics}). For the advective speed, we use not the pre-encounter wind velocities considered in, for example, Equation \ref{eqn:total_wind_speed} but the hour-by-hour average speeds illustrated in Figure \ref{fig:advective_speeds} because we are interested in the advection of a population of vortices, not the individual vortices. In principle, calculating $n$ from the observed encounters requires integrating over the population. Unfortunately, the small number of vortices for which we were able to robustly estimate the actual parameters severely limits our ability to integrate over the population. Moreover, we expect these parameters to vary with time-of-day and season. In lieu of this more complete evaluation, we instead calculate the population average for $b_{\rm max}$ and then use that average value to solve for $n$ from the encounter rates depicted in Figure \ref{fig:sol_and_t0_histograms}. We convert that areal density into an areal occurrence rate $f$ (number per area per time) via the following equation:
\begin{equation}
f = \left( \frac{k}{T} \right) \times \left( \dfrac{1}{2 b_{\rm max} T U } \right).\label{eqn:convert_to_areal_occurrence_rate}
\end{equation}
The quantity $T U$ represents the distance traveled by a vortex during the time observed, and $2 b_{\rm max}$ the width swept out by the vortex within which it would have been detectable.
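Combining Equations \ref{eqn:simple_number_of_encounters} and \ref{eqn:convert_to_areal_occurrence_rate}, the calculation can be sketched as follows (names and unit choices are ours):

```python
import numpy as np

def max_encounter_distance(d_act, dp_act, dp_min=0.1):
    """b_max: farthest miss distance (same units as d_act) at which the
    observed pressure dip still exceeds the threshold dp_min (Pa)."""
    return 0.5 * d_act * np.sqrt(dp_act / dp_min - 1.0)

def areal_occurrence_rate(k, t_obs, u, b_max):
    """Vortices per unit area per unit time: f = (k/T) / (2 b_max T U)."""
    return k / (2.0 * b_max * u * t_obs ** 2)
```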
Figure \ref{fig:areal_occurrence_rate} shows our estimate for $f$ for vortices and compares it to values from other studies.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/areal_occurrence_rate.png}
\caption{The solid blue curve shows the areal occurrence rate for vortices (``vortex rates''). The dashed, orange line shows the allowed maximum rate for dust devils inferred from the ICC image analysis (``max DD rates''). The grey region shows a range of rates reported in \citet{2016Icar..266..315R} and \citet{2020GeoRL..4787234P} (``P20 rates'').}
\label{fig:areal_occurrence_rate}
\end{figure}
\section{Image Analysis}
\label{sec:Image Analysis}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/Example-Image_Insight-Combined-Analysis.png}
\caption{(a) Example of an Instrument Context Camera (ICC) image. (b) Times and sols during which images were collected throughout the InSight mission.}
\label{fig:Example-Image_Insight-Combined-Analysis}
\end{figure}
In this section, we describe our analysis of image data from InSight to search for the passage of dust devils. In short, our survey detected no active dust devils, but we can place an upper limit on their activity based on the geometry, frequency, and timing of observations.
For our visual dust devil survey, we used observations from InSight's ICC because that camera is consistently pointed toward the horizon. ICC has a field-of-view (FOV) of $124^\circ\times124^\circ$ and sits about $0.7\,{\rm m}$ off the surface \citep{2018SSRv..214..105M} -- Figure \ref{fig:Example-Image_Insight-Combined-Analysis}(a) shows an example image. As of this manuscript's preparation, the NASA PDS archive for ICC contains images spanning sols 1 to 476 (although many sols lack images), totaling 1527\ images. For our survey, three authors (JC, MS, RB) visually inspected all available ICC images hosted on the PDS archive (without any contrast enhancement or other aids). The inspections were conducted over the course of several weeks redundantly (i.e., multiple workers inspected the same image) and independently (to avoid biasing the results). Figure \ref{fig:Example-Image_Insight-Combined-Analysis}(b) shows the sols and times when images are available.
Having seen no images of active dust devils, what upper limits can we place on their occurrence? The limits will depend on three aspects of the observations: (1) image contrast considerations (which constrain the maximum optical depth allowed for any unidentified dust devils), (2) the total area surveilled (which constrains the areal density of dust devils), and (3) the time span during which dust devils could have been spotted in each image (which allows us to convert the areal density to an areal occurrence rate).
Regarding image contrast, we invoke the procedure used in \citet{2006JGRE..11112S09G}. In that study, the authors analyzed images from the Mars Exploration Rover Spirit showing active dust devils and compared the values for pixels within dust devils $I_{\rm DD}$ to values for pixels showing the sky $I_{\rm sky}$ and the ground $I_{\rm ground}$. The study argued that a dust devil's optical depth $\tau$ could be estimated as
\begin{equation}
\tau = \ln \left( \dfrac{I_{\rm ground} - I_{\rm sky}}{I_{\rm DD} - I_{\rm sky}} \right)\label{eqn:optical_depth}
\end{equation}
We saw no active dust devils, and so we can use Equation \ref{eqn:optical_depth}, along with the distribution of pixel values from the ICC images, to estimate the maximum allowed optical depth for any dust-lofting vortices. By assumption, a dust devil (large enough to be resolved) could be spotted against the martian sky if it were considerably darker than the typical sky pixel. Inspecting some of the ICC images, we estimated median sky ($I_{\rm sky} \sim 150$) and ground ($I_{\rm ground} \sim 100$) pixel values, as well as the standard deviation for the sky pixels $\sigma_{I_{\rm sky}} \sim 15$. We assume a dust devil must be at least $3\sigma_{I_{\rm sky}}$ darker than $I_{\rm sky}$ (i.e., $I_{\rm sky} - I_{\rm DD} \ge 3\sigma_{I_{\rm sky}}$) in order to be spotted. Using these values, we estimate the maximum optical depth for any dust devils hidden within the images as $\tau \lesssim 0.1$. This approach, of course, assumes a single pixel suffices to recognize a dust devil. However, a multi-pixel dust devil could be identified even with a smaller optical depth since more pixels would correspond to a higher signal-to-noise and therefore require a smaller contrast. Therefore, the estimate here represents a reasonable upper limit.
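With the rough pixel statistics quoted above, the contrast limit can be converted to an optical-depth limit as follows (a sketch using the text's approximate values):

```python
import math

def max_hidden_optical_depth(i_ground, i_sky, sigma_sky, nsigma=3.0):
    """Largest optical depth a single-pixel dust devil could have while
    remaining within nsigma of the sky noise (Eq. for tau)."""
    i_dd = i_sky - nsigma * sigma_sky  # faintest devil pixel still detectable
    return math.log((i_ground - i_sky) / (i_dd - i_sky))

# Rough pixel values estimated from the ICC images (see text)
tau_max = max_hidden_optical_depth(i_ground=100.0, i_sky=150.0, sigma_sky=15.0)
```

For the quoted values this gives $\tau_{\rm max} \approx 0.105$, consistent with the $\tau \lesssim 0.1$ limit above.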
Regarding the area surveilled, the local topography limits the horizon for the Instrument Deployment Camera (IDC) toward the south to less than $2.4\,{\rm km}$ \citep{2020E&SS....701248G}. Since ICC is even closer to the ground ($0.7\,{\rm m}$ vs.~IDC's $1.5\,{\rm m}$), the view is even more limited. However, as an upper limit, we can estimate the area surveilled as $\frac{1}{2} \pi \left(124^\circ/180^\circ\right) \left( 2.4\,{\rm km} \right)^2 \approx 6.2\,{\rm km^2}$. An important limitation of this approach: smaller dust devils cannot be resolved at the same maximum distance as larger devils. However, ICC has an angular resolution of $\alpha = 2\times10^{-3}\,{\rm rad\ px^{-1}}$ \citep{2018SSRv..214..105M}, meaning we could resolve the diameter of the smallest vortex for which we could estimate a diameter ($D_{\rm act} = 7.7\,{\rm m}$) to a distance of about $3.9\,{\rm km}$, farther than the horizon.
Regarding the time span for an observation, the key timescale is the time for a dust devil to cross through the observational area, $T_{\rm cross}$. Given an areal occurrence rate $f$ and an observed area $A_{\rm obs}$, the total number of dust devils observed in one image would be $N = f A_{\rm obs} T_{\rm cross}$ (assuming the lifetime of the dust devils is long compared to $T_{\rm cross}$), which can be re-arranged to calculate $f$ given the other parameters. The maximum crossing time is equal to the time for a dust devil to cross along the horizon, no farther away than $2.4\,{\rm km}$. With an FOV of $124^\circ$, this distance is $\frac{1}{2}\pi \left(124^\circ/180^\circ\right) \left( 2.4\,{\rm km} \right) \approx 2.6\,{\rm km}$. The time to cross this distance will depend on the ambient wind speed (with $U = 8\,{\rm m\ s^{-1}}$, $T_{\rm cross} \approx 5\,{\rm min}$), and so for our calculation, we take the median wind speed during each observational period. As an example, a single image showing no dust devils means $N < 1$, and therefore $f < \left( A_{\rm obs} T_{\rm cross}\right)^{-1} = \left( 6.2\,{\rm km^{2}}\ \times\ 5\,{\rm min}\right)^{-1} \approx 2\,{\rm km^{-2}\ hr^{-1}}$. Each additional image showing no dust devils reduces the allowed areal occurrence rate by another factor of $T_{\rm cross}$, assuming the images are separated in time by at least $T_{\rm cross}$. The upper limits on the areal occurrence rate incorporating the measured advection speeds and sol-by-sol images (Figure \ref{fig:Example-Image_Insight-Combined-Analysis}(b)) are shown in Figure \ref{fig:areal_occurrence_rate}. (Given that most sols have only one or a few images available, the areal occurrence rate binned by sol is much less informative, and so we do not include it.)
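The single-image limit worked out above can be reproduced as follows (a sketch; we adopt the text's $\approx 2.6\,{\rm km}$ crossing distance and treat images separated by more than one crossing time as independent):

```python
import math

FOV = math.radians(124.0)  # ICC field of view
R_KM = 2.4                 # horizon distance limit (km)

def occurrence_upper_limit(u_m_per_s, n_images=1):
    """Upper limit on the dust-devil areal occurrence rate
    (km^-2 hr^-1) from n_images dust-devil-free images."""
    area_km2 = 0.5 * FOV * R_KM ** 2           # surveilled sector area
    cross_km = 0.5 * FOV * R_KM                # ~2.6 km crossing distance
    t_cross_hr = cross_km / (u_m_per_s * 3.6)  # m/s -> km/hr
    return 1.0 / (area_km2 * t_cross_hr * n_images)
```

With $U = 8\,{\rm m\ s^{-1}}$ and a single image, this reproduces the $\approx 2\,{\rm km^{-2}\ hr^{-1}}$ limit quoted above.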
Folding all these considerations together with the number and timing of images reflected in Figure \ref{fig:Example-Image_Insight-Combined-Analysis}(b) and the advective wind speeds allows us to assess hourly upper limits on dust devil areal occurrence rates. Figure \ref{fig:areal_occurrence_rate} shows the result as a dashed orange line. To be clear, our null detection rules out dust devils within the ICC's field-of-view with $\tau > 0.1$ and subtending angles larger than $2\times10^{-3}\,{\rm rad}$ (i.e., resolved by at least one pixel). Dust devils appearing within the available images with both a greater $\tau$ and a significantly larger angular diameter likely would have been spotted.
These results comport with a recent study of the same dataset \citep{2021Icar..36414468L}. In that study, the authors conducted injection-recovery experiments with the images and ruled out dust devils with optical depths greater than 1\% subtending an $8\times16$-pixel rectangle. The same study considered engineering data from InSight's solar panels to argue that most vortices were dustless, as we corroborate here.
A more comprehensive assessment based on the lack of imaged dust devils could provide a more detailed estimate of occurrence rates and optical depths. Such an assessment would likely require generating images with (and without) synthetic dust devils and then conducting an analogous survey of those images to assess detectability and robustness. Such an exercise is beyond the scope of this study, however: we are interested only in upper limits, which our survey provides while implicitly including important effects such as image compression (any compression artifacts feed into the distribution of pixel values). We leave this fuller assessment for future work.
\section{Discussion}
\label{sec:Discussion}
\subsection{Inferring Occurrence Rates and Thresholds for Dust Lifting}
\label{sec:Inferring Occurrence Rates and Thresholds for Dust Lifting}
Altogether, these results invite several interesting conclusions which are bolstered by and contrasted with previous studies. The lack of observed dust devils in the ICC imagery indicates that the vortices near InSight are frequently dustless and therefore invisible (at least to the limit of the image contrast). Figure \ref{fig:areal_occurrence_rate} shows the maximum areal occurrence rates for dust devils allowed by the image survey. Comparing to the rates of vortex occurrence suggests no more than 35\% of the encountered vortices could have lofted dust and still not have registered in the images. This rate appears roughly consistent with terrestrial field studies: deploying pressure loggers alongside solar sensors, \citet{LORENZ20151} found that 40\% of vortex events produced no solar attenuation, and only 20\% of events caused dimming greater than about 2\%. Studies on Mars have suggested martian vortices are very often dustless, especially when the boundary layer is shallow, which correlates with less vigorous vortices \citep{2015Icar..249..129M, 2016Icar..278..180S}.
In fact, based on our non-detection of active dust devils and distribution of $V_{\rm act}$-values, we can estimate a minimum wind speed required to loft dust. If only 35\% of vortices lofted dust, Figure \ref{fig:cum_hist_Vact} suggests a threshold of $19\,{\rm m\ s^{-1}}$, corresponding to $\Delta P_{\rm act} \approx 7\,{\rm Pa}$. (N.B. $\Delta P_{\rm act} \geq \Delta P_{\rm obs}$.) Of course, this value is a minimum since an even smaller fraction of dusty vortices would still be consistent with our null detection among the ICC images, but it appears roughly consistent with other work. Based on lab simulations, \citet{2003JGRE..108.5041G} proposed $20-30\,{\rm m\ s^{-1}}$. Results from a study tracking the motions of dust clots within martian dust devils agreed with that estimate \citep{2011GeoRL..3824206C}. Other studies suggest much higher thresholds \citep[\emph{cf.}][]{2006JGRE..11112002C}. We can, of course, also flip the problem around and assume a minimum wind speed for lofting dust and then use Figure \ref{fig:cum_hist_Vact} to infer the fraction of vortices that would be expected to be dust devils.
We can also consider thresholds required to form tracks on the martian surface. As a vortex travels over the surface, it may disrupt the surficial sediment, revealing a brighter or darker surface beneath. Previous studies have used observations either in-situ or from orbit of dust devil tracks to infer the diameters, lifetimes, and occurrence rates of dust devils \citep[\emph{e.g.},][]{2008JGRE..113.7002W}. (However, a vortex may leave a track without lofting significant dust, and not all dust devils leave discernible tracks -- \citealp{2005JGRE..110.6002G}.)
With these caveats in mind, we can once again compare areal occurrence rates to estimate the fractions of vortices and dust devils leaving tracks. \citet{2016Icar..266..315R} and \citet{2020GeoRL..4787234P} both conducted surveys of the region surrounding InSight for dust devil tracks, and Figure \ref{fig:areal_occurrence_rate} shows the range of rates from those studies. (We have excluded the very large rate of $0.68\,{\rm km^{-2}\ sol^{-1}}$ reported in \citet{2020GeoRL..4787234P} as an outlier.) The rates are reported in ${\rm km^{-2}\ sol^{-1}}$, so we multiplied them by 9/24 to convert from per sol to per hour. (Our vortex encounter analysis indicated that vortices are active for about 9 martian hours for each martian sol -- Figure \ref{fig:sol_and_t0_histograms}b.) Comparison to our inferred occurrence rates suggests between 38 and 74\% of vortices leave tracks. This result may mean that between 26 and 62\% have insufficient wind speeds to visibly rearrange the surficial sediment. Looking again at Figure \ref{fig:cum_hist_Vact}, these numbers correspond to $V_{\rm act} = 14$ and $18\,{\rm m\ s^{-1}}$, respectively.
We can also gauge how often a dust devil might leave a trail (at least, in the region surrounding InSight). Around 14:00 LTST, the largest allowed dust devil occurrence rate is $0.01\,{\rm km^{-2}\ hour^{-1}}$. This rate is about 80\% of the minimum track formation rate. As with the comparison to the ICC images, vortex diameter may factor into these considerations: vortices may be too small to leave tracks resolvable by the HiRISE instrument. However, \citet{2020GeoRL..4787234P} reported a few tracks with widths as small as $5\,{\rm m}$ (but none smaller), indicating that even our smallest vortex with $D_{\rm act} = 7.7\,{\rm m}$ could have left a resolvable track. Work on the track dataset continues to determine the distribution of diameters, and so further comparisons must wait.
\subsection{Comparison to Previous Work}
\label{sec:Comparison to Previous Work}
Our results corroborate several findings from previous studies. \citet{2021Icar..35514119L} conducted a survey of InSight meteorological data very similar to this one, recovering 853 events with pressure excursions exceeding $0.8\,{\rm Pa}$ over the first 390 sols of the InSight mission and amounting to 2-3 encounters per sol, similar to our encounter rates (Figure \ref{fig:sol_and_t0_histograms}b). Although that study analyzed the pressure profiles of individual vortex encounters and the ambient wind speeds adjacent to an encounter, it did not model the vortex wind profiles. That study also considered the seismic signals from vortex passes. Unfortunately, \citet{2021Icar..35514119L} did not explicitly estimate an areal occurrence rate for vortices for direct comparison to our results but instead reported the fractional surface area occulted by vortices, $F$. This parameter was estimated by dividing the total duration of encounters by the total duration of data collections during the hours when vortices are active, giving $F \approx 0.07\%$. Performing the same calculation for our detections, we find $F \approx 0.08\%$, indicating good agreement between the two studies. Comparing the two catalogs, we see that \citet{2021Icar..35514119L} found 75\% of our encounters from before sol 390.
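The fractional-area statistic $F$ is simple to reproduce. The sketch below uses an assumed mean encounter duration as an illustrative placeholder; none of the inputs are the actual measured InSight values, only round numbers from the text:

```python
# Sketch of the fractional-area-occulted statistic,
# F = (total time inside vortex encounters) / (total active observing time).
# All numbers below are illustrative placeholders, not the InSight values.
n_encounters = 990           # vortex encounters over the survey
mean_duration_s = 12.0       # assumed mean encounter duration (s)
sols_observed = 477          # sols of pressure data
active_hours_per_sol = 9.0   # hours per sol when vortices are active

encounter_time_s = n_encounters * mean_duration_s
observing_time_s = sols_observed * active_hours_per_sol * 3600.0
F = encounter_time_s / observing_time_s   # dimensionless fraction
```

With these placeholder inputs, $F$ comes out near $0.08\%$, the same order as the values quoted above.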
\citet{Spiga2021}'s survey also resembles this survey but reported a much higher vortex encounter rate: 6046 in the mission's first 400 sols. Comparing detection catalogs, we find that study recovered nearly 90\% of our detections, indicating good agreement for the encounters we found. The reasons that \citet{Spiga2021} found so many more encounters are not entirely clear but probably arise from our different detection schemes. \citet{Spiga2021} fit a straight line to 1000-second windows surrounding each data point in the pressure time-series and then recorded any negative excursions greater than $0.35\,{\rm Pa}$ as encounters. This approach might record any negative pressure excursion, regardless of its duration or time structure, as a vortex encounter, while our approach might filter out some signals that are not sufficiently Lorentz-like. Following a statistical approach similar to ours, \citet{2021Icar..35814200K} determined that the detections in \citet{Spiga2021} imply a vortex occurrence rate of 56 vortices per ${\rm km}^2$, which was described as ``an unprecedented high level''.
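For concreteness, vortex encounters in this literature are commonly modeled as a Lorentzian dip in the pressure time-series. The sketch below fits such a profile to synthetic data; the profile parameters, sampling, and noise level are illustrative assumptions rather than InSight measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(t, p0, dp, t0, gamma):
    """Ambient pressure p0 minus a Lorentzian excursion of depth dp,
    centered at t0, with full width at half-minimum gamma."""
    return p0 - dp / (1.0 + (2.0 * (t - t0) / gamma) ** 2)

# Synthetic, densely sampled 60-s pressure window around one encounter.
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.05)
p = lorentzian_dip(t, 720.0, 1.5, 30.0, 8.0)   # assumed "true" profile (Pa)
p += rng.normal(0.0, 0.05, t.size)             # instrument-like white noise

guess = [p.mean(), 1.0, t[p.argmin()], 5.0]    # crude initial parameters
(p0_fit, dp_fit, t0_fit, gamma_fit), _ = curve_fit(lorentzian_dip, t, p, guess)
```

A detection scheme built on such fits rejects excursions that are not well described by the dip shape, which is one way the two pipelines could diverge.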
It is plausible that some of the disagreement between studies arises from the different detection thresholds. \citet{2021Icar..35514119L} required $\Delta P_{\rm obs} > 0.8\,{\rm Pa}$, while \citet{Spiga2021} required $\Delta P_{\rm obs} > 0.35\,{\rm Pa}$. (Our detection threshold is not quantified in the same way -- see Appendix \ref{sec:Vortex Recovery Statistics}.) For the cumulative histogram of $\Delta P_{\rm obs}$-values, \citet{2021Icar..35514119L} inferred a power-law with an index of about $-2$, nearly consistent with ours (i.e., the number of encounters with a $\Delta P_{\rm obs}$-value or higher drops as $\Delta P_{\rm obs}^{-2}$). \citet{2021Icar..35514119L} reported 853 detections and so would have expected 4460 ($= 853 \times (0.8\,{\rm Pa}/0.35\,{\rm Pa})^2$) total detections with $\Delta P_{\rm obs} \geq 0.35\,{\rm Pa}$, inconsistent with the 6000 detections of \citet{Spiga2021}. \citet{Spiga2021} inferred a similar dependence for the $\Delta P_{\rm obs}$ cumulative histogram, reporting a power-law index of about $-2.5$ for the smallest-$\Delta P_{\rm obs}$ encounters. Taking this index and their total number of detections (6046), \citet{Spiga2021} would have expected 765 ($= 6046 \times \left(0.35\,{\rm Pa}/0.8\,{\rm Pa} \right)^{2.5}$) total detections with $\Delta P_{\rm obs} \geq 0.8\,{\rm Pa}$, not entirely consistent with the 853 detections of \citet{2021Icar..35514119L} -- Poisson statistics suggests disagreement at more than 3-$\sigma$ ($\sqrt{853} \approx 30$). Given their good agreement with \citet{2021Icar..35514119L}, our results also appear inconsistent with those of \citet{Spiga2021}.
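The cross-survey extrapolations above amount to rescaling each survey's cumulative power law between thresholds; a minimal sketch using the counts and indices quoted in the text:

```python
# Scale each survey's cumulative count N(>dP) ~ dP**index between thresholds.
n_lorenz, thr_lorenz, idx_lorenz = 853, 0.8, -2.0    # Lorenz et al. catalog
n_spiga, thr_spiga, idx_spiga = 6046, 0.35, -2.5     # Spiga et al. catalog

# Lorenz catalog extrapolated down to the 0.35 Pa threshold:
expect_at_035 = n_lorenz * (thr_lorenz / thr_spiga) ** (-idx_lorenz)   # ~4460
# Spiga catalog extrapolated up to the 0.8 Pa threshold:
expect_at_080 = n_spiga * (thr_spiga / thr_lorenz) ** (-idx_spiga)     # ~765
# Poisson uncertainty on the Lorenz et al. count:
poisson_sigma = n_lorenz ** 0.5                                        # ~30
```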
The overall encounter rate from \citet{Spiga2021} is about seven times larger than our rate and therefore implies an areal occurrence rate of about $0.28\,{\rm km^{-2}\ hour^{-1}}$. This result suggests no more than 1 out of about 28 vortices (0.28/0.01) is a visually detectable dust devil, which might suggest martian vortices are much less likely to loft dust than terrestrial ones \citep{LORENZ20151}. This areal occurrence rate is also ten times larger than the rate for tracks, $\le 0.03\,{\rm km^{-2}\ hour^{-1}}$.
As measured in the average number of vortices encountered per sol, our results suggest vortices are active at InSight at a level comparable to sites for previous missions. \citet{2010JGRE..115.0E16E} reported 502 vortex encounters by the Phoenix mission, which landed at $68.2^\circ$ N, over 151 sols from $L_{\rm s} = 76^\circ$ to $148^\circ$. The lander encountered about 3 vortices per sol, with seasonally varying mid-day peaks from about 0.2 to 0.8 per hour, rates only slightly larger than the rates we report here. The total duration of encounters normalized to the total observational time suggests a fractional area occulted $F \approx 0.01\%$, smaller than the fractional area estimates in our study or \citet{2021Icar..35514119L}. \citet{2010JGRE..115.0E16E} also conducted an imaging survey at the Phoenix site, imaging 37 unique dust devils; however, the requisite data to convert those detections into an areal occurrence rate are not provided. (An unspecified number of devils are imaged multiple times.) \citet{2019JGRE..124.3442N} identified vortex encounters using three martian years of pressure data from the Mars Science Laboratory (MSL) in the vicinity of Gale Crater, near $5.3^\circ$ S. That study found similar per-sol and per-hour encounter rates to what we report here. Detailed atmospheric modeling allowed a comparison between observed encounter rates and the expected meteorological conditions, corroborating some theoretical expectations \citep{1998JAtS...55.3244R}. \citet{2020Icar..34713814O} conducted a survey of MSL including data from beyond the mission's third year and found a significant increase in dust devil activity which was attributed to higher elevation of the terrain, lower thermal inertia of the environment, and more available dust.
However, cast in terms of the number of vortices with a given $\Delta P_{\rm obs}$-value, the InSight landing site does appear to be significantly more active than the Phoenix site, as suggested by \citet{Spiga2021} and \citet{2021Icar..35514119L}. Extrapolating our $\Delta P_{\rm obs}$ power-law fit (Figure \ref{fig:DeltaPobs_vs_Gammaobs}) and the per-sol number of encounters, we might have expected more than 7,000 encounters with $\Delta P_{\rm obs} > 0.3\,{\rm Pa}$. The power-law fits from \citet{Spiga2021} and \citet{2021Icar..35514119L} give different expectations, but all agree that the number of encounters actually reported is significantly less than expected based on the InSight encounters.
The dust devil areal occurrence rate inferred for the Spirit lander site in \citet{2010JGRE..115.0F02G} appears to be comparable to the rate for vortices we infer for InSight. \citet{2010JGRE..115.0F02G} analyzed images collected over three martian years, netting more than 700 sightings of active dust devils. Normalizing their detections by the imaging area and frequency, they inferred hourly areal occurrence rates which varied over a sol up to $0.05\,{\rm km^{-2}\ hour^{-1}}$ (their Figure 7). Of course, theirs was an imaging survey and ours is a meteorological survey.
\subsection{Why Didn't InSight Image Any Dust Devils?}
\label{sec:Why Didn't InSight Image Any Dust Devils?}
If the vortex occurrence rate at InSight is similar to or even greatly exceeds the rates seen by other Mars landers, why did InSight image no dust devils when those other landers imaged many? Not for lack of trying: \citet{2020NatGe..13..190B} describe a concerted imaging campaign to detect dust devils, as illustrated in Figure \ref{fig:Example-Image_Insight-Combined-Analysis}(b). This imaging campaign resembles campaigns conducted by those other missions, with more than a thousand images collected over hundreds of sols.
Inspection of the hourly wind speed data collected throughout the mission suggests one explanation for the lack of imaged dust devils: the InSight landing site appears to be much windier during the times of day when vortex activity occurs. Figure \ref{fig:U_vs_DeltaP_comparisons}(a) shows the distribution of hourly-averaged wind speeds both during vortex encounters and between 8:00 and 16:00 LTST but during hours when no vortices were encountered. (N.B., these wind speeds differ from the speeds measured immediately before encounters, which were used to estimate vortex diameters and are shown in Figure \ref{fig:U1_vs_Gamma_hist}.) Clearly, the vortex-associated advective speeds skew toward larger values than the winds overall, with an average of $8.3\,{\rm m\ s^{-1}}$.
The seminal terrestrial field studies of \citet{1969JApMe...8...32S} indicated that dust devil frequency often increases for increasing wind speed but then declines again above a certain wind speed. The same trend seems to hold for martian dust devils. Among imaged dust devils for which horizontal speeds could be estimated, \citet{2010JGRE..115.0F02G} found only about two dozen of about 500 total advected faster than $8\,{\rm m\ s^{-1}}$. Though \citet{2010JGRE..115.0E16E} did not directly estimate the advective velocities of imaged dust devils, the hourly-averaged wind speeds measured by Phoenix rarely exceeded $8\, {\rm m\ s^{-1}}$.
Certainly, an increased advection speed would be expected to increase the rate of dust devil encounter since more dust devils would be advected past the camera or meteorological sensor, what has been called the ``advection effect''. Moreover, some non-zero winds are probably necessary to provide the vorticity requisite for dust devil formation \citep{2020Icar..33813523J}, and, in any case, turbulent winds must accompany the convectively unstable conditions that produce dust devils. However, as discussed in \citet{2016SSRv..203..183R}, higher wind speeds may suppress a high near-surface lapse rate and reduce the vigor of convective mixing associated with dust devils. In addition, wind shear could disrupt the dynamical structures in which dust devils are embedded.
Evidence for the suppression of dust devils at high wind speeds appears in Figure \ref{fig:U_vs_DeltaP_comparisons}(a): there are fewer vortex encounters above about $9\,{\rm m\ s^{-1}}$. Moreover, Figure \ref{fig:U_vs_DeltaP_comparisons} shows that the maximum $\Delta P_{\rm obs}$ for vortices that \emph{are} encountered seems to decline for $\langle U \rangle$ exceeding $4\,{\rm m\ s^{-1}}$, qualitatively consistent with the suggestion that high wind speeds disrupt the structure and therefore vortex strength. It is worth noting that the median $\Delta P_{\rm obs}$-values remain roughly constant with $\langle U \rangle$; however, all else equal, we might reasonably expect it is the most vigorous vortices which are dust devils, not necessarily the average vortices. Also, although the number of vortex encounters declines as $\langle U \rangle$ passes $8\, {\rm m\ s^{-1}}$, thereby potentially reducing the width of the $\Delta P_{\rm obs}$ distribution, the systematic decline in maximum $\Delta P_{\rm obs}$ with increasing $\langle U \rangle$ appears well before the maximum in number of vortices binned by $\langle U \rangle$.
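The binned-maximum statistic in Figure \ref{fig:U_vs_DeltaP_comparisons}(b) can be computed as sketched below. The synthetic catalog here is purely illustrative: the functional form tying $\Delta P_{\rm obs}$ to wind speed is an assumed toy model, not the InSight data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical encounter catalog: hourly-averaged advection speed U (m/s)
# and pressure excursion dP (Pa).  The suppression of dP above ~4 m/s is
# an assumed toy model used only to exercise the binning.
U = rng.uniform(0.0, 14.0, 990)
dP = 0.35 * rng.pareto(2.0, 990) * np.exp(-np.clip(U - 4.0, 0.0, None) / 6.0)

edges = np.arange(0.0, 16.0, 2.0)        # 2 m/s bins, as in panel (b)
idx = np.digitize(U, edges) - 1          # bin index for each encounter
max_dP = np.array([dP[idx == i].max() for i in range(len(edges) - 1)])
```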
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/U_vs_DeltaP_comparisons.png}
\caption{(a) The distribution of hourly-averaged wind speeds during the hours when vortices were encountered (solid, blue line) and between 8:00 and 16:00 LTST during hours when they were \emph{not} encountered (dashed, orange line). (b) Vortex $\Delta P_{\rm obs}$-values vs.~their hourly averaged advective wind speeds. The horizontal orange lines show the maximum $\Delta P_{\rm obs}$-value observed for wind speeds binned by $2\, {\rm m\ s^{-1}}$ (the horizontal span of each line shows the binning).}
\label{fig:U_vs_DeltaP_comparisons}
\end{figure}
Ostensibly, these results seem to contradict some of those from \citet{Spiga2021}, who reported a strong positive correlation between wind speed and vortex encounter rate and reasonably invoked the advection effect. That study also explored the dependence of vortex occurrence on meteorological conditions using large eddy simulations (LES) and found an increase in encounter rate with the synthetic vortices from the model but a reduction in the vortex areal occurrence rate. (Thus, they attributed their increased encounter rate to the advection effect.)
However, the wind speeds they reported for vortex encounters were not the hour-by-hour averages we discuss here. Instead, they calculated the average between 11:00 and 14:00 LTST for each day, which gave smaller speeds. In fact, if we conduct the same wind speed averaging, we obtain similar results (smaller overall average wind speeds and a monotonic increase in encounter rate with wind speed).
Why the InSight landing site might have been windier overall than the landing sites for other missions that successfully imaged dust devils is not clear and should be the focus of future studies. Moreover, comprehensive data and studies for other Mars missions are lacking, so we cannot robustly corroborate the possibility that InSight's landing site is indeed windier than other sites. On top of all that, multiple causes may contribute to the suppression of dust devils at InSight. Although numerous nearby dust devil tracks have been imaged from space, it is possible that there is also a lack of liftable dust. In any case, the available data are at least consistent with the idea that dust devil formation was suppressed at InSight by higher wind speeds than have been seen at other landing sites.
\section{Conclusions}
\label{sec:Conclusions}
Our analysis of InSight's APSS pressure and wind speed time-series search has netted 990 encounters with low-pressure, high-wind vortices over the first 477 sols of the mission, on average two encounters per sol (Figure \ref{fig:sol_and_t0_histograms}), in good agreement with some previous studies of InSight data \citep{2021Icar..35514119L} and somewhat inconsistent with others \citep{Spiga2021}. The distribution of observed pressure excursions associated with these vortices also resembles the distributions from other martian vortex studies (Figure \ref{fig:DeltaPobs_vs_Gammaobs}).
Our analysis of wind speeds from the TWINS instrument allowed us both to infer the advection speeds for the vortices and to reconstruct the vortex wind profiles themselves. These advection speeds allowed us to convert encounter rates into the intrinsic vortex occurrence rates (solid, blue line in Figure \ref{fig:areal_occurrence_rate}), and we found reasonable agreement with previous meteorological studies of both the InSight and other lander sites (Section \ref{sec:Discussion}). By leveraging assumptions about the pressure and wind profile shapes similar to previous work \citep{2016Icar..271..326L}, we were able to estimate the encounter distances between the vortex centers and the InSight lander for many of the vortices and back out the maximum wind velocities. Assuming a minimum threshold for dust lifting of about $20\,{\rm m\ s^{-1}}$, we estimated that about 35\% of encountered vortices would have been bona fide dust devils. This result agrees with terrestrial field studies about how often vortices may loft visible dust \citep{LORENZ20151}.
We also surveyed 1577 images (Figure \ref{fig:Example-Image_Insight-Combined-Analysis}) collected by InSight's ICC. Seeing no active dust devils, we were able to put an upper limit on the fraction of vortices that lift significant amounts of dust (dashed, orange line in Figure \ref{fig:areal_occurrence_rate}), no more than 35\% of vortices. It is crucial to note that this value is an upper limit with many limitations and caveats. Future work may revise this result. In any case, comparison of the distribution of wind velocities and the occurrence rates to results from studies of dust devil tracks seen from orbit in the region around InSight \citep{2020GeoRL..4787234P} allowed us also to infer that probably not all track-forming vortices are dust devils, and perhaps no more than 74\% of vortices leave tracks. Consequently, assuming the tangential wind speed is the only important factor in track formation, the minimum speed required may be $14\,{\rm m\ s^{-1}}$ (Section \ref{sec:Discussion}). By exploring the relationships between vortex encounters and parameters with advective wind speeds, we also found evidence that the lack of dust devils imaged by InSight might arise from high wind speeds, although multiple causes may contribute. In addition, we do not suggest high wind speeds at InSight suppressed vortex formation generally, just formation of the most vigorous vortices, the vortices most likely to be dust devils.
As impactful as these results may be, they involve a number of important assumptions and limitations. Perhaps most important, the turbulence of winds at the martian surface introduced considerable correlated noise \citep[\emph{cf.}][]{2018RemS...10...65J} into the TWINS wind speed measurements, frequently obscuring the wind profiles of encountered vortices. Consequently, we limited our inference of vortex encounters to those with encounter distances less than one vortex diameter. This approach limited the number of encounters for which we could estimate intrinsic parameters and probably biased our inferred wind speed distribution toward only the largest and/or most vigorous vortices. Fortunately, time-series analysis techniques to account for such non-white noise exist \citep{hodlr} and should be considered in future work.
The relatively slow sampling rate for TWINS ($1\,{\rm Hz}$) also presented issues. Since vortex encounters often last for only a few seconds (Figures \ref{fig:vortices_and_windspeed} and \ref{fig:DeltaPobs_vs_Gammaobs}), such sampling often only provided a few points during the encounter, challenging robust inference of the profile parameters. Of course, data volume is always an issue with planetary missions, but perhaps future missions that include meteorological instrumentation could consider short, high-resolution monitoring campaigns to more accurately capture vortex behavior during times of sol when they are expected to be most active. Sampling of $10\,{\rm Hz}$ or better would also allow more accurate assessment of other important boundary layer processes, such as turbulent heat and momentum transport \citep{2011RvGeo..49.3005P}.
Standards for reporting vortex analyses and statistics would also significantly facilitate comparison between studies of the same and of different datasets. Such comparisons not only help corroborate results from different studies but could also make more robust possible detections of time-variability in vortex and boundary layer behavior. For instance, in lieu of publishing only summary statistics and histograms of vortex properties, authors should consider providing tables of the detections themselves, including links to, for example, the specific images in which dust devils were detected.
In the future, planetary missions with a focus on or at least capabilities to assess boundary layer phenomena will elucidate these important and crucial processes. Since all surface-atmosphere interactions are mediated through these processes, they play key roles in shaping not just the climate but also the geology of worlds throughout the solar system, even on small bodies with only the barest breath of an atmosphere \citep{2017PNAS..114.2509J}. Fortunately, the increasing number of active and future missions carrying meteorological equipment bodes well for studies of surface-atmosphere interactions and, in particular, convective vortices and dust devils.
\acknowledgments
We acknowledge helpful input from Don Banfield, Matthew Golombek, Ralph Lorenz, Patrick Whelley and two anonymous referees. We also thank the InSight team and NASA PDS for providing access to the data. The data are available from NASA's Atmosphere's PDS Node - \url{https://atmos.nmsu.edu/data_and_services/atmospheres_data/INSIGHT/insight.html}. BJ was supported by a grant from NASA's Solar System Workings program NNH17ZDA001N. JC, MS, and RB were supported by a grant from the Idaho Space Grant Consortium. All results from this study including the analysis codes are available here - \url{https://github.com/BoiseStatePlanetary/Recovering-Martian-Dust-Devil-Population}.
\vspace{5mm}
\software{matplotlib \citep{Hunter:2007}, numpy \citep{harris2020array}, scipy \citep{2020SciPy-NMeth}, statsmodels \citep{seabold2010statsmodels}}
\label{intro}
Analytical treatment of problems having quenched disorder is usually
difficult. There are few models having nontrivial quenched disorder that
can be solved exactly. In this paper, we obtain exact results for the
non-equilibrium properties of the random-field Ising model (RFIM) on the
Bethe lattice. We consider the single-spin-flip Glauber dynamics of the
system at zero temperature, as the external magnetic field is slowly
varied from $-\infty$ to $+\infty$. As the field increases, the
magnetization increases as groups of spins flip up together. This model
has been proposed as a model of the Barkhausen noise by Sethna {\it et
al}~\cite{sethna} (see also ~\cite{PDS}). In this paper, we set up the
exact self-consistent equations satisfied by the generating function of
the distribution of avalanche sizes, and analyze these to determine the
behavior of the avalanche distribution function on the Bethe lattice.
The study of the equilibrium properties of the RFIM has been an important
problem in statistical physics for a long time. In 1975, Imry and Ma
\cite{imryma}, showed that arbitrarily weak disorder destroys long-ranged
ferromagnetic order in dimensions $ d < 2$. The persistence of
ferromagnetism in $ d=2$ was a matter of a long controversy, but has now
been established \cite{imbrie}. A recent review of earlier work on this
model may be found in \cite{natterman}. As far as an exact calculation of
thermodynamic quantities is concerned, there are only a few results. For
example, Bruinsma studied the RFIM on a Bethe lattice in the absence
of an external field and for a bivariate random field
distribution~\cite{bruinsma}. There are no known exact results for the
average free energy or magnetization, for a continuous distribution of
random field, even at zero temperature and in zero applied field.
\begin{figure}
\begin{center}
\leavevmode
\psfig{figure=bark.eps,width=12cm}
\caption{The Hysteresis loop of magnetization $M$ versus the external
field $h$ for RFIM. The zoomed figure shows the small jumps in
magnetization that give rise to the Barkhausen noise}
\label{barkhausen}
\end{center}
\end{figure}
The non-equilibrium properties of the RFIM has attracted a lot of
interest lately, arising from the observation by Sethna {\it et al}
~\cite{sethna}
that its zero-temperature dynamics provides a simple model for the
Barkhausen noise and return point memory. Barkhausen noise is the high
frequency noise generated due to the small jumps in magnetization observed
when ferromagnets are placed in oscillating magnetic fields
[Fig.~\ref{barkhausen}].
Understanding and reduction of this noise is important for the design of
many electronic devices~\cite{sipahi}. Experimentally it is
observed~\cite{McClure,brien,Urbach,cote} that the increase of the
magnetization occurs in bursts that span over two decades of size and the
distribution of burst (avalanche) sizes seems to follow a power law
in this range. Similar avalanche-like relaxational
events are also observed in other systems, for example, the stress-induced
martensitic growth in some alloys~\cite{carrillo}. This power-law
tail in the event-size distribution was interpreted by Cote and
Meisel~\cite{cote} as an
example of {\it self-organized criticality}. But Perkovi\'{c} {\it et al}
~\cite{PDS} have argued that large bursts are exponentially rare, and the
approximate power-law tail of the observed distribution comes from
crossover effects due to nearness of a critical point. Recently Tadi\'{c}
\cite{tadic} has presented some evidence from numerical simulations that
the exponents for avalanche distribution can vary continuously with
disorder. Our results about the behavior of the avalanche distribution
function also relate to this question whether any fine-tuning of
parameters is required to see power-law tails in the avalanche
distribution in the RFIM, and if the exponents can be varied
continuously with disorder.
The advantage of working on the Bethe lattice is that the usual BBGKY
hierarchy of equations for correlation functions closes, and one can hope
to set up exact self-consistent equations for the correlation functions.
The fact that Bethe's self-consistent approximation becomes exact on the
Bethe lattice is useful as it ensures that the approximation will not
violate any general theorems, e.g. the convexity of thermodynamic
functions, sum rules. In the presence of disorder, in spite of the
closure of the BBGKY hierarchy, the Bethe approximation is still very
difficult, as the self-consistent equations become functional equations
for the probability distribution of the effective field. These are not
easy to solve, and available analytical results in this direction are
mostly restricted to one dimension \cite{1dsolns}, or to models with
infinite-ranged interactions \cite{SK}. On the Bethe lattice, for
short-ranged interactions with quenched disorder, e.g. in the prototypical
case of the $\pm J$ random-exchange Ising model, the average free energy
is trivially determined in the high temperature phase, but not in the
low-temperature phase. It has not been possible so far to determine even
the ground-state energy exactly despite several attempts
\cite{sherrington.katsura}.
Calculation of time-dependent or non-equilibrium properties presents its
own difficulties, even in the absence of disorder. Usually, for $d > 1$,
one has to resort to the limit of coordination number becoming large, with
interaction strength scaled suitably with coordination number to give a
nontrivial thermodynamic limit \cite{derrida}. The large-d limit in the
self-consistent field approximation for quantum-mechanical problems is
similar in spirit \cite{larged}.
The RFIM model on a Bethe lattice is special in that the zero-temperature
{\it nonequilibrium} response to a slowly varying magnetic field
can be determined exactly~\cite{DSS}. To be
precise, the average non-equilibrium magnetization in this model can be
determined exactly if the magnetic field is increased very slowly, from
$-\infty$ to $+\infty$, in the limit of zero temperature. It thus
provides a good theoretical model to study the slow relaxation to
equilibrium in glassy systems. The dynamics is governed by the existence
of many metastable states, with large energy barriers separating different
metastable states. We hope that this study of non-equilibrium response
in this model would help in the more general problem of
understanding the statistical mechanics of metastable states in glassy
systems.
A brief summary of our results is as follows. We derive the exact self-consistent
equations for the generating function of the avalanche size
distribution function $Q(x)$
on the Bethe lattice. This is a polynomial equation in $Q(x)$ and $x$,
in which the
coefficients depend on the external field $h$, and the distribution of
the quenched
random fields. We can solve these equations explicitly numerically and
thus determine the
qualitative behavior of the distribution of avalanches for any
distribution of the
quenched random fields. The behavior depends on the coordination
number $z$, and on the
details of the distribution function. We work out the distribution of
avalanches
explicitly for a rectangular distribution of the quenched fields, for
the linear chain
($z = 2$), and the 3-coordinated Bethe lattice. In both cases, one
finds only exponential
decay. We also studied other unimodal continuous distributions, {\it
e.g.} when the random field distribution is Gaussian, or of the form
${\rm Prob}(h_i)=\frac{1}{2\Delta}\,{\rm sech}^2(h_i/\Delta)$, also for
large $z$. We find that, for $z\ge4$,
there is a regime of disorder strengths for which
the magnetization
shows a jump-discontinuity (``first-order transition''), but the
avalanche distribution,
averaged over the hysteresis loop, also shows a power-law tail of the
form $s^{-5/2}$
(``critical fluctuations'').
The paper is organized as follows. In section~\ref{model}, we define the
model precisely. In section~\ref{avalanche}, we briefly recapitulate the
derivation of self-consistent equations for the magnetization in our
model, and then use a similar argument to construct the generating
function for the avalanche distribution for arbitrary distribution of the
quenched random field. We set up a self-consistent equation for the
probability $Q_n$ that an avalanche propagating in a subtree flips exactly
$n$ more spins in the subtree before stopping. The probability
distribution of avalanches is expressed in terms of this generating
function. In section~\ref{RD}, we consider the special case of a
rectangular distribution of the random field. In this case, we explicitly
solve the self-consistent equations for Bethe lattices with coordination
numbers $z=2$ and $3$. However, this case is non-generic. For small
strength of disorder $\Delta$, the magnetization jumps from $-1$ to $+1$
at some value of the field, but for larger disorder, when the system shows
finite avalanches, there is no jump in magnetization and the
distribution function
decays exponentially for large $s$. In
section~\ref{GD}, we analyse the self-consistent equations to determine
the form of the avalanche distribution for some other unimodal
continuous distributions
of the random field. We find that in each case for coordination
number $z\ge 4$, the
magnetization shows a first order jump discontinuity as a
function of the applied field at some field-strength $h_{disc}$, for weak
disorder. Just below $h = h_{disc}$, the avalanche distribution has a
universal $(-3/2)$ power-law tail. Section~\ref{conclusion} contains a
discussion of our results, and some concluding remarks. Some algebraic
details of the analytical solution for the rectangular distribution of
quenched fields are relegated to two appendices.
\section{Definition of the Model}
\label{model}
We consider a uniform Cayley tree of $n$ generations where each
non-boundary site has a coordination number $z$ (see
Fig.~\ref{tree}). The first generation
consists of a single vertex. The $r$-th generation has $z(z-1)^{r-2}$
vertices for $r \geq 2$.
\begin{figure}
\begin{center}
\leavevmode
\psfig{figure=tree.eps,width=12cm}
\caption{A Cayley tree of coordination number 3 and 4 generations.}
\label{tree}
\end{center}
\end{figure}
The RFIM on this graph is defined as follows: At each vertex there is an
Ising spin $s_i=\pm 1$ which interacts with its nearest neighbors through a
ferromagnetic interaction $J$. There are quenched random fields $h_{i}$ at
each site $i$ drawn independently from a continuous distribution $p(h_i)$.
The entire system is placed in an externally applied uniform field $h$.
The Hamiltonian of the system is
\be
H=-J \sum_{<i,j>} s_is_j -\sum_{i}h_is_i -h\sum_{i}s_i \label{m.3}
\ee
We consider the response of this system when the external field $h$ is
slowly increased from $-\infty$ to $+\infty$.
We assume the dynamics to be zero-temperature single-spin-flip
Glauber dynamics, {\it i.e.} a spin flip is allowed only if the process
lowers
energy. We assume that if the spin-flip is allowed, it occurs with a rate
$\Gamma$, which is much larger than the rate at which the magnetic field
$h$ is increased. Thus we assume that all flippable spins relax
instantly, so that the spin $s_i$ is always parallel to the net local
field $\ell_{i}$ at the site:
\be
s_i= {\rm sign}(\ell_{i}) = {\rm sign}( J \sum_{j=1}^{z} s_{j} + h_{i} + h)
\label{m.1}
\ee
We start with $h=-\infty$, when all spins are down and slowly increase
$h$. As we increase $h$, some sites where the quenched random field is
large positive will find the net local field positive, and will flip up.
Flipping a spin makes the local field at neighboring sites increase, and
in turn may cause them to flip. Thus, the spins flip in clusters of
variable sizes. If increasing $h$ by a very small amount causes $s$ spins
to flip up together, we shall call this event an avalanche of size
$s$. As the applied field increases, more and more spins flip up
until eventually all spins are up, and further increase in $h$ has no
effect.
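This zero-temperature dynamics is simple to simulate directly. The Python sketch below is our own illustration, not part of the original analysis: the tree size, the sweep step $dh$, and the rectangular field distribution on $[-\Delta,\Delta]$ are arbitrary choices, boundary spins simply have fewer neighbors, and $dh$ must be small enough that each step seeds at most one avalanche. Spins are relaxed in arbitrary order, which is legitimate because of the abelian property discussed below.

```python
import random

def build_tree(z=3, gens=5):
    """Adjacency lists of a uniform Cayley tree: the root (generation 1)
    has z children; every other non-leaf site has z-1 children."""
    nbrs = [[]]
    frontier = [0]
    for g in range(1, gens):
        nxt = []
        for v in frontier:
            k = z if v == 0 else z - 1
            for _ in range(k):
                nbrs.append([v])
                nbrs[v].append(len(nbrs) - 1)
                nxt.append(len(nbrs) - 1)
        frontier = nxt
    return nbrs

def hysteresis_avalanches(nbrs, J=1.0, delta=2.5, dh=0.01, seed=1):
    """Sweep h upward from the all-down state; return the avalanche sizes."""
    rng = random.Random(seed)
    n = len(nbrs)
    hq = [rng.uniform(-delta, delta) for _ in range(n)]  # quenched fields
    spin = [-1] * n
    sizes = []

    def local_field(i, h):
        return J * sum(spin[j] for j in nbrs[i]) + hq[i] + h

    h = -(len(nbrs[0]) * J + delta)   # low enough that all-down is stable
    while any(s < 0 for s in spin):
        h += dh
        stack = [i for i in range(n) if spin[i] < 0 and local_field(i, h) > 0]
        flipped = 0
        while stack:                  # abelian relaxation: order irrelevant
            i = stack.pop()
            if spin[i] > 0:
                continue
            spin[i] = 1
            flipped += 1
            stack.extend(j for j in nbrs[i]
                         if spin[j] < 0 and local_field(j, h) > 0)
        if flipped:
            sizes.append(flipped)
    return sizes
```

Since every spin flips up exactly once during the sweep, the avalanche sizes must add up to the total number of sites, which is a useful consistency check.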
\section{The Self-Consistent Equations}
\label{avalanche}
The special property of the ferromagnetic RFIM that makes the analytical
treatment possible is this: Suppose we start with $h=-\infty$, and all
spins down at $t=0$. Now we change the field slowly with time, in such a
way that $h(t) \leq h(T)$, for all times $ t <T$. Then the
configuration of spins at the final instant $t =T$ does not depend on the
detailed time dependence of $h(t)$, and is the same for all histories, so
long as the condition $ h(t) \leq h(T)$ for all earlier times is obeyed.
In particular, if the maximum value $h(T)$ of the field was reached at
an earlier
time $t_1$, then the configuration at time $T$ is exactly the same as
that at time
$t_1$. This property is called the return point memory~\cite{sethna}.
We may choose to increase the field suddenly from $-\infty$
to $h(T)$ in a single step. Then, once the field becomes $h=h(T)$, several
spins would have positive local fields. Suppose there are two or more
such flippable sites. Then flipping any one of them up can only
increase the local field at other unstable sites, as all couplings are
ferromagnetic. Thus to reach a stable configuration, all such spins
have to be flipped, and {\it the final stable configuration reached is
the same, and independent of the order in which various spins are
relaxed}. This property will be called the abelian property of
relaxation. Using the symmetry between up and down spins, it is easy
to see that the abelian property also holds whether the new value of the
field $h''$ is greater or less than its initial value $h'$, so long as
one considers the transition from a stable configuration at $h'$ to a
stable configuration at $h''$.
We first briefly recapitulate the argument of our earlier paper
\cite{DSS} which uses the abelian nature of spin-flips to
determine the mean magnetization for any field $h$ in the lower half of
the hysteresis loop by setting up a self-consistent equation.
Since the spins can be relaxed in any order, we relax them in this order:
first all the spins at generation $n$ (the leaf nodes) are relaxed. Then
spins at generation $n-1$ are examined, and if any has a positive local
field, it is flipped. Then we examine the spins at generation $n-2$, and
so on. If any spin is flipped, its descendants are reexamined for possible
flips \cite{foot}. In this process, clearly the flippings of different
spins of the same generation $r$ are independent events.
Suppose we pick a site at random in the tree away from the
boundary, the probability that the local field at this site is
positive, given that exactly $m$ of its neighbors are up, is
precisely the probability that the local field $h_i$ at this site
exceeds $[(z-2m)J-h]$. We denote this probability by $p_{m}(h)$. Clearly,
\be
p_{m}(h)=\int_{(z-2m)J-h}^{\infty} p(h_{i}) dh_{i}
\label{p_m}
\ee
Let $P^{(r)}(h)$ be the probability that a spin on the $(n-r)$-th generation
will be flipped when its parent spin at generation $n-r-1$ is kept down,
the external field is $h$, and each of its descendant spins has been
relaxed. As each of the $z-1$ direct descendants of a spin is
independently up with probability $P^{(r-1)}$, it is straightforward to
write down a recursion relation for $P^{(r)}$ in terms of $P^{(r-1)}$. For
$r \gg 1$, these probabilities tend to a limiting value $P^\star$, which
satisfies the equation \cite{DSS}
\be
P^\star (h)= \sum_{m=0}^{z-1} {z-1 \choose m} [P^\star (h)]^{m}
[1-P^\star (h)]^{z-1-m} ~p_{m}(h)
\label{a.2}
\ee
For the spin at $O$, there are $z$ downward neighbors, and
the probability that it is up is given by
\be
Prob(s_O=+1|~h) = \sum_{m=0}^{z} {z \choose m} [P^\star (h)]^{m}
[1-P^\star (h)]^{z-m} ~p_{m}(h)
\label{a.3}
\ee
Because all spins deep inside the tree are equivalent, $Prob(s_O=+1|~h)$
determines
the average magnetization for all sites deep inside the tree. Using
Eqs.~(\ref{a.2}) and (\ref{a.3}), we can determine the magnetization for any value
of the external field $h$. This determines the lower half of the
hysteresis loop. The upper half is obtained similarly.
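As a concrete illustration (our own, not part of the original derivation), the fixed point of Eq.~(\ref{a.2}) and the magnetization from Eq.~(\ref{a.3}) can be found by direct iteration. The sketch below assumes $z=3$, $J=1$, and the rectangular distribution on $[-\Delta,\Delta]$ used later in Section~\ref{RD}; iterating from $P=0$ selects the smallest root, appropriate for the lower half of the loop.

```python
from math import comb

def p_m(m, h, z=3, J=1.0, delta=2.5):
    """Prob(h_i > (z-2m)J - h) for the rectangular distribution on [-delta, delta]."""
    t = (z - 2 * m) * J - h
    return min(1.0, max(0.0, (delta - t) / (2 * delta)))

def p_star(h, z=3, J=1.0, delta=2.5, iters=200):
    """Fixed point of Eq. (a.2) by direct iteration, starting from P = 0."""
    P = 0.0
    for _ in range(iters):
        P = sum(comb(z - 1, m) * P**m * (1 - P)**(z - 1 - m) * p_m(m, h, z, J, delta)
                for m in range(z))
    return P

def magnetization(h, z=3, J=1.0, delta=2.5):
    """m(h) = 2 Prob(s_O = +1 | h) - 1, from Eq. (a.3)."""
    P = p_star(h, z, J, delta)
    prob_up = sum(comb(z, m) * P**m * (1 - P)**(z - m) * p_m(m, h, z, J, delta)
                  for m in range(z + 1))
    return 2 * prob_up - 1
```

For the rectangular distribution at $h=1$, $\Delta=2.5$, the quadratic term of Eq.~(\ref{a.2}) cancels and the fixed point is exactly $P^\star = 1/2$, which the iteration reproduces.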
Now consider the state of the system at external field $h$, and all the
flippable sites have been flipped. We increase the field by a small
amount $dh$ till one more site becomes unstable. We would like to
calculate the probability that this would cause an `avalanche' of $n$ spin
flips. Since all sites deep inside are equivalent, we may assume the new
susceptible site is the site $O$.
It is easy to see that this avalanche propagation is somewhat like
propagation of infection in the contact process on the Bethe lattice. The
`infection' travels {\it downwards} from the site $O$, which acts as the
initiator of infection. If any site is infected, then it can cause
infection of some of its descendants. If a descendant spin is already
up, it cannot be flipped; such sites act as immune sites for the infection
process. If the descendant spin is down, it can catch infection with a
finite probability. Furthermore, this probability does not depend on
whether the other
`sibling' sites catch infection.
Infections of two or more descendants of
an infected site are uncorrelated events. Thus, we can expect to find the
distribution of avalanches on the Bethe lattice, as for the size
distribution of percolation clusters on a Bethe lattice
\cite{percolation}. However, a precise description in terms of the
contact process is
complicated, as here the infection spreads in a correlated background
of `immune'
(already up ) spins, and the probability that a site catches infection
does depend on the
number of its neighbors that are already up.
\begin{figure}
\begin{center}
\leavevmode
\psfig{figure=subtree.ps}
\caption{A sub-tree $T_X$ formed by $X$ and its descendants. The
sub-tree is rooted at $X$ and $Y$ is the parent spin of $X$.}
\label{sub-tree}
\end{center}
\end{figure}
We start with the initial configuration of all spins down. Now increase
the external field to the value $h$. Consider a site $X$ at some generation $r
>1$ of the Cayley tree [Fig.~\ref{sub-tree}]. We call the subtree
formed by $X$ and its
descendants $T_X$, the subtree rooted at $X$. We keep its parent spin $Y$
at generation $r-1$ down, and relax all the sites in $T_X$ at the
uniform field
$h$. If $X$ is far away from the boundary, the probability that spin
at $X$ is up is $P^\star (h)$.
The conditional probability that spin at a descendant of $X$ is up,
given that the spin at $X$ is down is also $P^\star (h)$. We measure the
response
of $T_X$ to external perturbation by forcibly flipping the spin at $Y$ (
whatever the local field there) and see how many spins in this subtree
flip in response to this perturbation. Let $Q_n$ be the probability that
the spin at $X$ was down when $Y$ was down {\it and} $n$ spins on the
subtree $T_X$ flip up if $S_Y$ is flipped up. Here allowed values of $n$
are $0,1,2,\ldots$. Clearly, we have
\be
P^\star + \sum_{n=0}^{\infty} Q_n =1
\ee
\noi We define
\be
Q(x)= \sum_{n=0}^{\infty} Q_n x^n
\label{def.Q}
\ee
Clearly, $Q(x=0)=Q_0$ and $Q(x=1) = 1-P^\star$. It is straightforward to
write the self-consistent equation for $Q(x)$. Let us first relax all
spins on $T_X$ keeping $X$ and $Y$ down. Let the probability that exactly
$m$ of the descendants of $X$ are turned up in this process be denoted by
$Pr(m)$. Clearly
\be
Pr(m) = {{z-1} \choose m} {P^\star}^m (1 -P^\star)^{z-1-m}
\ee
For a given $m$, the conditional probability that the local field at $X$ is
such that the spin remains down, even if $Y$ is turned up, is $1-p_{m+1}$.
Summing over $m$, and using the expression for $Pr(m)$ above, we get
\be
Q_{0} = \sum_{m=0}^{z-1}{{z-1} \choose m} {P^\star}^m (1
-P^\star)^{z-1-m} [1- p_{m+1}]
\label{Q_0}
\ee
We can write down an expression for $Q_{1}$ similarly. In this case, if
$m$ of the direct descendants of $X$ are up when $Y$ is down, the local
field at all the remaining $z-1-m$ direct descendants must be
such that they remain down even if $X$ is flipped up.
This probability is $ { {z -1 } \choose m } {P^*}^m Q_{0}^{z-1-m}$. The
local quenched field at $X$ must satisfy $ (z-2m)J -h > h_X > (z - 2m -2)J
-h$. The probability for this to occur is $p_{m+1}-p_{m}$.
Hence we get
\be
Q_{1}=\sum_{m=0}^{z-1} (p_{m+1} - p_{m}){{z -1 } \choose m} {P^\star}^m
~Q_{0}^{z-1-m}
\ee
The equations determining $Q_{n}$ for higher $n$ can be written down
similarly. Each
only involves the probabilities $Q_{m}$ with $m < n$ for the descendant
spins. These recursion equations are expressed more simply in terms of the
generating function $Q(x)$. It is easily checked that the self-consistent
equation for $Q(x)$ is
\be
Q(x)= Q(x=0) + x \sum_{m=0}^{z-1} {{z-1} \choose m}(p_{m+1}-p_m)
{P^\star}^m ~Q(x)^{z-1-m}
\label{Q(x)}
\ee
This is a polynomial equation in $Q(x)$ of degree $z-1$, whose
coefficients are functions of $h$ through $P^\star(h)$ and $p_m(h)$. It is
easily checked that for
$x=1$, the ansatz $Q(x=1)=1- P^\star$ satisfies the equation, as it
should.
To determine $Q(x)$ for any given external field $h$, we have to first
solve the self-consistent equation for $P^\star$ [ Eq.~\ref{a.2}]. This then
determines $Q(x=0)$ using Eq.~\ref{Q_0}, and then, given $P^\star$ and
$Q(0)$, we
solve for $Q(x)$ by solving the $(z-1)$-th degree polynomial equation
Eq.~\ref{Q(x)}.
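Numerically, this procedure is straightforward. The sketch below is our own illustration, for $z=3$, $J=1$ and the rectangular distribution on $[-\Delta,\Delta]$ of Section~\ref{RD}: $P^\star$ and then $Q(x)$ are obtained by direct iteration, which away from criticality converges to the physically relevant smallest root. The identity $Q(1)=1-P^\star$ serves as a sanity check.

```python
from math import comb

Z, J, DELTA = 3, 1.0, 2.5

def p_cum(m, h):
    """p_m(h) of Eq. (p_m) for the rectangular distribution on [-DELTA, DELTA]."""
    t = (Z - 2 * m) * J - h
    return min(1.0, max(0.0, (DELTA - t) / (2 * DELTA)))

def p_star(h, iters=500):
    """Smallest fixed point of Eq. (a.2), by iteration from P = 0."""
    P = 0.0
    for _ in range(iters):
        P = sum(comb(Z - 1, m) * P**m * (1 - P)**(Z - 1 - m) * p_cum(m, h)
                for m in range(Z))
    return P

def q_of_x(x, h, iters=2000):
    """Solve the polynomial equation for Q(x) by direct iteration from Q(0)."""
    P = p_star(h)
    Q0 = sum(comb(Z - 1, m) * P**m * (1 - P)**(Z - 1 - m)
             * (1.0 - p_cum(m + 1, h)) for m in range(Z))
    Q = Q0
    for _ in range(iters):
        Q = Q0 + x * sum(comb(Z - 1, m) * (p_cum(m + 1, h) - p_cum(m, h))
                         * P**m * Q**(Z - 1 - m) for m in range(Z))
    return P, Q0, Q
```

At $h=1$, $\Delta=2.5$ one finds $P^\star=1/2$, $Q(0)=0.175$, and the iteration at $x=1$ converges to $Q(1)=1/2=1-P^\star$, as required.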
Finally, we express the relative frequency of avalanches of various sizes
when the external field is increased from $h$ to $h + dh$ in terms of
$Q(x)$. Let $G_s(h)dh$ be the probability that
avalanche of size $s$
is initiated at $O$. We also define the generating
function $G(x|h)$ as
\be
G(x|h)= \sum_{s=1}^{\infty} G_s(h) x^s
\label{def.G}
\ee
Consider first the calculation of $G_s(h)$ for $s=1$. Let the number of
descendants of $O$ that are up at field $h$ be $m$. For the spin at site
$O$ to be down at $h$, but flip up at $h+dh$, the local field $h_{O}$
must satisfy $ [(z-2m)J-(h +dh)] < h_{O} < [(z-2m)J-h]$. This occurs with
probability $p(zJ-2mJ-h)dh$. Each of the $(z-m)$ down neighbors of $O$
must not flip up, even when $s_O$ flips up. The conditional probability of
this event is $Q_0^{z-m}$. Multiplying by the probability that
$m$ neighbors are up, we finally get
\be
G_1(h) = \sum_{m=0}^{z} { z \choose m} {P^\star}^m ~{Q_0}^{z-m} ~p(zJ-2mJ-h)
\ee
Arguing similarly, we can write the equation for $G_s(h)$ for $s=2,
3$, etc. These equations simplify considerably when expressed in terms of the generating
function $G(x|h)$, and we get
\be
G(x|h) = x\sum_{m=0}^{z} { z \choose m} ~{P^\star}^m ~{Q(x)}^{z-m}
~p(zJ-2mJ-h)
\label{G(x|h)}
\ee
In numerical simulations, and experiments, it is much easier to measure
the avalanche distribution integrated over the full hysteresis loop.
To get the probability that an avalanche of size $s$ will be initiated
at any given site $O$ in the interval when the external field is increased
from $h_1$ to $h_2$, we just have to integrate $G(x|h)$ in this range.
For any $h$, the value of $dG/dx$ at $x=1$ is proportional to the mean size
of an avalanche, and thus to the average slope of the hysteresis loop
at that $h$.
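The coefficients $G_s(h)$ can also be extracted numerically by iterating the equation for $Q(x)$ as a truncated power series and then expanding Eq.~(\ref{G(x|h)}). The construction below is our own illustration, for $z=3$, $J=1$, the rectangular distribution on $[-\Delta,\Delta]$ and a field in the nontrivial regime; the truncation order $N$ is arbitrary. Since the functional equation carries an overall factor of $x$, each iteration pass fixes one more Taylor coefficient exactly.

```python
from math import comb

Z, J, DELTA, H, N = 3, 1.0, 2.5, 1.0, 40   # N = series truncation order

def p_cum(m):
    t = (Z - 2 * m) * J - H
    return min(1.0, max(0.0, (DELTA - t) / (2 * DELTA)))

def p_den(arg):
    """Density p(h_i) of the rectangular distribution, evaluated at arg."""
    return 1.0 / (2 * DELTA) if abs(arg) <= DELTA else 0.0

def mul(a, b):
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def powser(a, k):
    r = [0.0] * (N + 1)
    r[0] = 1.0
    for _ in range(k):
        r = mul(r, a)
    return r

P = 0.0                                    # fixed point of Eq. (a.2)
for _ in range(500):
    P = sum(comb(Z-1, m) * P**m * (1-P)**(Z-1-m) * p_cum(m) for m in range(Z))
Q0 = sum(comb(Z-1, m) * P**m * (1-P)**(Z-1-m) * (1 - p_cum(m+1))
         for m in range(Z))

Q = [0.0] * (N + 1)
Q[0] = Q0
for _ in range(N + 1):                     # each pass fixes one more coefficient
    rhs = [0.0] * (N + 1)
    rhs[0] = Q0
    for m in range(Z):
        cm = comb(Z-1, m) * (p_cum(m+1) - p_cum(m)) * P**m
        term = powser(Q, Z-1-m)
        for k in range(N):                 # overall factor of x: shift by one
            rhs[k+1] += cm * term[k]
    Q = rhs

G = [0.0] * (N + 2)                        # G[s] = G_s(H) for s = 1 .. N+1
for m in range(Z + 1):
    cm = comb(Z, m) * P**m * p_den(Z*J - 2*m*J - H)
    term = powser(Q, Z - m)
    for k in range(N + 1):
        G[k+1] += cm * term[k]
```

The coefficient of $x^1$ should reproduce the direct formula for $G_1(h)$ given above, with $Q(x)$ replaced by $Q_0$, which provides a check on the series arithmetic.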
\section{Explicit calculation for the Rectangular distribution}
\label{RD}
While the general formalism described in the previous section can be used
for any distribution and any coordination number, to calculate the
avalanche distributions explicitly we have to choose some specific form
for the probability distribution function. In this section, we shall
consider the specific choice of a rectangular distribution: the
quenched random field is uniformly distributed between $-\Delta$ and
$\Delta$, so that
\be
p(h_i) = \frac{1}{2 \Delta}~, \mbox{~~for }~~ -\Delta \leq h_i \leq
\Delta
\ee
In this case, the cumulative probabilities $p_m(h)$ become piecewise
linear functions of $h$, and $h$-dependence of the distribution is easier
to work out explicitly. We shall work out the distributions for
the linear chain ($z = 2$) and the 3-coordinated Bethe lattice.
\subsection{The Linear Chain}
\label{z.2}
The simplest illustration is for a linear chain. In this case the
self-consistent equation for the probability
$P^\star$ [Eq.~\ref{a.2}] becomes a linear equation. This is easily
solved, and explicit expressions for $Q_0$, and $Q(x)$ are obtained
(see Appendix~\ref{appendixA}). The different regimes showing
different qualitative behavior of the hysteresis loops are shown in
Fig.~\ref{phase2}.
\begin{figure}[htbt]
\begin{center}
\leavevmode
\psfig{figure=phase2.ps,height=6cm,angle=-90}
\caption{ Behavior of RFIM in the magnetic field - disorder
($h-\Delta$) plane for a linear chain. The regions A-D correspond to
qualitatively different responses. In region A all spins are down and
in region D all are up. The avalanches of finite size occur in region
B and C.}
\label{phase2}
\end{center}
\end{figure}
For $h < 2J -\Delta$ (region A), all the spins remain down. For $h >
\Delta$, all spins are up (region D). For $\Delta <J$, we get a
rectangular loop and the magnetization jumps discontinuously from $-1$ to
$+1$ in a single infinite avalanche, and we directly go from region A to D
as the field is increased. For $\Delta >J$, we get nontrivial hysteresis
loops.
The hysteresis loops for different
values of $\Delta =0.5, 1.5$ and $2.5$ are shown in Fig.~\ref{m2}.
If $\Delta$ is sufficiently large ($ \Delta > J$), we find that the mean
magnetization is a precisely linear function of the external field for a
range of values of the external field $h$ (region B in Fig.~\ref{phase2}). For larger
$h$ values, the magnetization shows saturation effects, and is no longer
linear ( region C).
\begin{figure}
\begin{center}
\leavevmode
\psfig{figure=m1.ps,width=6cm}
\psfig{figure=m2.ps,width=6cm}
\leavevmode
\psfig{figure=m3.ps,width=6cm}
\caption{Hysteresis loops for the linear chain for the
rectangular distribution of quenched fields with different widths
{\it (a)} $\Delta/J=0.5$, {\it (b)} $\Delta/J=1.5$ and {\it (c)}
$\Delta/J=2.5$}
\label{m2}
\end{center}
\end{figure}
The explicit forms of the generating function $Q(x)$ are given in the
Appendix~\ref{appendixA}. We find that in region B, the function
$Q(x)$ is independent of
the applied field $h$. The distribution function $G_s(h)$ has a simple
dependence on $s$ of the form
\be
G_s(h)= A_1 s \left(\frac{J}{\Delta}\right)^{s},
\label{Gs.B.z2}
\ee
where $A_1$ is a constant that depends only on $J/\Delta$, and not on
$s$ or $h$:
\be
A_1= {1 \over 2 \Delta}~\frac{(1-J/\Delta)^2}{(J/\Delta)}
\ee
In region C, the mean magnetization is a nonlinear function of $h$. But
$Q(x)$ is still a rational function of $x$. From the explicit functional forms
of $Q(x)$ and $G(x|h)$ given in Appendix~\ref{appendixA}, we find that
$G_s(h)$ is of the form
\be
G_s(h) =[ A_1' s + A_2'] \left({J\over \Delta}\right)^{s}
, ~~\mbox{for}~~ s \geq 2.
\ee
Here $A_1'$ and $A_2'$ have no
dependence on $s$ but are explicit functions
of $h$.
Integrating over $h$ from $-\infty$ to $\infty$ we get the integrated
avalanche distribution $D_s$,
\be
D_s = \int_{-\infty}^{\infty} G_s(h) dh
\ee
It is easy to see from above that the integrated distribution $D_s$ also
has the form
\be
D_s = [ A_2 s + B_2] \left({J\over \Delta}\right)^s, {\rm for}~~ s \ge 2
\ee
where the explicit forms of the coefficients $A_2$ and $B_2$ are given in
the Appendix~\ref{appendixA}.
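These closed forms are easy to verify numerically. The sketch below is our own check, at $z=2$, $J=1$, $\Delta=2.5$ and $h=0$ (which lies in region B): for $z=2$ the equation for $Q(x)$ is linear, so its Taylor coefficients obey a simple geometric recurrence, and the resulting $G_s$ can be compared term by term with Eq.~(\ref{Gs.B.z2}).

```python
J, DELTA, H, N = 1.0, 2.5, 0.0, 30

def p_cum(m):                       # p_m(h) for z = 2, rectangular disorder
    t = (2 - 2 * m) * J - H
    return min(1.0, max(0.0, (DELTA - t) / (2 * DELTA)))

# fixed point of the (linear) self-consistent equation for z = 2
P = p_cum(0) / (1.0 - (p_cum(1) - p_cum(0)))
Q0 = (1 - P) * (1 - p_cum(1)) + P * (1 - p_cum(2))

# Q(x) = Q0 + x[(p_1 - p_0) Q(x) + (p_2 - p_1) P]  =>  geometric recurrence
a, b = p_cum(1) - p_cum(0), (p_cum(2) - p_cum(1)) * P
q = [Q0, a * Q0 + b] + [0.0] * (N - 1)
for k in range(2, N + 1):
    q[k] = a * q[k - 1]

# G(x|h) = (x / 2 DELTA) (Q(x) + P)^2: all three densities equal 1/(2 DELTA) here
r = [q[0] + P] + q[1:]
G = [0.0] * (N + 2)                 # G[s] = G_s(H) for s = 1 .. N+1
for i in range(N + 1):
    for j in range(N + 1 - i):
        G[i + j + 1] += r[i] * r[j] / (2 * DELTA)

A1 = (1 / (2 * DELTA)) * (1 - J / DELTA) ** 2 / (J / DELTA)
```

With these parameters $J/\Delta = 0.4$ and $A_1 = 0.18$, and the computed coefficients reproduce $G_s = A_1\, s\, (J/\Delta)^s$ to machine precision.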
\subsection{The Case $z = 3$}
\label{z3}
The analysis for the case $ z=3$ is very similar to the linear case. In
this case,
the self-consistent equation for $P^\star(h)$ [Eq.~\ref{a.2}]
becomes a quadratic equation. The qualitative behavior of the solution is
very similar to the earlier case. Some details are given in
Appendix~\ref{appendixB}.
We again get regions A-D as before, but the boundaries
are shifted a bit, and are shown in Fig.~\ref{phase3}.
As before, in region B, the average
magnetization
is a linear function of $h$, and the avalanche distribution is
independent of $h$.
\begin{figure}[htbt]
\begin{center}
\leavevmode
\psfig{figure=phase3.ps,height=7cm,angle=-90}
\caption{ Behavior of RFIM in the magnetic field - disorder
($h-\Delta$) plane for Bethe lattice of coordination number 3. The
qualitative behavior in different regions A-D is similar to that of
a linear chain (Fig.~\ref{phase2}).}
\label{phase3}
\end{center}
\end{figure}
We find that in regime B, the distribution of
avalanche sizes is given by
\be
G_s(h) =N
\left[\frac{(2s)!}{(s-1)!(s+2)!}\right]
(1-J/\Delta)^{s}\left({J\over\Delta}\right)^{s}
\label{Gs.B.z3}
\ee
where $N$ is a normalization constant given by
\be
N={3 \over 2 \Delta} (1 - J/\Delta)^2 { 1 \over (J/\Delta)}
\ee
\noi It is easy to see that for large $s$, $G_s(h)$ varies as
\be
G_s \sim s^{-\frac{3}{2}} \kappa^s
\ee
where
\be
\kappa = 4 ( 1 - J/ \Delta) (J/\Delta)
\ee
In region B, $J/\Delta$ is always less than $1/3$, and so this
function always has an exponential decay for large $s$.
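The large-$s$ form quoted above follows from Stirling's approximation: $(2s)!/[(s-1)!(s+2)!] = \binom{2s}{s}\, s/[(s+1)(s+2)] \simeq 4^s/(\sqrt{\pi}\, s^{3/2})$, and the factor $4^s$ combines with $[(1-J/\Delta)(J/\Delta)]^s$ to give $\kappa^s$. A quick numerical check of the asymptotics (our own, working in log space via the log-gamma function to avoid overflow):

```python
from math import lgamma, log, exp, pi

def ratio(s):
    """w_s / [4^s / (sqrt(pi) s^{3/2})],  where  w_s = (2s)!/((s-1)!(s+2)!)."""
    log_w = lgamma(2 * s + 1) - lgamma(s) - lgamma(s + 3)
    log_asym = s * log(4.0) - 0.5 * log(pi) - 1.5 * log(s)
    return exp(log_w - log_asym)
```

The ratio approaches 1 with $O(1/s)$ corrections, confirming the $s^{-3/2}\kappa^s$ form.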
In the region C, we find that the avalanche distribution is of the form
\be
G_s(h)= N'
\left[\frac{(2s)!}{(s-1)!(s+2)!}\right] \kappa^s
\label{Gs.C.z3}
\ee
where $N'$ is a normalization constant independent of $s$, and $\kappa$ is
a cubic polynomial in the external field $h$:
\bea
\kappa = {1\over 8(1-2J/\Delta)^2}
&&\left[
\left\{9-53(J/\Delta)+119(J/\Delta)^2-107(J/\Delta)^3\right\}
\right. \nn \\
&& \left. +\left\{-5+10(J/\Delta)+11(J/\Delta)^2\right\}(h/\Delta)
+\left\{3-9(J/\Delta)^2\right\}(h/\Delta)^2 +(h/\Delta)^3
\right]
\eea
As $\kappa$ is not a very simple function of
$h$, explicit expressions for the integrated distribution $D_s$ are
hard to write down.
\section{General Distributions }
\label{GD}
The analysis of the previous section can, in principle, be extended to
higher coordination numbers, and other distributions of random fields.
However, the self-consistent equations become cubic, or higher order
polynomials. In principle, an explicit solution is possible
for $z \le 5$, but it is not
very instructive. However, the qualitative behavior of
solutions is easy to determine, and is the same for all $ z \ge 4$.
We shall take $z=4$ in the following for simplicity. Since we only study
the
general features of the self-consistent equations, we need not pick a
specific form for the continuous random-field
distribution $p(h_i)$. We shall only assume that it has a single
maximum around zero and goes to zero asymptotically at $\pm \infty$.
For small width $\Delta$
of the random field distribution, {\it i.e.} for weak disorder, the
magnetization shows a jump discontinuity as a function of the external
uniform field, which disappears for larger values of $\Delta$
~\cite{DSS}. For fields $h$ just lower than the value where the jump
discontinuity occurs, the slope of the hysteresis curves is large, and
tends to infinity as the field tends to the value at which the jump
occurs. This indicates that large avalanches are more likely just
before the first
order jump in magnetization.
\begin{figure}[htbt]
\begin{center}
\leavevmode
\psfig{figure=m.ps,height=6cm,angle=-90}
\caption{Magnetization as a function of increasing field for the Bethe
lattice
with $z=4$ and the random field distribution given by Eq.~\ref{sech}.}
\label{mag}
\end{center}
\end{figure}
For $z=4$, the self-consistent equation for $P^\star(h)$
[ Eq.~\ref{a.2} ] is cubic
\be
a P^{\star 3}+b P^{\star 2} +c P^\star+d=0
\label{z4.1}
\ee
where $a, b, c$ and $d$ are functions of the external field $h$,
expressible in terms of the cumulative
probabilities $p_i,i=0$ to $3$,
\bea
&& a=p_3-3p_2+3p_1-p_0 \nonumber \\
&&b=3p_2-6p_1+3p_0 \nonumber \\
&&c=3p_1-3p_0-1 \nonumber \\
&&d=p_0 \nonumber
\eea
This equation will have $1$ or $3$ real roots, which will vary with $h$.
We have shown this variation for the real roots which lie between 0 and 1
in Fig.~\ref{root} for the case where $p(h_i)$ is a simple
distribution
\be
p(h_i) = \frac{1}{2 \Delta} {\rm sech}^2( h_i/\Delta)
\label{sech}
\ee
We have also solved numerically the self-consistent equation for
$P^\star$ for
other choices of $p(h_i)$, like the gaussian distribution, and for
higher $z ( = 4,5,6 )$. In each case we find that the qualitative
behavior of the solution is very similar.
Note that the rectangular distribution discussed in the previous section
is very atypical in that both the coefficients $a$ and $b$ vanish for an
entire range of values of $h$.
In the generic case, we find two qualitatively different behaviors: For
larger values of $\Delta$, there is only one real root for any $h$. For
$\Delta$ sufficiently small, we find a range of $h$ where there are $3$
real solutions. There is a critical value $\Delta_c$ of the width which
separates these two behaviors. For the particular distribution chosen,
$\Delta_c \simeq 2.10382$.
In the first case, the real root is a continuous function of $h$, and
correspondingly, the magnetization is a continuous function of $h$. This is
the case corresponding to $\Delta =2.5$ in Fig.~\ref{root}.
For smaller $\Delta < \Delta_c$, for
large $|h|$ there is only one root, but in the intermediate region
there are three roots. The typical variation is shown for $\Delta =
1.5$ in
Fig.~\ref{root}. In the increasing
field the probability $P^\star(h)$ initially takes the smallest root.
As $h$ increases, at a value $h=h_{disc}$ the
middle and the lower roots become equal, and beyond that both disappear
from the real plane. At $h=h_{disc}$ the probability $P^\star(h)$ jumps to the
upper root. Thus for $\Delta < \Delta_c$ there is a discontinuity in
$P^\star (h)$ which gives rise to a first-order jump in the magnetization
curve.
\begin{figure}[htbt]
\begin{center}
\leavevmode
\psfig{figure=p.ps,height=6cm}
\caption{Variation of $P^\star(h)$ with $h$ for the Bethe lattice with
$z=4$, and the random field distribution given by Eq.~\ref{sech}.}
\label{root}
\end{center}
\end{figure}
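This root structure is easy to verify numerically for the distribution of Eq.~(\ref{sech}), for which the cumulative probabilities are $p_m(h) = [1+\tanh((h-(z-2m)J)/\Delta)]/2$. The sketch below is our own check: it counts sign changes of the self-consistency function on a fine grid in $P\in[0,1]$, at $J=1$ and $h=1$ (a symmetric point where $P=1/2$ is always a root), and finds three roots for $\Delta=1.5$ but only one for $\Delta=2.5$, on either side of $\Delta_c \simeq 2.10382$.

```python
from math import tanh, comb

Z, J = 4, 1.0

def count_roots(h, delta, grid=20001):
    """Number of solutions of Eq. (a.2) in [0, 1], counted via sign changes."""
    p = [0.5 * (1.0 + tanh((h - (Z - 2 * m) * J) / delta)) for m in range(Z)]

    def g(P):
        rhs = sum(comb(Z - 1, m) * P**m * (1 - P)**(Z - 1 - m) * p[m]
                  for m in range(Z))
        return rhs - P

    xs = [i / (grid - 1) for i in range(grid)]
    vals = [g(x) for x in xs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a == 0.0 or a * b < 0)
```

Since $g(0)=p_0>0$ and $g(1)=p_3-1<0$, the number of crossings is always odd, and the grid scan distinguishes the one-root and three-root regimes.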
The field $h_{disc}$ where the discontinuity of magnetization occurs,
is determined by the condition that for this value of $h$,
the cubic equation [ Eq.~\ref{z4.1} ] has two equal roots. The value
of $P^\star$ at this point, denoted by $P^\star_{disc}$, satisfies the
equation
\be
3 a_0 P^{\star 2}_{disc}+2 b_0 P^\star _{disc}+c_0=0
\label{z4.2}
\ee
where $a_0, b_0$ and $c_0$ are the values of $a, b$ and $c$ at
$h=h_{disc}$.
We now determine the behavior of the avalanche generating function
$G_s(h)$ for large $s$ and $h$ near $h_{disc}$. The behavior for large
$s$ corresponds to $x$ near $1$. So we write $x = 1- \delta$, with
$\delta$ small, and $h = h_{disc} -\epsilon $. Near $h_{disc}$, $a, b, \ldots$
vary linearly with $\epsilon$ and
\be
P^\star \approx P^\star _{disc} - \alpha \sqrt{\epsilon} + O(\epsilon)
\ee
where $\alpha$ is a numerical constant.
Since $Q(x=1)=1-P^\star(h)$, if $x$ differs slightly from unity, $Q(x)$ also
differs
from $ 1 - P^\star(h)$ by a small amount. Substituting
$x=1-\delta$ and $Q(x=1-\delta)=1-P^\star-F(\epsilon, \delta)$ in
the self-consistent equation for $Q(x)$ [Eq.~\ref{Q(x)}],
where both $\delta$ and $F$ are small, using
Eq.~\ref{z4.2}, we get
to lowest order in $\delta$, $\epsilon$ and $F$
\be
F^2 + \beta \sqrt{\epsilon} F - \gamma^2 \delta = 0
\label{z4.5}
\ee
where $\beta$ and $\gamma$ are some constants.
Thus, to lowest orders in $\epsilon$ and $\delta$, $F$ is given by
\be
F = (1/2)[ \sqrt{\beta^2 \epsilon + 4 \gamma^2 \delta} - \beta \sqrt{\epsilon}]
\ee
Thus $Q(x)$ has a leading square-root singularity at
$x=1+\frac{\beta^2 \epsilon}{4 \gamma^2}$.
Consequently, $G(x|h)$ will also show a square-root
singularity at $x=1+\frac{\beta^2 \epsilon}{4 \gamma^2}$.
This implies that the Taylor expansion coefficients
$G_s(h)$ vary as
\be
G_s(h) \sim s^{-\frac{3}{2}} \left(1+\frac{\beta^2 \epsilon}{4
\gamma^2}\right)^{-s},
~~~~\mbox{for large $s$.}
\label{z4.7}
\ee
At $\epsilon=0$, we get
\be
G_s(h_{disc}) \sim s^{-\frac{3}{2}}
\ee
Thus at $h=h_{disc}$ the avalanche distribution has a power law tail.
To calculate the integrated distribution $D_s$, we have to integrate
Eq.~\ref{z4.7} over a range of $\epsilon$ values. For large $s$, only
$\epsilon < {\gamma^2 \over \beta^2 s}$ contributes significantly to
the integral, and thus we get
\be
D_s \sim s^{-{5\over 2}} \mbox{~,~~~~for large} ~~s.
\ee
Thus the integrated distribution shows a robust $(-5/2)$ power law for
a range of disorder strength $\Delta$.
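The integration leading to this exponent can be mimicked numerically: integrating $s^{-3/2}(1+c\epsilon)^{-s}$ over $\epsilon$ and extracting the local slope of $\log D_s$ versus $\log s$ from two values of $s$ gives $-5/2$ up to $O(1/s)$ corrections. A minimal check (our own; the value $c=1$, the cutoff, and the grid are arbitrary choices):

```python
from math import log

def D(s, c=1.0, eps_max=0.02, n=40000):
    """Midpoint-rule integral of s^{-3/2} (1 + c*eps)^{-s} over [0, eps_max]."""
    de = eps_max / n
    total = 0.0
    for i in range(n):
        eps = (i + 0.5) * de
        total += (1.0 + c * eps) ** (-s) * de
    return s ** -1.5 * total

s = 1000
exponent = log(D(2 * s) / D(s)) / log(2.0)   # local slope of log D vs log s
```

For large $s$ only $\epsilon \lesssim 1/(cs)$ contributes, so the integral behaves as $s^{-3/2}/(cs)$ and the measured slope sits close to $-5/2$.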
\section{Discussion}
\label{conclusion}
In this paper, we set up exact self-consistent equations for the avalanche
distribution function for the RFIM on a Bethe lattice. We were able to
solve these equations explicitly for the rectangular distribution of the
quenched field, for the linear chain $z=2$, and the 3-coordinated Bethe
lattice. For more general coordination numbers, and general
continuous distributions of random fields, we argued that for very large
disorder, the avalanche distribution is exponentially damped, but for
small disorder, generically, one gets a jump in magnetization,
accompanied by a square-root singularity. For field strengths just below
that corresponding to the
jump discontinuity, the avalanche distribution function has a
power-law tail of the form $s^{-3/2}$. The integrated avalanche
distribution then varies as $s^{-5/2}$ for large $s$.
Some unexpected features of the solution deserve mention. Firstly,
we find that the behavior of the self-consistent equations for $z=3$
is qualitatively different from that for $z>3$. The behavior for the
linear chain ($z = 2$) is, of course, expected to be different from higher
$z$. One usually finds the same behavior for all $z >2$. Mathematically, the
reason for this unusual dependence is that the mechanism of two real
solutions of the polynomial equation merging, and both becoming unphysical
(complex) is not available for $z=3$. Here the self-consistency equation
is a quadratic, and from physical arguments, at least one of the roots
must be real. That a Bethe lattice may show non-generic behavior for low
coordination numbers has been noted earlier by Ananikyan {\it et al} in their
study of the
Blume-Emery-Griffiths model on a Bethe lattice. These authors observed
that the qualitative behavior for $z < 6$ is different from that for $z
\geq 6$ \cite{ananikyan}.
The second point we want to emphasize is that here we find that the
power-law tail in the distribution function is accompanied by the
first-order jump in magnetization. Usually, one thinks of critical
behavior and first-order transitions as mutually exclusive, as first-order
jump pre-empts a build-up of long-ranged correlations, and all
correlations remain finite-ranged across a first-order transition. This
is clearly not the case here. In fact, the power-law tail in the avalanche
distribution disappears, when the jump disappears. A similar situation
occurs in equilibrium statistical mechanics in the case of a Heisenberg
ferromagnet below the critical temperature. As the external field $h$ is
varied across zero, the magnetization shows a jump discontinuity, but in
addition has a cusp singularity for small fields~\cite{parisi}. But in this
case the power-law tail is seen on {\it both sides of the transition}.
Note that for most values of disorder, and the external field, the
avalanche distribution is exponentially damped. We get robust power law
tails in the distribution, only if we integrate the distribution over the
hysteresis cycle across the magnetization jump. But, in this case, the
control parameter $h$ is swept across a range of values, in
particular across a (non-equilibrium) phase transition point! In this
sense, while no
explicit fine-tuning is involved in an experimental setup, this is not a
self-organized critical system in the usual sense of the word.
Recently P\'{a}zm\'{a}ndi {\it et al} have argued that the hysteretic
response of the
Sherrington-Kirkpatrick model to external fields at zero temperature
shows
self-organized criticality for all values of the field \cite{pazmandi}.
However, this seems to be because of the presence of infinite-ranged
interactions in that model.
The treatment of this paper may be extended to the site-dilution case
discussed by Tadi\'{c}~\cite{tadic}. From the structural stability of
the mechanism which
leads to the cusp singularity just before the jump-discontinuity in
magnetization, it is clear that in our model, introduction of
site dilution would not change the qualitative behavior of solutions.
A general question concerns the behavior of the avalanches for more
general probability distributions. Clearly, if $p(h_i)$ has a discrete
part, it would give rise to
jumps in $p_i$ as a function of $h$, and hence give rise to several
jumps in the hysteresis loop. These could preempt the cusp singularity
mechanism which is responsible for the power-law tails.
If the distribution $p(h_i)$ is continuous, but multimodal, then it is
possible to have more than one first order jump in the
magnetization \cite{z6}. This is confirmed by explicit calculation in some
simple cases. If $p(h_i)$ has
power-law singularities, these would also lead to power-law singularities
in $p_i$, and hence in $P^\star(h)$. Even for purely continuous
distributions, the merging of two roots as the magnetic field varies
need not always occur. For example, it is easy to check that for the
rectangular distribution, even for $z \ge 4$, we do not get a power law
tail for any value of $\Delta$. The precise conditions necessary for
the occurrence
of the power-law tail are not yet clear to us.
Finally, we would like to mention some open questions. Our analysis
relied heavily on the fact that initial state was all spins down. Of
course, we can start with other initial conditions. It would be
interesting to set up self-consistent field equations for them. In
particular, the behavior of the return loop, when the external field is
increased from $-\infty$ to some value $h_1$, and then decreased to a
lower value $h_2$ seems an interesting quantity to determine. Another
extension would be to make the rate of field-sweep comparable to the
single-spin flip rate (still assuming T=0 dynamics). This would mean some
large avalanches in different parts of the sample could be evolving
simultaneously. Then one could
study the sweep-rate
dependence of the hysteresis loops, and the frequency dependence of
the Barkhausen noise spectra. This is
perhaps of some relevance in real experimental data, and would also make
contact with other treatments of Barkhausen noise that focus on the domain
wall motion.
We thank M. Barma and N. Trivedi for their useful comments on the
manuscript.
DD would like to thank the Physics Department of North Eastern Hill
University, for hospitality during a visit there.
\section{Introduction}
\label{s:intro}
High-amplitude sub-Larmor-scale electromagnetic turbulence is a phenomenon largely associated with high-energy density environments. Such turbulence is a common feature of astrophysical and space plasmas, e.g., at high-Mach-number collisionless shocks in weakly magnetized plasmas \citep{medvedev09, frederiksen04, nishikawa03}, upstream regions of quasi-parallel shocks \citep{sironi06, plotnikov12}, sites of magnetic reconnection \citep{swisdak08, liu09} and others. Additionally, these sub-Larmor-scale, or ``small-scale'', fields play a critical role in laboratory plasmas; especially in high-intensity laser plasmas -- as observed in facilities such as the National Ignition Facility (NIF), OmegaEP, Hercules, Trident, and others \citep{ren04, huntington12, mondal12, tatarakis03}. Experimental and numerical studies of non-relativistic collisionless shocks also show that they are mediated by small-scale electromagnetic turbulence \citep{fiuza12, medvedev06}. Thus, studies of plasmas and turbulence in these environments are important for the fusion energy sciences and the inertial confinement concept \citep{ren04, tatarakis03}.
Small-scale electromagnetic turbulence can be of various origin and thus have rather different properties, from being purely magnetic (Weibel) turbulence \citep{weibel59, fried59, medvedev09c}, to various types of electromagnetic turbulence (for example, whistler wave turbulence or turbulence produced by filamentation/mixed mode instability \citep{lemoine09, bret05}), to purely electrostatic Langmuir turbulence \citep{treumann97, bret05b}.
Despite substantial differences, these small-scale fields share one thing in common: they vary on scales, of order the plasma inertial length (skin depth), that are much smaller than the characteristic curvature scale of the particles traversing the field, i.e.\ the particle Larmor radius. The particle trajectory through these turbulent fields will, consequently, never form a well-defined Larmor circle.
If the electromagnetic fields are random, which is usually the case of turbulence because of the random phases of fluctuations, the paths of the particles diffusively diverge due to pitch-angle diffusion. Radiation simultaneously produced by these particles is neither cyclotron nor synchrotron (for non-relativistic or relativistic particles, respectively) but, instead, carries information about the spectrum of turbulent fluctuations. Here we stress that we strictly consider the case of turbulence in vanishing mean field plasma $\langle {\bf B} \rangle = 0$.
\indent
In our previous work, see Ref. \citep{keenan13}, we found the relation between the transport of relativistic particles in isotropic three-dimensional small-scale magnetic turbulence and the radiation spectra simultaneously produced by these particles. In particular, we found that the radiation spectrum agrees with the small-angle jitter radiation prediction, in the small deflection angle regime \citep{medvedev00,medvedev06,medvedev11,RK10,TT11}. Furthermore, we demonstrated that the pitch-angle diffusion coefficient is directly related to, and can readily be deduced from, the spectra of the emitted radiation. This inter-relation between radiative and transport properties provides a unique way to remotely diagnose high-energy-density plasmas, both in laboratory experiments and in astrophysical systems.
\newline
\indent
We extend our previous work to now consider non-relativistic ($v \lesssim 0.1c$) and trans-relativistic (i.e.\ mildly relativistic: $0.1c \lesssim v \lesssim 0.5c$) particles moving through three-dimensional sub-Larmor-scale magnetic turbulence. We demonstrate, once more via numerical and theoretical analysis, that an analogous inter-relation holds in these regimes as well, which naturally generalizes the relativistic small-angle jitter radiation regime and the pitch-angle diffusion coefficient.
\newline
\indent
This trans-relativistic regime is applicable to laboratory plasmas, particularly high-intensity laser plasmas -- where bulk plasma motion satisfies $v \lesssim 0.5c$. Multi-dimensional relativistic Particle-In-Cell (PIC) simulations and laboratory experiments have revealed that non-relativistic collisionless shocks, mediated by Weibel-like instabilities, can occur in an overcritical plasma via interaction with an ultraintense laser pulse \citep{fiuza12, mondal12}. In the laboratory setting, laser-produced supersonic counter-streaming plasmas have been observed to give rise to self-organized electromagnetic fields \citep{kugland12}. Recently, the formation of filamentary structures indicative of Weibel-like magnetic fields, fully consistent with the shock model offered by 3D PIC simulations and theoretical instability analysis, has been directly observed in a scaled laboratory experiment \citep{huntington15}. Consequently, given the role of trans-relativistic particle motion in these environments, the study of small-scale electromagnetic turbulence may be aided by the diagnostic tool offered by this inter-relation between the transport and radiative properties.
\newline
\indent
The rest of the paper is organized as follows. Section \ref{s:analytic} presents the analytic theory. Sections \ref{s:model} and \ref{s:results} describe the numerical techniques employed and the obtained simulation results. Section \ref{s:concl} presents our conclusions. All equations appear in cgs units.
\section{Analytic theory}
\label{s:analytic}
\subsection{Pitch-angle diffusion}
\label{s:diffusion}
Consider a trans-relativistic electron moving (with velocity, ${\bf v}$) through a non-uniform, random, mean-free (i.e.\ $\langle {\bf B} \rangle = 0$), small-scale magnetic field (and assume that this magnetic ``micro-turbulence'' is statistically homogeneous and isotropic). Because the Lorentz force on the electron is random, its velocity and acceleration vectors vary stochastically, leading to a random (diffusive) trajectory. We define the field turbulence to be ``small-scale'' when the electron's Larmor radius, $r_L = \gamma\beta m_e c^2/e \langle B_\perp^2 \rangle^{1/2}$ (where $\beta=v/c$ is the dimensionless particle velocity, $m_e$ is the electron mass, $c$ is the speed of light, $e$ is the electric charge, $\langle B_\perp^2 \rangle^{1/2}$ is the rms component of the magnetic field perpendicular to the electron's velocity vector, and $\gamma$ is the electron's Lorentz factor) is greater than, or comparable to, the characteristic correlation scale of the magnetic field, $\lambda_B$, i.e., $r_L\gtrsim \lambda_B$.
For small deflections, the deflection angle of the velocity (with respect to the particle's initial direction of motion) is approximately the ratio of the change in the electron's transverse momentum to its initial transverse momentum. The former is $ \sim F_L\tau_\lambda$, where ${\bf F}_L=(e/c)\,{\bf v\times B}$ is the transverse Lorentz force, and $\tau_\lambda$ is the transit time, which is the time required to traverse the scale of the field's inhomogeneity, i.e., the field correlation length, $\lambda_B$. That is, $\tau_\lambda \sim \lambda_B/v_\perp$ -- where $v_\perp$ is the component of the velocity perpendicular to the magnetic field. The change in the transverse momentum is thus, ${\Delta}p_\perp \sim F_L \tau_\lambda \sim e(B/c)\lambda_B$. Given that the particle's total transverse momentum is $p_\perp \sim \gamma m_e v_\perp$, the deflection angle over the field correlation length will be $\alpha_\lambda \approx {\Delta}p_\perp/p_\perp \sim e(B/c)\lambda_B/\gamma m_e v_{\perp}$. The subsequent deflection will be in a random direction, because the field is uncorrelated over scales greater than $\lambda_B$, hence the particle motion is diffusive. As for any diffusive process, the ensemble-averaged squared deviation grows linearly with time. Hence, for the pitch-angle deviation, we have
\begin{equation}
\langle \alpha^2 \rangle = D_{\alpha\alpha}t.
\label{diff_def}
\end{equation}
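As a sanity check on this diffusive scaling, the short numerical sketch below (the per-cell deflection $\alpha_\lambda$ and the ensemble size are arbitrary assumed values) accumulates uncorrelated small kicks and recovers the linear growth of $\langle \alpha^2 \rangle$ with the number of correlation cells traversed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 5000, 400
alpha_cell = 0.01   # assumed rms deflection per field correlation cell

# each transit of a correlation cell adds an uncorrelated kick of random sign
kicks = alpha_cell * rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
alpha = np.cumsum(kicks, axis=1)           # accumulated pitch-angle deviation
msd = (alpha**2).mean(axis=0)              # <alpha^2> versus cells traversed

ratio = msd[-1] / msd[n_steps // 2 - 1]    # ~2: doubling the time doubles <alpha^2>
print(ratio)
```

The slope of `msd` against step number plays the role of $D_{\alpha\alpha}$ in dimensionless units.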
The pitch-angle diffusion coefficient is, by definition, the ratio of the square of the deflection angle in a coherent patch to the transit time over this patch, that is
\begin{equation}
D_{\alpha\alpha} \sim \frac{\alpha_\lambda^2}{\tau_\lambda}\sim \left(\frac{e^2}{m_e^2 c^3}\right)\frac{1}{\langle \beta_{\perp}^2 \rangle^{1/2}}\frac{\lambda_B}{\gamma^2}{\langle B^2 \rangle},
\label{Daa}
\end{equation}
where a volume-averaged square magnetic field, $\langle B^2 \rangle$ and perpendicular rms velocity, $\langle \beta_{\perp}^2 \rangle^{1/2}$ have been substituted for $B^2$ and $\beta_{\perp} \equiv v_{\perp}/c$. Note that the diffusion coefficient depends on both statistical properties of the magnetic field, namely its strength and the typical correlation scale.
Although the assumption that $\alpha_\lambda \ll 1$ is valid in the ultra-relativistic limit: $\beta \rightarrow 1$ (see Ref. \citep{keenan13}), it is not evident that it holds for trans-relativistic and non-relativistic velocities. As we will demonstrate via numerical simulation, pitch-angle diffusion will occur in accordance with Eq. (\ref{Daa}), so long as the magnetic turbulence is sub-Larmor-scale, i.e.\ $r_L \gtrsim \lambda_B$.
The average square magnetic field, $\langle B^2 \rangle$ is related to $\langle B_\perp^2 \rangle$ by a multiplicative factor. For isotropic magnetic turbulence, $\langle B_x^2 \rangle = \langle B_y^2 \rangle = \langle B_z^2 \rangle$. Thus, $\frac{1}{3}\langle B^2 \rangle = \langle B_x^2 \rangle$. Alternatively, ${\bf B}$ may be expressed as a linear combination of parallel and perpendicular components. Given isotropy, $\langle B_{\perp}^2 \rangle = \langle B_x^2 \rangle + \langle B_y^2 \rangle$, so
\begin{equation}
\langle B_\perp^2 \rangle = \frac{2}{3}\langle B^2 \rangle.
\label{b_perp}
\end{equation}
Recognizing that $v_{\perp}B = vB_{\perp}$ allows the expression of the rms perpendicular velocity as
\begin{equation}
\langle \beta_{\perp}^2 \rangle^{1/2} = \sqrt{\frac{2}{3}}\beta.
\label{beta_para}
\end{equation}
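The $2/3$ factor follows from isotropy alone, and can be verified with a quick Monte-Carlo draw of isotropic field vectors (a sketch; Gaussian components are simply a convenient way to generate an isotropic ensemble):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((200_000, 3))      # isotropic ensemble of field vectors
v_hat = np.array([0.0, 0.0, 1.0])          # any fixed particle direction

B2 = (B**2).sum(axis=1)
B_par = B @ v_hat
frac = (B2 - B_par**2).mean() / B2.mean()  # <B_perp^2> / <B^2>
print(frac)                                # close to 2/3
```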
Next, the correlation length, $\lambda_B$, lacks a formal definition. It is, nonetheless, commonplace in the literature -- e.g.\ Ref. \citep{biswas02}, to define the two-point autocorrelation tensor,
\begin{equation}
R^{ij}({\bf r}, t) \equiv \langle {B}^i({\bf x}, \tau){B}^j({\bf x} + {\bf r}, \tau + t) \rangle_{{\bf x}, \tau},
\label{corr_tensor}
\end{equation}
with the formally path and time dependent correlation length tensor defined as
\begin{equation}
\lambda^{ij}_B(\hat{\bf r}, t) \equiv \int_{0}^\infty \! \frac{R^{ij}({\bf r}, t)}{R^{ij}(0, 0)} \, \mathrm{d}r.
\label{corr_l_tensor}
\end{equation}
Note that we make no distinction between co-variant and contra-variant components; the usage of upper and lower indices is only for convenience.
Since the component of the magnetic field perpendicular to the particle trajectory alters the motion, we choose an integration path along ${\bf v_\perp}$ and only consider a transverse magnetic field component. In accord with standard practice (see, for example, Ref. \citep{batchelor82}), we choose ${\bf r} = z\hat{\bf z}$ and $i=j=x$. Thus, we define the magnetic field correlation length as
\begin{equation}
\lambda_B \equiv \lambda^{xx}_B(\hat{\bf z}, t) = \int_{0}^\infty \! \frac{R^{xx}(z\hat{\bf z}, t)}{R^{xx}(0, 0)} \, \mathrm{d}z.
\label{corr_l_def}
\end{equation}
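This definition can be tested on a synthetic one-dimensional random field with a known correlation function; for a Gaussian spectrum, $R(z) \propto e^{-z^2/2\ell^2}$, and the integral in Eq. (\ref{corr_l_def}) gives $\lambda_B = \ell\sqrt{\pi/2}$ exactly. A numerical sketch (grid parameters and the field scale $\ell$ are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dz, ell = 4096, 0.05, 1.0              # grid size, spacing, field scale (assumed)
k = np.fft.rfftfreq(n, dz) * 2 * np.pi
power = np.exp(-(k * ell)**2 / 2)         # Gaussian spectrum -> R(z) ~ exp(-z^2/2 ell^2)

acf = np.zeros(n)
for _ in range(400):                      # average over field realizations
    amps = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
    b_x = np.fft.irfft(np.sqrt(power) * amps, n)
    acf += np.fft.irfft(np.abs(np.fft.rfft(b_x))**2, n)   # Wiener-Khinchin theorem

m = int(8 * ell / dz)                     # R(z) has fully decayed by z = 8 ell
lam_est = (acf[:m].sum() - 0.5 * acf[0]) / acf[0] * dz    # trapezoidal integral of R/R(0)
lam_exact = ell * np.sqrt(np.pi / 2)
print(lam_est, lam_exact)
```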
The correlation length has a convenient representation in Fourier ``$k$-space'' and ``$\Omega$-space". Let ${\bf B}_{{\bf k}, \Omega}$ be the spatial and temporal Fourier transform of the magnetic field, i.e.\
\begin{equation}
{\bf B}_{{\bf k},\Omega} = \int \! {\bf B}({\bf x}, t)e^{-i({\bf k}\cdot{\bf x} - \Omega{t})} \, \mathrm{d} {\bf x} \mathrm{d}t,
\label{B_fourier_def}
\end{equation}
where ${\bf k}$ and $\Omega$ are the corresponding wave vector and frequency, respectively. We may define a complementary spectral correlation tensor $\Phi_{ij}({\bf k}, \Omega)$, such that
\begin{equation}
R_{ij}({\bf r}, t) = (2\pi)^{-4}\int \Phi_{ij}({\bf k}, \Omega) e^{i{\bf k}\cdot{\bf r} -i\Omega{t}} \, \mathrm{d}{\bf k}\, \mathrm{d}{\Omega}.
\label{corr_tensor_def_Phi}
\end{equation}
Isotropy, homogeneity, time-independence, and ${\bf \nabla}\cdot{\bf B} = 0$ require that the spectral correlation tensor take the simple form \citep{biswas02}
\begin{equation}
\Phi_{ij}({\bf k}, \Omega) = \frac{1}{2V}\left|{\bf B}_k\right|^2\left(\delta_{ij} - \hat{k}_i\hat{k}_j\right)2\pi\delta(\Omega),
\label{spec_tensor}
\end{equation}
where $V$ is the volume of the space considered, $\hat{\bf k}$ is the unit vector in the direction of the wave vector, and $\delta_{ij}$ is the Kronecker delta. The normalization has been chosen such that $\sum{R}_{ii}(0, 0) = \langle B^2 \rangle_{{\bf x}, \tau} = \langle B^2 \rangle$. Given Eq. (\ref{corr_tensor_def_Phi}) and Eq. (\ref{spec_tensor}), the correlation length may be reformulated as
\begin{equation}
\lambda_B = \int_{0}^\infty \! \frac{\int \! |{\bf B}_k|^2k^{-2}(k^2-k_x^2)e^{ik_z{z}}\, \mathrm{d}{\bf k}}{\int \! |{\bf B}_k|^2k^{-2}(k^2-k_x^2)\, \mathrm{d}{\bf k}} \, \mathrm{d}{z}.
\label{corr_l}
\end{equation}
By assuming isotropic turbulence, the magnetic field has azimuthal and polar symmetry in $k$-space, hence ${\bf B}_{\bf k}$ is only a function of $|{\bf k}| \equiv k$. After the integration over $z$ and all solid-angles in Fourier space, Eq. (\ref{corr_l}) becomes
\begin{equation}
\lambda_B = \frac{3\pi}{8}\frac{\int_{0}^\infty \! k{|{\bf B}_k|^2}\, \mathrm{d}k}{\int_{0}^\infty \! k^2{|{\bf B}_k|^2}\, \mathrm{d}k}.
\label{corr_l_div}
\end{equation}
It may be noted that $\lambda_B \approx k_B^{-1}$, where $k_B$ is the characteristic (dominant) wave number of turbulence.
Thus, with Eqs. (\ref{Daa}), (\ref{beta_para}), and (\ref{corr_l_div}), the pitch-angle diffusion coefficient is
\begin{equation}
D_{\alpha\alpha} \equiv \frac{3\pi}{8}\sqrt{\frac{3}{2}}\left(\frac{e^2}{m_e^2 c^3}\right)\frac{\int_{0}^\infty \! k{|{\bf B}_k|^2}\, \mathrm{d}k}{\int_{0}^\infty \! k^2{|{\bf B}_k|^2}\, \mathrm{d}k}\frac{\langle B^2 \rangle}{\gamma^2\beta}.
\label{Daa_def}
\end{equation}
To continue, we must specify a magnetic spectral distribution, $|{\bf B}_k|^2$. As in our previous work (Ref. \citep{keenan13}), we assume the isotropic three-dimensional magnetic turbulence has a static, i.e.\ time-independent, power law turbulent spectrum:
\begin{equation}
\left\{\begin{array}{ll}
|{\bf B}_{\bf k}|^2 = Ck^{-\mu}, & k_\text{min} \le k \le k_\text{max}
\\
|{\bf B}_{\bf k}|^2 = 0, & \text{otherwise}
\end{array}\right.
\label{Bk}
\end{equation}
Here the magnetic spectral index, $\mu$ is a real number, and
\begin{equation}
C \equiv \frac{2\pi^2V\langle B^2 \rangle}{\int_{k_\text{min}}^{k_\text{max}} \! k^{-\mu+2}\, \mathrm{d}k},
\label{C_def}
\end{equation}
is a normalization, such that
\begin{equation}
V^{-1}\int \! {\bf B}^2({\bf x}) \mathrm{d}{\bf x} = (2\pi)^{-3}\int \! |{\bf B}_{\bf k}|^2 \, \mathrm{d} {\bf k}.
\label{C_def_cont}
\end{equation}
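For this power-law spectrum, the integrals entering $\lambda_B$ and $D_{\alpha\alpha}$ can be evaluated directly. The sketch below (cgs constants; the band, field strength, and particle velocity are assumed sample values) compares a simple quadrature of Eq. (\ref{corr_l_div}) against the closed form for $\mu = 3$ and then evaluates Eq. (\ref{Daa_def}):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule, to stay independent of NumPy version details
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

mu, k_min, k_max = 3.0, 1.0e4, 1.0e6        # spectral index and band (cm^-1), assumed
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10  # cgs constants
B_rms = 1.0e6                               # assumed rms field (gauss)
beta = 0.3                                  # assumed particle velocity, v/c
gamma = 1.0 / np.sqrt(1.0 - beta**2)

k = np.linspace(k_min, k_max, 400001)
Bk2 = k**-mu
lam_B = (3 * np.pi / 8) * trapezoid(k * Bk2, k) / trapezoid(k**2 * Bk2, k)
lam_exact = (3 * np.pi / 8) * (1 / k_min - 1 / k_max) / np.log(k_max / k_min)  # mu = 3

# pitch-angle diffusion coefficient (rad^2 / s)
D_aa = np.sqrt(1.5) * e**2 / (m_e**2 * c**3) * lam_B * B_rms**2 / (gamma**2 * beta)
print(lam_B, lam_exact, D_aa)
```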
It should be noted that our principal results strictly apply only to static turbulence. One should, in principle, consider time-dependent fields as well. However, if the transit time of a particle over a correlation length is shorter than the field variability time-scale, then the static field approximation is valid. Additionally, plasma instabilities generally produce random fields in a preferred direction, leading to anisotropic turbulence. Nonetheless, isotropy may arise in an advanced stage of development. Magnetic turbulence of this kind is a natural outcome of the non-linear Weibel-filamentation instability, which occurs at relativistic collisionless shocks and in laser-produced plasmas \citep{medvedev00, medvedev06, medvedev11}.
\subsection{The ultra-relativistic jitter theory}
\label{s:rel_jit}
Now we consider the radiative properties of these sub-Larmor-scale plasmas. First, the ultra-relativistic radiation regime in sub-Larmor-scale magnetic turbulence is well understood. This regime is characterized by a single parameter, the ratio of the deflection angle, $\alpha_\lambda$ to the relativistic beaming angle, $\Delta\theta \sim 1/\gamma$. The ratio \citep{medvedev00, medvedev11, keenan13}
\begin{equation}
\frac{\alpha_\lambda}{\Delta\theta} \sim \frac{eB_{\perp}\lambda_B}{m_e c^2} \sim 2\pi \frac{e \langle B^2 \rangle^{1/2}}{m_e c^2 k_B} \equiv \delta_j
\label{delta}
\end{equation}
is known as the \emph{jitter parameter}. From this, we recover four distinct radiation regimes. Firstly, if $\delta_j\to\infty$, the regime is the classical synchrotron radiation regime; the particle orbits are circular in the plane orthogonal to a perfectly homogeneous magnetic field. Secondly, with $\delta_j>\gamma$, the regime is very similar to synchrotron, but the particle's guiding center is slowly drifting, due to slight inhomogeneity in the magnetic field. The produced spectrum is well represented by the synchrotron spectrum, and it evolves slowly in time due to the particle diffusion through regions of differing field strength. This regime may be referred to as the diffusive synchrotron regime.
Thirdly, when $1<\delta_j<\gamma$, the particle does not complete its Larmor orbit because the $B$-field varies on a shorter scale. In this case, an onlooking observer would see radiation from only short intervals of the particle's trajectory (i.e., whenever the trajectory is near the line-of-sight), as in synchrotron, but these intervals are randomly distributed. This is the case of the large-angle jitter regime. The radiation is similar to synchrotron radiation near the spectral peak and above, but differs significantly from it at lower frequencies, see Ref. \citep{medvedev11} for details.
Finally, if $\delta_j \ll 1$, a distant observer on the line-of-sight will see the radiation along, virtually, the entire trajectory of the particle (which will be approximately straight with small, random, transverse deviations). This is known as small-angle jitter radiation \citep{medvedev00, medvedev06, medvedev11}. The resulting radiation markedly differs from synchrotron radiation, although the total radiated power, $P_\text{tot}\equiv dW/dt$, produced by a particle in all these regimes, e.g., jitter and synchrotron, is identical:
\begin{equation}
P_\text{tot} = \frac{2}{3} r_e^2c \gamma^2 \langle B_\perp^2 \rangle,
\label{P_tot_rel}
\end{equation}
where $r_e = e^2/m_e c^2$ is the classical electron radius.
For ultra-relativistic electrons, the radiation spectra are wholly determined by $\delta_j$ and the magnetic spectral distribution. It has been shown \citep{medvedev06, medvedev11,RK10,TT11} that monoenergetic relativistic electrons in the sub-Larmor-scale magnetic turbulence given by Eq. (\ref{Bk}) produce a flat angle-averaged spectrum below the spectral break and a power-law spectrum above the break, that is
\begin{equation}
P(\omega) \propto
\left\{\begin{array}{ll}
\omega^0, &\text{if}~ \omega<\omega_j, \\
\omega^{-\mu + 2}, &\text{if}~ \omega_j<\omega<\omega_b, \\
0, &\text{if}~ \omega_b<\omega,
\end{array}\right.
\label{Pomega}
\end{equation}
where the spectral break is
\begin{equation}
\omega_j =\gamma^2 k_\textrm{min} c,
\label{omegaj-kmin}
\end{equation}
which is called the jitter frequency. Similarly, the high-frequency break is
\begin{equation}
\omega_b =\gamma^2 k_\textrm{max} c.
\label{omegab}
\end{equation}
\subsection{Non-relativistic jitter radiation}
\label{s:nonrel_jit}
In contrast, radiation from non-relativistic particles is not beamed along a narrow cone of opening angle, $\Delta\theta$. The jitter parameter is, consequently, without meaning in the non-relativistic radiation regime. Instead, the ``dimensionless scale'' (or ``gyro-number''), i.e.\ $r_L\lambda_B^{-1}$, is the only meaningful parameter:
\begin{equation}
r_L\lambda_B^{-1} \sim k_B{r_L} = k_B\frac{{\gamma}m_e{v}c}{e\langle B^2 \rangle^{1/2}} \equiv \rho.
\label{scale_para}
\end{equation}
Given the magnetic spectral distribution exhibited by Eq. (\ref{Bk}), $k_B \sim k_\text{min}$, so
\begin{equation}
\rho = k_\text{min}r_L.
\label{rho}
\end{equation}
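For orientation, the gyro-number is easily evaluated; the values below (a multi-megagauss rms field and a turbulence scale $k_\text{min}^{-1} \sim 10^{-5}$ cm, loosely motivated by laser-plasma conditions) are assumed purely for illustration:

```python
import numpy as np

e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10  # cgs constants
beta = 0.3                                   # assumed particle velocity, v/c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
B_rms = 1.0e7                                # assumed rms field: 10 MG, in gauss
k_min = 1.0e5                                # assumed longest turbulence scale, cm^-1

r_L = gamma * beta * m_e * c**2 / (e * B_rms)  # Larmor radius in the rms field
rho = k_min * r_L                               # gyro-number
print(r_L, rho)   # sub-Larmor-scale turbulence requires rho > 1
```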
\indent
As we shall see below, the radiation spectrum in this regime markedly differs from the single-harmonic cyclotron spectrum. We call this radiation ``pseudo-cyclotron'' radiation or ``non-relativistic jitter'' radiation.
\indent
Regardless of the regime, the radiation spectrum (which is the radiative spectral energy, $dW$ per unit frequency, $d\omega$, and per unit solid-angle, $d\eta$) seen by a distant observer is obtained from the equation \citep{landau75,jackson99}
\begin{equation}
\frac{d^2W}{d\omega\, d\eta} =
\frac{e^2}{4\pi^2 c} \left|\int_{-\infty}^\infty \! {\bf A}_{\bf k}(t)e^{i\omega{t}}\, \mathrm{d} t
\right|^2,
\label{LW}
\end{equation}
where
\begin{equation}
{\bf A}_{\bf k}(t) \equiv \frac{\hat{\bf n}\times[(\hat{\bf n} - {\boldsymbol\beta}) \times \dot{\boldsymbol\beta} ]}{(1 - \hat{\bf n}\cdot{\boldsymbol\beta})^2}e^{-i{\bf k}\cdot {\bf r}(t)}.
\label{A_k}
\end{equation}
In this equation, ${\bf r}(t)$ is the particle's position at the retarded time $t$, ${\bf k} \equiv \hat{\bf n}\omega/c$ is the wave vector which points along $\hat{\bf n}$ from ${\bf r}(t)$ to the observer and $\dot{\boldsymbol\beta} \equiv \text{d}{\boldsymbol\beta}/\text{d}t$. Since the observer is distant, $\hat{\bf n}$ is approximated as fixed in time to the origin of the coordinate system. This fully relativistic equation is obtained from the Li\'{e}nard-Wiechert potentials. If $v \ll c$, Eq. (\ref{LW}) simplifies to
\begin{equation}
\frac{d^2W}{d\omega\, d\eta} =
\frac{e^2}{4\pi^2 c} \left|\int_{-\infty}^\infty \! \ \hat{\bf n}\times(\hat{\bf n} \times \dot{\boldsymbol\beta}) e^{i\omega{t}}\, \mathrm{d} t
\right|^2.
\label{LW_nonrel}
\end{equation}
Next, integrating Eq. (\ref{LW_nonrel}) over all solid-angles gives the radiated energy per frequency
\begin{equation}
\frac{dW}{d\omega} = \frac{2{e^2}}{3\pi{c^3}}\left|{\bf w}_{\omega}\right|^2,
\label{dWdw}
\end{equation}
where ${\bf w}_{\omega}$ is the Fourier component of the electron's acceleration with frequency, $\omega$. Eq. (\ref{dWdw}), valid for $v \ll c$, is known as the dipole approximation \citep{landau75}. This expression may also be obtained from the Larmor formula, i.e.\
\begin{equation}
P_\text{tot} = \frac{2}{3}\frac{e^2}{c^3}|{\bf w}|^2,
\label{P_tot_nonrel}
\end{equation}
using the identity \citep{landau75}:
\begin{equation}
\frac{1}{2}\int_{-\infty}^\infty \! |{\bf w}(t)|^2\, \mathrm{d} t = (2\pi)^{-1}\int_{0}^\infty \! |{\bf w}_\omega|^2\, \mathrm{d} \omega.
\label{pow_identity}
\end{equation}
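This identity is just Parseval's theorem, and it carries over to the discrete transforms used in any numerical evaluation of the spectra. A quick FFT check on an arbitrary sample record (the record and grid are assumed values; NumPy's \texttt{rfft} sign convention differs from the $e^{i\omega t}$ convention above only in phase, which drops out of the magnitudes):

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 4096, 0.01
w_t = rng.standard_normal(n)        # an arbitrary real "acceleration" record

w_f = np.fft.rfft(w_t) * dt         # discrete stand-in for the Fourier transform
domega = 2 * np.pi / (n * dt)       # angular-frequency bin width

lhs = 0.5 * np.sum(w_t**2) * dt
# rfft stores omega >= 0 only; interior bins also represent their omega < 0 twins
wts = np.full(w_f.size, 2.0)
wts[0] = 1.0
if n % 2 == 0:
    wts[-1] = 1.0
rhs = 0.5 / (2 * np.pi) * np.sum(wts * np.abs(w_f)**2) * domega
print(lhs, rhs)
```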
To proceed further, we use our previous assumption that the particle deflection angle over a field correlation length is small (i.e.\ $\alpha_\lambda \ll 1$). This condition implies the validity of the ``perturbative'' approach, whereby the particle trajectory is approximated as a straight line. For a particle moving in a magnetic field, $\left|{\bf w}_{\omega}\right|^2$ is given by the Lorentz force. In this limiting case of small deflections, we may write
\begin{equation}
\left|{\bf w}_{\omega}\right|^2 = \left(\frac{e\beta}{m_e}\right)^2\left(\delta_{ij} - \hat{v}_i\hat{v}_j\right)B^{i*}_\omega{B^j_\omega},
\label{w_omega}
\end{equation}
where ${{\bf B}_\omega}$ is the temporal variation of the magnetic field along the trajectory of the electron, i.e.\
\begin{equation}
{{\bf B}_\omega} = (2\pi)^{-4} \int \! e^{i\omega{t}} \, \mathrm{d}t \int \! {\bf B}_{{\bf k},\Omega}e^{i{\bf k}\cdot{\bf r}(t) - i\Omega{t}} \, \mathrm{d} {\bf k} \mathrm{d}\Omega.
\label{B_omega_def}
\end{equation}
Since the trajectory is approximately straight, ${\bf r}(t) \approx {\bf r_0} + {\bf v}t$, consequently
\begin{equation}
{{\bf B}_\omega} = (2\pi)^{-4} \int \! e^{i{\bf k}\cdot{\bf r}_0} {\bf B}_{{\bf k}, \Omega} \, \mathrm{d} {\bf k} \, \mathrm{d} {\Omega} \int \! e^{i(\omega+{\bf k}\cdot{\bf v} - \Omega)t } \, \mathrm{d}t.
\label{B_omega_trans}
\end{equation}
After the time integration, this becomes
\begin{equation}
{{\bf B}_\omega} = (2\pi)^{-3} \int \! \delta(\omega + {\bf k}\cdot{\bf v} - \Omega)e^{i{\bf k}\cdot{\bf r}_0} {\bf B}_{{\bf k}, \Omega} \, \mathrm{d} {\bf k} \, \mathrm{d}{\Omega}.
\label{B_omega}
\end{equation}
Now, since the magnetic turbulence is assumed to be homogeneous (at least over a time scale greater than the particle transit time) the product of $B^{i*}_\omega{B^j_\omega}$ along a particular trajectory starting at ${\bf r}_0$ is representative of the magnetic field as a whole \citep{medvedev11}. Thus, we may consider only the volume-average of $B^{i*}_\omega{B^j_\omega}$. Performing the integration leads to
\begin{equation}
\left<B^{i*}_{\omega}B^{j}_{\omega}\right>_{{\bf r}_0} = (2\pi)^{-3}{V}^{-1} \int \! \delta(\omega + {\bf k}\cdot{\bf v} - \Omega){B}^{i}_{{\bf k}, \Omega}{B}^{j*}_{{\bf k}, \Omega} \, \mathrm{d} {\bf k} \, \mathrm{d} {\Omega}.
\label{cor_avg}
\end{equation}
The quantity, $B^{i*}_{{\bf k}, \Omega}B^{j}_{{\bf k}, \Omega}$, is proportional to the Fourier image of the two-point auto-correlation tensor -- i.e.\ Eq. (\ref{spec_tensor}). Thus, with Eqs. (\ref{dWdw}), (\ref{w_omega}), (\ref{cor_avg}), and (\ref{spec_tensor}), the angle-averaged radiation spectrum of a non-relativistic electron moving in static, statistically homogeneous and isotropic sub-Larmor-scale magnetic turbulence is
\begin{equation}
\frac{dW}{d\omega} =
\left(\frac{Tr_e^2\beta^2c}{12\pi^3V}\right)\int \! \delta(\omega + {\bf k}\cdot{\bf v})\left[1 + \left(\hat{\bf k}\cdot\hat{\bf v}\right)^2\right] \left|{\bf B}_k\right|^2\mathrm{d}{\bf k},
\label{nonrel_analy}
\end{equation}
where $T$ is the duration of the observation, and where we have used
\begin{equation}
\delta(\omega + {\bf k}\cdot{\bf v}) = \int \! \delta(\omega + {\bf k}\cdot{\bf v} - \Omega)\delta(\Omega) \, \mathrm{d} {\Omega}.
\label{delta_identity}
\end{equation}
We see that the radiation spectrum is fully determined by the magnetic spectral distribution, $\left|{\bf B}_k\right|^2$. It is instructive to consider one of the simplest such distributions -- the isotropic spectrum of a magnetic field at a single scale, $k_B$, i.e.\
\begin{equation}
|{\bf B}_{\bf k}|^2 = (2\pi)^3V\langle{B^2}\rangle\frac{\delta(k-k_B)}{4\pi{k_B^2}}.
\label{Bk_single}
\end{equation}
Substitution of Eq. (\ref{Bk_single}) into Eq. (\ref{nonrel_analy}) produces the radiation spectrum
\begin{equation}
\frac{dW}{d\omega} =
\left\{\begin{array}{ll}
\frac{T}{3k_B}r_e^2\beta\langle B^2 \rangle \left(1 + \frac{\omega^2}{\omega_\textrm{jn}^2}\right), &\text{if}~ \omega \leq \omega_\textrm{jn}
\\
0, &\text{if}~ \omega > \omega_\textrm{jn},
\end{array}\right.
\label{nonrel_single}
\end{equation}
where $\omega_\textrm{jn} = k_\textrm{B}v$. Given the magnetic spectral distribution of Eq. (\ref{Bk}), the corresponding non-relativistic jitter spectrum, is
\begin{equation}
\frac{dW}{d\omega} \propto
\left\{\begin{array}{ll}
A + D\omega^2, &\text{if}~ \omega \leq \omega_\textrm{jn}
\\
F\omega^{-\mu+2} + G\omega^2 + K, &\text{if}~ \omega_\textrm{jn} < \omega \leq \omega_\textrm{bn}
\\
0, &\text{if}~ \omega > \omega_\textrm{bn},
\end{array}\right.
\label{analy_spec}
\end{equation}
where $\mu \neq 2$ and
\begin{equation}
A \equiv \frac{v}{2-\mu}\left(k_\text{max}^{-\mu+2}-k_\text{min}^{-\mu+2}\right),
\label{A_def}
\end{equation}
\begin{equation}
D \equiv -\frac{1}{v\mu}\left(k_\text{max}^{-\mu}-k_\text{min}^{-\mu}\right),
\label{D_def}
\end{equation}
\begin{equation}
F \equiv v^{\mu-1}\left(\frac{1}{\mu-2} + \frac{1}{\mu}\right),
\label{F_def}
\end{equation}
\begin{equation}
G \equiv - \frac{1}{v\mu}k_\text{max}^{-\mu},
\label{G_def}
\end{equation}
\begin{equation}
K \equiv \frac{v}{2-\mu}k_\text{max}^{-\mu+2},
\label{K_def}
\end{equation}
with the jitter frequency given by the characteristic, and largest, spatial scale
\begin{equation}
\omega_\textrm{jn} = k_\textrm{min}v.
\label{omega_jn_nonrel}
\end{equation}
Finally, the break frequency is indicated by the smallest spatial scale, i.e.\ the maximum wave number
\begin{equation}
\omega_\textrm{bn} = k_\textrm{max}v.
\label{omega_bn_nonrel}
\end{equation}
Notice the structural similarity between the spectrum at frequencies less than $\omega_\textrm{jn}$ and the delta function spectrum in Eq. (\ref{nonrel_single}).
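The piecewise spectrum can be cross-checked numerically: performing the angular part of the delta-function integral in Eq. (\ref{nonrel_analy}) analytically leaves, up to constant factors, $dW/d\omega \propto \int (k + \omega^2/k v^2)\,k^{-\mu}\,\mathrm{d}k$ over $\max(k_\text{min}, \omega/v) < k < k_\text{max}$. The sketch below (arbitrary units; the band and velocity are assumed values) confirms the flat-plus-quadratic shape below $\omega_\textrm{jn}$, continuity at $\omega_\textrm{jn}$, and the cutoff at $\omega_\textrm{bn}$:

```python
import numpy as np

v, mu = 0.3 * 2.998e10, 3.0        # assumed particle speed (cm/s) and spectral index
k_min, k_max = 1.0, 50.0           # assumed band (arbitrary inverse-length units)

def spectrum(w):
    """k-integral left after the angular integration of the delta function."""
    k_lo = max(k_min, w / v)
    if k_lo >= k_max:
        return 0.0                 # no wave numbers can satisfy omega = -k.v
    k = np.linspace(k_lo, k_max, 20001)
    y = (k + w**2 / (k * v**2)) * k**-mu
    return float(np.sum((y[1:] + y[:-1]) * np.diff(k)) / 2)   # trapezoidal rule

w_jn, w_bn = k_min * v, k_max * v
flat, quad = spectrum(0.0), spectrum(0.5 * w_jn)
print(flat, quad)
```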
Next, the total radiated power may be obtained by integrating Eq. (\ref{nonrel_analy}) over all frequencies and dividing by the total observation time, yielding
\begin{equation}
P_{tot} = \frac{2}{3}r_e^2\beta^2c\langle{B_\perp^2}\rangle,
\label{nonrel_power}
\end{equation}
where we have used Eq. (\ref{b_perp}). Compare this to the total power radiated by a non-relativistic electron moving through a uniform magnetic field,
\begin{equation}
P_{tot} = \frac{2}{3}r_e^2\beta^2cB_\perp^2,
\label{cyclo_spec}
\end{equation}
which follows directly from Eq. (\ref{P_tot_nonrel}). Evidently, the total power of non-relativistic jitter radiation is identical to the total power of cyclotron radiation -- with $B^2 \rightarrow \langle B^2 \rangle$; this is exactly analogous to the relation between synchrotron and relativistic jitter radiation.
The radiation spectrum, generalized to any velocity, may be obtained by a formal Lorentz transformation to the electron rest frame. Consider a relativistic electron moving with velocity $\beta$ in the (unprimed) laboratory frame. By employing the Lorentz invariant phase space volume, $d^3k/\omega(k)$ -- the radiation spectra between the two frames can readily be related by the equality \citep{jackson99}
\begin{equation}
\frac{1}{\omega^2}\frac{d^2W}{d\omega{d\eta}} = \frac{1}{\omega'^2}\frac{d^2W'}{d\omega'{d\eta'}}.
\label{spec_invar}
\end{equation}
Thus, the angle-averaged laboratory radiation spectrum is obtained by integration over all solid-angles (in the lab frame) of the electron rest frame spectrum, i.e.\
\begin{equation}
\frac{dW}{d\omega} = \int \! \frac{\omega^2}{\omega'^2}\frac{d^2W'}{d\omega'{d\eta'}} \, \mathrm{d} {\eta}.
\label{spec_def}
\end{equation}
We consider, once more, that the electron moves along a straight path, experiencing only small deviations in its trajectory. Consequently, we consider a Lorentz boost of the laboratory coordinates along the trajectory (z-axis). In the electron's rest frame, the field turbulence has both a time-dependent magnetic and electric component. However, since the electron is at rest in this frame, only the electric field contributes to the instantaneous particle acceleration. Via Lorentz transformation of the laboratory magnetic field, the co-moving electric field is simply
\begin{equation}
{\bf E}'({\bf x}', t') = \gamma{\boldsymbol\beta}\times{\bf B}({\bf r}),
\label{rest_electric}
\end{equation}
where ${\bf r}(t) = {\bf r_0} + {\bf v}t$. Since the electron is instantaneously at rest in this frame, we may choose ${\bf x}' = 0$; thus, $t = \gamma{t'}$. The corresponding equation of motion, for the electron, is then
\begin{equation}
m_e{\bf w}'(t') = e{\bf E}'(0, t') = e\gamma{\boldsymbol\beta}\times{\bf B}({\bf r}).
\label{rest_eom}
\end{equation}
As before, the radiation spectrum in the rest frame is given by the Dipole approximation, Eq. (\ref{LW_nonrel}). Substitution of these results into Eq. (\ref{spec_def}) leads to
\begin{equation}
\frac{dW}{d\omega} = \frac{e^2}{4\pi^2\gamma^2c^3} \int \! \frac{\left|{\bf w}'_{\omega'}\right|^2\sin^2\Theta'}{(1 - \beta\cos\theta)^2}\, \mathrm{d} {(\cos\theta)} \, \mathrm{d} {\phi},
\label{spec_def_sub}
\end{equation}
where $\Theta'$ is the angle between the wave and acceleration vectors in the electron rest frame, and we have used the relativistic Doppler formula $\omega' = \gamma\omega(1 - \hat{\bf n}\cdot{\boldsymbol\beta})$. Next, given the equivalent form of Eq. (\ref{rest_eom}) to the lab frame equation of motion, Eq. (\ref{w_omega}), the acceleration term is given by the non-relativistic jitter spectrum with the substitution, $\omega' \rightarrow \omega'/\gamma = \omega(1 - \beta\cos\theta)$.
The final task is to perform the integration. However, the angle $\Theta'$ must first be related to the laboratory $\theta$ and $\phi$ coordinates -- which are derived from the angle between the wave vector and the velocity, and the azimuthal angle with respect to the boost axis, respectively. With a transverse acceleration, these angles are related by \citep{rybicki}
\begin{equation}
\sin^2{\Theta}' = 1 - \frac{\sin^2\theta\cos^2\phi}{\gamma^2(1 - \beta\cos\theta)^2},
\label{angle_transform}
\end{equation}
with $\phi' = \phi$. Thus, the angle-averaged (velocity-independent) jitter spectrum is given by the following integration of the non-relativistic jitter spectrum
\begin{equation}
\frac{dW}{d\omega} = \frac{3}{8\gamma^2} \int_{-1}^1 \! \mathrm{d} {x} \left[\frac{1}{(1 - \beta{x})^2} + \frac{(x-\beta)^2}{(1-\beta{x})^4}\right]I(\omega_0),
\label{jitter_vel_free}
\end{equation}
where $I(\omega_0)$ is the non-relativistic jitter spectrum, e.g.\ Eq. (\ref{nonrel_analy}), evaluated at $\omega_0 \equiv \omega(1 - \beta{x})$. This result leads to the traditional, ultra-relativistic, jitter spectrum in the limit of $\beta \rightarrow 1$ (or, equivalently, $\gamma \rightarrow \infty$). In the trans-relativistic regime, the characteristic frequencies, Eqs. (\ref{omegaj-kmin}) and (\ref{omegab}), generalize to
\begin{equation}
\omega_\textrm{jn} \equiv \gamma^2k_\textrm{min}v,
\label{omega_jn}
\end{equation}
and
\begin{equation}
\omega_\textrm{bn} \equiv \gamma^2k_\textrm{max}v,
\label{omega_bn}
\end{equation}
which are the (trans-relativistic) jitter and break frequencies, respectively. It is noteworthy that $\omega_\text{bn}$ is not a proper break frequency in the mildly relativistic regime: the spectrum quickly falls to zero beyond $\omega_\text{bn}$, but the drop is not instantaneous (as it is in the ultra-relativistic limit). In the trans-relativistic regime, $\gamma \simeq 1$, of course. With this in mind, and for the sake of convenience, we retain the $\text{n}$ subscript for both the trans-relativistic and non-relativistic expressions.
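As a concrete illustration, the angle integration in Eq. (\ref{jitter_vel_free}) can be carried out numerically for any model non-relativistic spectrum. The Python sketch below (arbitrary units) uses a simple flat-plus-power-law stand-in for $I(\omega_0)$ -- an illustrative model, not the actual Eq. (\ref{nonrel_analy}) -- purely to demonstrate the procedure; in the $\beta \rightarrow 0$ limit the kernel integrates to $8/3$ and the non-relativistic spectrum is recovered.

```python
import numpy as np

def trans_rel_spectrum(omega, beta, I_nonrel, n_x=2001):
    """Angle-averaged trans-relativistic jitter spectrum, Eq. (jitter_vel_free):
    dW/domega = 3/(8 gamma^2) * int_{-1}^{1} dx
                [ (1 - beta x)^-2 + (x - beta)^2 (1 - beta x)^-4 ] I(omega (1 - beta x))."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    x = np.linspace(-1.0, 1.0, n_x)
    kern = 1.0/(1.0 - beta*x)**2 + (x - beta)**2/(1.0 - beta*x)**4
    integrand = kern * I_nonrel(omega * (1.0 - beta*x))
    # trapezoidal rule over x
    integral = float(np.sum(0.5*(integrand[1:] + integrand[:-1]) * np.diff(x)))
    return 3.0/(8.0*gamma**2) * integral

def I_model(w, w_j=1.0, w_b=10.0, mu=3.0):
    """Illustrative non-relativistic spectrum: flat below w_j, power-law
    decay up to the break w_b, zero beyond (NOT Eq. nonrel_analy)."""
    w = np.asarray(w, dtype=float)
    out = np.where(w <= w_j, 1.0, (w/w_j)**(-(mu - 1.0)))
    return np.where(w <= w_b, out, 0.0)

freqs = np.logspace(-1, 1.5, 50)
spec = np.array([trans_rel_spectrum(w, beta=0.5, I_nonrel=I_model) for w in freqs])
```

At $\beta = 0$ the routine returns $I(\omega)$ itself, which provides a quick sanity check of the kernel normalization.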
From Eqs. (\ref{analy_spec}), (\ref{jitter_vel_free}), and (\ref{Daa_def}), we see that an inter-relation exists between the diffusive and radiative properties of trans-relativistic/non-relativistic plasmas with sub-Larmor-scale magnetic turbulence. Furthermore, this inter-relation owes its existence to the statistical properties of the magnetic turbulence (e.g.\ $\langle B^2 \rangle$ and $\lambda_B$). We note, however, that our radiation treatment assumes small deflections; an assumption that allowed the use of the so-called perturbation theory. Recent work (see Ref. \citep{kelner13}) has considered a formal treatment of the perturbation theory that exclusively requires that the deflection angle over a correlation length is small, i.e.\ $\alpha_\lambda \ll 1$. Due to continued diffusive scatterings of the electron, its path will eventually deviate strongly from its initial trajectory. The traditional perturbative approach, regardless, remains valid so long as the trajectory remains approximately straight over the radiation formation length, at least for the considered domain of frequencies (i.e.\ lower frequencies will, inevitably, require a non-perturbative treatment). In the non-relativistic limit, the formation length is $\sim k^{-1}$. This must be less than, or comparable to, the magnetic correlation length $\lambda_B$. With the characteristic frequency $\omega_\text{jn}$, this length is $\sim \lambda_B/\beta$. Consequently, as long as the particle velocity is not arbitrarily small, the perturbative approach will be valid, provided $\alpha_\lambda$ is indeed small. By way of numerical simulation, we will demonstrate that this condition holds as long as $\rho > 1$ (i.e.\ the turbulence is sub-Larmor in scale).
Finally, our results do not consider the dispersive effect of the surrounding plasma. An account of dispersion will modify the radiation spectrum by a multiplication of Eq. (\ref{dWdw}) by the square root of the frequency-dependent scalar permittivity, $\epsilon(\omega)$. The scalar dielectric permittivity at high frequencies is \citep{jackson99, rybicki}
\begin{equation}
\epsilon(\omega) = 1 - \frac{\omega_\text{pe}^2}{\omega^2},
\label{epsilon_def}
\end{equation}
where $\omega_\text{pe}$ is the plasma frequency. Eq. (\ref{epsilon_def}) holds formally for $\omega^2 \gg \omega_\text{pe}^2$ in any dielectric medium; although it holds for cold, non-magnetized, isotropic plasmas for a wide domain of frequencies -- including $\omega < \omega_\text{pe}$ \citep{rybicki}. In a magnetized plasma, additional terms including the ambient ``mean'' magnetic field appear in the permittivity tensor. As previously mentioned, the Weibel-like magnetic turbulence can occur in a non-magnetized environment, thus we ignore any ``mean'' field here. Hence, we will consider an extension of Eq. (\ref{epsilon_def}) to low frequencies ($\omega \sim \omega_\text{pe}$).
The plasma dispersion effect is only important for frequencies $\omega \ll \gamma\omega_\text{pe}$ -- below which, suppression of relativistic beaming (due to the Razin effect) occurs \citep{jackson99, rybicki}. Electron driven Weibel-like turbulence occurs on a very small-scale, with $\lambda_B \sim d_e$ (where $d_e \equiv c/\omega_\text{pe}$ is the electron skin depth) \citep{medvedev09c, TT11}. Consequently, in the ultra-relativistic regime, the jitter frequency is many orders of magnitude larger than the plasma frequency -- by a factor $\sim \gamma^2$. However, in the non-relativistic and trans-relativistic regimes, dispersion can play a considerable role. This will especially be so for $\beta \ll 1$. In this case, a considerable portion of the radiation spectrum may fall below $\omega_\text{pe}$, and thus be unobservable. For simplicity and convenience, we have ignored the plasma dispersion in our numerical simulations. However, we consider a few cases with plasma dispersion intact, both numerically and theoretically, in Appendix \ref{s:appendixc}.
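For reference, the dispersive correction described above amounts to a simple multiplicative factor on the vacuum spectrum. A minimal sketch (Python, arbitrary units; the flat input spectrum and $\omega_\text{pe} = 1$ are purely illustrative) showing the $\sqrt{\epsilon(\omega)}$ suppression and the cutoff below the plasma frequency:

```python
import numpy as np

def apply_plasma_dispersion(omega, dW_domega, omega_pe):
    """Multiply a vacuum spectrum by sqrt(epsilon(omega)), with
    epsilon = 1 - omega_pe^2/omega^2 (Eq. epsilon_def). Radiation with
    omega <= omega_pe cannot propagate, so the spectrum is zeroed there."""
    eps = 1.0 - (omega_pe / np.asarray(omega, dtype=float))**2
    factor = np.where(eps > 0.0, np.sqrt(np.clip(eps, 0.0, None)), 0.0)
    return factor * dW_domega

omega = np.linspace(0.1, 10.0, 100)
vacuum = np.ones_like(omega)                       # toy flat vacuum spectrum
observed = apply_plasma_dispersion(omega, vacuum, omega_pe=1.0)
```

Well above $\omega_\text{pe}$ the correction tends to unity, so only the low-frequency end of the spectrum is affected.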
\section{Numerical model}
\label{s:model}
Using the method from our previous work (see Ref. \citep{keenan13}), here we explore the inter-relation between the diffusive and radiative properties of these plasmas, and thereby test our theoretical predictions. As before, this was done via first-principles simulations of particle dynamics in sub-Larmor-scale magnetic turbulence. Non-relativistic and trans-relativistic electrons are test particles moving in preset magnetic fields defined over a 3D simulation box, with periodic boundary conditions in all directions. The particles do not interact with each other, as in collisionless plasmas, nor do they induce any fields. Additionally, any radiative energy losses are neglected. An individual electron's motion is, consequently, determined solely by the Lorentz force equation:
\begin{equation}
\frac{d{\boldsymbol\beta}}{dt} = -\frac{1}{\gamma}\left({\boldsymbol\beta}\times{\boldsymbol\Omega}_B\right),
\label{dvdt}
\end{equation}
where ${\boldsymbol\Omega}_B \equiv e{\bf B}/m_e{c}$. For simplicity, we define our simulation magnetic field as ${\bf B} \equiv {\boldsymbol\Omega}_B$. In this manner, our arbitrary simulation units are always related to a physical magnetic field via the definition of ${\boldsymbol\Omega}_B$. Notice that the purely magnetic Lorentz force conserves particle energy; hence, the velocity vector varies in direction but has a constant magnitude.
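Since the purely magnetic Lorentz force conserves $|\boldsymbol\beta|$, any faithful integrator must preserve the particle speed to high accuracy, and this provides a simple correctness check. The sketch below (Python, simulation units) integrates Eq. (\ref{dvdt}) with a generic fixed-step RK4 scheme; note this is a stand-in for the Runge-Kutta-Nystr\"om method used in our actual code, and a uniform field stands in for the interpolated turbulent field:

```python
import numpy as np

def lorentz_rhs(beta_vec, Omega_B, gamma):
    """Eq. (dvdt): d(beta)/dt = -(1/gamma) * (beta x Omega_B)."""
    return -np.cross(beta_vec, Omega_B) / gamma

def rk4_step(beta_vec, Omega_B, gamma, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorentz_rhs(beta_vec, Omega_B, gamma)
    k2 = lorentz_rhs(beta_vec + 0.5*dt*k1, Omega_B, gamma)
    k3 = lorentz_rhs(beta_vec + 0.5*dt*k2, Omega_B, gamma)
    k4 = lorentz_rhs(beta_vec + dt*k3, Omega_B, gamma)
    return beta_vec + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

beta = np.array([0.1, 0.0, 0.05])
gamma = 1.0 / np.sqrt(1.0 - beta @ beta)   # constant, since |beta| is conserved
Omega = np.array([0.0, 0.0, 0.01])         # uniform field (illustrative)
speed0 = np.linalg.norm(beta)
for _ in range(1000):
    beta = rk4_step(beta, Omega, gamma, dt=0.05)
```

After many steps the speed, and hence $\gamma$, should be unchanged to within integration error.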
The simulation can be divided into two principal stages (see Ref \citep{keenan12} for a detailed description of the numerical implementation). First, the turbulent magnetic field is created using a predetermined spectral distribution in Fourier space. This field is generated on a cubic lattice that is then interpolated, so as to represent a ``continuous'' field. The interpolation implements divergenceless matrix-valued radial basis functions (see Ref. \citep{mcnally11}, for a discussion). This interpolation method starts with a radial function -- in our case, one of the simplest, $\phi({\bf r}) = e^{-\epsilon{r}^2}$ (where $\epsilon$ is a scaling factor, and $r^2 = x^2 + y^2 + z^2$). Then, a set of divergence-free matrix-valued radial basis functions is obtained from the transformation \citep{mcnally11}:
\begin{equation}
\Phi({\bf r}) = (\nabla\nabla^T - \mathbb{I}_{3\times3}{\nabla^2})\phi({\bf r}),
\label{rad_basis}
\end{equation}
where $\nabla\nabla^T$ is the second-order, $3\times3$-matrix differential operator and $\mathbb{I}_{3\times3}$ is the $3\times3$ identity matrix.
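For the Gaussian radial function, the transformation in Eq. (\ref{rad_basis}) can be carried out analytically: differentiating $\phi = e^{-\epsilon r^2}$ gives $\Phi_{ij} = \left[4\epsilon^2 x_i x_j + (4\epsilon - 4\epsilon^2 r^2)\delta_{ij}\right]e^{-\epsilon r^2}$. The Python sketch below evaluates this form and verifies numerically, by central finite differences, that each column of $\Phi$ is divergence-free (the evaluation point and step size are arbitrary):

```python
import numpy as np

EPS = 1.0  # the scaling factor epsilon in phi(r) = exp(-EPS r^2)

def Phi(r):
    """Divergence-free matrix-valued basis, Eq. (rad_basis), worked out
    analytically for the Gaussian phi:
    Phi_ij = [4 e^2 x_i x_j + (4 e - 4 e^2 r^2) delta_ij] exp(-e r^2)."""
    r = np.asarray(r, dtype=float)
    r2 = r @ r
    return (4*EPS**2*np.outer(r, r)
            + (4*EPS - 4*EPS**2*r2)*np.eye(3)) * np.exp(-EPS*r2)

def column_divergence(r, j, h=1e-5):
    """Central-difference estimate of sum_i dPhi_ij/dx_i at the point r."""
    div = 0.0
    for i in range(3):
        step = np.zeros(3)
        step[i] = h
        div += (Phi(r + step)[i, j] - Phi(r - step)[i, j]) / (2*h)
    return div

r0 = np.array([0.3, -0.7, 0.2])
divs = [column_divergence(r0, j) for j in range(3)]
```

An interpolant built from divergence-free columns inherits the divergence-free property, which is why this construction is used for the magnetic field.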
These interpolants are then applied to the interior of each lattice ``cell'' (the significance of the interpolant's divergence is explored in Appendix \ref{s:appendixb}). The second stage in our model involves the numerical solution of the equation of motion for each particle, i.e.\ Eq. (\ref{dvdt}). From the solution, $\langle\alpha^2\rangle$ and the radiation spectra are obtained. We first turn our attention to the generation of the magnetic field.
As discussed previously (see Ref. \citep{keenan13}), generation of the magnetic field distribution is more convenient in Fourier space. There are two principal reasons for this.
Firstly, it is an easier task to specify a particular spectral distribution in Fourier space directly, rather than attempting to approximate the corresponding field in real space. Secondly, any physically realizable field should satisfy Maxwell's equations, thus its divergence must be zero. This divergenceless condition is more readily met in Fourier space, because Gauss' law, $\nabla\cdot{\bf B}=0$, is an algebraic equation there; ${\bf k}\cdot{\bf B_k}=0$, for each Fourier component. Although our code can handle a wide variety of magnetic spectral distributions, we limit our study to isotropic magnetic turbulence, described in Eq. (\ref{Bk}) -- leaving more sophisticated models for the future.
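A minimal sketch of this generation step (Python; the grid size, spectral index, and random seed are illustrative). Random complex amplitudes are drawn with an isotropic power-law envelope, the longitudinal part of each ${\bf B_k}$ is projected out so that ${\bf k}\cdot{\bf B_k}=0$, and the field is transformed to real space:

```python
import numpy as np

rng = np.random.default_rng(7)
N, mu = 32, 3.0

k = 2*np.pi*np.fft.fftfreq(N)
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                           # avoid division by zero at k = 0
kvec = np.stack([kx, ky, kz])               # shape (3, N, N, N)

# random complex amplitudes with |B_k|^2 ~ k^-mu (isotropic power law)
amp = np.sqrt(k2)**(-mu/2.0)
amp[0, 0, 0] = 0.0                          # no mean field
Bk = amp * (rng.standard_normal((3, N, N, N))
            + 1j*rng.standard_normal((3, N, N, N)))

# project out the longitudinal component: Gauss' law, k . B_k = 0
kdotB = (kvec * Bk).sum(axis=0)
Bk -= kvec * kdotB / k2

B = np.fft.ifftn(Bk, axes=(1, 2, 3)).real   # real-space field on the lattice
```

After the projection, ${\bf k}\cdot{\bf B_k}$ vanishes to machine precision for every mode.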
After the magnetic field is generated, the next step is the numerical solution of the equation of motion, Eq. (\ref{dvdt}). This was done via a fixed step 4$^\text{th}$-order Runge-Kutta-Nystr\"om method. With all the particle positions, velocities, and accelerations calculated, the radiation spectrum is obtained from Eq. (\ref{LW}).
Next, the total radiation spectrum is obtained by ``summing'' over the spectra of the individual particles. There are two, usually equivalent, methods for doing the summation. First, one can add the spectra coherently by summing over each particle's ${\bf A}_{\bf k}$, and then performing a single integration via Eq. (\ref{LW}). This is a more physical method. In the second method we add the spectra incoherently (i.e., by integrating each particle's ${\bf A}_{\bf k}$ separately, and then summing the results of each integration). As discussed in Ref. \citep{hededal05}, both methods will result in the same spectra, since the wave phases are uncorrelated. However, an incoherent sum will produce spectra that are less noisy, for a given number of simulation particles, than the coherently summed spectra. Hence we use the incoherent approach in our study.
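The equivalence of the two summation methods for uncorrelated phases is easily demonstrated with a toy calculation (Python; the amplitudes and trial count are arbitrary). The phase-averaged coherent power $\langle|\sum_i a_i e^{i\varphi_i}|^2\rangle$ converges to the incoherent sum $\sum_i |a_i|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
Np, trials = 100, 4000
mags = rng.uniform(0.5, 1.5, Np)       # fixed per-particle amplitudes |a_i|

# incoherent sum: integrate (here, square) each amplitude, then add
incoherent = np.sum(mags**2)

# coherent sum with random, uncorrelated phases, averaged over realizations
coh = np.empty(trials)
for t in range(trials):
    phases = rng.uniform(0.0, 2*np.pi, Np)
    coh[t] = np.abs(np.sum(mags * np.exp(1j*phases)))**2
coherent_mean = coh.mean()
```

A single coherent realization fluctuates strongly (relative scatter of order unity), which is why the incoherent sum is the less noisy estimator for a finite number of particles.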
\section{Numerical results}
\label{s:results}
In Section \ref{s:analytic} we made a number of theoretical predictions concerning the transport and radiation properties of plasmas with small-scale turbulent magnetic fields. Additionally, we anticipated that an inter-connection between the transport and radiative properties of non-relativistic/trans-relativistic particles moving through sub-Larmor-scale magnetic turbulence exists, as it does for ultra-relativistic particles \citep{keenan13}. Here we check our predictions, and further explore the radiation spectra.
First of all, we explore how the pitch-angle diffusion coefficient depends on various parameters, cf. Eq. (\ref{Daa_def}), namely the particle's velocity, $\beta$, the magnetic field strength, $\langle B^2 \rangle$, the field correlation scale, $\lambda_B$, and the ``gyro-number'', $\rho$.
To start, we tested our fundamental assumption that the particle velocity vector only varies slightly over a correlation length, $\lambda_B$. This is the key assumption that underlies our theoretical predictions for pitch-angle diffusion and the radiation spectra. If this assumption were not to hold (i.e.\ if $\alpha_\lambda$ is not small), then pitch-angle diffusion would break down, and the plot of $\langle \alpha^2 \rangle$ vs time would deviate from linear behavior. In Figure \ref{alpha_break}, $\langle \alpha^2 \rangle$ is plotted as a function of time for seven different cases. In each run, $\langle B^2 \rangle$, $k_\text{min}$, and $N_p$ (number of simulation particles) are fixed to the values of $0.01$, $4\pi/5$ (both in arbitrary simulation units), and $2000$, respectively. The particles are monoenergetic, and are isotropically distributed in their initial velocities. The cases differ in particle velocity, which ranges from $\frac{1}{512}c$ to $\frac{1}{8}c$. As can be seen, the curves begin as straight lines whose slopes increase as $\beta$ decreases. Eventually, the linear behavior breaks down as $\beta$ decreases; a decrease in $\rho$ occurs concurrently, in accordance with Eq. (\ref{scale_para}). As expected, the breakdown in linear behavior, and hence diffusion, occurs when $\rho \sim 1$.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{alpha_break}
\caption{(Color online) Average square pitch-angle vs. time (in simulation units). Relevant parameters are $N_p = 2000$, $k_\text{min} = 4\pi/5$, $k_\text{max} = 8\pi$, $\langle B^2 \rangle^{1/2} = 0.01$, and $\mu = 3$. The particle velocities in each case range from $\frac{1}{8}c$ to $\frac{1}{512}c$ (by multiples of two). The curves appear with increasing average slope as $\beta$ decreases. As $\beta$ decreases, eventually $\rho \sim 1$ (at $\beta = \frac{1}{128}$, i.e.\ the fifth most sloped, ``green'' line), after which the deflection angle becomes large, and pitch-angle diffusion breaks down.}
\label{alpha_break}
\end{figure}
Later, we did the same experiment, only this time we varied $\langle B^2 \rangle$ in such a way as to keep $\rho$ constant ($\rho = 24.5$). In this way, each case is securely in the small-scale regime. In Figure \ref{alpha_restore}, we see that the linear behavior of $\langle \alpha^2 \rangle$ vs time is preserved for all velocities, as anticipated. Consequently, our assumption of a small $\alpha_\lambda$ is valid, as long as $\rho > 1$.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{alpha_restore}
\caption{(Color online) Average square pitch-angle vs. time (in simulation units). Relevant parameters are $N_p = 2000$, $k_\text{min} = \pi$, $k_\text{max} = 8\pi$, and $\mu = 3$. $\langle B^2 \rangle^{1/2}$ ranges from $5\times10^{-4}$ to $0.032$, by multiples of two. The particle velocities range (in the opposite order) from $\frac{1}{256}c$ to $\frac{1}{4}c$. These two parameters, $\langle B^2 \rangle$ and $\beta$, vary in such a way as to keep $\rho = 24.5$. The curves appear with increasing slope as $\beta$ decreases. Clearly, the linear form of the curves is retained in all seven cases.}
\label{alpha_restore}
\end{figure}
With the existence of pitch-angle diffusion established, we then proceeded to compare the slope of $\langle \alpha^2 \rangle$ vs time (the numerical pitch-angle diffusion coefficient) to Eq. (\ref{Daa_def}). In Figure \ref{diff_v}, the numerically obtained diffusion coefficients from Figure \ref{alpha_restore} are compared to the analytical result of Eq. (\ref{Daa_def}). In each, the theoretical and numerical results differ only by a small factor of ${\cal O}(1)$.
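The numerical diffusion coefficient is extracted as the slope of a linear fit to $\langle\alpha^2\rangle$ vs time. A toy version of this procedure (Python; the small-angle random-walk model and the value of $D$ are illustrative stand-ins, not simulation data) shows that the fitted slope recovers the underlying coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
Np, nsteps, dt = 2000, 500, 0.01
D_true = 0.4                            # toy pitch-angle diffusion coefficient

# small-angle scattering as a random walk in pitch angle:
# each step adds a Gaussian kick of variance D_true*dt, so <alpha^2> = D_true*t
alpha = np.zeros(Np)
mean_sq = np.empty(nsteps)
for n in range(nsteps):
    alpha += np.sqrt(D_true*dt) * rng.standard_normal(Np)
    mean_sq[n] = np.mean(alpha**2)

t = dt * np.arange(1, nsteps + 1)
D_fit = np.polyfit(t, mean_sq, 1)[0]    # slope = numerical D_alpha_alpha
```

The same slope extraction is applied to the $\langle\alpha^2\rangle$ curves of Figures \ref{alpha_break} and \ref{alpha_restore}.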
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{diff_v}
\caption{(Color online) Pitch-angle diffusion coefficient, $D_{\alpha\alpha}$, vs the logarithm (base 2) of the inverse normalized particle velocity, $\log_2(\beta^{-1})$. The (blue) empty ``squares'' indicate the $D_{\alpha\alpha}$ obtained directly from simulation (as the slope of $\langle\alpha^2\rangle$ vs. time), while the (red) filled ``triangles'' are the analytical pitch-angle diffusion coefficients, given by Eq. (\ref{Daa_def}). Simulation parameters are identical to those used in Figure \ref{alpha_restore}.}
\label{diff_v}
\end{figure}
Next, we tested the correlation length dependence, i.e.\ whether or not the numerical simulations agree with Eq. (\ref{corr_l}). With $k_\text{min} = \pi$ and $k_\text{max} = 8\pi$, we varied the magnetic spectral index, $\mu$ from $2$ to $5$. This is plotted in Figure \ref{diff_mu}, where the numerical diffusion coefficient closely matches the analytical result.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{diff_mu}
\caption{(Color online) Pitch-angle diffusion coefficient, $D_{\alpha\alpha}$, vs the magnetic spectral index, $\mu$. The (blue) empty ``squares'' indicate the $D_{\alpha\alpha}$ obtained directly from simulation, while the (red) filled ``triangles'' are the analytical pitch-angle diffusion coefficients, given by Eq. (\ref{Daa_def}). Relevant parameters are $N_p = 2000$, $k_\text{min} = \pi$, $k_\text{max} = 8\pi$, $\langle B^2 \rangle^{1/2} = 0.064$, $\beta = 0.5$, and $\rho = 24.5$. The magnetic spectral indexes are $2$, $3$, $4$, and $5$. Notice that the numerical results have nearly the same functional dependence on $\mu$ as the analytical triangles, as given by Eq. (\ref{Daa_def}).}
\label{diff_mu}
\end{figure}
In Figure \ref{diff_numvsan}, the numerical diffusion coefficient is plotted against the analytical coefficient for the same range of $\mu$ values, but now the $k_\text{min}$, $k_\text{max}$, and $\beta$ values differ among the three (with $\rho$ fixed to $24.5$). Included are the results of Figure \ref{diff_mu}. All three cases give a nearly linear relationship between the numerical and analytical coefficients, with slopes approximately equal to unity.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{diff_numvsan}
\caption{(Color online) Numerical pitch-angle diffusion coefficient vs the analytical pitch-angle diffusion coefficient, for three different cases. In each case, the magnetic spectral index has been varied from $2$ to $5$, by intervals of unity. Relevant parameters are $k_\text{min} = \pi/2$ (red) ``circles'' and (blue) ``triangles'', $\pi$ (green) ``diamonds''; $k_\text{max} = 5.12\pi$ (red) ``circles''; $k_\text{max} = 8\pi$ (green) ``diamonds'' and (blue) ``triangles''; $\langle B^2 \rangle^{1/2} = 0.016$ (red) ``circles'', $0.032$ (blue) ``triangles'', $0.064$ (green) ``diamonds''; $\beta = 0.25$ (red) ``circles'', $0.5$ (blue) ``triangles'' and (green) ``diamonds''. In each case, a line of best fit is applied. The slopes are $0.979$ (circles), $0.972$ (diamonds), and $1.06$ (triangles).}
\label{diff_numvsan}
\end{figure}
Another concern worth addressing is the dependence of the numerical diffusion coefficient on the total number of simulation particles. In Figure \ref{diff_num_par}, a test case was repeated with an increasing number of simulation particles. The number of particles was increased from $500$ to $64000$, by factors of $2$. There is little variation to be seen in the numerical result, as the number of particles is increased.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{diff_num_par}
\caption{(Color online) Pitch-angle diffusion coefficient, $D_{\alpha\alpha}$ vs the total number of simulation particles, $N_p$. The ``blue squares'' indicate the $D_{\alpha\alpha}$ obtained directly from simulation, while the red dotted line is the analytical result, given by Eq. (\ref{Daa_def}). Relevant parameters are $k_\text{min} = \pi/2$, $k_\text{max} = 8\pi$, $\langle B^2 \rangle^{1/2} = 0.032$ , $\beta = 0.5$, and $\rho = 24.5$. There appears to be no strong dependence of the numerical pitch-angle diffusion coefficient upon the total number of simulation particles; nevertheless, there appears to be some convergence to the analytical result.}
\label{diff_num_par}
\end{figure}
Next, we explored the trans-relativistic jitter radiation regime by calculating the radiation spectra, using Eq. (\ref{LW}), with variable simulation parameters. We aimed to test the radiation spectra's dependence upon the key turbulent parameters: $k_\text{min}$, $k_\text{max}$, $\langle B^2 \rangle$, and $\mu$, as well as the particle velocity, $v$. To start, we considered the $k_\text{min}$ dependence. In Figure \ref{kmin_spec}, we have plotted spectra for an initially isotropically distributed, monoenergetic, ensemble of trans-relativistic electrons ($v = 0.5c$) moving through sub-Larmor-scale magnetic turbulence with three different values of $k_\text{min}$. The key parameters are: $\rho = 18.1$, $36.3$, and $72.6$, with $k_\text{min} = \pi/5$, $2\pi/5$, and $4\pi/5$, respectively (see Table \ref{spec_para_table} for a complete listing of simulation parameters used in every figure). The spectra of Figure \ref{kmin_spec}, at least superficially, resemble our theoretical prediction; cf. Eq. (\ref{analy_spec}). We have normalized the $dW/d\omega$ and $\omega$ axes by $\lambda_B$ and $k_\text{min}$, respectively. As expected, the frequency of the spectral peak scales by $k_\text{min}$.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{kmin_spec}
\caption{(Color online) Radiation spectra given variable $k_\text{min}$, with all other parameters fixed. The number of simulation particles, $N_p$, is $2000$, and $v = 0.5c$ in each case. In each trial, the particles moved for a total simulation time of $T = T_g$, where $T_g \equiv 2\pi\gamma{m_e}c/e\langle B^2 \rangle^{1/2}$ is the ``gyroperiod''. Here, the axes are in arbitrary, simulation units. We see that the frequency scales as $k_\text{min}$ and $dW/d\omega$ scales as $\lambda_B$.}
\label{kmin_spec}
\end{figure}
The precise scaling of the peak frequency is revealed in Figure \ref{v_spec}. In this figure, we have varied the particle velocities, keeping all other parameters fixed. Three velocities appear: $v = 0.125c$, $0.25c$, and $0.5c$. Clearly, the overall shape of the spectra is not strongly dependent upon the particle velocities. We have identified the proper scaling on the horizontal axis. With this result, and Figure \ref{kmin_spec}, we may conclude that the frequency of the peak of the radiation spectrum is $\omega \sim \gamma^2k_\text{min}v = \omega_\text{jn}$. This is the jitter frequency given in Eq. (\ref{omega_jn}).
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{*{10}{c}}
\toprule
$\#$ & $\rho$ & $\Delta{t}$ & $\beta$ & $\mu$ & $k_\text{min}$ & $k_\text{max}$ & $\sqrt{\langle B^2 \rangle}$ & $N_p$ & $\text{T}_g$ \\
\midrule
$\ref{kmin_spec}$ & $18.1$ & $0.005$ & $0.5$ & $3$ & $\pi/5$ & $10.24\pi$ & $0.02$ & $2000$ & $1$ \\ \cline{1-10}
$\ref{kmin_spec}$ & $36.3$ & $0.005$ & $0.5$ & $3$ & $2\pi/5$ & $10.24\pi$ & $0.02$ & $2000$ & $1$ \\ \cline{1-10}
$\ref{kmin_spec}$ & $72.6$ & $0.005$ & $0.5$ & $3$ & $4\pi/5$ & $10.24\pi$ & $0.02$ & $2000$ & $1$ \\ \cline{1-10}
$\ref{v_spec}$ & $15.8$ & $0.050$ & $0.125$ & $3$ & $4\pi/5$ & $10.24\pi$ & $0.02$ & $1000$ & $10$ \\ \cline{1-10}
$\ref{v_spec}$ & $32.4$ & $0.050$ & $0.25$ & $3$ & $4\pi/5$ & $10.24\pi$ & $0.02$ & $1000$ & $10$ \\ \cline{1-10}
$\ref{v_spec}$ & $72.6$ & $0.050$ & $0.5$ & $3$ & $4\pi/5$ & $10.24\pi$ & $0.02$ & $1000$ & $10$ \\ \cline{1-10}
$\ref{mu_spec}$ & $6.18$ & $0.005$ & $0.125$ & $4$ & $\pi$ & $8\pi$ & $0.064$ & $8000$ & $1$ \\ \cline{1-10}
$\ref{mu_spec}$ & $6.18$ & $0.005$ & $0.125$ & $5$ & $\pi$ & $8\pi$ & $0.064$ & $8000$ & $1$ \\ \cline{1-10}
$\ref{kmax_spec}$ & $6.34$ & $0.005$ & $0.25$ & $5$ & $\pi/2$ & $4\pi$ & $0.064$ & $2000$ & $1$ \\ \cline{1-10}
$\ref{kmax_spec}$ & $6.34$ & $0.005$ & $0.25$ & $5$ & $\pi/2$ & $8\pi$ & $0.064$ & $2000$ & $1$ \\ \cline{1-10}
$\ref{extreme_mu_spec}$ & $12.4$ & $0.05$ & $0.125$ & $100$ & $\pi$ & $8\pi$ & $0.032$ & $8000$ & $10$ \\
\cline{1-10}
$\ref{monopole_spec}$ & $7.9$ & $0.05$ & $0.125$ & $4$ & $2\pi/5$ & $8\pi$ & $0.02$ & $4000$ & $10$ \\
\cline{1-10}
$\ref{interp_spec}$ & $6.2$ & $0.005$ & $0.125$ & $5$ & $\pi$ & $8\pi$ & $0.064$ & $5000$ & $1$ \\ \cline{1-10}
$\ref{op_kdiv10}$ & $14.2$ & $0.00125$ & $0.5$ & $4$ & $8\pi$ & $400\pi$ & $1.024$ & $1000$ & $10$ \\ \cline{1-10}
$\ref{op_k}$ & $14.2$ & $0.00125$ & $0.5$ & $4$ & $8\pi$ & $400\pi$ & $1.024$ & $1000$ & $10$ \\ \cline{1-10}
\bottomrule
\end{tabular}
}
\caption{Table of parameters used for the radiation spectra figures. Here, and elsewhere, $\Delta{t}$ is the simulation time step, the simulation time is denoted in multiples of the ``gyroperiod'' (i.e.\ $T_g = 2\pi\gamma{m_e}c/e\langle B^2 \rangle^{1/2}$), and $N_p$ is the total number of simulation particles.}
\label{spec_para_table}
\end{table}
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{v_spec}
\caption{(Color online) Radiation spectra given variable $v$. In each trial, $1000$ particles move for a total simulation time of $T = 10T_g$, where $T_g \equiv 2\pi\gamma{m_e}c/e\langle B^2 \rangle^{1/2}$ is the ``gyroperiod''. We see that the overall shape of the spectra is not appreciably altered with decreasing $v$. The spectra are normalized by $T\gamma^2v$, vertically. Given Figure \ref{kmin_spec}, we may conclude that the peak frequency of these spectra is $\omega \sim \gamma^2k_{min}v$ -- cf. Eq. (\ref{omega_jn}).}
\label{v_spec}
\end{figure}
Next, we tested the $\mu$ dependence. In Figure \ref{mu_spec}, $\mu = 4, 5$. For each spectrum, $v = 0.125c$, and the total simulation time was $T_g$, where $T_g = 2\pi\gamma{m_e}c/e\langle B^2 \rangle^{1/2}$ is the gyroperiod. The numerical and analytical spectra show close agreement for frequencies less than the break frequency, $\omega \sim \gamma^2k_\text{max}v$.
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{mu_spec}
\caption{(Color online) Radiation spectra given two different values of the magnetic spectral index: $\mu = 5$ (red) ``thick" line and $\mu = 4$ (blue) ``thin" line. Included are the analytical solutions given by Eq. (\ref{analy_spec}). Note that the $\mu = 5$ solution has been multiplied by an overall factor of two for easier visualization. For frequencies near $\omega \sim \gamma^2k_\text{min}v$, the numerical spectra agree decently with the analytical results. However, for frequencies near the break, $\omega \sim \gamma^2k_\text{max}v$, there is considerable deviation between the predicted and numerical spectra -- for both values of the magnetic spectral index. The origin of this discrepancy is explored in Appendix \ref{s:appendixb}. }
\label{mu_spec}
\end{figure}
In Figure \ref{kmax_spec}, we have plotted two spectra that differ in their $k_\text{max}$ values (all other parameters kept fixed). The $k_\text{max}$ values employed differ by a factor of 2. We see that, roughly, the spectra approach zero near $\omega \sim \gamma^2k_\text{max}v$. The subsequent power-law ``tail'' feature is a numerical artifact that arises from a steep drop to zero power (this fact is more readily apparent in a linear plot -- see Appendix \ref{s:appendixa}).
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{kmax_spec}
\caption{(Color online) Radiation spectra with differing $k_\text{max}$. Some other relevant parameters are $v = 0.25c$, $\rho = 6.34$, $N_p = 2000$, and $\mu = 5$ (for a complete listing, see Table \ref{spec_para_table}). The two spectra differ by a factor of 2 in $k_\text{max}$, with $k_\text{min}$ the same between them. Roughly, the spectra transition to the ``tail'' feature near $\omega \sim \gamma^2k_{max}v = \omega_{bn}$.}
\label{kmax_spec}
\end{figure}
Next, we examined the apparent structure in the radiation spectra for $\omega < \omega_\text{jn}$. This is most clearly seen in Figure \ref{v_spec}, where it appears as a distinctive ``bump''. According to Eq. (\ref{analy_spec}), this bump-like feature has a functional form of $A + D\omega^2$. To ensure that this form is correctly identified, we considered a large magnetic spectral index of $\mu = 100$ with $\beta = 0.125$. Such a large $\mu$ magnifies the feature, making it more prominent. As can be seen in Figure \ref{extreme_mu_spec}, the curve that best fits the bump-like feature at $\omega < \omega_\text{jn}$ is indeed of the form $A + D\omega^2$.
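The fit itself is an ordinary linear least squares in the variable $\omega^2$. A toy reproduction of the procedure (Python; the values of $A$, $D$, and the noise level are arbitrary, not fitted values from our spectra):

```python
import numpy as np

rng = np.random.default_rng(2)
A_true, D_true = 1.0, 0.3                   # illustrative coefficients
omega = np.linspace(0.05, 1.0, 40)          # frequencies below omega_jn (= 1 here)
spectrum = A_true + D_true*omega**2 + 0.01*rng.standard_normal(omega.size)

# A + D*omega^2 is linear in omega^2, so a first-order polyfit in omega^2
# returns [D_est, A_est]
D_est, A_est = np.polyfit(omega**2, spectrum, 1)
```

The same linear-in-$\omega^2$ fit yields the dashed curve shown in Figure \ref{extreme_mu_spec}.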
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{extreme_mu_spec}
\caption{(Color online) Radiation spectrum with $\mu = 100$ ($\beta = 0.125$). Evidently, the spectral feature directly prior to $\omega_\text{jn}$ has a functional form given by $A + D\omega^2$ (dashed line). This is consistent with Eq. (\ref{analy_spec}).}
\label{extreme_mu_spec}
\end{figure}
One may consider the magnetic correlation tensor and its relation to the shape of the radiation spectra. Anisotropic turbulence will alter the shape, but so will a change to the topology of the magnetic field. Motivated by pure curiosity, we consider turbulence that is generated by a distribution of magnetic monopoles. This will result in a magnetic field that is curl-free, but has a divergence given by Gauss's Law for monopoles. This topological change will alter the correlation tensor for isotropic and homogeneous turbulence to \citep{radler11}
\begin{equation}
B^{i*}_{\bf k}B^{j}_{\bf k} = \left|{\bf B}_k\right|^2\hat{k}^i\hat{k}^j,
\label{cor_ten_mono}
\end{equation}
which is the form required for an irrotational vector field. Substitution of this correlation tensor into Eq. (\ref{dWdw}) will give a slightly different radiation spectrum for the magnetic spectrum in Eq. (\ref{Bk}).
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{monopole_spec}
\caption{(Color online) Radiation spectrum of non-relativistic electrons moving through small-scale magnetic turbulence generated by a distribution of magnetic monopoles (``thick'', blue), superimposed with the radiation spectrum (``thin'', red) given a magnetic field produced by standard means (i.e.\ Ampere's Law). For each run, $\mu = 4$ and $\beta = 0.125$. Each curve is accompanied by its corresponding analytical solution (``dashed'', black). The spectral shape for frequencies less than $\omega_\text{jn}$ is $A + D\omega^2$ and $A - D\omega^2$ for the ``divergenceless'' field and ``monopolar'' field, respectively.}
\label{monopole_spec}
\end{figure}
The principal change will be to the quadratic prefactor $A + D\omega^2$. The ``monopolar'' field will result in a sign change to $D$. In Figure \ref{monopole_spec}, this difference is clearly indicated. Notice the apparent lack of the quadratic peak feature at $\omega_\text{jn}$.
The altered correlation tensor will affect the particle diffusion coefficient as well. In fact, as can be seen in Figure \ref{alpha_mono}, the pitch-angle diffusion coefficient of particles moving in the monopolar field is twice as large as its divergenceless-field equivalent. This follows from the fact that
\begin{equation}
\lambda_B^\text{monopole} = 2\lambda_B^\text{div. free},
\label{diff_mono_cor}
\end{equation}
which results from substitution of Eq. (\ref{cor_ten_mono}) into Eq. (\ref{corr_l_def}).
\begin{figure}
\includegraphics[angle = 0, width = 1\columnwidth]{alpha_mono}
\caption{(Color online) Average square pitch-angle growth as a function of time for non-relativistic electrons moving through small-scale magnetic turbulence generated by a distribution of magnetic monopoles (``dashed'', blue), superimposed with the otherwise equivalent curve (``solid'', red) produced by standard means (i.e.\ Ampere's Law). For each run, $\mu = 6$, $N_p = 15420$, $\langle B^2 \rangle^{1/2} = 0.032$, $k_\text{min} = \pi$, $k_\text{max} = 8\pi$, and $\beta = 0.125$. Note that the slope of the ``monopolar'' curve is very nearly twice the slope of the standard curve -- in accordance with Eq. (\ref{diff_mono_cor}).}
\label{alpha_mono}
\end{figure}
It is a noteworthy observation that the preceding results are identical, up to overall multiplicative factors, to the radiation spectra and pitch-angle diffusion coefficient for the more physically plausible situation of a trans-relativistic monopole moving through ``small-scale'' electrostatic turbulence, such as Langmuir turbulence.
\section{Conclusions}
\label{s:concl}
In this paper we explored non-relativistic and trans-relativistic particle transport (diffusion) and radiation production in small-scale electromagnetic turbulence. Principally, we demonstrated that in the regime of small deflections, i.e.\ when the particle's deflection angle over a correlation length is small, $\alpha_\lambda \ll 1$, the pitch-angle diffusion coefficient and the simultaneously produced radiation spectrum are wholly determined by the particle velocity and the statistical/spectral properties of the magnetic turbulence, a result expressed most transparently by Eqs. (\ref{corr_l_div}) and (\ref{nonrel_analy}). Additionally, we showed that the condition of a small deflection angle is satisfied if $\rho > 1$, i.e.\ if the magnetic turbulence is small-scale.
These results generalize the ultra-relativistic regime first discussed in Ref. \citep{keenan13}. In fact, the pitch-angle diffusion coefficient remains unchanged in both the non-relativistic and relativistic regimes. Significantly, just as small-angle jitter radiation strongly differs from synchrotron radiation, so too does the analogous non-relativistic jitter radiation distinguish itself from cyclotron radiation. Given the isotropic 3D power-law magnetic spectral distribution from Eq. (\ref{Bk}), the resulting trans- and non-relativistic radiation spectrum is piecewise: it is quadratic in the frequency, $\omega$, up to the characteristic (jitter) frequency, $\omega_\text{jn} = \gamma^2k_\text{min}v$; it is the sum of a power law and a quadratic term up to the break frequency, $\omega_\text{bn} = \gamma^2k_\text{max}v$; and it quickly approaches zero thereafter -- see Eq. (\ref{analy_spec}).
We have, further, confirmed our theoretical results via first-principles numerical simulations.
Lastly, we have considered the change in the radiative and transport properties of trans-relativistic particles moving through magnetic turbulence due to a topological change in the field. Namely, we supposed the generation of sub-Larmor-scale magnetic turbulence from a distribution of magnetic monopoles. We showed that the radiation spectra and pitch-angle diffusion coefficient are modified; i.e.\ the pitch-angle diffusion coefficient doubles in magnitude, \emph{\`{a} la} Eq. (\ref{diff_mono_cor}), and the shape of the radiation spectrum is dramatically altered for frequencies less than the jitter frequency, $\omega_{jn}$. These results, furthermore, generalize to the case of a magnetic monopole moving through ``small-scale" electrostatic turbulence.
Finally, the applicability of our model will depend heavily upon the plasma environment. The turbulence dissipation time-scale, growth rate, time-evolution, and spatial-scale are important considerations. We have highlighted the Weibel-like turbulence, in particular, because of its favorable properties. As stated previously, the Weibel instability can produce strong, small-scale, magnetic fields in a non-magnetized plasma. Furthermore, the instability is aperiodic (i.e.\ real frequency $\Omega_{r} \sim 0$), and thus allows for the static field treatment. More precisely, the growth rate satisfies $\gamma \gg \Omega_{r}$. Typically, the growth rate is governed by a characteristic plasma frequency. Lastly, the magnetic fluctuations are long-lived in the case of the Weibel-filamentation instability, dying out only when the driving free energy (provided by the kinetic energy of streaming particle filaments) of the system is depleted by particle isotropization (i.e.\ the erasure of the anisotropy in the streaming particle distribution function). In short, the generated fields are approximately stationary on a time-scale which exceeds the growth/stabilization times \citep{treumann12}. Consequently, there appears to be adequate time for radiation production in the jitter regime, given by our prescription, in these ``quasi-static'' Weibel magnetic fields.
Via subsequent non-linear evolution, the electron-generated Weibel magnetic fields may grow to larger spatial-scales -- including the ion skin-depth. Additionally, the Weibel fields may ``seed'' the growth of further MHD turbulence via a process of inverse-cascade -- once more, residing at larger spatial-scales. Thus, in the non-relativistic regime, the jitter radiation spectrum may be effectively screened out when the turbulent magnetic fields predominantly exist at scales much larger than the electron skin-depth. Consequently, non-relativistic jitter radiation, as a diagnostic of Weibel turbulence, may have a limited applicability. However, kinetic instabilities in magnetized plasma can produce turbulent magnetic spectra at the appropriate length scales as well. One such scenario may be provided by a turbulent magnetic field generated in a cold, magnetized, background plasma. We then imagine the existence of a ``hot'' population of sub-Larmor-scale electrons that will serve as our test particles. To this end, anisotropic whistler turbulence may provide a promising candidate. In fact, the (low beta) collisionless Whistler spectrum (perpendicular to the mean magnetic field) may be rather broadband -- a (stationary) piece-wise set of power-laws extending to scales much smaller than the electron skin-depth \citep{saito10}. Naturally, our model requires modification to suit a magnetized plasma -- the case to be considered elsewhere.
To conclude, the obtained results, coupled with our previous work, reveal strong inter-relation of transport and radiative properties of plasmas turbulent at sub-Larmor scales -- whether they be relativistic or non-relativistic. We have demonstrated how spectral information can be a powerful tool to diagnose magnetic micro-turbulence in laboratory and astrophysical plasmas.
\begin{acknowledgments}
This work has been supported by the DOE grant DE-FG02-07ER54940 and the NSF grant AST-1209665.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
The predictions of single-field inflation boast a remarkable agreement with the most recent experimental data \cite{Ade:2015lrj,Ade:2015ava}. Yet from a fundamental physics point of view there is no compelling reason to expect that only one field was dynamically active during the inflationary phase, and indeed top-down scenarios of the very early universe typically predict the presence of multiple scalars in addition to the inflaton \cite{Baumann:2014nda,Yamaguchi:2011kg,Linde:2005ht}. It is thus natural to regard the single-field description of inflation as an effective one, arising from the existence of a mass hierarchy between the inflaton and the other fields, which may therefore in practice be integrated out (see \textit{e.g.}~\cite{Tolley:2009fg,Cremonini:2010ua,Achucarro:2010da,Baumann:2011su} for early works). As usual in effective field theory (EFT), the importance of the effects of the heavy fields in the low-energy dynamics is measured by their energy scale. In the context of inflation, this scale need not be extremely large compared to the Hubble scale, and hence it is not unlikely that the heavy modes may lead to sizable effects in the single-field EFT description.
One such effect is that the curvature perturbation will propagate with an effective speed of sound $c_s$ that can differ from the speed of light. The existence of a non-relativistic dispersion relation should of course be expected on general grounds from the spontaneous breaking of time translations by the inflationary background.\footnote{The dispersion relation need not even be linear, see \textit{e.g.}~\cite{ArkaniHamed:2003uz,Baumann:2011su,Ashoorioon:2011eg,Gwyn:2012mw,Gwyn:2014doa,Ashoorioon:2018uey}.} What is interesting is that $c_s$ may be significantly small compared to one, a fact that can lead to important observational signatures, prime among which are large primordial non-Gaussianities with $f_{NL}\sim1/c_s^2$. The bispectrum of cosmological perturbations thus offers a unique window into physics at energies above the Hubble scale (see \textit{e.g.}~\cite{Wands:2010af,Chen:2010xka,Wang:2013eqj,Renaux-Petel:2015bja} for reviews on primordial non-Gaussianities).
An interesting twist to the story is that $c_s$ can in principle be {\it imaginary}. Indeed, in the recently proposed sidetracked inflation scenario \cite{Garcia-Saenz:2018ifx} following the geometrical destabilization of inflation \cite{Renaux-Petel:2015mga,Renaux-Petel:2017dia}, we displayed a concrete realization of an effective single-field theory, arising from a two-field model, that had precisely this property. This is made possible by the fact that the heavy field that is integrated out---the entropic mode---exhibits a transient tachyonic instability that translates into $c_s^2<0$ in the EFT picture. In that case, an imaginary speed of sound is therefore simply a manifestation of this transient amplification of fluctuations, and not a pathology of the underlying UV theory---indeed, there is no fundamental ghost in the two-field models studied in \cite{Garcia-Saenz:2018ifx}. Nevertheless, the resulting tachyonic growth of the fluctuations gives rise to some interesting predictions, which are moreover quite different from those of other inflationary scenarios. First, the curvature power spectrum experiences an exponential growth before the time of sound horizon crossing, leading to a very suppressed tensor-to-scalar ratio. On the other hand, such an exponential growth was shown to be absent in the bispectrum, which was numerically computed in the full two-field description using the transport approach (see \textit{e.g.}~\cite{Dias:2015rca,Dias:2016rjq,Mulryne:2016mzv,Seery:2016lko,Ronayne:2017qzn,Butchers:2018hds} for recent works). However, its amplitude was relatively large, and moreover its shape was found to be markedly different from the usual equilateral one, in particular with a large amplitude in flattened configurations and a large correlation with the orthogonal template.
The goal of the present paper is to derive general results for the size and shape of the bispectrum in imaginary sound speed scenarios. We do so in a model-independent manner by using the EFT of fluctuations \cite{Creminelli:2006xe,Cheung:2007st}, working in the decoupling limit and at lowest order in derivatives. We highlight two features of effective descriptions in terms of an imaginary sound speed. The first is that, since the instability of any underlying UV theory must be transient, such descriptions have to break down at high enough energies. We thus incorporate an appropriate UV cutoff that takes into account the fact that the EFT cannot be infinitely extrapolated towards the past. We also pay particular attention to the quantization of such systems, which plays a central role in our computation. Under the above hypotheses, our results apply to any scenario admitting an effective single-field description for the fluctuations with an imaginary sound speed.\footnote{Although to our knowledge such description has so far only been made explicit in \cite{Garcia-Saenz:2018ifx}, from the results of \cite{Brown:2017osf,Mizuno:2017idt} we conjecture that the same effects are at play in the model of hyperinflation.} The upshot is that such theories lead to simple universal predictions for the bispectrum that can be potentially distinguished from other set-ups. Although we will see that the {\it amplitude} of the non-Gaussianities in the equilateral limit is characterized by $f_{NL}\sim 1/|c_s|^2$, like in conventional frameworks with reduced speed of sound, their {\it shape} is quite distinct, with an enhancement of the bispectrum in flattened configurations, in a way that is sensitive to the UV cutoff of the EFT. This is reminiscent of models with excited initial states (see \textit{e.g.}~\cite{Chen:2006nt,Holman:2007na,Meerburg:2009ys,Meerburg:2009fi,Agarwal:2012mq}), and we explain the similarities and differences between the two frameworks.
In the next section, we briefly review the EFT of fluctuations, and we discuss the quantization and the power spectrum of imaginary sound speed models. We compute the bispectrum and study its amplitude and shape in section \ref{sec:bispectrum}, and discuss our results in section \ref{Discussion}.
\section{Set-up} \label{sec:set-up}
\subsection{Effective field theory of fluctuations}
Our starting point will be the simplest form of the action built from the effective field theory of inflation \cite{Creminelli:2006xe,Cheung:2007st}, or more precisely the EFT of fluctuations generated in single-clock inflation. We refer the reader to the above reference for details, and simply do a brief review of its construction here.
In the unitary gauge where time diffeomorphisms have been fixed so that the clock is unperturbed, there are no matter fluctuations but only metric fluctuations. The most general effective action is then constructed by writing down all operators that are functions of the metric
fluctuations and invariant under time-dependent spatial diffeomorphisms. It reads, about a spatially flat FLRW background:
\begin{eqnarray}
S=\int d^4x \sqrt{-g} \left[ \frac{M_{{\rm Pl}}^2}{2} R+M_{{\rm Pl}}^2 \dot H g^{00} -M_{{\rm Pl}}^2 (3 H^2+{\dot H}) +F(\delta g^{00},\delta K_{\mu \nu}, \delta R_{\mu \nu \rho \sigma};\nabla_{\mu};t) \right] \nonumber
\end{eqnarray}
where a dot represents the time derivative with respect to the cosmic time $t$, $H \equiv \dot{a}/a$ is the Hubble parameter, $\delta g^{00} \equiv g^{00}+1$, $\delta K_{\mu \nu}$ (respectively $\delta R_{\mu \nu \rho \sigma}$) is the fluctuation of the extrinsic curvature of constant time surfaces (respectively of the 4-dimensional Riemann tensor) and where $F$ starts quadratic in its arguments $\delta g^{00}$, $\delta K_{\mu \nu}$ and $\delta R_{\mu \nu \rho \sigma}$. In the simplest EFT at lowest order in derivatives, one only allows operators involving powers of $\delta g^{00}$, namely, up to cubic order in fluctuations,
\begin{eqnarray}
F&=&\frac{1}{2} M_2(t)^4 (\delta g^{00})^2+\frac{1}{3!} M_3(t)^4 (\delta g^{00})^3\,.
\end{eqnarray}
The Goldstone boson $\pi$ associated with the spontaneous breaking of time-translation invariance can be explicitly reintroduced with the St\"uckelberg trick, restoring full time-diffeomorphism invariance through the replacements $t \to t+\pi$ and $g^{00} \to \partial_{\mu}(t+\pi) \partial_{\nu}(t+\pi) g^{\mu \nu}$. Working in the decoupling limit where $\pi$ decouples from the gravitational sector allows us to neglect the complications of the mixing with gravity,\footnote{Like for $c_s^2>0$, the decoupling limit is not applicable after sound Hubble crossing, but one can use the simple relation $\zeta=-H \pi$ between $\pi$ and the comoving curvature perturbation $\zeta$ to follow the evolution of modes until sound Hubble crossing, after which one makes use of the constancy of $\zeta$.} resulting in the simple transformation law
\begin{equation}
\delta g^{00} \to -2 {\dot \pi}-{\dot \pi}^2+\frac{(\partial_i \pi)^2}{a^2}\,.
\end{equation}
At leading order in a slow-varying approximation, or equivalently by assuming that $\pi$ enjoys an approximate shift symmetry, one can neglect the time-dependence of all the $M_n(t)$ as well of $H$ and $\dot{H}$. One thus obtains, up to cubic order in $\pi$,
\begin{eqnarray}
S_{{\rm DL}}&=& \int {\rm d} t \, {\rm d}^3 x\, a^3 \left[ M_{{\rm Pl}}^2 \dot H (\partial_{\mu} \pi)^2+2 M_2^4\left({\dot \pi}^2-{\dot \pi} (\partial_{\mu} \pi)^2\right)
-\frac{4 M_3^4}{3} {\dot \pi}^3 \right]\,,
\end{eqnarray}
where $(\partial_{\mu} \pi)^2 \equiv -{\dot \pi}^2+(\partial_i \pi)^2/a^2$ is evaluated on the background metric and the comoving curvature perturbation simply reads $\zeta=-H \pi$ at linear order and at leading order in a slow-varying approximation. As a result of the non-linearly realized spontaneously broken symmetry of time diffeomorphism invariance, a non-vanishing $M_2$ introduces both a non-trivial sound speed, such that
\begin{equation}
\frac{1}{c_s^2}-1 \equiv -\frac{2 M_2^4}{M_{{\rm Pl}}^2 {\dot H}}\,,
\end{equation}
and cubic interactions. By defining $A/c_s^2 \equiv -1+\frac23 \left(\frac{M_3}{M_2}\right)^4 $ and $\epsilon \equiv -\dot{H}/H^2>0$, the decoupling Lagrangian is put in the simple form
\begin{eqnarray} \label{eq:effective goldstone action}
S_{{\rm DL}}= \int {\rm d} t \, {\rm d}^3 x\, a^3 M_{{\rm Pl}}^2 \epsilon H^2 \left[\frac{1}{c_s^2} \left({\dot \pi}^2-c_s^2\frac{(\partial_i \pi)^2}{a^2}\right)-\left(\frac{1}{c_s^2}-1 \right)\left(\frac{{\dot \pi}(\partial_i \pi)^2}{a^2}+\frac{A}{c_s^2} {\dot \pi}^3 \right) \right]\,.
\label{S-pi}
\end{eqnarray}
where all parameters are taken to be constants, and $a \propto e^{H t}$. We will keep $A$ arbitrary, keeping in mind that $A$ of order one is technically natural as the operators in ${\dot \pi}^3$ and $\dot \pi (\partial_i \pi)^2$ then introduce the same strong coupling scale \cite{Senatore:2009gt}. The power spectrum and the primordial non-Gaussianities originating from the action \eqref{S-pi} are well known when $c_s^2$ is positive \cite{Chen:2006nt,Senatore:2009gt}. Here, on the contrary, we will consider the situation in which $c_s^2$ is negative. We thus write $c_s^2 =- |c_s|^2$, and refer to this situation as a framework with an effective imaginary speed of sound, as it formally corresponds to replacing $c_s$ by $i |c_s|$.
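As a minimal numerical sketch of how the sign of $c_s^2$ follows from the EFT coefficients (units with $M_{\rm Pl}=1$ and the sample values below are assumptions of the illustration, not taken from the text):

```python
def cs_squared(M2_4, eps, H, Mpl=1.0):
    """Invert 1/cs^2 - 1 = -2 M2^4 / (Mpl^2 Hdot), with Hdot = -eps H^2,
    giving 1/cs^2 = 1 + 2 M2^4 / (Mpl^2 eps H^2)."""
    inv_cs2 = 1.0 + 2.0 * M2_4 / (Mpl**2 * eps * H**2)
    return 1.0 / inv_cs2

# M2 = 0 recovers a luminal sound speed; a sufficiently negative M2^4,
# i.e. M2^4 < -Mpl^2 eps H^2 / 2, flips the sign and gives cs^2 < 0.
```

This makes explicit that an imaginary sound speed corresponds to a particular corner of the EFT parameter space rather than to a modification of the action itself.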
\subsection{Quantization and power spectrum} \label{sec:quantization}
While it may be surprising at first sight to consider the action \eqref{S-pi} in the regime where $c_s^2<0$, as the kinetic energy of $\pi$ is then negative, it can make perfect sense as a low-energy EFT, as long as we specify its range of applicability. As we explained in the introduction as a motivating example, an effective action with an imaginary speed of sound can indeed be derived from a more fundamental and perfectly healthy theory, like in two-field models after having integrated out a heavy tachyonic field, with a background trajectory deviating strongly from a field space geodesic \cite{Garcia-Saenz:2018ifx}.\footnote{In appendix \ref{sec:appendix} we give an explicit derivation of the single-field EFT arising in the low-energy regime of a generic two-field model of inflation.} In such a system, the EFT becomes valid, for a given scale $k$, when the physical momentum $k/a$ becomes negligible compared to the mass of the heavy tachyonic field. In a similar fashion, we will work under the assumption that the action \eqref{S-pi} with $c_s^2<0$ is valid, for a given scale $k$, when $k |c_s|/(aH)$ drops below a $k$-independent dimensionless quantity, which we call $x$, and which is larger than unity. The quantity $x$ measures from how deep inside the sound horizon the EFT is applicable, and by introducing the conformal time $\tau$ such that $a \simeq -1/(H \tau)$, our EFT is thus valid, for a given scale $k$, for $k |c_s| \tau +x> 0$.\\
Despite the fact that the action \eqref{S-pi} seems to describe an essentially classical instability for $c_s^2<0$, the quantum nature of the system will be central for the understanding of the non-Gaussianities it generates. Therefore, we now discuss its quantization, following a standard procedure, but treating the situations with $c_s^2$ positive or negative in a unified manner for pedagogical reasons. The variable $\zeta$ is promoted to a quantum field, decomposed as
\begin{equation}
\label{Fourier}
\hat \zeta (\tau, \vec x)=\int \frac{{\rm d}^3k}{(2\pi)^3} \left\{{\hat a}_{\vec k} \zeta_{k}(\tau) e^{i \vec k.\vec x}
+ {\hat a}_{\vec k}^\dagger \zeta_{k}^*(\tau) e^{-i \vec k.\vec x} \right\},
\end{equation}
where the $\hat a$ and $\hat a^\dagger$ are annihilation and creation operators, which satisfy the
usual commutation rules
\begin{equation}
\label{a}
\left[ {\hat a}_{\vec k}, {\hat a^\dagger}_{\vec k'}\right]=(2\pi)^{3} \delta(\vec k-\vec k')\, ,
\quad
\left[ {\hat a}_{\vec k}, {\hat a}_{\vec k'}\right]=
\left[ {\hat a^\dagger}_{\vec k}, {\hat a^\dagger}_{\vec k'}\right]= 0\, ,
\end{equation}
and the complex mode function $ \zeta_{k}(\tau)$ verifies the equation of motion
\begin{equation}
\zeta_k^{''}-\frac{2}{\tau} \zeta_k^{'}+c_s^2 k^2 \zeta_k=0\,.
\end{equation}
Its general solution reads
\begin{equation}
\zeta_k(\tau)=\frac{A_k}{k^{3/2}} e^{- i k c_s \tau}(-i k c_s \tau-1)+\frac{B_k}{k^{3/2}} e^{i k c_s \tau}(i k c_s \tau-1)\,,
\label{general-solution}
\end{equation}
where $A_k$ and $B_k$ are arbitrary complex constants, and $c_s$ denotes the positive square root of $c_s^2$ when the latter is positive, and $i |c_s|$ when it is negative. \\
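As a quick numerical sanity check of Eq. \eqref{general-solution} (a finite-difference sketch with arbitrary illustrative constants, not part of the derivation), one can verify that it solves the mode equation for both $c_s^2>0$ and $c_s^2<0$:

```python
import numpy as np

def zeta(tau, k, cs, A, B):
    """General solution, Eq. (general-solution); take cs = i|cs| when cs^2 < 0."""
    return (A / k**1.5) * np.exp(-1j*k*cs*tau) * (-1j*k*cs*tau - 1.0) \
         + (B / k**1.5) * np.exp( 1j*k*cs*tau) * ( 1j*k*cs*tau - 1.0)

def mode_eq_residual(tau, k, cs, A, B, h=1e-5):
    """zeta'' - (2/tau) zeta' + cs^2 k^2 zeta, via central differences."""
    z   = zeta(tau, k, cs, A, B)
    zp  = (zeta(tau + h, k, cs, A, B) - zeta(tau - h, k, cs, A, B)) / (2*h)
    zpp = (zeta(tau + h, k, cs, A, B) - 2*z + zeta(tau - h, k, cs, A, B)) / h**2
    return zpp - (2.0/tau)*zp + cs**2 * k**2 * z
```

The residual vanishes (to finite-difference accuracy) for any choice of the constants $A_k$ and $B_k$, for real as well as purely imaginary $c_s$.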
In conformal time, the conjugate momentum of $\zeta$ is $p_\zeta=2 a^2 \epsilon M_{{\rm Pl}}^2/c_s^2 \zeta^{'}$, where $'={\rm d}/{\rm d} \tau$. The quantization condition $\left[ \hat \zeta (\tau, \vec x), \hat p_\zeta^\dagger (\tau, \vec x)\right]=i$ thus imposes that $ \zeta_{k}(\tau)$ verify
\begin{equation}
\zeta_k \zeta^{'*}_k-\zeta'_k \zeta^*_k=\frac{i c_s^2}{2 \epsilon a^2 M_{{\rm Pl}}^2}
\end{equation}
at all times, which leads to the constraint
\begin{equation}
\hspace{-0.2cm} {\rm Re}[c_s] \left( |A_k|^2 e^{2 k \tau {\rm Im}[c_s]}- |B_k|^2 e^{-2 k \tau {\rm Im}[c_s]} \right) +2\, {\rm Im}[c_s] {\rm Im}[A_k^* B_k e^{2i k \tau {\rm Re}[c_s]}] =\frac{H^2}{4 \epsilon M_{{\rm Pl}}^2}\,.
\label{quantization-universal}
\end{equation}
The consequences of the quantization condition are therefore very different for the two cases of positive and negative $c_s^2$:\\
\textbullet \,\, For $c_s^2$ positive, $ {\rm Im}[c_s]=0$, so that the second term is zero, and one finds the familiar constraint
\begin{equation}
|A_k|^2-|B_k|^2 =\frac{H^2}{4 \epsilon c_s M_{{\rm Pl}}^2}\,.
\label{quantization-standard}
\end{equation}
One is then free to choose $B_k=0$, the Bunch--Davies vacuum, selecting only the positive frequency mode in \eqref{general-solution}, whose amplitude is completely fixed, then finding for the late-time power spectrum ${\cal P}_{\zeta_k} \equiv k^3/(2 \pi^2) |\zeta_k|^2= H^2/(8 \pi^2 \epsilon c_s M_{{\rm Pl}}^2)$.\\
\textbullet \,\,For $c_s^2$ negative, $ {\rm Re}[c_s]=0$, so that the first term in \eqref{quantization-universal} is zero, and one finds
\begin{equation}
{\rm Im}[A_k^* B_k]=\frac{H^2}{8 \epsilon |c_s| M_{{\rm Pl}}^2}\,.
\label{quantization-A-B}
\end{equation}
Here, the quantization condition thus imposes that both $A_k$ and $B_k$ be non-zero. In the following, as the global phase of the mode function is irrelevant, one chooses $A_k$ to be real, so that a more precise statement is that the imaginary part of $B_k$ should be non-vanishing. We also write $A_k=\alpha_k e^{x}$ and $B_k=\alpha_k e^{\gamma_k} e^{i \theta_k} e^{-x}$, where all parameters are real, so that the mode function reads
\begin{equation}
\zeta_k(\tau)=\frac{\alpha_k}{k^{3/2}} \left( e^{k |c_s| \tau+x}(k |c_s| \tau-1)-e^{\gamma_k} e^{i \theta_k} e^{-(k |c_s| \tau+x)}(k |c_s| \tau+1) \right)\,.
\label{mode-function}
\end{equation}
The positive and negative frequency modes of the situation with $c_s^2>0$ are now turned into exponentially growing and decaying modes (the first and second terms respectively). The physical motivation behind this parameterization is the following. As the EFT starts to be valid at $k |c_s \tau |=x$ for the scale $k$, one cannot predict the amplitude of the growing and decaying modes without further input from a UV completion. However, on physical grounds, one can expect them to be excited with similar amplitudes. Hence, we factored out the $e^{\pm x}$ terms, so that $e^{\gamma_k}$, which sets the initial ratio between the amplitudes of the decaying and growing modes, is considered in the following to be an order one number. Finally, note that with these parameters, the quantization condition \eqref{quantization-A-B} reads
\begin{equation}
\alpha_k^2 e^{\gamma_k} \sin(\theta_k)=\frac{H^2}{8 \epsilon |c_s| M_{{\rm Pl}}^2} \,.
\label{quantization-apha}
\end{equation}
Taking the limit $k |c_s| \tau \to 0$, one finds from \eqref{mode-function} the final value of the power spectrum:
\begin{equation}
{\cal P}_{\zeta_k}= \frac{ \alpha_k^2}{2 \pi^2} \left[ e^{2x} + e^{2 \gamma_k} e^{-2 x} +2e^{\gamma_k} \cos(\theta_k) \right]\,.
\label{Pzeta-exact}
\end{equation}
Recall that $e^{\gamma_k}={\cal O}(1)$ and that $x$ is positive; hence, as soon as $x \gtrsim 5$, the power spectrum is completely dominated by the first term, coming from the exponentially growing mode, so that
\begin{equation}
{\cal P}_{\zeta_k} \simeq \frac{ \alpha_k^2}{2 \pi^2} e^{2x}\,.
\label{Pzeta}
\end{equation}
In the following, in agreement with the approximate shift symmetry that we used, we will work under the simplifying assumption that the curvature power spectrum \eqref{Pzeta} is scale invariant, \textit{i.e.}~that $\alpha_k$ does not depend on the scale $k$, and we write $\alpha_k\equiv\alpha$. From the quantization condition \eqref{quantization-apha}, one deduces that $e^{\gamma_k} \sin(\theta_k)$ is also scale-independent, and for simplicity we will assume that the amplitude and the phase are separately scale-independent, hence omitting the explicit subscripts $k$ from now on. These approximations are mainly used to simplify the results, but play no fundamental role in our computation. Notice finally that in the particular set-up of sidetracked inflation \cite{Garcia-Saenz:2018ifx}, we used a heuristic description of the transition from the UV regime to the EFT regime that was sufficient for modeling the power spectrum there, but did not take into account the quantization condition \eqref{quantization-apha}. Here, on the contrary, we remain agnostic about the precise amplitude of $\alpha^2$, and hence of the curvature power spectrum. Nevertheless, we will show that under mild assumptions, one can derive universal results about the amplitude and shape of the primordial non-Gaussianities.
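The steps above can be checked numerically. The following sketch (illustrative parameter values, with $\alpha$ real and scale-independent as assumed in the text) evaluates the mode function \eqref{mode-function} at late times and compares the resulting power spectrum with Eqs.~\eqref{Pzeta-exact} and \eqref{Pzeta}:

```python
import numpy as np

def zeta_mode(tau, k, cs_abs, x, alpha, gam, th):
    """Mode function, Eq. (mode-function), for cs^2 = -|cs|^2."""
    s = k * cs_abs * tau
    return alpha / k**1.5 * (np.exp(s + x) * (s - 1.0)
                             - np.exp(gam + 1j*th) * np.exp(-(s + x)) * (s + 1.0))

def P_zeta(k, cs_abs, x, alpha, gam, th, tau=-1e-8):
    """Dimensionless power spectrum k^3 |zeta_k|^2 / (2 pi^2), near tau -> 0^-."""
    return k**3 / (2*np.pi**2) * abs(zeta_mode(tau, k, cs_abs, x, alpha, gam, th))**2

def P_exact(x, alpha, gam, th):
    """Eq. (Pzeta-exact)."""
    return alpha**2/(2*np.pi**2) * (np.exp(2*x) + np.exp(2*gam - 2*x)
                                    + 2*np.exp(gam)*np.cos(th))

def P_grow(x, alpha):
    """Growing-mode approximation, Eq. (Pzeta)."""
    return alpha**2/(2*np.pi**2) * np.exp(2*x)
```

Already for $x=5$ the growing-mode approximation reproduces the exact result to better than a part in $10^{3}$, confirming the statement below Eq.~\eqref{Pzeta-exact}.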
\section{Bispectrum from an imaginary speed of sound} \label{sec:bispectrum}
In this section we compute the three-point correlation function of the curvature perturbation:
\begin{equation}
\langle \zeta_{\boldsymbol{k}_1} \zeta_{\boldsymbol{k}_2} \zeta_{\boldsymbol{k}_3} \rangle \equiv (2\pi)^3 \delta(\sum_i \boldsymbol{k}_i) B_{\zeta}(k_1,k_2,k_3)\,,
\label{Bispectrum}
\end{equation}
where the bispectrum $B_\zeta$ is a function of the three wavenumbers $k_i=|\boldsymbol{k}_i|$. In particular, we will study the amplitude and the momentum-dependence of the shape function $S$ such that \cite{Babich:2004gb}
\begin{equation}
B_{\zeta} \equiv (2 \pi)^4 \frac{S(k_1,k_2,k_3)}{(k_1 k_2 k_3)^2} A_s^2\,,
\label{shape-def}
\end{equation}
where $A_s \simeq 2.4 \times 10^{-9}$ denotes the amplitude of the dimensionless curvature power spectrum ${\cal P}_\zeta$ at the pivot scale $k=0.05\,{\rm Mpc}^{-1}$, which reads, according to Eq.~\eqref{Pzeta},
\begin{equation}
A_s=\frac{\alpha^2}{2 \pi^2}\, e^{2x}\,.
\label{As}
\end{equation}
Using the in-in formalism (see \textit{e.g.}~\cite{Maldacena:2002vr,Weinberg:2005vy}), the tree-level bispectrum can be computed as
\begin{equation}
\langle \zeta_{\boldsymbol{k}_1}(t) \zeta_{\boldsymbol{k}_2}(t) \zeta_{\boldsymbol{k}_3}(t) \rangle=2 \,{\rm Im} \left[ \int_{-\infty(1-i \epsilon)}^t {\rm d} t' \langle 0| \zeta_{\boldsymbol{k}_1}(t) \zeta_{\boldsymbol{k}_2}(t) \zeta_{\boldsymbol{k}_3}(t) H_{(3)}(t') | 0 \rangle \right]\,,
\label{in-in}
\end{equation}
where the interaction picture cubic Hamiltonian $H_{(3)}$ simply reads $-\int {\rm d}^3x {\cal L}_{(3)}$ here, and for simplicity, we omit the indices denoting that all fields are in the interaction picture. As we already emphasized, an important feature of our calculation is that our EFT is valid only for $k |c_s \tau | \geq x$ for a given scale $k$. Hence, in Eq.~\eqref{in-in}, we can only compute the contribution to the bispectrum starting from the time such that all modes have reached the regime of validity of the EFT, namely from $\tau_i=-x/(|c_s|k_{ m})$ with $k_{ m} \equiv\,{\rm max}(k_1,k_2,k_3)$. We disregard any previous contribution to the bispectrum, which would depend on the model-dependent UV completion of the EFT. In standard set-ups, and when the three modes have comparable orders of magnitude, we expect this contribution, approximately coming from the period when the modes are oscillating, to be dwarfed in magnitude anyway by the subsequent contribution that we compute, during which all modes experience an exponential growth. On the other hand, in the squeezed limit, one can make use of the single-clock consistency relation \cite{Maldacena:2002vr,Creminelli:2004yq}, which predicts a vanishing bispectrum in our approximation of a scale-invariant power spectrum. As we will see, given that our computations result in a vanishing three-point function in the squeezed limit, our shape can thus be considered reliable for all types of triangle configurations.
\subsection{Computation of the bispectrum}
\label{computation}
In this subsection we give our result for the bispectrum, treating subsequently each contribution from the two vertices of the cubic action. We begin with the algebraically simpler case of the vertex $\dot{\zeta}^3$, for which we explain in some detail the structure and the subtleties of the computation.\\
{\bf Interaction in $\dot{\zeta}^3$.---} Using Eqs.~\eqref{S-pi}-\eqref{in-in}, with $\zeta=-H \pi$ and $a \simeq -1/(H \tau)$, one readily obtains the contribution to the final bispectrum from the vertex $\dot{\zeta}^3$ as:
\begin{equation}
B_{\zeta} \supset \left(\frac{1}{|c_s|^2}+1 \right) \frac{12 A\, \epsilon M_{{\rm Pl}}^2}{H^2 |c_s|^2} \int_{-x/(|c_s|k_{ m})}^{0} \frac{{\rm d} \tau }{\tau} \,{\rm Im}\left[ \zeta_{k_1}(0) \zeta_{k_2}(0) \zeta_{k_3}(0) \zeta^{*'}_{k_1}(\tau) \zeta^{*'}_{k_2}(\tau) \zeta^{*'}_{k_3}(\tau) \right]\,,
\label{B1}
\end{equation}
where
\begin{equation}
\zeta^{'}_k(\tau)=\frac{\alpha_k}{k^{3/2}} |c_s|^2 k^2 \tau \left( e^{k |c_s| \tau+x}+e^{\gamma_k} e^{i \theta_k} e^{-(k |c_s| \tau+x)} \right)\,.
\label{mode-function-derivative}
\end{equation}
There is no difficulty in performing the relevant integrals exactly, but it is cumbersome to write the full result, and it is physically more instructive to discuss the structure of the computation to identify the leading-order result. To this end, note that the growing mode in \eqref{mode-function} and \eqref{mode-function-derivative} comes with a large factor $e^{x}$, while the decaying mode comes with an $e^{-x}$, hence it is easy to organize the computation by formally counting the powers of $e^{x}$.
Taking into account only the growing mode, the dominant term inside the brackets in \eqref{B1} scales as $e^{6 x}$. However, it is real and hence does not contribute to the bispectrum. This explains why in section \ref{sec:quantization} we paid special attention to the quantization, and in particular to the imaginary part of the decaying mode. The first non-zero contribution to the bispectrum thus comes from inserting one decaying mode in the product of the six mode functions, and thus scales like $e^{4 x}$, while other contributions are suppressed by additional powers of $e^{-x}$. Hence, in the limit of large $x$ (a statement that will be made quantitative below), we find a dominant contribution proportional to $e^{4 x} \alpha^6 e^{\gamma} \sin(\theta)$. As the dimensionless shape function \eqref{shape-def} involves the ratio between the bispectrum and $A_s^2$, with the latter scaling like $\alpha^4 e^{4 x}$, one finds a result for $S$ proportional to $\alpha^2 e^{\gamma} \sin(\theta)$, whose amplitude is fixed by virtue of the quantization condition \eqref{quantization-apha}. Contrary to the power spectrum, and contrary to what a naive power counting might suggest, the bispectrum is therefore not enhanced by $e^{2 x}$: its overall amplitude is simply set by $1/c_s^2-1$, like in conventional models with positive speed of sound squared (we will see though that the result is enhanced in flattened configurations, but only by $x^3$ instead of exponentially). Performing the integrals explicitly, one finds the following leading-order result:
\begin{equation}\begin{aligned}
S_{\dot{\zeta}^3}= \frac{3A}{4} \left(\frac{1}{|c_s|^2}+1 \right)\bigg\{&-\frac{k_1k_2k_3}{(k_1+k_2+k_3)^3} \\
&+\frac{k_1k_2k_3}{\tilde k_1^3}\bigg[1-e^{-x \tilde k_1/k_{ m}} \bigg(1+x \frac{\tilde k_1}{k_{ m}}+\frac{x^2}{2} \frac{\tilde k_1^2}{k_{ m}^2}\bigg)\bigg]\bigg\}+(\mbox{2 perm.})
\label{Ssimple}
\end{aligned}\end{equation}
where
\begin{equation}
\tilde k_1\equiv k_2+k_3-k_1\,,
\end{equation}
and similarly for $\tilde k_2$ and $\tilde k_3$. Note that $\tilde k_i\geq0$ as a consequence of the triangle inequality. \\
It is easy to understand physically the momentum dependence of the various contributions. The first term comes from inserting one decaying mode in the external legs in Eq.~\eqref{B1}. The integrand is then computed by taking the product of three growing modes for the internal legs, hence is proportional to $\tau^2\,e^{(k_1+k_2+k_3)|c_s| \tau}$ and leads to a standard equilateral-type result.\footnote{Note that we neglected terms in $e^{-x(k_1+k_2+k_3)/k_{ m}}$, which are suppressed at least by $x^2 e^{-2 x}$ in all triangle configurations.} The fact that this contribution is the same as for $c_s^2$ positive should not come as a surprise: in that case, the requirement to project onto the interacting vacuum state (the $i\epsilon$ prescription in Eq.~\eqref{in-in}) effectively corresponds to performing the integral with $c_s$ turned into $i |c_s|$. The second contribution comes from inserting one decaying mode in the internal legs, changing one $k_i$ into $-k_i$ in the integrand, which becomes proportional to $\tau^2\,e^{(-k_1+k_2+k_3)|c_s| \tau}$ (and permutations), and hence to a shape that is enhanced in flattened configurations such that $k_2+k_3=k_1$ (and permutations).
The result that we obtain is therefore similar in spirit to the bispectrum generated by a small non-Bunch--Davies component in conventional models with positive $c_s^2$ (see \textit{e.g.}~\cite{Chen:2006nt,Holman:2007na,Meerburg:2009ys,Meerburg:2009fi,Agarwal:2012mq}). Note however that in our case, the result \eqref{Ssimple} is not a correction that comes in addition to a standard equilateral type bispectrum, but constitutes the dominant bispectrum itself. In addition, as a direct consequence of the presence of an imaginary speed of sound, the bispectrum does not feature an oscillating behaviour, but rather acquires an exponential dependence in $x \tilde k_1/k_{ m}$ (and permutations). The terms in $e^{-x \tilde k_1/k_{ m}}$ are negligible near the equilateral limit, but they become increasingly important as one approaches the flattened configurations, and are in fact crucial to regularize the apparent divergence in $1/\tilde k_1^3$ in this limit. Note also that the equilateral type contribution, coming from the correction to the external legs, is numerically smaller than the other contribution, even in the equilateral configuration, and could be neglected for a simpler but qualitatively correct result.
A word is also useful about the terms that we have neglected in \eqref{Ssimple}. From the structure of the computation, one can realize that they are negligible for all type of triangle configurations, and that they are parametrically suppressed (at least) by $x^2 e^{-x}$ compared to the leading-order result in Eq.~\eqref{Ssimple}. Let us quote for instance the next-to-leading-order correction in $e^{-x}$, in the equilateral configuration for simplicity:
\begin{eqnarray}
S_{\dot{\zeta}^3 \,{\rm NLO}}(k,k,k)=\frac{9 A}{4} \left(\frac{1}{|c_s|^2}+1 \right)e^{-x} \bigg[&&\hspace{-0.3cm}-(1 + x + x^2/2) - e^{\gamma} (2 - 2 x + x^2) \cos(\theta) \nonumber \\
&&\hspace{-0.5cm} +\frac{1}{156}e^{2 \gamma}(2- 6x + 9 x^2)(1+2\cos(2 \theta)) \bigg]\,.
\end{eqnarray}
Like the correction to the leading-order power spectrum in Eq.~\eqref{Pzeta-exact}, the corrections depend on $e^{\gamma}$ and $\cos(\theta)$, but the dominant suppressing factor is not $e^{-2 x}$ but rather $x^2 e^{-x}$. Thus, the quantitative criterion that enables us to derive model independent results for the bispectrum from the EFT is that $x^2 e^{-x} \ll 1$, which is verified for $x \gtrsim 8$. This is the regime that we consider in the remainder of this paper. \\
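The quantitative criterion $x^2 e^{-x} \ll 1$ is easy to check numerically; a minimal sketch:

```python
import math

def nlo_suppression(x):
    # parametric size x^2 e^{-x} of the neglected model-dependent terms
    return x**2*math.exp(-x)
```

One finds $x^2 e^{-x}\simeq 0.02$ at $x=8$ and $\simeq 0.005$ at $x=10$, while it exceeds $0.15$ at $x=5$.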
{\bf Interaction in $\dot{\zeta}(\partial\zeta)^2$.---} Using Eqs.~\eqref{S-pi}-\eqref{in-in}, the contribution to the final bispectrum from the vertex $\dot{\zeta}(\partial\zeta)^2$ reads
\begin{eqnarray}
B_{\zeta} &\supset & \left(\frac{1}{|c_s|^2}+1 \right) \frac{4 \epsilon M_{{\rm Pl}}^2}{3 H^2} \, \boldsymbol{k}_2 \cdot \boldsymbol{k}_3 \times \nonumber \\
&&\int_{-x/(|c_s|k_{ m})}^{0} \frac{{\rm d} \tau }{\tau} \,{\rm Im}\left[ \zeta_{k_1}(0) \zeta_{k_2}(0) \zeta_{k_3}(0) \zeta^{*'}_{k_1}(\tau) \zeta^{*}_{k_2}(\tau) \zeta^{*}_{k_3}(\tau) \right]+(\mbox{2 perm.})\,.
\label{B2}
\end{eqnarray}
The structure of the calculation is completely analogous to the previous one, only with an algebraically more complicated momentum-dependence. Hence we simply quote the final result, again keeping only leading-order terms:
\begin{equation}\begin{aligned}
\label{Suni}
S_{\dot{\zeta}(\partial\zeta)^2}&=\frac{1}{16} \left(\frac{1}{|c_s|^2}+1 \right)\bigg\{-\frac{k_1k_2k_3}{(k_1+k_2+k_3)^3}\,p_0(k_1,k_2,k_3)+\frac{k_1k_2k_3}{\tilde k_1^3} \times \\
&\hspace{-1cm}\bigg[p_0(-k_1,k_2,k_3)-e^{-x \tilde k_1/k_{ m}} \bigg(p_0(-k_1,k_2,k_3)+x \frac{\tilde k_1}{k_{ m}}p_1(-k_1,k_2,k_3)+\frac{x^2}{2} \frac{\tilde k_1^2}{k_{ m}^2} p_2(-k_1,k_2,k_3)\bigg)\bigg] \bigg\}\\
&+(\mbox{2 perm.})
\end{aligned}\end{equation}
with
\begin{eqnarray}\begin{aligned}
p_0(k_1,k_2,k_3)&=-12-9\bigg(\frac{k_1}{k_2}+\mbox{5 perm.}\bigg)-\bigg(\frac{k_1^2}{k_2^2}+\mbox{5 perm.}\bigg)+6\bigg(\frac{k_1^2}{k_2k_3}+\mbox{2 perm.}\bigg)\\
&\quad-6\bigg(\frac{k_1k_2}{k_3^2}+\mbox{2 perm.}\bigg)+3\bigg(\frac{k_1^3}{k_2k_3^2}+\mbox{5 perm.}\bigg)+\bigg(\frac{k_1^4}{k_2^2k_3^2}+\mbox{2 perm.}\bigg)\\
p_1(k_1,k_2,k_3)&=-6-5\bigg(\frac{k_1}{k_2}+\mbox{5 perm.}\bigg)+4\bigg(\frac{k_1^2}{k_2k_3}+\mbox{2 perm.}\bigg)\\
&\quad-2\bigg(\frac{k_1k_2}{k_3^2}+\mbox{2 perm.}\bigg)+\bigg(\frac{k_1^3}{k_2k_3^2}+\mbox{5 perm.}\bigg)\\
p_2(k_1,k_2,k_3)&=-2\bigg(\frac{k_1}{k_2}+\mbox{5 perm.}\bigg)+2\bigg(\frac{k_1^2}{k_2k_3}+\mbox{2 perm.}\bigg)\,.
\end{aligned}
\end{eqnarray}
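As a consistency check (ours), the polynomial $p_0$ can be coded directly, reading ``5 perm.'' as the six orderings of $(k_1,k_2,k_3)$ and ``2 perm.'' as the three cyclic ones; at leading order in $e^{-x}$, the equilateral value of $S_{\dot{\zeta}(\partial\zeta)^2}$ then follows from $p_0$ alone:

```python
from itertools import permutations

def p0(k1, k2, k3):
    perm6 = list(permutations((k1, k2, k3)))            # "5 perm." sums
    perm3 = [(k1, k2, k3), (k2, k3, k1), (k3, k1, k2)]  # "2 perm." sums
    return (-12.0
            - 9.0*sum(a/b for a, b, _ in perm6)
            - sum((a/b)**2 for a, b, _ in perm6)
            + 6.0*sum(a*a/(b*c) for a, b, c in perm3)
            - 6.0*sum(a*b/(c*c) for a, b, c in perm3)
            + 3.0*sum(a**3/(b*c*c) for a, b, c in perm6)
            + sum(a**4/(b*b*c*c) for a, b, c in perm3))

# Equilateral limit, dropping the e^{-x}-suppressed terms:
# S_uni(k,k,k) ~ (1/16)[-3 p0(k,k,k)/27 + 3 p0(-k,k,k)]
S_uni_equilateral = (-3.0*p0(1, 1, 1)/27.0 + 3.0*p0(-1, 1, 1))/16.0
```

One finds $p_0(k,k,k)=-51$ and $p_0(-k,k,k)=-3$, hence $S_{\dot{\zeta}(\partial\zeta)^2}(k,k,k)\simeq-5/24$, consistent with the equilateral value of the total shape quoted below.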
The physical origin of the various momentum-dependences is similar to the above case: the first contribution simply comes from inserting a decaying mode in one of the three external legs in \eqref{B2}, and for the same reason as for the other interaction, it has the same equilateral-type shape as the same vertex in models with $c_s^2>0$. The other contribution stems from inserting one decaying mode in the internal legs, turning one $k_i$ into $-k_i$, thus yielding a shape that is enhanced in flattened configurations, where the exponential terms are important to regularize the apparent divergence in $1/\tilde k_1^3$. Finally, note that contrary to the other interaction \eqref{Ssimple}, the equilateral type shape does not give a contribution in the equilateral configuration that is numerically negligible compared to the one coming from the flattened shape.
\subsection{Shapes and amplitudes}
In this subsection we study in more detail the amplitudes and the momentum dependences of the two shapes in Eqs.~\eqref{Ssimple}-\eqref{Suni}. For this, we order the $k_i$'s such that $k_3 \leq k_2 \leq k_1$, and we represent the shape information by plotting the two-dimensional functions $S(1, x_2, x_3)$, where $0 \leq x_3 \leq x_2 \leq 1$ and $1 \leq x_2+x_3$ (to satisfy the triangle inequality). The two shapes are represented in Fig.~\ref{fig:Shapes} for the representative value $x=10$. Note that $S_{\dot{\zeta}^3}$ (respectively $S_{\dot{\zeta}(\partial\zeta)^2}$) is normalized to $1$ (respectively $-1$) in the equilateral configuration $x_2=x_3=1$.
As we understood in the last section, the striking feature shared by the two shapes is the large enhancement in flattened configurations $x_2+x_3=1$ compared to the equilateral one (except in the squeezed limit where all shapes vanish).
\begin{figure*}
\centering
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{Ssimple10.pdf}
\caption{$S_{\dot{\zeta}^3}$ for $x=10$.}
\label{fig:Ssimple}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{Suni10.pdf}
\caption{$S_{\dot{\zeta}(\partial\zeta)^2}$ for $x=10$.}
\label{fig:Suni}
\end{subfigure}
\caption{Shapes $S(1,x_2,x_3)$ as a function of $(x_2,x_3)$. We set them to zero outside the region $1-x_2 \leq x_3 \leq x_2$. The shape $S_{\dot{\zeta}^3}$ (respectively $S_{\dot{\zeta}(\partial\zeta)^2}$) is normalized to $1$ (respectively $-1$) in the equilateral configuration $x_2=x_3=1$.}
\label{fig:Shapes}
\end{figure*}
Indeed, one finds
\begin{eqnarray}
S(1,1,1)= \left(\frac{1}{|c_s|^2}+1 \right) \left( \frac{13A}{6}-\frac{5}{24}\right)
\end{eqnarray}
for the total shape $S=S_{\dot{\zeta}^3}+A S_{\dot{\zeta}(\partial\zeta)^2}$ in the equilateral limit, while the result in the squashed configuration $(x_2=x_3=1/2)$ reads
\begin{eqnarray}
S\left(1,\frac12,\frac12\right)= \frac{1}{128} \left(\frac{1}{|c_s|^2}+1 \right)\bigg[39(A-1)+ 12 x^2 + 4 x^3(A+1)\bigg]\,,
\label{S-squashed}
\end{eqnarray}
where we kept the dominant terms in each configuration, and it is clear what the contribution from each operator is. One can easily derive an expression for $S(1,x_2,1-x_2)$ in more general flattened triangles, but it is not particularly illuminating, and for simplicity we concentrate on the squashed configuration where each shape is the largest. Let us remark that in Fig.~\ref{fig:Shapes}, the enhancement in the squashed configuration appears more important for $S_{\dot{\zeta}(\partial\zeta)^2}$ than for $S_{\dot{\zeta}^3}$ simply because each shape is normalized to $\pm1$ in the equilateral configuration. As one can see from Eq.~\eqref{S-squashed}, for $A={\cal O}(1)$, the two shapes actually contribute equally in the squashed limit, with a dominant term proportional to $x^3$ for large $x$. The result \eqref{S-squashed}, and the non-trivial dependence on $x$ in particular, can be derived by carefully taking the squashed limit from the general expressions \eqref{Ssimple}-\eqref{Suni}, or by considering a squashed configuration from the start in the computations \eqref{B1}-\eqref{B2} of the bispectrum. The argument of the relevant exponential factors being zero in that case, one can see that the $x$-enhanced term is proportional to $\int_{-x/(|c_s|k_{ m})}^{0} \tau^2 {\rm d} \tau \propto x^3$ for the operator $\dot{\zeta}^3$, while there is also a contribution in $\int_{-x/(|c_s|k_{ m})}^{0} \tau {\rm d} \tau \propto x^2$ for the operator $\dot{\zeta}(\partial\zeta)^2$. One can also notice that $S_{\dot{\zeta}^3}$ has the same sign in the equilateral and in the squashed configuration, while $S_{\dot{\zeta}(\partial\zeta)^2}$ changes from negative in the equilateral limit to positive in the squashed one for the relevant values of $x$.
We now make a quantitative comparison between the two shapes generated in our set-up with an imaginary speed of sound, and well-known templates used in data analysis, namely the equilateral \cite{Creminelli:2005hu}, orthogonal \cite{Senatore:2009gt}, and flattened (also known as enfolded) \cite{Meerburg:2009ys} shapes:
\begin{equation}
\label{templates}
S^{{\rm eq}}=\frac{9}{10} \frac{\tilde k_1 \tilde k_2 \tilde k_3}{k_1 k_2 k_3} \,, \quad S^{{\rm orth}}=3\,S^{{\rm eq}}-\frac95\,, \quad S^{{\rm flat}}=-S^{{\rm eq}}+\frac{9}{10}\,.
\end{equation}
The templates are represented in Fig.~\ref{fig:templates}.
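For reference, the three templates of Eq.~\eqref{templates} are trivial to implement; a sketch (ours):

```python
def S_eq(k1, k2, k3):
    # equilateral template, Eq. (templates)
    kt1, kt2, kt3 = k2 + k3 - k1, k3 + k1 - k2, k1 + k2 - k3
    return 0.9*kt1*kt2*kt3/(k1*k2*k3)

def S_orth(k1, k2, k3):
    return 3.0*S_eq(k1, k2, k3) - 1.8

def S_flat(k1, k2, k3):
    return -S_eq(k1, k2, k3) + 0.9
```

Note that the three are linearly dependent, $S^{\rm flat}=\frac12(S^{\rm eq}-S^{\rm orth})$, and that $S^{\rm flat}$ indeed vanishes in the equilateral configuration while reaching $9/10$ in the squashed one.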
\begin{figure*}
\centering
\begin{subfigure}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{SeqSorth.pdf}
\caption{$S^{\rm eq}$ (in orange) and $S^{\rm orth}$ (in blue), defined in Eq.~\eqref{templates}.}
\label{fig:Seq-orth}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{Sflat.pdf}
\caption{$S^{\rm flat}$, defined in Eq.~\eqref{templates}.}
\label{fig:Sflat}
\end{subfigure}
\caption{Shapes $S(1,x_2,x_3)$ as a function of $(x_2,x_3)$. We set them to zero outside the region $1-x_2 \leq x_3 \leq x_2$, and normalize them to $1$ in the equilateral configuration, except for the flat shape, which vanishes in this limit.}
\label{fig:templates}
\end{figure*}
We make use of the standard inner product $F(S,S')$ \cite{Babich:2004gb,Fergusson:2008ra} (the integral of $S S'$ over the various inequivalent triangle configurations, weighted by $1/(k_1+k_2+k_3)$) to compute the correlation $\mathcal{C}(S,S')$ between a given shape $S$ and a template $S'$, as well as the corresponding amplitude, as
\begin{equation}
\mathcal{C}(S,S')=\frac{F(S,S')}{\sqrt{F(S,S)F(S',S')}}\,, \qquad f_{NL}^{S'}(S)=\frac{F(S,S')}{F(S',S')}\,.
\label{correlations-fNL}
\end{equation}
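A discretized version of this inner product is straightforward; the sketch below (our illustration) sums over the inequivalent triangles with $k_1=1$ and $1-x_2 \leq x_3 \leq x_2 \leq 1$, with the $1/(k_1+k_2+k_3)$ weight — finer conventions for the measure can be found in the cited references:

```python
import math

def S_eq(k1, k2, k3):
    # equilateral template, used here as a test shape
    kt1, kt2, kt3 = k2 + k3 - k1, k3 + k1 - k2, k1 + k2 - k3
    return 0.9*kt1*kt2*kt3/(k1*k2*k3)

def F(S, Sp, n=200):
    # inner product over inequivalent triangles, weighted by 1/(k1+k2+k3)
    h = 1.0/n
    total = 0.0
    for i in range(n):
        x2 = (i + 0.5)*h
        for j in range(n):
            x3 = (j + 0.5)*h
            if x3 <= x2 and x2 + x3 >= 1.0:
                total += S(1.0, x2, x3)*Sp(1.0, x2, x3)/(1.0 + x2 + x3)*h*h
    return total

def corr(S, Sp):
    return F(S, Sp)/math.sqrt(F(S, S)*F(Sp, Sp))
```

By construction $\mathcal{C}(S,S)=1$, and the Cauchy--Schwarz inequality guarantees $|\mathcal{C}(S,S')|\leq1$.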
The results for $S_{\dot{\zeta}^3}$ with $A=1$ and for $S_{\dot{\zeta}(\partial\zeta)^2}$ are represented in Figs.~\ref{fig:correlations} and \ref{fig:fNL} for the correlations and the amplitudes, respectively (factoring out the overall amplitude $\left(1/|c_s|^2+1 \right)$), for $5 \leq x \leq 15$. Model-dependent corrections to the universal results that we computed, in $x^2 e^{-x}$, are not entirely negligible for $5 \lesssim x \lesssim 8$, but it is nonetheless interesting to see how our results behave in this regime.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{Correlation-simple.pdf}
\caption{$S_{\dot{\zeta}^3}$}
\label{fig:Ssimple-correlation}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{Correlation-uni.pdf}
\caption{$S_{\dot{\zeta}(\partial\zeta)^2}$}
\label{fig:Suni-correlation}
\end{subfigure}
\caption{Correlations of $S_{\dot{\zeta}^3}$ with $A>0$ (left) and $S_{\dot{\zeta}(\partial\zeta)^2}$ (right), with the templates in \eqref{templates}, as a function of $x$.}
\label{fig:correlations}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{fNLsimple.pdf}
\caption{$S_{\dot{\zeta}^3}$}
\label{fig:fNLsimple}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.43\linewidth}
\centering
\includegraphics[width=\textwidth]{fNLuni.pdf}
\caption{$S_{\dot{\zeta}(\partial\zeta)^2}$}
\label{fig:fNLuni}
\end{subfigure}
\caption{Amplitudes of the various $f_{NL}$ as a function of $x$, for $S_{\dot{\zeta}^3}$ with $A=1$ (left) and $S_{\dot{\zeta}(\partial\zeta)^2}$ (right). The overall common factor $\left(1/|c_s|^2+1 \right)$ is not included.}
\label{fig:fNL}
\end{figure*}
Obviously, the equilateral template, maximum in the equilateral configuration and vanishing in the flattened ones, is a very poor fit to the shapes obtained in our set-up, hence the weak correlations, of order $0.5$ for $S_{\dot{\zeta}^3}$ and $0.2$ for $S_{\dot{\zeta}(\partial\zeta)^2}$. These correlations increase with decreasing $x$, as the enhancement of the flattened configurations compared to the equilateral one then decreases. The fact that $S_{\dot{\zeta}(\partial\zeta)^2}$ changes sign between these two types of configurations, whereas $S^{\rm eq}$ is always positive, additionally explains its weaker correlation with $S^{\rm eq}$ compared to $S_{\dot{\zeta}^3}$.
The orthogonal template differs from the equilateral one by a constant, so that its amplitude in flattened configurations is $-2$ times the one in the equilateral limit (see Eq.~\eqref{templates} and Fig.~\ref{fig:Seq-orth}). Hence, we expect its correlation with our shapes to be much larger, which is indeed clearly visible in Fig.~\ref{fig:correlations} with correlations of order $-0.8$, this time larger for $S_{\dot{\zeta}(\partial\zeta)^2}$ than for $S_{\dot{\zeta}^3}$, for the same reason that explains the change of sign between equilateral and flattened configurations for the former.
The flattened shape also differs from the equilateral one by a constant, in a way such that the shape is vanishing in the equilateral configuration and maximum in the flattened one (see Eq.~\eqref{templates} and Fig.~\ref{fig:Sflat}). This very well represents the large enhancement of flattened versus equilateral configurations that are typical of our shapes, as one can see in Figs.~\ref{fig:Shapes}-\ref{fig:Sflat}, and which is confirmed quantitatively by the very large correlation of our two shapes with this template, at least of order $0.9$ for all values of $x$. This is physically transparent given our computation and explanations in section \ref{computation}: we saw there the important similarities between the non-Gaussianities generated in models with negative $c_s^2$ and the ones induced by non-Bunch--Davies initial states, for which the flattened shape was designed as a simple template \cite{Meerburg:2009ys}.
Finally, one can see in Fig.~\ref{fig:fNL} that the amplitude of the non-Gaussian signal, measured by either of the parameters $f_{NL}^{\rm eq}, f_{NL}^{\rm orth}, f_{NL}^{\rm flat}$, is rather large, independently of the overall common factor $\left(1/|c_s|^2+1 \right)$, and increases with $x$. This is due to the polynomial dependence, in $x^2$ and $x^3$, of the bispectrum near flattened configurations. The largest signal is naturally measured by the flattened template, with $f_{NL}^{\rm flat}$ of order $30$ for $x=10$ for instance, but even $f_{NL}^{\rm eq}={\cal O}(10)$ despite the weak correlation of our shapes with $S^{\rm eq}$. Note also that we included the three amplitudes for consistency, but that they are simply related by $f_{NL}^{\rm flat} \simeq 0.560 f_{NL}^{\rm eq}-1.477 f_{NL}^{\rm orth}$.\\
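This linear relation is a direct consequence of the templates \eqref{templates} being linearly dependent, $S^{\rm flat}=\frac12\left(S^{\rm eq}-S^{\rm orth}\right)$: by bilinearity of the inner product in Eq.~\eqref{correlations-fNL},
\begin{equation}
f_{NL}^{\rm flat}=\frac{F(S^{{\rm eq}},S^{{\rm eq}})}{2F(S^{{\rm flat}},S^{{\rm flat}})}\,f_{NL}^{\rm eq}-\frac{F(S^{{\rm orth}},S^{{\rm orth}})}{2F(S^{{\rm flat}},S^{{\rm flat}})}\,f_{NL}^{\rm orth}\,,
\end{equation}
and the quoted coefficients $0.560$ and $-1.477$ correspond to these ratios of norms.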
\begin{figure*}[t]
\centering
\includegraphics[width=0.5\textwidth]{Correlation-total10.pdf}
\caption{Correlation of the total shape $S=S_{\dot{\zeta}^3}+A S_{\dot{\zeta}(\partial\zeta)^2}$ as a function of $A$, for $x=10$.}
\label{fig:total10A}
\end{figure*}
So far we have studied the two shapes independently, but the total bispectrum $S=S_{\dot{\zeta}^3}+A S_{\dot{\zeta}(\partial\zeta)^2}$ is a linear combination of them that depends on the dimensionless parameter $A$, with an amplitude $f_{NL}^X=f_{NL}^X (S_{\dot{\zeta}^3})+A \,f_{NL}^X (S_{\dot{\zeta}(\partial\zeta)^2})$. We show in Fig.~\ref{fig:total10A} how the correlations with the three templates vary as a function of $A$, for the representative value $x=10$. As the two individual shapes are strongly correlated with themselves and the flattened template, and have a very comparable amplitude near the most important flattened and squashed configurations (see Eq.~\eqref{S-squashed}), the resulting total shape is either strongly anti-correlated or correlated with the flattened shape, except in a narrow region of parameter space near $A \simeq -1$ (at which the dominant signal in $x^3$ is cancelled, see again Eq.~\eqref{S-squashed}). The situation is similar to what happens for positive $c_s^2$ \cite{Senatore:2009gt}, where the two individual shapes there are strongly correlated with the equilateral template, and the total shape is qualitatively different for $3.1 \lesssim A \lesssim 4.2$. This is actually how the orthogonal template was designed, in order not to be blind to this type of signal. In our case, however, one does not need another template, as there always exists a correlation of the total shape (with either $S^{\rm flat}$ or $S^{\rm eq}$) that is not negligible, and additionally because of the intrinsically large amplitude of the bispectrum. It is nonetheless interesting to represent the total shape for the value of $A$ that generates a vanishing $f_{NL}^{\rm flat}$. We do so for $x=10$ (with $A \simeq -0.88$ in that case) in Fig.~\ref{fig:totalmin10}, finding similar shapes for different values of $x$. 
Its amplitude is non-negligible near equilateral configurations (which explains why the overlap with $S^{\rm eq}$ is non-negligible), but the most important signal is for flattened triangles, with a different sign between the squashed limit and configurations approaching the squeezed limit, near which the shape has a local extremum. In this respect, we note however that in concrete UV realizations of imaginary sound speed scenarios, contributions to the bispectrum coming from times preceding the validity of the EFT might not be entirely negligible near squeezed configurations.
\begin{figure*}[t]
\centering
\includegraphics[width=0.5\textwidth]{totalmin10.pdf}
\caption{Total shape for $x=10$ and $A\simeq -0.88$, such that its overlap with the flattened shape is vanishing. It is normalized to $1$ in the equilateral configuration.}
\label{fig:totalmin10}
\end{figure*}
\section{Discussion}
\label{Discussion}
The main purpose of this paper was to work out the consequences of an imaginary speed of sound for the inflationary bispectrum. We considered the simplest effective field theory of fluctuations at lowest order in derivatives, computed the primordial bispectrum and studied its amplitude and shape-dependence. A theory with an imaginary speed of sound cannot be regarded as fundamental but can perfectly well make sense as a low-energy EFT. In order to make predictions, we thus introduced a physical cut-off momentum scale, parameterized by the dimensionless parameter $x$, such that the EFT becomes valid once $|c_s| k/a $ drops below $x H$. The parameter $x$ measures how deep inside the sound horizon the EFT is trustworthy (each mode experiences a tachyonic growth during ${\rm ln}(x)$ e-folds of expansion between when the scale enters the domain of validity of the EFT and when it exits the sound horizon), and encodes a sensitivity to the ultraviolet completion of the theory.
An imaginary speed of sound induces an instability of the fluctuations, which experience an exponential growth in conformal time, before becoming constant after sound Hubble crossing. However, we showed that the exponentially decreasing mode is essential to the calculation of the non-Gaussianities. Without further input from a UV completion, we worked under the mild assumption that the growing and decaying modes are initially excited with a similar amplitude, which we left unspecified however. Very interestingly, we showed that the dimensionless bispectrum is nonetheless unambiguously determined, at least as soon as $x \gtrsim 8$. For this, despite the fact that an imaginary speed of sound seems to essentially describe a classical instability, it was important to acknowledge the quantum nature of such a system. It is indeed the commutation relation imposed by the quantization that eventually leads to the determination of the overall amplitude of the bispectrum.
In this respect, it is instructive to compare the two scenarios with positive and negative $c_s^2$. In the former case, the quantization condition determines $|A_k|^2-|B_k|^2$ (see Eq.~\eqref{quantization-standard}), where $A_k$ and $B_k$ are the amplitudes of the positive and negative frequency modes respectively. One is then free to choose the Bunch--Davies vacuum, with $B_k=0$, which unambiguously determines the full mode function, and hence the power spectrum and bispectrum, with equilateral-type non-Gaussianities of amplitude $f_{NL} \sim 1/c_s^2-1$. One can also consider the effect of a small non-Bunch--Davies component in this context, turning on a small non-zero $B_k$. The overall amplitude of the power spectrum is then fixed, but as a result of the interferences between positive and negative frequency modes, the power spectrum comes with superimposed oscillations, whose amplitude is undetermined, but that is tightly constrained observationally \cite{Meerburg:2013dla,Ashoorioon:2013eia,Ade:2015lrj}. In addition to the `standard' part of the bispectrum, the 3-point function also acquires a contribution from the non-Bunch--Davies component, whose amplitude is related to the ones of the power spectrum oscillations, hence intrinsically UV-dependent, and with a shape that is enhanced near flattened configurations, and that features an oscillatory behavior \cite{Chen:2006nt,Holman:2007na,Meerburg:2009ys,Meerburg:2009fi,Agarwal:2012mq}.
The situation with an imaginary speed of sound (negative $c_s^2$) borrows ingredients from the two situations just described. Here, the quantization condition does not determine $|A_k|^2-|B_k|^2$, but rather ${\rm Im}[A_k^* B_k]$ (see Eq.~\eqref{quantization-A-B}), where now $A_k$ and $B_k$ are related to the amplitudes of the exponentially growing and decreasing modes. Hence, one is forced to have a non-zero $B_k$, effectively mimicking a non-Bunch--Davies component. The amplitude of the power spectrum is not determined unambiguously, and the interference between the two types of modes does not result in oscillations, but rather in exponentially suppressed corrections to the leading-order result induced by the growing mode. The amplitude and shape of the dimensionless bispectrum, however, are completely determined, again with $f_{NL} \sim 1/c_s^2-1=-(1/|c_s|^2+1)$ in equilateral configurations, but with a shape that is enhanced by $x^3$ near flattened configurations, and lacking features such as the aforementioned oscillations. The bispectrum thus constitutes a more robust probe of imaginary sound speed scenarios than the power spectrum.
We performed a quantitative study of this bispectrum, calculating its correlation with equilateral, orthogonal and flattened templates, as well as the corresponding $f_{NL}$ parameters. The total bispectrum is the sum of two components corresponding to the two cubic vertices of the EFT, and each has a modest correlation with the equilateral shape, but a large correlation with the orthogonal template (of order $0.8$), and even more so with the flattened one (of order $0.9$). Independently of the overall common factor $\left(1/|c_s|^2+1 \right)$, whose magnitude we left arbitrary, the amplitudes of these shapes are rather large, with $f_{NL}^{\rm flat}={\cal O}(30)$, and growing with $x$. As the total shape is a linear combination of them, depending on an order one coefficient $A$, we showed that a total shape qualitatively different from its individual flattened-shape components is realized in a narrow region of parameter space near $A \simeq -1$, with only a mild dependence on $x$. However, despite its non-standard momentum-dependence, no further template is needed to study it in a first approximation, because of its non-negligible correlation with the equilateral one.
We recently studied concrete UV realizations of imaginary sound speed scenarios in a class of multi-field models that we called sidetracked inflation \cite{Garcia-Saenz:2018ifx}. While it is beyond the scope of this paper to make a detailed comparison, our results here are in very good qualitative agreement with the bispectrum computed there numerically from first principles, with an overall amplitude of the bispectrum set by $1/|c_s|^2+1$, without the exponential enhancement by $e^{2 x}$ obtained for the power spectrum, and with a shape that is enhanced in flattened configurations. The main ingredients of the relevant scenarios studied in sidetracked inflation are rather model-independent, and it is useful to make the link between the parameters in such multi-field models and the language that we use here. We refer the reader to reference \cite{Garcia-Saenz:2018ifx} for more details, but the upshot is that imaginary sound speed scenarios arise there as a result of integrating out entropic fluctuations that are heavy and tachyonic, \textit{i.e.}~with the relevant mass parameter such that $m_s^2<0$ and $|m_s|^2 \gg H^2$. As explained there, this type of configuration is compatible with a stable background when the background trajectory deviates strongly from a geodesic, as quantified by the dimensionless parameter $\eta_{\perp}$, provided that $m_s^2+4 H^2 \eta_{\perp}^2>0$. The transient tachyonic instability experienced by the entropic fluctuations leads to an imaginary speed of sound once they are integrated out, with
\begin{equation}
\frac{1}{|c_s|^2}+1= \frac{4 H^2 \eta_{\perp}^2}{|m_s|^2}\,.
\end{equation}
The description in terms of a single field EFT with a negative $c_s^2$ becomes valid when the physical momenta $k/a$ becomes negligible compared to the mass $|m_s|$ of the field that is integrated out (see appendix \ref{sec:appendix} for details and caveats), thus giving a parametric dependence
\begin{equation}
x \sim \frac{|m_s|}{H} |c_s|\,,
\end{equation}
where the numerical factor on the right-hand side should be somewhat smaller than unity. The two important parameters $|c_s|$ and $x$ controlling the EFT are hence determined by $|m_s|/H$ and $\eta_{\perp}$ in these UV realizations. In addition, we note that the dominant amplitude of the dimensionless shape function in squashed and flattened configurations scales, both for order-one and small $|c_s|$, as $1/\eta_{\perp} (|m_s|/H)^4 < 16 \eta_{\perp}^3$. \\
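To sketch where the last estimate comes from (dropping order-one factors): for $|c_s|\ll1$ the first relation gives $|c_s|\simeq|m_s|/(2H\eta_\perp)$, so that
\begin{equation}
\left(\frac{1}{|c_s|^2}+1\right)x^3\sim\frac{4H^2\eta_{\perp}^2}{|m_s|^2}\,\frac{|m_s|^3}{H^3}\,|c_s|^3\sim\frac{1}{\eta_{\perp}}\left(\frac{|m_s|}{H}\right)^4\,,
\end{equation}
while the stability condition $m_s^2+4H^2\eta_{\perp}^2>0$ implies $(|m_s|/H)^4<16\,\eta_{\perp}^4$, whence the upper bound $16\,\eta_{\perp}^3$.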
It would be interesting to further investigate the constraints that theoretical consistency puts on imaginary sound speed descriptions. In our motivating example, namely the sidetracked inflationary scenarios of Ref.~\cite{Garcia-Saenz:2018ifx}, we identified interesting attractor homogeneous solutions, and used standard perturbation theory about it to compute two- and three-point correlation functions of cosmological fluctuations. All the salient features observed in this framework are captured by a single-field effective field theory with an imaginary speed of sound, as we showed in \cite{Garcia-Saenz:2018ifx} for the power spectrum, and in this paper for the bispectrum. Another question is to investigate whether the exponential growth of fluctuations in such set-ups, described at the multi-field level, or by an effective single-field theory, can hinder the perturbative approach itself, and which constraints this can put on the parameters of such theories. For instance, requiring that the energy density of the fluctuations be subdominant compared to $H^2 M_{{\rm Pl}}^2$, in order to avoid any backreaction, should set an upper bound on $x$, in the same way that backreaction constrains excited initial states in models with $c_s^2>0$ (see \textit{e.g.}~\cite{Holman:2007na,Agarwal:2012mq}). It would also be desirable to understand how the discussion in Ref.~\cite{Baumann:2011su} extends to set-ups with imaginary sound speed, as the identification of relevant energy scales might be subtle in scenarios that intrinsically feature instabilities. Without mentioning possibly interesting constraints set by high-order correlation functions, one should at least require $f_{NL} \zeta \ll 1$ for the perturbative description to be valid. Using the power spectrum \eqref{As} as an estimate for the amplitude of $\zeta \sim A_s^{1/2} = \alpha e^{x}/(\sqrt{2} \pi)$, and omitting numerical factors, one finds $\left(\frac{1}{|c_s|^2}+1 \right) x^3 \alpha e^{x} \ll 1$\,.
Despite Eq.~\eqref{quantization-apha}, the amplitude of $\alpha$ itself is not specified in terms of the parameters of our EFT, as it depends on the specific UV completion, but one can envisage adding other operators to extend its regime of validity, for instance along the lines of Refs.~\cite{Baumann:2011su,Gwyn:2012mw,Gwyn:2014doa}. Finally, as it is clear from Eq.~\eqref{S-pi}, having $c_s^2<0$ with $\epsilon>0$ and having $c_s^2>0$ with $\epsilon<0$ equally implies a negative kinetic energy of $\pi$. In spite of the differences between the two set-ups, it would hence be interesting to compare imaginary sound speed models with systems that violate the null energy condition. It would also be intriguing to understand whether constraints derived in other contexts on low-energy effective ghosts, for instance related to their decay into gravitons \cite{Carroll:2003st,Cline:2003gs}, can be used to further constrain the framework studied here. We leave these various questions, as well as further studies of concrete realizations of imaginary sound speed scenarios, for future works.
\begin{acknowledgments}
We are grateful to Patrick Peter, Lucas Pinol, John Ronayne and Krzysztof Turzy\'nski for useful discussions. S.GS is supported by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013 Grant Agreement no.\ 307934, NIRG project). S.RP is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 758792, project GEODESI).
\end{acknowledgments}
\section{Introduction}
A reasonably contemporary approach is to study, even without going into anthropic arguments, the nature of alternative universes as one changes the values of physical parameters. In the parameter space, one then looks for regions that could be similar to our universe and may possibly be congenial to the creation and sustenance of intelligent life \cite{Wei89,Teg97,Teg98a,Teg98b,Hog00,Teg06,Jaf09,Don10}. Bounds thus obtained may be referred to as congeniality bounds.
In a recent work, Jaffe, Jenkins and Kimchi \cite{Jaf09} studied how sensitive our universe would be to variations of quark masses. For this they chose to study the variations of masses of the three lightest quarks $u$, $d$ and $s$, under the constraint that the sum of these masses, $m_T$ remained fixed. They also studied variations of $m_T$.
Their basic idea was to find the two lightest baryons for any quark mass combination and consider them to play the roles of the proton and neutron in forming nuclei. In this process they also considered $\Lambda_{\mathrm{QCD}}$ to be an adjustable free parameter that they tuned to keep the average nucleon mass at 940 MeV. They then studied the variation of nuclear stability and, in light of this, tried to obtain the regions of the parameter space where nuclear chemistry in a somewhat familiar form could be sustained.
The starting point is that the three light quark masses would be changed keeping their sum $m_T$ fixed. This parameter space can be neatly shown in the form of an equilateral triangle [Fig.~\ref{mtriangle}] where the distances of a point from the base and right and left sides are, respectively, the masses of the up, down and strange quarks.
\begin{figure}[b]
\includegraphics[width=0.8\columnwidth]{mass_triangle.eps}%
\caption{\label{mtriangle}The model space of light quark masses for a fixed $m_T$ shown in the form of a triangle where the distance from the three sides give the three masses. Figure reproduced from \cite{Jaf09}.}
\end{figure}
In this manner they identified congenial regions in a triangle parametrized in terms of $x_3$ and $x_8$ [Fig.~\ref{x3x8}] defined as
\begin{eqnarray}
x_3 &=& \frac{2m_3}{\sqrt{3}}\frac{100}{m_T^\oplus}=\frac{100\left(m_u-m_d\right)}{\sqrt{3}m_T^\oplus}\\
x_8 &=& \frac{2m_8}{\sqrt{3}}\frac{100}{m_T^\oplus}=\frac{100\left(m_u+m_d-2m_s\right)}{\sqrt{3}m_T^\oplus}
\end{eqnarray}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{x3x8.eps}%
\caption{\label{x3x8}The model space of light quark masses parametrized in terms of $x_3$ and $x_8$ reproduced from \cite{Jaf09}. The point labeled `us' points to the physical value in our present universe and therefore has coordinates $(x_3^\oplus, x_8^\oplus)$.}
\end{figure}
Here $m_T^\oplus$ is the sum of the light ($u$, $d$ and $s$) quark masses in our universe. It is obvious that $x_3$ basically gives the isospin splitting while $x_8$ is related to the breaking of $SU(3)_f$ due to the mass of the strange quark. Their results can be summarized in Fig.~\ref{cong}, where the congenial regions are indicated in green \footnote{If you are reading a black and white print, then green appears as lightly shaded and red as deep shaded}. This triangle is for $m_T^{\oplus}$, {\it i.e.}~with $m_T$ as it is in the present universe. They have also studied variations in $m_T$, but our work is limited to commenting on the case of $m_T^\oplus$, understanding that the same arguments qualitatively extend to other values of $m_T$. This is further justified in the discussions near the end of this report.
\begin{figure}[t]
\includegraphics[width=0.8\columnwidth]{congenialtriangle.eps}%
\caption{\label{cong}(color online) Figure reproduced from \cite{Jaf09} identifying congenial regions in the quark mass triangle with green bands. The red and white regions are uncongenial and uncertain, respectively.}
\end{figure}
It is increasingly being understood that if there is complexity, fine-tuning is inevitable \cite{Brad11}. Even if one is not happy with anthropic arguments, we simply cannot get away from fine-tuning. With this in mind, the first impression one has from Fig.~\ref{cong} is that the congenial region seems surprisingly large, allowing variations of around one order of magnitude in the quark masses, even though this already involves intricate compensating adjustments in $\Lambda_{\mathrm{QCD}}$ to keep the average `nucleon' mass fixed.
However, one should appreciate the difficulty in setting up a new framework in which a problem can be studied. From this perspective the authors of \cite{Jaf09} should be commended for presenting, literally from scratch, a setup for studying the congeniality bounds on quark masses. This setup can be extended in further work by removing some of the constraints used. Indeed, considering the significance of the work, it was first chosen for a Viewpoint article in Physics \cite{Per09} and then became the subject of a cover story in Scientific American \cite{Jen10}.
In our work, we remain within the provided setup, but extend the analysis to bounds provided by nucleosynthesis. It should be noted that, whereas nuclear masses and stability expectedly vary comparatively slowly with the quark masses, the observed abundances of the lightest nuclei, hydrogen and helium, provide much more stringent bounds on the variation of the nucleon masses. We report below how the congeniality triangle of Fig.~\ref{cong} is modified by the application of these constraints.
At the outset, the variation of the octet baryon masses as one traverses along the borders of the triangle (Figs.~10 and 11 of \cite{Jaf09}) was reproduced to gain confidence in our code and our understanding of the framework. The fitted parameter values of $c_T$, $c_3$ and $c_8$ from Table III of Ref.~\cite{Jaf09} were used in the equation
\begin{equation}
M_B = C_0 + c_Tx_T + c_8x_8 + c_3x_3 + \left\langle B \left|H_{\mathrm{EM}}\right| B \right\rangle .
\end{equation}
This leads to
\begin{eqnarray}
M_p &=& C_0 + 3.68\: x_T + 3.53\: x_8 + 1.24\: x_3 + 0.63\\
M_n &=& C_0 + 3.68\: x_T + 3.53\: x_8 - 1.24\: x_3 - 0.13.
\end{eqnarray}
The quantity occurring most in the analysis below being
\begin{equation}
M_n-M_p = -2.48\: x_3-0.76.
\end{equation}
It may be noted here that using updated values of baryon masses from the Particle Data Book changes the parameters very slightly and this is neglected considering the qualitative nature of this work. After a clarification on the adjustment of $C_0$ (corresponding to an adjustment of $\Lambda_{\mathrm{QCD}}$) from the authors \cite{JJK} it was possible to reproduce the figures.
Then the issue of further bounds from nucleosynthesis was studied. It is well known that the observed abundances of the primordial nuclei, hydrogen (protons), helium (alpha particles), etc., are sensitively tied to the masses of the nucleons \cite{barrowtipler, Hog00}. The slight difference between the masses of the proton and neutron is responsible for the survival of protons with the observed abundance. The (un)congenial regions of the triangle are explored further under these constraints.
There are three cases that arise here:
\subsection*{Case I: $x_3 > x_3^\oplus$}
Let us concentrate on the region on the upper right of the triangle with $x_3$ values greater than at the point labeled `us' on the right hand side of the triangle.
Of the two nucleons, the neutron is heavier by about 1.3 MeV. If it were just 0.8 MeV less, that would bring it below the electron capture threshold for protons, i.e.\ it would become energetically favourable for protons to capture electrons and become neutrons. All the protons would have been converted to neutrons in the Big Bang. The Universe would be full of neutrons and nothing else. We would not be here. In the words of Barrow and Tipler \cite{barrowtipler}
\begin{quote}
Without electrostatic forces to support them, solid bodies would collapse rapidly into neutron stars or black holes. Thus, the coincidence that allows protons to partake in nuclear reactions in the early universe also prevents them decaying by weak interactions. It also, of course, prevents the 75\% of the Universe which emerges from nucleosynthesis in the form of protons from simply decaying away into neutrons. If that were to happen no atoms would ever have formed and we would not be here to know it. [Ref.~\cite{barrowtipler}, p.~400]
\end{quote}
The same issue is also discussed by Hogan \cite{Hog00}
\begin{quote}
The $u$-$d$ mass difference in particular attracts attention because the $d$ is just heavier enough than $u$ to overcome the electromagnetic energy difference to make the proton ($uud$) lighter than the neutron ($udd$) and therefore stable. On the other hand, if it were a little heavier still, the deuteron would be unstable and it would be difficult to assemble any nuclei heavier than hydrogen.
\end{quote}
Therefore, it is necessary to have
\begin{equation}
M_n-M_p \geq 0.5\:\mathrm{MeV}.
\end{equation}
This reduces the congenial region on the upper right of the physical point to
\begin{equation}
x_3 \leq -0.51.
\end{equation}
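The numerical bound follows directly from the expression $M_n-M_p=-2.48\: x_3-0.76$ obtained above:
\begin{equation*}
-2.48\: x_3-0.76 \geq 0.5 \quad\Longrightarrow\quad x_3 \leq -\frac{1.26}{2.48}\approx -0.51 .
\end{equation*}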
\subsection*{Case II: $x_3 < x_3^\oplus$}
Now let us move to the bottom-left side of the triangle (left of the $x_8$ axis) where $x_3$ values are smaller than that at the `us'-labeled point, again concentrating on the right hand side of the triangle.
The key process by which hydrogen `burns' in stars such as the Sun involves the reaction
\begin{eqnarray}
p+p&\rightarrow& d+e^++\nu+0.42\,\mathrm{MeV}\\
e^++e^-&\rightarrow& 2\gamma+1\,\mathrm{MeV}
\end{eqnarray}
So the total amount of energy released in this reaction is 1.42 MeV.
If the neutron mass were 1.42 MeV (0.15\%) larger than it is, this reaction would not happen at all; it would need energy to make it go, rather than producing energy. Deuterons are a key step in burning hydrogen to helium. Without them, hydrogen would not burn, and there would be no long-lived stars and no stellar nucleosynthesis to produce the remaining elements.
Therefore, it is necessary that
\begin{equation}
M_n-M_p \leq 2.72\:\mathrm{MeV}.
\end{equation}
This reduces the congenial region on the lower left of the physical point to
\begin{equation}
x_3 \geq -1.4.
\end{equation}
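As before, the bound follows from $M_n-M_p=-2.48\: x_3-0.76$:
\begin{equation*}
-2.48\: x_3-0.76 \leq 2.72 \quad\Longrightarrow\quad x_3 \geq -\frac{3.48}{2.48}\approx -1.40 .
\end{equation*}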
These two conditions, thus, significantly reduce the congenial corridor from
\begin{equation}
-12.9\leq x_3 \leq 4.1
\end{equation}
to
\begin{equation}
-1.4\leq x_3 \leq -0.5.
\end{equation}
It may be noted here that the width of this region is of the same order as the uncertainty in $x_3^\oplus$ itself due to uncertainties in the light quark masses given by $x_3^\oplus =-1.17\pm 0.43$.
In fact it was a pleasant surprise to realise, rather late into our work, that Hogan \cite{Hog00} reached essentially similar conclusions which were expressed in terms of the up-down quark mass difference, $\delta m_{d-u}$ and Section IV of his review \cite{Hog00} is a recommended read for anybody interested in this issue. The approximately 1.4 + 0.8 = 2.2 MeV window of variation that we find is in agreement with the allowed region in Fig.~1 of the same \cite{Hog00}.
\subsection*{Case III: Left half of the triangle}
If we move to the left half of the triangle, we essentially replace the down quark with a strange quark. We know that the $s$-quark is, in some ways, like a heavy $d$-quark. In the left half of the triangle the $s$-quark is light and the $d$-quark is heavy, as if they simply interchange positions. That is why Jaffe \emph{et al.}~\cite{Jaf09} seem to find a symmetric congenial region in the left of the triangle. The discussions for Cases I and II narrow it down, but do not remove it.
However, let us now turn to the couplings between $u$-$d$ and $u$-$s$. The $u$-$d$ coupling is much stronger, whereas the $u$-$s$ coupling is suppressed. This is described by the well-known Cabibbo angle $\theta_C$: the $u$-$d$ coupling carries a factor of $\cos\theta_C$ and the $u$-$s$ coupling a factor of $\sin\theta_C$, the Cabibbo angle being about 13 degrees.
This is like the present world with a much weaker weak interaction. In this case the weak decay rate of neutrons is not strong enough to produce the primordial neutron-proton abundance ratio of 1:6. Without this we are left without enough protons, {\it i.e.}~without enough hydrogen, which is key to both stellar burning and biological life itself. Therefore we are left with only a narrow region on the right [Fig.~\ref{newcong}].
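As a rough indication of the numbers involved (taking a standard weak freeze-out temperature of $T_f\approx 0.7$--$0.8$ MeV, a textbook value not quoted above), the equilibrium ratio at freeze-out is
\begin{equation*}
\frac{n}{p}\approx \exp\left(-\frac{M_n-M_p}{T_f}\right)\approx \exp\left(-\frac{1.3}{0.75}\right)\approx \frac{1}{6},
\end{equation*}
so a substantially weaker weak interaction, decoupling earlier at a higher $T_f$, drives this ratio towards unity and depletes the proton (hydrogen) supply.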
\begin{figure}
\includegraphics[width=0.8\columnwidth]{newcongenial.eps}%
\caption{\label{newcong} (color online) Fig.~\ref{cong} adapted by the further restrictions imposed leaving only a very narrow congenial slit in the bottom-right region of the triangle.}
\end{figure}
The only remaining question is probably regarding the length of this narrow region extending nearly up to the centre of the triangle. As one moves up this narrow slit towards the centre, away from the physical point, the up-down quarks become heavier keeping the down quark slightly heavier than the up. Meanwhile the strange quark becomes lighter to keep $m_T$ fixed. The physics considered here is probably not very sensitive to the strange quark mass. The increase in the up-down masses is offset by the compensating adjustment in $\Lambda_{\mathrm{QCD}}$ to keep the nucleon masses fixed. Therefore, the length of this region could probably be an artifact of the simultaneous and compensating tuning of quark masses and $\Lambda_{\mathrm{QCD}}$.
This indeed has been one of the conclusions in \cite{Jaf09}, as summarized more elegantly in \cite{Jen10}, as well as in \cite{Brad11}, where, reviewing the alternative-universe landscapes studied in \cite{Agu01,Har06,Ada08,Jaf09,Jen10}, it has been observed that if one is prepared to adjust another parameter in a compensating manner, it might be possible to find other regions of the parameter space that are also congenial. However, that does not remove the fine-tuning problem, as the alternative values are still finely tuned, and this is inevitable to produce complexity as observed in our present universe. Here most of the alternatives are removed, and the narrow region remains as a result of the compensating adjustments of $\Lambda_{\mathrm{QCD}}$.
Indeed, along the narrow region the sum of the masses of the two lightest quarks varies, with the strange quark mass going in the opposite direction to keep $m_T$ fixed. If the effect of this could be quantified, it would probably be possible to restrict even the length of the narrow region [See the Addendum].
For example, as noted by Hogan \cite{Hog00},
\begin{quote}
...~the sum of the (up and down) quark masses controls the pion mass, so changing them alters the range of the nuclear potential and significantly changes nuclear structure and energy levels. Even a small change radically alters the history of nuclear astrophysics, for example, by eliminating critical resonances of nucleosynthesis needed to produce abundant carbon. \footnote{An interesting additional note is that, here Hogan cites Hoyle, F., D.~N.~F.~Dunbar, W.~A.~Wenzel, and W.~Whaling, 1953, Phys. Rev. {\bf 92}, 1095. This has been cited several times in different papers, sometimes with Phys. Rev. Lett. as the source, and a few times with the title ``A state in C12 predicted from astrophysical evidence''. However, we failed to find any such article and would appreciate any information on this reference. The nearest match was D.~N.~F.~Dunbar, R.~E.~Pixley, W.~A.~Wenzel, and W.~Whaling, 1953, Phys. Rev. {\bf 92}, 649, an article on the resonance in $^{12}$C often dubbed the `Hoyle resonance'.}
\end{quote}
Here it should be added that a more up-to-date view is that the strongest effect on the scalar scattering lengths and deuteron binding energy seems to be due to the ``sigma-resonance'' exchange (or correlated two-pion scalar-isoscalar exchange) dependence on $m_\pi$ \cite{Han08,Pel10}.
As mentioned at the outset, the analysis here has been limited to the case of $m_T = m_T^\oplus$. It has been noted by Jaffe {\it et al.} \cite{Jaf09} that the widths of the two major congenial bands on the bottom-left and bottom-right of the triangle are independent of $m_T$. Therefore, naturally the further exclusions for $x_3 > x_3^\oplus$, $x_3 < x_3^\oplus$ reducing the width of the band should also apply to other values of $m_T$. The exclusion of the left half of the triangle should also extend to other values of $m_T$. Therefore, in summary, it can be expected that for all values of $m_T$, after applying constraints from nucleosynthesis, there will only remain a similar very narrow congenial band at the bottom-right of the triangle.
An additional comment is due here on the possibilities of universes with deuterons, sigma-hydrogen, or delta-helium playing the role of hydrogen, as listed in \cite{Jen10} as a summary of \cite{Jaf09}. The point made here does not contradict that these could be stable lightest elements. It is only pointed out that stability alone is not enough to produce and sustain nuclear chemistry in a manner familiar to us. Correct primordial abundances and conditions for sustained stellar burning provide constraints that are much more difficult to satisfy. This probably calls for a closer analysis of the other half of \cite{Jen10}, related to the possible universes without any weak interaction of \cite{Har06}, where there indeed has been a detailed discussion of these issues pertaining to nucleosynthesis. However, that would have to be another project, whereas this work is focused on \cite{Jaf09}.
In summary, it can be observed that, primordial nuclear abundances and processes of stellar nucleosynthesis provide much more stringent constraints on quark masses than nuclear stability. Using these constraints it is possible to significantly reduce the congenial region in the space of light quark masses.
\section*{Addendum}
Our attention has been drawn through referee comments to studies of the bounds from nucleosynthesis \cite{Bed10,Ber13}, the latter appearing after the initial submission of this paper, on $\delta m_q /m_q$, where $m_q$ is the average of the light (up and down) quark masses and $\delta m_q$ is the change in $m_q$ keeping $m_u/m_d$ fixed. Coincidentally, along the length of the remaining narrow congenial region $m_u/m_d$ is approximately constant. The latest value is $\left|\delta m_q /m_q\right|<0.009$ \cite{Ber13}. There are other values in the literature, but they are generally of the same order. Let us attempt a crude estimate of the effect of this constraint. For $m_q\approx 3.8$ MeV, $\delta m_q\approx 0.035$ MeV.
From eqs.~8 and 9, we get
\begin{equation}
M_N=\left(M_n+M_p\right)/2=C_0+3.68\: x_T+3.53\: x_8+0.25,
\end{equation}
which given that $x_T$ is kept fixed, leads to
\begin{equation}
\delta M_N=\delta C_0+3.53\: \delta x_8.
\end{equation}
Now, $x_8$ as defined in eq.~2 can be re-expressed in terms of $m_q$ as
\begin{equation}
x_8 =\frac{200\left(m_q-m_s\right)}{\sqrt{3}m_T^\oplus}.
\end{equation}
If $m_T$ is kept fixed then $\delta m_s=-\delta m_q$, leading to
\begin{equation}
\delta x_8 =\frac{400\left(\delta m_q\right)}{\sqrt{3}m_T^\oplus}=0.08,
\end{equation}
where we have used $\delta m_q\approx 0.035$ MeV and $m_T^\oplus\approx 100$ MeV. The remaining congenial region, $x_8=x_8^\oplus\pm 0.08$, is too small to show on a plot of this scale, and is also very small compared to the uncertainty in the value of $x_8^\oplus$ itself, $x_8^\oplus =-59.5\pm 1.1$, due to the uncertainties in the determination of the light quark masses. However, one should remember that our estimate is rather crude, without appropriate consideration of the uncertainties in $m_q$. Taking these into account will increase the region, but keep it within the same order as the uncertainty in $x_8^\oplus$ itself. In short, there is practically no congenial region outside $\left( x_3^\oplus ,x_8^\oplus\right)$.
\begin{acknowledgments}
The authors are grateful to Robert L.~Jaffe, Alejandro Jenkins and Itamar Kimchi for their kind replies to our queries and helpful suggestions. MHA and ASBT would also like to acknowledge the support of the Abdus Salam International Centre for Theoretical Physics (ICTP), Trieste, Italy through a Regular Associateship and a Junior Associateship, respectively.
\end{acknowledgments}
\section{Introduction}\label{section1}
In this paper, we study deformations of holomorphic Poisson structures in the framework of Kodaira and Spencer's deformation theory of complex analytic structures (\cite{Kod58},\cite{Kod60}). The main difference from Kodaira and Spencer's deformation theory is that for deformations of a compact holomorphic Poisson manifold, we deform not only its complex structure but also its holomorphic Poisson structure. We will briefly review Kodaira-Spencer's main idea and show how we can extend it in the context of deformations of holomorphic Poisson structures.
Kodaira and Spencer's main idea of deformations of complex analytic structures is as follows \cite[p.182]{Kod05}. An $n$-dimensional compact complex manifold $M$ is obtained by glueing domains $U_1,...,U_n$ in $\mathbb{C}^n$: $M=\bigcup_{j=1}^n U_j$, where $\mathfrak{U}=\{U_j|j=1,...,n\}$ is a locally finite open covering of $M$, and each $U_j$ is a polydisk:
\begin{align*}
U_j=\{z_j\in \mathbb{C}^n||z_j^1|<1,...,|z_j^n|<1\}
\end{align*}
and for $p\in U_j\cap U_k$, the coordinate transformation
\begin{align*}
f_{jk}:z_k\to z_j=(z_j^1,...,z_j^n)=f_{jk}(z_k)
\end{align*}
transforming the local coordinates $z_k=(z_k^1,...,z_k^n)=z_k(p)$ into the local coordinates $z_j=(z_j^1,...,z_j^n)=z_j(p)$ is biholomorphic. According to Kodaira,
\begin{quote}
\textit{``A deformation of $M$ is considered to be the glueing of the same polydisks $U_j$ via different identification. In other words, replacing $f_{jk}^{\alpha}(z_k) $ by the functions $f_{jk}^{\alpha}(z_k,t)=f^{\alpha}_{jk}(z_k,t_1,...,t_m),$ $ f_{jk}(z_k,0)=f_{jk}^{\alpha}(z_k)$ of $z_k$, and the parameter $t=(t_1,...,t_m)$, we obtain deformations $M_t$ of $M=M_0$ by glueing the polydisks $U_1,...,U_n$ by identifying $z_k\in U_k$ with $z_j=f_{jk}(z_k,t)\in U_j$"}
\end{quote}
We extend the main idea of Kodaira-Spencer in the context of deformations of holomorphic Poisson structures. An $n$-dimensional compact holomorphic Poisson manifold $M$ is a compact complex manifold such that the structure sheaf $\mathcal{O}_M$ is a sheaf of Poisson algebras (we refer to \cite{Lau13} for general information on Poisson geometry). The holomorphic Poisson structure is encoded in a holomorphic section (a holomorphic bivector field) $\Lambda \in H^0(M,\wedge^2 \Theta_M)$ with $[\Lambda,\Lambda]=0$, where $\Theta_M$ is the sheaf of germs of holomorphic vector fields on $M$ and the bracket $[-,-]$ is the Schouten bracket on $M$. In the sequel a holomorphic Poisson manifold will be denoted by $(M,\Lambda)$. For deformations of a compact holomorphic Poisson manifold $(M,\Lambda)$, we extend the idea of Kodaira and Spencer. An $n$-dimensional compact holomorphic Poisson manifold is obtained by glueing the domains $U_1,...,U_n$ in $\mathbb{C}^n$: $M=\bigcup_{j=1}^n U_j$, where $\mathfrak{U}=\{U_j|j=1,...,n\}$ is a locally finite open covering of $M$ and each $U_j$ is a polydisk
\begin{align*}
U_j=\{z_j\in \mathbb{C}^n||z_j^1|<1,...,|z_j^n|<1\}
\end{align*}
equipped with a holomorphic bivector field $\Lambda_j=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j) \frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ such that $g_{\alpha\beta}^j(z_j)=-g_{\beta\alpha}^j(z_j)$ with $[\Lambda_j,\Lambda_j]=0$ on $U_j$ and for $p\in U_j\cap U_k$, the coordinate transformation
\begin{align*}
f_{jk}:z_k\to z_j=(z_j^1,...,z_j^n)=f_{jk}(z_k)
\end{align*}
transforming the local coordinates $z_k=(z_k^1,...,z_k^n)=z_k(p)$ into the local coordinates $z_j=(z_j^1,...,z_j^n)=z_j(p)$ is a biholomorphic `Poisson' map.
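Concretely, the condition that $f_{jk}$ be a Poisson map means that it transforms $\Lambda_k$ into $\Lambda_j$; in terms of the local expressions above,
\begin{align*}
g_{\alpha\beta}^j(f_{jk}(z_k))=\sum_{\gamma,\delta=1}^n \frac{\partial f_{jk}^{\alpha}}{\partial z_k^{\gamma}}\frac{\partial f_{jk}^{\beta}}{\partial z_k^{\delta}}\, g_{\gamma\delta}^k(z_k) \quad \text{on } U_j\cap U_k.
\end{align*}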
Deformations of a compact holomorphic Poisson manifold $(M,\Lambda)$ are given by the glueing of the Poisson polydisks $(U_j,\Lambda_j(t))$ parametrized by $t$. That is, replacing $f_{jk}^{\alpha}(z_k)$ by $f_{jk}^{\alpha}(z_k,t)$ $($with $f_{jk}(z_k,0)=f_{jk}^{\alpha}(z_k)$$)$, replacing $\Lambda_j=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j) \frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ by $\Lambda_j(t)=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j,t) \frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ with $[\Lambda_j(t),\Lambda_j(t)]=0$ and $\Lambda_j(0)=\Lambda_j$, and introducing the parameter $t=(t_1,...,t_m)$, we obtain deformations $(M_t,\Lambda_t)$ by glueing the Poisson polydisks $(U_1,\Lambda_1(t)),...,(U_n,\Lambda_n(t))$, identifying $z_k\in U_k$ with $z_j=f_{jk}(z_k,t)\in U_j$. The work on deformations of holomorphic Poisson structures is based on this fundamental idea.
In section \ref{section2}, we define a family of compact holomorphic Poisson manifolds, called a Poisson analytic family, in the framework of Kodaira-Spencer's deformation theory. In other words, when we ignore Poisson structures, a family of compact holomorphic Poisson manifolds is just a family of compact complex manifolds in the sense of Kodaira and Spencer. So deformations of compact holomorphic Poisson manifolds mean that we deform complex structures as well as Poisson structures.
In section \ref{section3}, we show that infinitesimal deformations of a holomorphic Poisson manifold $(M,\Lambda_0)$ in a Poisson analytic family are encoded in the first `degree-shifted by $1$' truncated holomorphic Poisson cohomology group. More precisely, an infinitesimal deformation is realized as an element of the first hypercohomology group $\mathbb{H}^1(M,\Theta_M^\bullet)$ of the complex of sheaves $\Theta_M^\bullet:\Theta_M\to \wedge^2 \Theta_M\to \cdots\to \wedge^n \Theta_M\to 0$ induced by $[\Lambda_0,-]$. Analogously to deformations of complex structures, we define the so-called Poisson Kodaira-Spencer map, of which the Kodaira-Spencer map is realized as a component.
In section \ref{section4}, we study the integrability condition for a Poisson analytic family. Kodaira showed that given a family of deformations of a compact complex manifold $M$, locally the family is represented by a $C^{\infty}$ vector $(0,1)$-form $\varphi(t)\in A^{0,1}(M,T_M)$ with $\varphi(0)=0$ satisfying the integrability condition $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$ (see \cite{Kod05} \S 5.3. Here $T_M$ is the holomorphic tangent bundle of $M$ and we use the notation $A^{0,1}(M,T_M)$ instead of $\mathscr{L}^{0,1}(T_M)$ in \cite{Kod05}). We show that given a family of deformations of a compact holomorphic Poisson manifold $(M,\Lambda_0)$, locally the family is represented by a $C^{\infty}$ vector $(0,1)$-form $\varphi(t)$ with $\varphi(0)=0$ and a $C^{\infty}$ bivector $\Lambda(t)\in A^{0,0}(M,\wedge^2 T_M)$ with $\Lambda(0)=\Lambda_0$ satisfying the integrability condition $[\Lambda(t),\Lambda(t)]=0$, $\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$, and $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$. Replacing $\varphi(t)$ by $-\varphi(t)$ and putting $\Lambda'(t):=\Lambda(t)-\Lambda_0$ so that we have $\Lambda'(0)=0$, the integrability condition is equivalent to $L(\varphi(t)+\Lambda'(t))+\frac{1}{2}[\varphi(t)+\Lambda'(t),\varphi(t)+\Lambda'(t)]=0$ where $L=\bar{\partial}+[\Lambda_0,-]$. Then $\varphi(t)+\Lambda'(t)$ is a solution of the Maurer-Cartan equation of the following differential graded Lie algebra
\begin{align}\label{tt76}
\mathfrak{g}=(\bigoplus_{i\geq 0} g_i,g_i=\bigoplus_{p+q-1=i,p\geq 0, q\geq 1} A^{0,p}(M,\wedge^q T_M),L=\bar{\partial}+[\Lambda_0,-],[-,-]),
\end{align}
where $[-,-]$ is the Schouten bracket on $M$, and $A^{0,p}(M,\wedge^q T_M)$ is the space of global sections of $\mathscr{A}^{0,p}(\wedge^q T_M)$, the sheaf of germs of $C^{\infty}$-sections of $\wedge^p \bar{T}_M^*\otimes \wedge^q T_M$. Here $\bar{T}_M^*$ is the dual bundle of the antiholomorphic tangent bundle $\bar{T}_M$ (see \cite{Kod05} p.108). We remark that the integrability condition was proved in a more general context in the language of generalized complex geometry (see \cite{Gua11}). As $H^1(M,\Theta_M)$ is realized as a subspace of the second cohomology group of a compact complex manifold $M$ in the sense of generalized complex geometry, $\mathbb{H}^1(M,\Theta_M^\bullet)$ is realized as a subspace of the second cohomology group of a compact holomorphic Poisson manifold $(M,\Lambda_0)$ in the sense of generalized complex geometry. In this paper, we deduce the integrability condition by extending Kodaira-Spencer's original approach, that is, by starting from the concept of a geometric family (a Poisson analytic family).
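Decomposing by bidegree, the Maurer-Cartan equation for $\varphi(t)+\Lambda'(t)\in g_1$ splits into three components in $g_2=A^{0,2}(M,T_M)\oplus A^{0,1}(M,\wedge^2 T_M)\oplus A^{0,0}(M,\wedge^3 T_M)$:
\begin{align*}
&\bar{\partial}\varphi(t)+\frac{1}{2}[\varphi(t),\varphi(t)]=0,\\
&\bar{\partial}\Lambda'(t)+[\Lambda_0,\varphi(t)]+[\varphi(t),\Lambda'(t)]=0,\\
&[\Lambda_0,\Lambda'(t)]+\frac{1}{2}[\Lambda'(t),\Lambda'(t)]=0,
\end{align*}
which reproduce, after undoing the sign change $\varphi(t)\to-\varphi(t)$ and substituting $\Lambda(t)=\Lambda_0+\Lambda'(t)$, the three integrability equations stated above.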
In section \ref{section5}, under some analytic assumption, we establish an analogous theorem to the following theorem of Kodaira and Spencer (\cite{Kodaira58},\cite{Kod05} p.270).
\begin{theorem}[Theorem of existence for complex analytic structures]
Let $M$ be a compact complex manifold and suppose $H^2(M,\Theta)=0$. Then there exists a complex analytic family $(\mathcal{M},B,\omega)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\omega^{-1}(0)=M$
\item The Kodaira-Spencer map $\rho_0:\frac{\partial}{\partial t}\to \left(\frac{\partial M_t}{\partial t}\right)_{t=0}$ with $M_t=\omega^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $H^1(M,\Theta_M):T_0(B)\xrightarrow{\rho_0} H^1(M,\Theta_M)$.
\end{enumerate}
\end{theorem}
Similarly, we prove `Theorem of existence for deformations of holomorphic Poisson structures' (see Theorem \ref{theorem of existence}).
\begin{theorem}[Theorem of existence for holomorphic Poisson structures]\label{theorem of existence}
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold such that the associated Laplacian operator $\Box$ $($induced from the operator $\bar{\partial}+[\Lambda_0,-]$$)$ is strongly elliptic and of diagonal type. Suppose that $\mathbb{H}^2(M,\Theta_M^\bullet)=0$. Then there exists a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\omega^{-1}(0)=(M,\Lambda_0)$
\item The Poisson Kodaira-Spencer map $\varphi_0:\frac{\partial}{\partial t}\to\left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}$ with $(M_t,\Lambda_t)=\omega^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $\mathbb{H}^1(M,\Theta_M^\bullet)$: $T_0(B)\xrightarrow{\varphi_0} \mathbb{H}^1(M,\Theta_M^\bullet)$.
\end{enumerate}
\end{theorem}
The proof is rather formal; it follows Kuranishi's method as presented in \cite{Mor71}. The assumption on the associated Laplacian operator $\Box$ (induced from the operator $\bar{\partial}+[\Lambda_0,-]$) is made in order to apply Kuranishi's method in the holomorphic Poisson context.
In section \ref{section6}, we establish an analogous theorem to the following theorem of Kodaira and Spencer (\cite{KS58},\cite{Kod05} p.284).
\begin{theorem}[Theorem of completeness for complex analytic structures]\label{kodairacomplete}
Let $(\mathcal{M},B,\omega)$ be a complex analytic family of deformations of a compact complex manifold $M_0=\omega^{-1}(0)$, $B$ a domain of $\mathbb{C}^m$ containing $0$. If the Kodaira-Spencer map $\rho_0:T_0 (B)\to H^1(M_0,\Theta_{M_0})$ is surjective, the complex analytic family $(\mathcal{M},B,\omega)$ is complete at $0\in B$.
\end{theorem}
Similarly, we prove the following theorem which is an analogue of `Theorem of completeness' by Kodaira-Spencer.
\begin{theorem}[Theorem of completeness for holomorphic Poisson structures]
Let $(\mathcal{M},\Lambda_{\mathcal{M}},B,\omega)$ be a Poisson analytic family of deformations of a compact holomorphic Poisson manifold $(M,\Lambda_0)=\omega^{-1}(0)$, $B$ a domain of $\mathbb{C}^m$ containing $0$. If the Poisson Kodaira-Spencer map $\varphi_0:T_0 (B) \to \mathbb{H}^1(M,\Theta_M^\bullet)$ is surjective, the Poisson analytic family $(\mathcal{M},\Lambda_{\mathcal{M}}, B,\omega)$ is complete at $0\in B$.
\end{theorem}
\section{Families of compact holomorphic Poisson manifolds}\label{section2}
\begin{definition}$($compare \cite{Kod05} p.59$)$\label{definition}
Suppose that given a domain $B\subset \mathbb{C}^m$, there is a set $\{(M_t,\Lambda_t)|t \in B\}$ of $n$-dimensional compact holomorphic Poisson manifolds $(M_t,\Lambda_t)$, depending on $t=(t_1,...,t_m)\in B$. We say that $\{(M_t,\Lambda_t)|t\in B\}$ is a family of compact holomorphic Poisson manifolds or a Poisson analytic family of compact holomorphic Poisson manifolds if there exists a holomorphic Poisson manifold $(\mathcal{M},\Lambda)$ and a holomorphic map $\omega:\mathcal{M}\to B$ satisfying the following properties
\begin{enumerate}
\item $\omega^{-1}(t)$ is a compact holomorphic Poisson submanifold of $(\mathcal{M},\Lambda)$ for each $t\in B$.
\item $(M_t,\Lambda_t)=\omega^{-1}(t)$ $($$M_t$ carries the holomorphic Poisson structure $\Lambda_t$ induced from $\Lambda$$)$.
\item The rank of Jacobian of $\omega$ is equal to $m$ at every point of $\mathcal{M}$.
\end{enumerate}
We will denote a Poisson analytic family by $(\mathcal{M},\Lambda,B,\omega)$. We also call $(\mathcal{M},\Lambda,B,\omega)$ a Poisson analytic family of deformations of a compact holomorphic Poisson manifold $(M_{t_0},\Lambda_{t_0})$ for each fixed $t_0\in B$.
\end{definition}
\begin{remark}
When we ignore Poisson structures, a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ is a complex analytic family $(\mathcal{M},B,\omega)$ in the sense of Kodaira-Spencer $($see \cite{Kod05} p.59$)$.
\end{remark}
\begin{remark}\label{tt61}
Given a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ as in Definition $\ref{definition}$, we can choose a locally finite open covering $\mathcal{U}=\{\mathcal{U}_j\}$ of $\mathcal{M}$ such that the $\mathcal{U}_j$ are coordinate polydisks with a system of local complex coordinates $\{z_1,...,z_j,...\}$, where the local coordinate function $z_j:p\to z_j(p)$ on $\mathcal{U}_j$ satisfies $z_j(p)=(z_j^1(p),...,z_j^n(p),t_1,...,t_m)$ with $t=(t_1,...,t_m)=\omega(p)$. Then for a fixed $t_0\in B$, $\{p\mapsto (z_j^1(p),...,z_j^n(p))| \mathcal{U}_j \cap M_{t_0}\ne \emptyset\}$ gives a system of local complex coordinates on $M_{t_0}$. In terms of these coordinates, $\omega$ is the projection given by $(z_j,t)=(z_j^1,...,z_j^n,t_1,...,t_m)\to (t_1,...,t_m)$. For $j,k$ with $\mathcal{U}_j\cap \mathcal{U}_k\ne \emptyset$, we denote the coordinate transformation from $z_k$ to $z_j$ by $f_{jk}:(z_k^1,...,z_k^n,t)\to (z_j^1,...,z_j^n,t)=f_{jk}(z_k^1,...,z_k^n,t)$ $($for details, see \cite{Kod05} p.60$)$.
On the other hand, since $(M_t,\Lambda_t) \hookrightarrow (\mathcal{M},\Lambda)$ is a holomorphic Poisson submanifold for each $t\in B$ and $\mathcal{M}=\bigcup_t M_t$, the holomorphic Poisson structure $\Lambda$ on $\mathcal{M}$ can be expressed in terms of local coordinates as $\Lambda=\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^j(z_j^1,...,z_j^n,t)\frac{\partial}{\partial{z_j^{\alpha}}}\wedge \frac{\partial}{\partial{z_j^{\beta}}}$ on $\mathcal{U}_j$, where $g_{\alpha\beta}^j(z_j,t)=g_{\alpha\beta}^j(z_j^1,...,z_j^n,t)$ is holomorphic with respect to $(z_j,t)$ and satisfies $g_{\alpha\beta}^j(z_j,t)=-g_{\beta\alpha}^j(z_j,t)$. For a fixed $t_0$, the holomorphic Poisson structure $\Lambda_{t_0}$ on $M_{t_0}$ is given by $\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^j(z_j^1,...,z_j^n,t_0)\frac{\partial}{\partial{z_j^{\alpha}}}\wedge \frac{\partial}{\partial{z_j^{\beta}}}$ on $\mathcal{U}_j\cap M_{t_0}$.
\end{remark}
\begin{remark}\label{restriction}
Let $(\mathcal{M},\Lambda,B,\omega)$ be a Poisson analytic family. Let $\Delta$ be an open set of $B$. Then the restriction $(\mathcal{M}_{\Delta}=\omega^{-1}(\Delta),\Lambda|_{M_{\Delta}},\Delta,\omega|_{\mathcal{M}_{\Delta}})$ is also a Poisson analytic family. We will denote the family by $(\mathcal{M}_{\Delta},\Lambda_{\Delta},\Delta,\omega)$.
\end{remark}
\begin{example}[complex tori]$($\cite{Kod58} $p.408$$)$
Let $S$ be the space of $n\times n$ matrices $s=(s_{\beta}^{\alpha})$ with $\det(Im(s) ) >0$, where $\alpha$ denotes the row index and $\beta$ the column index, and $Im(s)$ is the imaginary part of $s$. For each matrix $s\in S$ we define an $n\times 2n$ matrix $\omega(s)=(\omega_j^{\alpha}(s))$ by
\begin{equation*}
\omega_j^{\alpha}(s)=
\begin{cases}
\delta_j^{\alpha} & \text{for } 1\leq j\leq n,\\
s_{\beta}^{\alpha} & \text{for } j=n+\beta,\ 1\leq \beta \leq n.
\end{cases}
\end{equation*}
Let $G$ be the discontinuous abelian group of analytic automorphisms of $\mathbb{C}^n\times S$ generated by $g_j:(z,s)\to (z+\omega_j(s),s),\,\,\,\,\, j=1,...,2n,$
where $\omega_j(s)=(\omega_j^1(s),...,\omega_j^{\alpha}(s),...,\omega_j^n(s))$ is the $j$-th column vector of $\omega(s)$. The quotient space $\mathcal{M}=(\mathbb{C}^n\times S)/G$, together with the map $\pi:\mathcal{M}\to S$ induced from the canonical projection $\mathbb{C}^n\times S\to S$, forms a complex analytic family of complex tori. We will put a holomorphic Poisson structure on $\mathcal{M}$ so as to obtain a Poisson analytic family. A holomorphic bivector field of the form $\Lambda=\sum_{i,j=1}^nf_{ij}(s)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}$ on $\mathbb{C}^n\times S$, where the $f_{ij}(s)=f_{ij}(z,s)$ are holomorphic functions on $\mathbb{C}^n\times S$ independent of $z$, is a $G$-invariant bivector field on $\mathbb{C}^n\times S$, and hence induces a holomorphic bivector field on $\mathcal{M}$. Since the $f_{ij}(s)$ are independent of $z$, we have $[\Lambda,\Lambda]=0$. So $(\mathcal{M},\Lambda,S, \pi)$ is a Poisson analytic family.
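To make the $G$-invariance used here explicit: each generator $g_l$ acts by the translation $z\mapsto z+\omega_l(s)$ with $s$ fixed, so $(g_l)_*\frac{\partial}{\partial z_i}=\frac{\partial}{\partial z_i}$ and the coefficients $f_{ij}(s)$ are unchanged; hence
\begin{align*}
(g_l)_*\Lambda=\sum_{i,j=1}^n f_{ij}(s)\frac{\partial}{\partial z_i}\wedge \frac{\partial}{\partial z_j}=\Lambda,\,\,\,\,\, l=1,...,2n.
\end{align*}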
\end{example}
\begin{example}[Hirzebruch-Nagata surface]$($\cite{Uen99} $p.13$$)$
Take two copies of $\mathbb{C}\times \mathbb{P}_{\mathbb{C}}^1\times \mathbb{C}$ and write the coordinates as $(u,(\xi_0:\xi_1),t)$ and $(v,(\eta_0:\eta_1),t)$, respectively, where $u,v,t$ are the coordinates of $\mathbb{C}$ and $(\xi_0:\xi_1),(\eta_0:\eta_1)$ are the homogeneous coordinates of $\mathbb{P}_{\mathbb{C}}^1$.
By patching the two copies of $\mathbb{C}\times \mathbb{P}_{\mathbb{C}}^1\times \mathbb{C}$ together by the relations
\begin{equation*}\label{relation}
\begin{cases}
u=1/v, \\
(\xi_0:\xi_1)=(\eta_0:v^m\eta_1+tv^k\eta_0), \,\,\,\,\,m-2\leq 2k \leq m,\,\,\, \text{where} \,\,\, m,k\,\,\, \text{are natural numbers}\\
t=t,
\end{cases}
\end{equation*}
we obtain a complex analytic family $\pi:\mathcal{S}\to \mathbb{C}$ which is induced from the natural projection $\mathbb{C}\times \mathbb{P}_{\mathbb{C}}^1\times \mathbb{C}\to \mathbb{C}$ to the third component. We will put a holomorphic Poisson structure $\Lambda$ on $\mathcal{S}$ so that $(\mathcal{S},\Lambda,\mathbb{C},\pi)$ is a Poisson analytic family. $\mathcal{S}$ is covered by four affine charts. The copy of $\mathbb{C}\times \mathbb{P}_\mathbb{C}^1\times\mathbb{C}$ with coordinates $(u,(\xi_0:\xi_1),t)$ is covered by two affine charts $\mathbb{C}\times \mathbb{C}\times \mathbb{C}$, glued via $\mathbb{C}\times (\mathbb{C}-\{0\})\times \mathbb{C}$ by $(u,x=\frac{\xi_1}{\xi_0},t)\mapsto (u,y=\frac{\xi_0}{\xi_1},t)=(u,\frac{1}{x},t)$. Similarly, the other copy of $\mathbb{C}\times \mathbb{P}_\mathbb{C}^1\times \mathbb{C}$ is covered by two affine charts glued via $\mathbb{C}\times (\mathbb{C}-\{0\})\times \mathbb{C}$ by $(v,w=\frac{\eta_1}{\eta_0},t)\mapsto (v,z=\frac{\eta_0}{\eta_1},t)=(v,\frac{1}{w},t)$. We put holomorphic Poisson structures on each of the four affine charts which together define a global bivector field $\Lambda$ with $[\Lambda,\Lambda]=0$ on $\mathcal{S}$. In the $(u,x,t)$ coordinates, we give $g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}$, where $g(t)$ is any holomorphic function depending only on $t$. In the $(u,y,t)$ coordinates, we give $-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}$. In the $(v,w,t)$ coordinates, we give $-g(t)v^{2k-m+2}(wv^{m-k}+t)^2\frac{\partial}{\partial v}\wedge\frac{\partial}{\partial w}$. In the $(v,z,t)$ coordinates, we give $g(t)v^{2k-m+2}(v^{m-k}+tz)^2\frac{\partial}{\partial v}\wedge \frac{\partial}{\partial z}$. Then $(\mathcal{S},\Lambda,\mathbb{C},\pi)$ is a Poisson analytic family.
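As a consistency check that these local bivector fields glue, note that under $y=\frac{1}{x}$ we have $\frac{\partial}{\partial x}=-y^2\frac{\partial}{\partial y}$, hence
\begin{align*}
g(t)x^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial x}=-g(t)x^2y^2\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}=-g(t)\frac{\partial}{\partial u}\wedge \frac{\partial}{\partial y}
\end{align*}
since $xy=1$, which is the bivector field given in the $(u,y,t)$ coordinates; the remaining overlaps are checked in the same way. Note also that $[\Lambda,\Lambda]=0$ holds automatically here: each local expression of $\Lambda$ involves only the two commuting coordinate vector fields of the fibre direction, so $[\Lambda,\Lambda]$, being a section of the third exterior power of the rank $2$ subbundle they span, vanishes.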
\end{example}
\begin{example}[Hopf surfaces]
We construct a one-parameter Poisson analytic family of general Hopf surfaces.
Let $W=\mathbb{C}^2-\{0\}$. The automorphism of $W\times \mathbb{C}$ given by $g:(z_1,z_2,t)\to(az_1+tz_2^m,bz_2,t)$, where $0<|a|\leq |b| <1$ and $b^m-a=0$ $($i.e. $a=b^m$$)$, generates an infinite cyclic group $G$, which acts properly discontinuously and without fixed points. Hence $\mathcal{M}:=(W\times \mathbb{C})/G$ is a complex manifold. Since the projection of $W\times \mathbb{C}$ to $\mathbb{C}$ commutes with $g$, it induces a holomorphic map $\omega$ of $\mathcal{M}$ to $\mathbb{C}$. So $(\mathcal{M},\mathbb{C},\omega)$ is a complex analytic family. Since $g^n$ is given by $g^n:(z_1,z_2,t)\to(z_1',z_2',t')=(a^n z_1+na^{n-1}t z_2^m,b^n z_2,t)$,
we have
\begin{equation*}
\frac{\partial}{\partial z_1}=a^n\frac{\partial}{\partial z_1'},\,\,\,\,\, \frac{\partial}{\partial z_2}=mna^{n-1}t z_2^{m-1}\frac{\partial}{\partial z_1'}+b^n\frac{\partial}{\partial z_2'},\,\,\,\,\, \frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}=a^nb^n\frac{\partial}{\partial z_1'}\wedge \frac{\partial}{\partial z_2'}
\end{equation*}
Then $f(t)z_2^{m+1}\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}$, where $f(t)$ is any holomorphic function independent of $z$, is a $G$-invariant holomorphic bivector field on $W\times \mathbb{C}$ and so defines a holomorphic Poisson structure on $\mathcal{M}$. Hence $(\mathcal{M},f(t)z_2^{m+1}\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2},\mathbb{C},\omega)$ is a Poisson analytic family of Poisson Hopf surfaces.
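Explicitly, writing $z_2=b^{-n}z_2'$ and using the transformation rule above,
\begin{align*}
(g^n)_*\left(z_2^{m+1}\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}\right)=b^{-n(m+1)}(z_2')^{m+1}a^nb^n\frac{\partial}{\partial z_1'}\wedge \frac{\partial}{\partial z_2'}=(z_2')^{m+1}\frac{\partial}{\partial z_1'}\wedge \frac{\partial}{\partial z_2'},
\end{align*}
since $a^nb^nb^{-n(m+1)}=(ab^{-m})^n=1$ by the relation $a=b^m$; this is the $G$-invariance claimed above.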
\end{example}
\section{Infinitesimal deformations}\label{section3}
\subsection{Infinitesimal deformations and truncated holomorphic Poisson cohomology}\
In this subsection, we show that given a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$, an infinitesimal deformation of a compact holomorphic Poisson manifold $\omega^{-1}(t)=(M_t,\Lambda_t)$ of dimension $n$ is captured by an element of the first hypercohomology group of the complex of sheaves $\Theta_{M_t}^\bullet: \Theta_{M_t}\to \wedge^2 \Theta_{M_t}\to \cdots \to \wedge^n \Theta_{M_t}\to 0$ induced by $[\Lambda_t,-]$, analogously to how an infinitesimal deformation of a compact complex manifold $M_t$ is captured by an element of the first cohomology group $H^1(M_t,\Theta_t)$.
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold and consider the complex of sheaves
\begin{align}\label{complex}
\Theta_M^\bullet:\Theta_M\xrightarrow{[\Lambda_0,-]}\wedge^2 \Theta_M\xrightarrow{[\Lambda_0,-]}\cdots \xrightarrow{[\Lambda_0,-]} \wedge^n \Theta_M\to 0
\end{align}
where $\Theta_M$ is the sheaf of germs of holomorphic vector fields on $M$. Let $\mathcal{U}=\{U_j\}$ be a sufficiently fine open covering of $M$ such that the $U_j$ are coordinate polydisks of $M$, that is, $U_j=\{(z_j^1,...,z_j^n)\in \mathbb{C}^n||z_j^{\alpha}|<r_j^{\alpha},\alpha=1,...,n\}$, where $z_j=(z_j^1,...,z_j^n)$ is a local coordinate on $U_j$ and $r_j^{\alpha}>0$ is a constant. Then we can compute the hypercohomology group of the complex of sheaves $(\ref{complex})$ by the following \v{C}ech resolution (see \cite{EV92} Appendix). Here $\delta$ is the \v{C}ech map.
\begin{center}
$\begin{CD}
@A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\wedge^3 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\wedge^2 \Theta_M)@>\delta>> C^1(\mathcal{U},\wedge^2 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U},\Theta_M)@>-\delta>>C^1(\mathcal{U},\Theta_M)@>\delta>>C^2(\mathcal{U},\Theta_M)@>-\delta>>\cdots\\
\end{CD}$
\end{center}
\begin{definition}
The $i$-th `degree-shifted by $1$' truncated holomorphic Poisson cohomology group of a holomorphic Poisson manifold $(M,\Lambda_0)$ is defined to be the $i$-th hypercohomology group of the complex of sheaves $(\ref{complex})$; it is denoted by $\mathbb{H}^i(M,\Theta_M^\bullet)$.
\end{definition}
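\begin{remark}
In the simplest case $\Lambda_0=0$, the differentials of the complex $(\ref{complex})$ vanish, so the hypercohomology splits as $\mathbb{H}^i(M,\Theta_M^\bullet)\cong\bigoplus_{p+q=i}H^{q}(M,\wedge^{p+1}\Theta_M)$; in particular,
\begin{align*}
\mathbb{H}^1(M,\Theta_M^\bullet)\cong H^1(M,\Theta_M)\oplus H^0(M,\wedge^2 \Theta_M).
\end{align*}
This is consistent with the deformation-theoretic interpretation: a first-order deformation of $(M,0)$ consists of a first-order deformation of the complex structure of $M$ together with a global holomorphic bivector field on $M$.
\end{remark}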
\begin{remark}
In \cite{Wei99}, the holomorphic Poisson cohomology of a holomorphic Poisson manifold $(M,\Lambda_0)$ is defined as the hypercohomology of the complex of sheaves $\mathcal{O}_M\to \Theta_M\to \wedge^2 \Theta_M \to \cdots \to\wedge^n \Theta_M\to 0$ induced by $[\Lambda_0,-]$. Since the structure sheaf $\mathcal{O}_M$ plays no role in deformations of compact holomorphic Poisson manifolds, we truncate the complex of sheaves to get $0\to \Theta_M\to \wedge^2 \Theta_M\to \cdots \to \wedge^n \Theta_M\to 0$. In \cite{Kim14}, the author used the notation $HP^i(M,\Lambda_0)$ for the $i$-th truncated holomorphic Poisson cohomology group to maintain notational consistency with \cite{Nam09}, which inspired the present work. However, we shift the degree after truncation to get $\Theta_M\to \wedge^2 \Theta_M\to \cdots \to \wedge^n \Theta_M\to 0$, since this is more natural from the general philosophy of deformation theory: the $0$-th cohomology group corresponds to infinitesimal Poisson automorphisms, the first cohomology group corresponds to infinitesimal Poisson deformations, and the second cohomology group corresponds to obstructions $($see the third part of the author's Ph.D. thesis \cite{Kim14}$)$.
\end{remark}
We will relate the first `degree-shifted by $1$' truncated holomorphic Poisson cohomology group $\mathbb{H}^1(M_t,\Theta_{M_t}^\bullet)$ to infinitesimal deformations of $\omega^{-1}(t)=(M_t,\Lambda_t)$ in a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ for each $t\in B$. As in Remark $\ref{tt61}$, let $\mathcal{U}=\{\mathcal{U}_j\}$ be an open covering of $\mathcal{M}$ such that $\mathcal{U}_j$ are coordinate polydisks of $\mathcal{M}$, $\{(z_j,t)\}=\{(z_j^1,...,z_j^n,t_1,...,t_m)\}$ is a local complex coordinate system on $\mathcal{U}_j$, and $z_j^{\alpha}=f_{jk}^{\alpha}(z_k^1,...,z_k^n,t_1,...,t_m),\alpha=1,...,n$
is a holomorphic transition function from $z_k$ to $z_j$. The Poisson structure $\Lambda$ is expressed in terms of local complex coordinate system on $\mathcal{U}_j$ as
\begin{align}\label{poisson}
\Lambda=\Lambda_j=\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^{j}(z_j,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}
\end{align}
where $g^{j}_{\alpha \beta}(z_j,t)$ is a holomorphic function on $\mathcal{U}_j$ with $g_{\alpha\beta}^j(z_j,t)=-g_{\beta\alpha}^j(z_j,t)$ and we have
\begin{align}\label{tt67}
[\Lambda,\Lambda]=[\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^{j}(z_j,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}},\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^{j}(z_j,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}]=0
\end{align}
Since $f_{jk}(z_k,t)=(f_{jk}^1(z_k,t),...,f_{jk}^n(z_k,t),t_1,...,t_m)$ is a Poisson map, we have
\begin{align}\label{tt56}
g_{\alpha \beta}^j(f_{jk}^1(z_k,t),...,f_{jk}^n(z_k,t),t)=\sum_{r,s=1}^n g_{rs}^k(z_k,t)\frac{\partial f_{jk}^{\alpha}}{\partial z_k^r}\frac{\partial f_{jk}^{\beta}}{\partial z_k^s}
\end{align}
on $\mathcal{U}_j\cap \mathcal{U}_k$. Set $\mathcal{U}_j^t:=\mathcal{U}_j\cap M_t$. Then for each $t\in B$, $\mathcal{U}^t:=\{\mathcal{U}_j^t\}$ is an open covering of $M_t$. Recall that $\Lambda_t$ is the Poisson structure on $M_t$ induced from $(\mathcal{M},\Lambda)$. Let $\frac{\partial}{\partial t}=\sum_{\lambda=1}^m c_{\lambda}\frac{\partial}{\partial t_{\lambda}}$, $c_{\lambda}\in \mathbb{C}$ be a tangent vector of $B$. Then we have
\begin{proposition}\label{gg}
\begin{align*}
(\{\lambda_j(t)=\sum_{\alpha,\beta=1}^n \frac{\partial g_{\alpha \beta}^{j}(z_j,t)}{\partial t}\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}\}, \{\theta_{jk}(t)=\sum_{\alpha=1}^n \frac{\partial f_{jk}^{\alpha}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{\alpha}}\})\in C^0(\mathcal{U}^t,\wedge^2 \Theta_{M_t})\oplus C^1(\mathcal{U}^t,\Theta_{M_t})
\end{align*}
defines a $1$-cocycle. We call its cohomology class in $\mathbb{H}^1(M_t,\Theta_{M_t}^\bullet)$ the infinitesimal $($Poisson$)$ deformation along $\frac{\partial}{\partial t}$; it is independent of the choice of systems of local coordinates.
\end{proposition}
\begin{proof}
First we note that $\delta(\{\theta_{jk}(t)\})=0$ (see \cite{Kod05} p.201). Second, by taking the derivative of $(\ref{tt67})$ with respect to $t$, we have $[\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^{j}(z_j,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}},\sum_{\alpha,\beta=1}^n \frac{\partial g_{\alpha \beta}^{j}(z_j,t)}{\partial t}\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}]=0$. It remains to show that $\delta(\{\lambda_j(t)\})+[\Lambda_t,\{\theta_{jk}(t)\}]=0$. More precisely, on $\mathcal{U}_{j}^t\cap \mathcal{U}_k^t\ne \emptyset$, we show that $\lambda_{k}(t)-\lambda_{j}(t)+[\Lambda_t,\theta_{jk}(t)]=0$. In other words,
\begin{align}\label{equ1}
\sum_{r,s=1}^n \frac{\partial g^k_{rs}}{\partial t}\frac{\partial}{\partial z^{r}_k}\wedge\frac{\partial}{\partial z_k^{s}}-\sum_{\alpha,\beta=1}^n \frac{\partial g^j_{\alpha \beta}}{\partial t}\frac{\partial}{\partial z^{\alpha}_j}\wedge\frac{\partial}{\partial z_j^{\beta}}+[\sum_{r,s=1}^n g_{rs}^{j}(z_j,t)\frac{\partial}{\partial z_{j}^{r}}\wedge \frac{\partial}{\partial z_{j}^{s}},\sum_{c=1}^n \frac{\partial f_{jk}^{c}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{c}}]=0
\end{align}
Since $z_j^{\alpha}=f_{jk}^{\alpha}(z_k^1,...,z_k^n,t_1,...,t_m)$ for $\alpha=1,...,n$, we have $\frac{\partial}{\partial z_k^{r}}=\sum_{a=1}^{n}\frac{\partial f_{jk}^a}{\partial z_k^{r}}\frac{\partial}{\partial z_j^a}$ for $r=1,...,n$. Hence the first term of $(\ref{equ1})$ is
\begin{align*}
\sum_{r,s=1}^n \frac{\partial g^k_{rs}}{\partial t}\frac{\partial}{\partial z^{r}_k}\wedge\frac{\partial}{\partial z_k^{s}}=\sum_{r,s,a,b=1}^n \frac{\partial g_{rs}^k}{\partial t}\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}\frac{\partial}{\partial z_j^a}\wedge \frac{\partial}{\partial z_j^b}
\end{align*}
We compute the third term of $(\ref{equ1})$:
\begin{align*}
&\sum_{r,s,c=1}^n [g_{rs}^{j}(z,t)\frac{\partial}{\partial z_{j}^{r}}\wedge \frac{\partial}{\partial z_{j}^{s}},\frac{\partial f_{jk}^{c}(z_k,t)}{\partial t}\frac{\partial}{\partial z_j^{c}}]=\sum_{r,s,c=1}^n ([g_{rs}^j \frac{\partial}{\partial z_j^r},\frac{\partial f_{jk}^c}{\partial t} \frac{\partial}{\partial z_j^c}]\wedge \frac{\partial}{\partial z_j^s}-g_{rs}^j[\frac{\partial}{\partial z_j^s},\frac{\partial f_{jk}^c}{\partial t}\frac{\partial}{\partial z_j^c}]\wedge \frac{\partial}{\partial z_j^r})\\
&=\sum_{r,s,c=1}^n (g_{rs}^j\frac{\partial}{\partial z_j^r}\left(\frac{\partial f_{jk}^c}{\partial t}\right) \frac{\partial}{\partial z_j^c}\wedge \frac{\partial}{\partial z_j^s}-\frac{\partial f_{jk}^c}{\partial t}\frac{\partial g_{rs}^j}{\partial z_j^c}\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}+g_{rs}^j\frac{\partial}{\partial z_j^s}\left(\frac{\partial f_{jk}^c}{\partial t}\right)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^c})
\end{align*}
By considering the coefficients of $\frac{\partial}{\partial z_j^a}\wedge \frac{\partial}{\partial z_j^b}$, $(\ref{equ1})$ is equivalent to
\begin{align}\label{equ2}
\sum_{r,s=1}^n \frac{\partial g_{rs}^k}{\partial t}\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}-\frac{\partial g_{ab}^j}{\partial t}-\sum_{c=1}^n \frac{\partial g_{ab}^j}{\partial z_j^c}\frac{\partial f_{jk}^c}{\partial t}+\sum_{c=1}^n (g_{cb}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{ac}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right))=0
\end{align}
On the other hand, from $(\ref{tt56})$, we have
\begin{align}\label{tt77}
g_{ab}^j(f_{jk}^1(z_k,t),...,f_{jk}^n(z_k,t),t_1,...,t_m)=\sum_{r,s=1}^n g_{rs}^k \frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}\,\,\,\,\,\,\text{on}\,\,\,\mathcal{U}_j \cap \mathcal{U}_k
\end{align}
By taking the derivative of $(\ref{tt77})$ with respect to $t$, we have
\begin{align*}
\sum_{c=1}^n \frac{\partial g_{ab}^j}{\partial z_j^c}\frac{\partial f_{jk}^c}{\partial t}+\frac{\partial g_{ab}^j}{\partial t}=\sum_{r,s=1}^n \frac{\partial g_{rs}^k}{\partial t}\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}+\sum_{r,s=1}^ng_{rs}^k(\frac{\partial}{\partial z_k^r}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial}{\partial z_k^s}\left(\frac{\partial f_{jk}^b}{\partial t}\right) )
\end{align*}
Hence $(\ref{equ2})$ is equivalent to
\begin{align}\label{tt57}
\sum_{c=1}^n (g_{cb}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{ac}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right))=\sum_{r,s=1}^n g_{rs}^k(\frac{\partial}{\partial z_k^r}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial}{\partial z_k^s}\left(\frac{\partial f_{jk}^b}{\partial t}\right) )
\end{align}
Indeed, the left-hand side and the right-hand side of $(\ref{tt57})$ coincide: from $(\ref{tt56})$,
{\small{\begin{align*}
\sum_{c=1}^n (g_{cb}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{ac}^j\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right))&=\sum_{r,s,c=1}^n (g_{rs}^k\frac{\partial f_{jk}^c}{\partial z_k^r}\frac{\partial f_{jk}^b}{\partial z_k^s}\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^a}{\partial t}\right)+g_{rs}^k\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial f_{jk}^c}{\partial z_k^s}\frac{\partial}{\partial z_j^c}\left(\frac{\partial f_{jk}^b}{\partial t}\right))\\
&=\sum_{r,s=1}^n g_{rs}^k(\frac{\partial}{\partial z_k^r}\left(\frac{\partial f_{jk}^a}{\partial t}\right)\frac{\partial f_{jk}^b}{\partial z_k^s}+\frac{\partial f_{jk}^a}{\partial z_k^r}\frac{\partial}{\partial z_k^s}\left(\frac{\partial f_{jk}^b}{\partial t}\right) )
\end{align*}}}
This proves the first claim. It remains to show that $(\{\lambda_j(t)\},\{\theta_{jk}(t)\})$ is independent of the choice of systems of local coordinates. We can show that the infinitesimal deformation does not change under a refinement of the open covering (see \cite{Kod05} p.190). Since we can choose a common refinement of two systems of local coordinates, it is sufficient to show that given two local coordinates $x_j=(z_j,t)$ and $u_j=(w_j,t)$ on each $\mathcal{U}_j$, the infinitesimal Poisson deformation $(\{\pi_j(t)\},\{\eta_{jk}(t)\})$ with respect to $\{u_j\}$ coincides with $(\{\lambda_j(t)\},\{\theta_{jk}(t)\})$ with respect to $\{x_j\}$. Let the Poisson structure $\Lambda$ in (\ref{poisson}) be expressed in terms of the local coordinates $u_j$ as $\Lambda=\Pi_j=\sum_{\alpha,\beta=1}^n \Pi_{\alpha \beta}^{j}(w_j,t)\frac{\partial}{\partial w_{j}^{\alpha}}\wedge \frac{\partial}{\partial w_{j}^{\beta}}$. Let $(w_k,t)\to (w_j,t)=(e_{jk}(w_k,t),t)$ be the coordinate transformation of $\{u_j\}$ on $\mathcal{U}_j\cap \mathcal{U}_k\ne \emptyset$. Now we set
\begin{align*}
\eta_{jk}(t)=\sum_{\alpha=1}^n \frac{\partial e_{jk}^\alpha(w_k,t)}{\partial t}\frac{\partial}{\partial w_j^{\alpha}},\,\,\,w_k=e_{kj}(w_j,t),\,\,\,\,\,\,\,\,\pi_j(t)=\sum_{\alpha,\beta=1}^n \frac{\partial \Pi_{\alpha \beta}^{j}(w_j,t)}{\partial t}\frac{\partial}{\partial w_{j}^{\alpha}}\wedge \frac{\partial}{\partial w_{j}^{\beta}}
\end{align*}
We show that $(\{\lambda_j(t)\},\{\theta_{jk}(t)\})$ is cohomologous to $(\{\pi_j(t)\},\{\eta_{jk}(t)\})$. Let $w_j^{\alpha}=h_j^{\alpha}(z_j^1,...,z_j^n,t),\alpha=1,...,n$, define the coordinate transformation from $x_j=(z_j,t)$ to $u_j=(w_j,t)$, which is a Poisson map.
So we have $\frac{\partial}{\partial z_j^r}=\sum_{a=1}^n \frac{\partial h_j^a}{\partial z_j^r}\frac{\partial}{\partial w_j^a}$ and the following relation holds
\begin{align}\label{yh}
\Pi_{\alpha\beta}^j(h_j^1(z_j,t),...,h_{j}^n(z_j,t),t)=\sum_{r,s=1}^n g_{rs}^j(z_j,t)\frac{\partial h_{j}^{\alpha}}{\partial z_j^r}\frac{\partial h_{j}^{\beta}}{\partial z_j^s}.
\end{align}
Set $\theta_j(t)=\sum_{\alpha=1}^n \frac{\partial h_j^{\alpha}(z_j,t)}{\partial t} \frac{\partial }{\partial w_j^{\alpha}}, \,\,\,\,\,w_j^{\alpha}=h_j^{\alpha}(z_j,t)$. Then we claim that $(\lambda_j(t),\theta_{jk}(t))-(\pi_j(t),\eta_{jk}(t))=\theta_k(t)-\theta_j(t)-[\Lambda_t, \theta_j(t)]=-\delta(-\theta_j(t))+[\Lambda_t,-\theta_j(t)]$, which means that $(\{\lambda_j(t)\},\{\theta_{jk}(t)\})$ is cohomologous to $(\{\pi_j(t)\},\{\eta_{jk}(t)\})$. Since $\delta(\{\theta_j(t)\})=\{\theta_{jk}(t)\}-\{\eta_{jk}(t)\}$ (for details, see \cite{Kod05} p.191-192), it only remains to check that $\lambda_j(t)-\pi_j(t)+[\Lambda_t(=\Pi_t),\theta_j(t)]=0$. Equivalently,
{\small{\begin{align*}
\sum_{r,s=1}^n \frac{\partial g_{rs}^{j}(z_j,t)}{\partial t}\frac{\partial }{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}-\sum_{\alpha,\beta=1}^n \frac{\partial \Pi_{\alpha\beta}^j(w_j,t)}{\partial t}\frac{\partial}{\partial w_j^{\alpha}}\wedge \frac{\partial}{\partial w_j^{\beta}}+[\sum_{\alpha,\beta=1}^n \Pi_{\alpha\beta}^j(w_j,t)\frac{\partial}{\partial w_j^{\alpha}}\wedge \frac{\partial}{\partial w_j^{\beta}},\sum_{c=1}^n \frac{\partial h_j^{c}(z_j,t)}{\partial t} \frac{\partial }{\partial w_j^{c}}]=0
\end{align*}}}
which follows from taking the derivative of (\ref{yh}) with respect to $t$, as in the proof of the first claim.
\end{proof}
\begin{definition}[(holomorphic) Poisson Kodaira-Spencer map]\label{mapping}
Let $(\mathcal{M},\Lambda,B,\omega)$ be a Poisson analytic family, where $B$ is a domain of $\mathbb{C}^m$. As in Remark $\ref{tt61}$, let $\mathcal{U}=\{\mathcal{U}_j\}$ be an open covering of $\mathcal{M}$, and $(z_j,t)$ a local complex coordinate system on $\mathcal{U}_j$. The Poisson structure $\Lambda$ is expressed as $\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^{j}(z_j,t)\frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}$ on $\mathcal{U}_j$ where $g^{j}_{\alpha \beta}(z_j,t)$ is a holomorphic function with $g_{\alpha\beta}^j(z_j,t)=-g_{\beta\alpha}^j(z_j,t)$. For a tangent vector $\frac{\partial}{\partial t}=\sum_{\lambda=1}^{m} c_{\lambda}\frac{\partial}{\partial t_{\lambda}},c_{\lambda} \in \mathbb{C}$, of $B$, we put
\begin{align*}
\frac{\partial \Lambda_t}{\partial t}:=\sum_{\alpha,\beta=1}^n\left[\sum_{\lambda=1}^{m}c_{\lambda}\frac{\partial g_{\alpha \beta}^{j}(z_j,t)}{\partial t_{\lambda}}\right] \frac{\partial}{\partial z_{j}^{\alpha}}\wedge \frac{\partial}{\partial z_{j}^{\beta}}
\end{align*}
The $($holomorphic$)$ Poisson Kodaira-Spencer map is defined to be a $\mathbb{C}$-linear map
\begin{align*}
\varphi_t:T_t(B) &\to \mathbb{H}^1(M_t,\Theta_{M_t}^\bullet)\\
\frac{\partial}{\partial t} &\mapsto \left[\rho_t\left(\frac{\partial}{\partial t}\right)\left(=\frac{\partial{M}_t}{\partial t}\right), \frac{\partial{\Lambda_t}}{\partial t}\right]=\frac{\partial (M_t,\Lambda_t)}{\partial t}
\end{align*}
where $\rho_t:T_t(B)\to H^1(M_t,\Theta_t)$ is the Kodaira-Spencer map of the complex analytic family $(\mathcal{M},B,\omega)$ $($see \cite{Kod05} $p.201$$)$.
\end{definition}
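\begin{remark}
The projection of the complex $\Theta_{M_t}^\bullet$ onto its degree $0$ term $\Theta_{M_t}$ is a map of complexes $($the target, concentrated in degree $0$, has zero differential$)$, and on the cocycle level it sends $(\{\lambda_j(t)\},\{\theta_{jk}(t)\})$ to $\{\theta_{jk}(t)\}$. Hence, denoting by $p:\mathbb{H}^1(M_t,\Theta_{M_t}^\bullet)\to H^1(M_t,\Theta_{M_t})$ the induced map, we have
\begin{align*}
\rho_t=p\circ \varphi_t,
\end{align*}
so the Poisson Kodaira-Spencer map refines the classical Kodaira-Spencer map of the underlying complex analytic family $(\mathcal{M},B,\omega)$.
\end{remark}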
\section{Integrability condition}\label{section4}
In a complex analytic family $(\mathcal{M},B,\omega)$ of deformations of a complex manifold $M=\omega^{-1}(0)$, the deformations near $M$ are represented by $C^{\infty}$ vector $(0,1)$-forms $\varphi(t) \in A^{0,1}(M,T_M)$ on $M$ satisfying $\varphi(0)=0$ and the integrability condition $\bar{\partial} \varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$, where $t \in \Delta$, a sufficiently small polydisk in $B$ (see \cite{Kod05} section \S 5.3). In this section, we show that in a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ of deformations of a compact holomorphic Poisson manifold $(M,\Lambda_0)=\omega^{-1}(0)$, the deformations near $(M,\Lambda_0)$ are represented by $C^{\infty}$ vector $(0,1)$-forms $\varphi(t)\in A^{0,1}(M,T_M)$ and $C^{\infty}$ bivector fields $\Lambda(t)\in A^{0,0}(M,\wedge^2 T_M)$ satisfying $\varphi(0)=0$, $\Lambda(0)=\Lambda_0$ and the integrability condition $\bar{\partial}(\varphi(t)+\Lambda(t))+\frac{1}{2}[\varphi(t)+\Lambda(t),\varphi(t)+\Lambda(t)]=0$. To deduce the integrability condition, we extend Kodaira's approach (\cite{Kod05} section \S 5.3) to the context of a Poisson analytic family.
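To see where the cohomology of Section \ref{section3} enters, write $\varphi(t)=\varphi_1(t)+\varphi_2(t)+\cdots$ and $\Lambda(t)=\Lambda_0+\Lambda_1(t)+\Lambda_2(t)+\cdots$, where $\varphi_{\mu}(t)$ and $\Lambda_{\mu}(t)$ are homogeneous of degree $\mu$ in $t_1,...,t_m$. The degree $0$ part of the integrability condition is $\bar{\partial}\Lambda_0+\frac{1}{2}[\Lambda_0,\Lambda_0]=0$, which holds since $\Lambda_0$ is a holomorphic Poisson structure, and the degree $1$ part reads
\begin{align*}
\bar{\partial}(\varphi_1(t)+\Lambda_1(t))+[\Lambda_0,\varphi_1(t)+\Lambda_1(t)]=0,
\end{align*}
that is, the first-order term $\varphi_1(t)+\Lambda_1(t)\in A^{0,1}(M,T_M)\oplus A^{0,0}(M,\wedge^2 T_M)$ is a cocycle for the differential $\bar{\partial}+[\Lambda_0,-]$, which computes $\mathbb{H}^1(M,\Theta_M^\bullet)$ via the Dolbeault resolution of the complex $\Theta_M^\bullet$.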
\subsection{Preliminaries}\label{prill} \
We extend the argument of \cite{Kod05} p.259-261 (to which we refer for details) to the context of a Poisson analytic family. We try to maintain notational consistency with \cite{Kod05}.
Let $(\mathcal{M}, \Lambda, B,\omega)$ be a Poisson analytic family of compact holomorphic Poisson manifolds, where $B$ is a domain of $\mathbb{C}^m$ containing the origin $0$. Define $|t|=\max_{\lambda}|t_{\lambda}|$ for $t=(t_1,...,t_m)\in \mathbb{C}^m$, and let $\Delta=\Delta_r =\{t\in \mathbb{C}^m||t|<r\}$ be the polydisk of radius $r>0$. If we take a sufficiently small $\Delta \subset B$, then $(\mathcal{M}_{\Delta},\Lambda_\Delta)=\omega^{-1}(\Delta)$ is represented in the form
\begin{align*}
(\mathcal{M}_{\Delta},\Lambda_\Delta)=\bigcup_j (U_j\times \Delta,\Lambda|_{U_j\times \Delta})
\end{align*}
We denote a point of $U_j$ by $\xi_j=(\xi_j^1,...,\xi_j^n)$ and its holomorphic Poisson structure $\Lambda|_{U_j\times \Delta}$ by $\sum_{\alpha,\beta=1}^ng_{\alpha \beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$ on $U_j\times \Delta$ with $g_{\alpha\beta}^j(\xi_j,t)=-g_{\beta\alpha}^j(\xi_j,t)$. For simplicity, we assume that $U_j=\{\xi_j\in \mathbb{C}^n||\xi_j|<1\}$, where $|\xi_j|=\max_{\alpha}|\xi_j^{\alpha}|$. $(\xi_j,t)\in U_j\times \Delta$ and $(\xi_k,t)\in U_k\times \Delta$ are the same point on $\mathcal{M}_{\Delta}$ if $\xi_j^{\alpha}=f_{jk}^{\alpha}(\xi_k,t)$, $\alpha=1,...,n$, where $f_{jk}(\xi_k,t)$ is a holomorphic Poisson map of $\xi_{k}^1,...,\xi_k^n,t_1,...,t_m$, defined on $(U_j\times \Delta) \cap (U_k\times \Delta)$, and so we have the following relation
\begin{align}\label{vv4}
g_{\alpha \beta}^j(f_{jk}^1(\xi_k,t),...,f_{jk}^n(\xi_k,t),t)=\sum_{r,s=1}^n g_{rs}^k(\xi_k,t)\frac{\partial f_{jk}^{\alpha}}{\partial \xi_k^r}\frac{\partial f_{jk}^{\beta}}{\partial \xi_k^s}
\end{align}
We note that $\omega^{-1}(t_0)=(M_{t_0},\Lambda_{t_0})=\bigcup_j (U_j,\sum_{\alpha,\beta=1}^ng_{\alpha \beta}^j(\xi_j,t_0) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}})$ for $t_0\in\Delta$.
By \cite{Kod05} Theorem 2.3, when we ignore complex structures and Poisson structures, $M_t$ is diffeomorphic to $M_0=\omega^{-1}(0)$ as differentiable manifolds for each $t\in \Delta$. We put $M:=M_0$. By \cite{Kod05} Theorem 2.5, if we take a sufficiently small $\Delta$, there is a diffeomorphism $\Psi$ of $M\times \Delta$ onto $\mathcal{M}_{\Delta}$ as differentiable manifolds such that $\omega\circ \Psi$ is the projection $M\times \Delta \to \Delta$. Let $z=(z_1,...,z_n)$ be local complex coordinates of $M=M_0$. Then we have $\omega\circ \Psi(z,t)=t,\,\,\,\,\, t\in \Delta$. For $\Psi(z,t)\in U_j\times \Delta$, put
\begin{align}\label{pp00}
\Psi(z,t)=(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m).
\end{align}
Then each component $\xi_j^{\alpha}=\xi_j^{\alpha}(z,t)$, $\alpha=1,...,n$ is a $C^{\infty}$ function. If we identify $\mathcal{M}_{\Delta}=\Psi(M\times \Delta)$ with $M\times \Delta$ via $\Psi$, $(\mathcal{M}_{\Delta},\Lambda_\Delta)$ is considered as a holomorphic Poisson manifold with the complex structure defined on the $C^{\infty}$ manifold $M\times \Delta$ by the system of local coordinates on $U_j\times \Delta$
\begin{align*}
\{(\xi_j,t)|j=1,2,3,...\},\,\,\,\,\, (\xi_j,t)=(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m).
\end{align*}
and the holomorphic Poisson structure given on $U_j\times \Delta$ by
\begin{align}\label{tt23}
\{\sum_{\alpha,\beta=1}^n g_{\alpha \beta}^j(\xi_j(z,t),t)\frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}|j=1,2,3,...\}
\end{align}
We note that since $(z_1,...,z_n)$ and $(\xi_j^1(z,0),...,\xi_j^n(z,0))$ are local complex coordinates on $M=M_0$,
\begin{align}\label{holomorphic}
\text{$\xi_j^{\alpha}(z,0)$ are holomorphic functions of $z_1,...,z_n$, $\alpha=1,...,n$}
\end{align}
We also note that if we take $\Delta$ sufficiently small, we have
\begin{align}\label{det}
\det\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}}\right)_{\alpha,\lambda=1,...,n}\ne 0
\end{align}
for any $t\in \Delta$.
With this preparation, we identify the holomorphic Poisson deformations near $(M,\Lambda_0)$ in the Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ with $\varphi(t)+\Lambda(t)$ where $\varphi(t)$ is a $C^{\infty}$ vector $(0,1)$-form and $\Lambda(t)$ is a $C^{\infty}$ bivector on $M$ for $t\in \Delta$.
\subsection{Identification of the deformations of complex structures with $\varphi(t)\in A^{0,1}(M,T_M)$}\
Put $\mathcal{U}_j=\Psi^{-1}(U_j\times \Delta)$. Then $\mathcal{U}_j\subset M\times \Delta$ is the domain of $\xi_j^{\alpha}(z,t)$. From $(\ref{det})$, we can define a $(0,1)$-form $\varphi^{\lambda}_j(z,t)=\sum_{v=1}^n \varphi^{\lambda}_{jv}(z,t)d\bar{z}_v$ in the following way:
\begin{equation*}
\left(
\begin{matrix}
\varphi_j^1(z,t)\\
\vdots \\
\varphi_j^n(z,t)
\end{matrix}
\right)
:=
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)^{-1}
\left(
\begin{matrix}
\bar{\partial} \xi_j^1\\
\vdots \\
\bar{\partial} \xi_j^n
\end{matrix}
\right)
\end{equation*}
Then the coefficients $\varphi_{jv}^{\alpha}(z,t)$ are $C^{\infty}$ functions on $\mathcal{U}_j$ and $\bar{\partial}\xi_j^{\alpha}(z,t)=\sum_{\lambda=1}^{n} \varphi_j^{\lambda}(z,t)\frac{\partial \xi_j^{\alpha}(z,t)}{\partial z_{\lambda}},\alpha=1,...,n$. So we have
\begin{equation}\label{matrix1}
\frac{\partial \xi_j^{\alpha}}{\partial \bar{z}_v}=\sum_{\lambda=1}^n \varphi_{jv}^\lambda(z,t) \frac{\partial \xi_j^\alpha}{\partial z_\lambda}
\end{equation}
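For orientation, we note what this becomes in the simplest case $($an added illustration$)$: when $n=1$, the matrix inversion is just division, and $(\ref{matrix1})$ reads
\begin{align*}
\varphi_{j1}^{1}(z,t)=\frac{\partial \xi_j^1/\partial \bar{z}_1}{\partial \xi_j^1/\partial z_1},
\end{align*}
the classical Beltrami coefficient of the coordinate change $z\mapsto \xi_j^1(z,t)$, which is well defined by $(\ref{det})$.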
\begin{lemma}\label{c}
On $\mathcal{U}_j \cap \mathcal{U}_k$, we have
\begin{align*}
\sum_{\lambda=1}^n \varphi_j^{\lambda}(z,t)\frac{\partial}{\partial z_{\lambda}}=\sum_{\lambda=1}^n \varphi_k^{\lambda}(z,t)\frac{\partial}{\partial z_{\lambda}}
\end{align*}
\end{lemma}
\begin{proof}
See \cite{Kod05} p.262.
\end{proof}
For $(z,t)\in \mathcal{U}_j$, we define
\begin{align}\label{b}
\varphi(z,t):=\sum_{\lambda=1}^n \varphi_j^{\lambda}(z,t) \frac{\partial}{\partial z_{\lambda}}=\sum_{\lambda=1}^n\varphi^\lambda(z,t)\frac{\partial}{\partial z_\lambda}=\sum_{v,\lambda=1}^n \varphi_v^{\lambda}(z,t) d\bar{z}_v \frac{\partial}{\partial z_{\lambda}}
\end{align}
By Lemma \ref{c}, $\varphi(t)=\varphi(z,t)\in A^{0,1}(M,T_M)$ is a $C^{\infty}$ vector $(0,1)$-form on $M$ for every $t\in\Delta$ and we have
\begin{align}\label{tt07}
\text{$\varphi(0)=0$,\,\,\,\,\,\,\,\,$\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$}
\end{align}
(see \cite{Kod05} p.263,p.265). We also point out that
\begin{theorem}\label{text}
If we take a sufficiently small polydisk $\Delta$ as in subsection $\ref{prill}$, then for $t\in \Delta$, a local $C^{\infty}$ function $f$ on $M$ is holomorphic with respect to the complex structure $M_t$ if and only if $f$ satisfies the equation
\begin{align*}
(\bar{\partial}-\varphi(t))f=0
\end{align*}
\end{theorem}
\begin{proof}
See \cite{Kod05} Theorem 5.3 p.263.
\end{proof}
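For example $($an added remark$)$, in the one-dimensional case $n=1$, writing $\varphi(t)=\mu(z,t)\,d\bar{z}\,\frac{\partial}{\partial z}$, the equation of Theorem $\ref{text}$ is the classical Beltrami equation
\begin{align*}
\frac{\partial f}{\partial \bar{z}}=\mu(z,t)\frac{\partial f}{\partial z},
\end{align*}
so the holomorphic functions on $M_t$ are exactly the solutions of the Beltrami equation with coefficient $\mu(z,t)$.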
\subsection{Identification of the deformations of Poisson structures with $\Lambda(t)\in A^{0,0}(M,\wedge^2 T_M)$}\
For the holomorphic Poisson structure $\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(\xi_j(z,t),t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$ on each $U_j\times \Delta$ from $(\ref{tt23})$, there exists a unique bivector field $\Lambda_j(z,t):=\sum_{r,s=1}^n h_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}$ on $\mathcal{U}_j=\Psi^{-1}(U_j\times \Delta)$ such that
\begin{align}\label{tt304}
\sum_{r,s=1}^n h_{rs}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_r}\frac{\partial \xi_j^{\beta}}{\partial z_s}=g_{\alpha\beta}^j(\xi_j(z,t),t).
\end{align}
Indeed, from $(\ref{det})$,
we set
{\tiny{\begin{equation*}
\left(
\begin{matrix}
h_{11}^j(z,t)& \dots & h_{1n}^j(z,t)\\
\vdots & \vdots &\vdots\\
h_{n1}^j(z,t)& \dots & h_{nn}^j(z,t)
\end{matrix}
\right)
:=
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^1}{\partial z_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^n}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)^{-1}
\left(
\begin{matrix}
g_{11}^j(\xi_j(z,t),t)& \dots & g_{1n}^j(\xi_j(z,t),t)\\
\vdots & \vdots &\vdots\\
g_{n1}^j(\xi_j(z,t),t) & \dots & g_{nn}^j(\xi_j(z,t),t)
\end{matrix}
\right)
\left(
\begin{matrix}
\frac{\partial \xi_j^1}{\partial z_1} & \dots & \frac{\partial \xi_j^n}{\partial z_1}\\
\vdots & \vdots &\vdots\\
\frac{\partial \xi_j^1}{\partial z_n} & \dots & \frac{\partial \xi_j^n}{\partial z_n}
\end{matrix}
\right)^{-1}
\end{equation*}}}
We note that since $g_{\alpha\beta}^j(\xi_j(z,t),t)=-g_{\beta\alpha}^j(\xi_j(z,t),t)$, we have $h_{rs}^j(z,t)=-h_{sr}^j(z,t)$.
\begin{lemma}\label{e}
On $\mathcal{U}_j\cap \mathcal{U}_k$, we have $h_{rs}^j(z,t)=h_{rs}^k(z,t)$.
\end{lemma}
\begin{proof}
From $(\ref{tt304})$, $(\ref{vv4})$ and $\frac{\partial \xi_j^{\alpha}}{\partial z_r}=\sum_{p=1}^n\frac{\partial \xi_k^p}{\partial z_r}\frac{\partial \xi_j^{\alpha}}{\partial \xi_k^p}$, we have
\begin{align*}
\sum_{r,s=1}^n h_{rs}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_r}\frac{\partial \xi_j^{\beta}}{\partial z_s}&=g_{\alpha\beta}^j(\xi_j(z,t),t)=\sum_{p,q=1}^n g_{pq}^k(\xi_k(z,t),t)\frac{\partial \xi_j^{\alpha}}{\partial \xi_k^p}\frac{\partial \xi_j^{\beta}}{\partial \xi_k^q}\\
&=\sum_{p,q,r,s=1}^n h_{rs}^k(z,t)\frac{\partial \xi_k^{p}}{\partial z_r}\frac{\partial \xi_k^{q}}{\partial z_s}\frac{\partial \xi_j^{\alpha}}{\partial \xi_k^p}\frac{\partial \xi_j^{\beta}}{\partial \xi_k^q}=\sum_{r,s=1}^n h_{rs}^k(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_r}\frac{\partial \xi_j^{\beta}}{\partial z_s}.
\end{align*}
From $(\ref{det})$, we have $h_{rs}^j(z,t)=h_{rs}^k(z,t)$.
\end{proof}
For $(z,t)\in \mathcal{U}_j$, we define
\begin{align}\label{f}
\Lambda(z,t):=\sum_{r,s=1}^n h_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}=\sum_{r,s=1}^n h_{rs}(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}.
\end{align}
By Lemma \ref{e}, $\Lambda(t):=\Lambda(z,t)\in A^{0,0}(M,\wedge^2 T_M)$ is a $C^{\infty}$ bivector field on $M$ for every $t\in \Delta$ with $\Lambda(0)=\Lambda_0$.
\begin{theorem}\label{1thm}
If we take a sufficiently small polydisk $\Delta$ as in subsection $\ref{prill}$, then for the Poisson structure $\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge \frac{\partial}{\partial \xi_j^{\beta}}$ on $U_j\times \Delta$ for each $j$, there exists a unique bivector field $\Lambda_j(t)=\sum_{r,s=1}^n h_{rs}^j(z,t)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}$ on $\mathcal{U}_j$ satisfying
\begin{enumerate}
\item $\sum_{r,s=1}^n h_{rs}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_r}\frac{\partial \xi_j^{\beta}}{\partial z_s}=g_{\alpha\beta}^j(\xi_j(z,t),t)$
\item $\Lambda_j(t)$ are glued together to define a $C^{\infty}$ bivector field $\Lambda(t)$ on $M$ for each $t\in \Delta$
\item for each $j$, $[\Lambda_j(t),\Lambda_j(t)]=0$. Hence we have $[\Lambda(t),\Lambda(t)]=0$
\end{enumerate}
\end{theorem}
We will use the following lemma to prove the theorem.
\begin{lemma}\label{formula}
If $\sigma=\sum_{\alpha,\beta=1}^n \sigma_{\alpha\beta}\frac{\partial }{\partial z_\alpha}\wedge \frac{\partial}{\partial z_\beta}$ with $\sigma_{\alpha\beta}=-\sigma_{\beta\alpha}$, then
$[\sigma,\sigma]=0$ is equivalent to
\begin{align*}
\sum_{l=1}^n (\sigma_{lk}\frac{\partial \sigma_{ij}}{\partial z_l}+\sigma_{li}\frac{\partial \sigma_{jk}}{\partial z_l}+\sigma_{lj}\frac{\partial \sigma_{ki}}{\partial z_l})=0
\end{align*}
for each $1\leq i,j,k \leq n$.
\end{lemma}
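As a quick illustration of Lemma $\ref{formula}$ $($an added example, not used in the sequel$)$, if two of the three indices coincide, say $k=j$, the cyclic sum vanishes identically:
\begin{align*}
\sum_{l=1}^n \left(\sigma_{lj}\frac{\partial \sigma_{ij}}{\partial z_l}+\sigma_{li}\frac{\partial \sigma_{jj}}{\partial z_l}+\sigma_{lj}\frac{\partial \sigma_{ji}}{\partial z_l}\right)=\sum_{l=1}^n \sigma_{lj}\left(\frac{\partial \sigma_{ij}}{\partial z_l}+\frac{\partial \sigma_{ji}}{\partial z_l}\right)=0,
\end{align*}
since $\sigma_{jj}=0$ and $\sigma_{ji}=-\sigma_{ij}$. In particular, when $n=2$ two of the indices $i,j,k$ always coincide, so $[\sigma,\sigma]=0$ holds automatically: every holomorphic bivector field on a complex surface is a holomorphic Poisson structure.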
\begin{proof}[Proof of Theorem $\ref{1thm}$]
We have already shown $(1)$ and $(2)$. It remains to show $(3)$. We note that
\begin{align}\label{tt55}
[\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge\frac{\partial}{\partial \xi_j^{\beta}},\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(\xi_j,t) \frac{\partial}{\partial \xi_j^{\alpha}}\wedge\frac{\partial}{\partial \xi_j^{\beta}}]=0.
\end{align}
Since $g_{\alpha\beta}^j(\xi_j(z,t),t)=\sum_{a,b=1}^n h_{ab}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_a}\frac{\partial \xi_j^{\beta}}{\partial z_b}$ is holomorphic with respect to $\xi_j=(\xi_j^{\gamma}),\gamma=1,...,n$, we have
\begin{align}\label{nb1}
\frac{\partial}{\partial \bar{\xi}_j^{\gamma}}\left( \sum_{a,b=1}^n h_{ab}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_a}\frac{\partial \xi_j^{\beta}}{\partial z_b}\right)=\sum_{a,b=1}^n \frac{\partial}{\partial \bar{\xi}_j^{\gamma}}\left(h_{ab}^j(z,t)\frac{\partial \xi_j^{\alpha}}{\partial z_a}\frac{\partial \xi_j^{\beta}}{\partial z_b} \right)=0,\,\,\,\,\,\gamma=1,...,n.
\end{align}
In the following, for simplicity, we denote $\xi_j^{\alpha}(z,t)$ by $\xi_{\alpha}$ and $h_{ab}^j(z,t)$ by $h_{ab}$. By (\ref{tt55}), Lemma \ref{formula} and $(\ref{nb1})$, and by the property $h_{ab}=-h_{ba}$ and $\frac{\partial}{\partial z_a}=\sum_{l=1}^n \frac{\partial \xi_l}{\partial z_a}\frac{\partial}{\partial \xi_l}+\sum_{l=1}^n \frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial}{\partial \bar{\xi}_l}$, we have
\begin{align*}
0=&\sum_{a,b,c,d,l=1}^n( h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial}{\partial \xi_l}\left(h_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}\right)+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial}{\partial \xi_l}\left(h_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}\right)+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial}{\partial \xi_l}\left(h_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}\right))\\
+&\sum_{a,b,c,d,l=1}^n (h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_l}\left(h_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}\right)+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_l}\left(h_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}\right)+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial}{\partial \bar{\xi}_l}\left(h_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}\right))\\
=&\sum_{a,b,c,d,l=1}^n (h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial h_{cd}}{\partial \xi_l}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}h_{cd}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_i}{\partial z_c}\right)\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}h_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_j}{\partial z_d}\right))\\
+&\sum_{a,b,c,d,l=1}^n(h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial h_{cd}}{\partial \xi_l}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}h_{cd}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_j}{\partial z_c}\right)\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}h_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_k}{\partial z_d}\right))\\
+&\sum_{a,b,c,d,l=1}^n(h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial h_{cd}}{\partial \xi_l}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}h_{cd}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_k}{\partial z_c}\right)\frac{\partial \xi_i}{\partial z_d}+h_{ab}\frac{\partial \xi_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}h_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial}{\partial \xi_l}\left(\frac{\partial \xi_i}{\partial z_d}\right))\\
+&\sum_{a,b,c,d,l=1}^n(h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial h_{cd}}{\partial \bar{\xi}_l}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}h_{cd}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_i}{\partial z_c}\right)\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}h_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_j}{\partial z_d}\right))\\
+&\sum_{a,b,c,d,l=1}^n(h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial h_{cd}}{\partial \bar{\xi}_l}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}h_{cd}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_j}{\partial z_c}\right)\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}h_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_k}{\partial z_d}\right))\\
+&\sum_{a,b,c,d,l=1}^n(h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial h_{cd}}{\partial \bar{\xi}_l}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}h_{cd}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_k}{\partial z_c}\right)\frac{\partial \xi_i}{\partial z_d}+h_{ab}\frac{\partial \bar{\xi}_l}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}h_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial}{\partial \bar{\xi}_l}\left(\frac{\partial \xi_i}{\partial z_d}\right))\\
=&\sum_{a,b,c,d=1}^n(h_{ab}\frac{\partial h_{cd}}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial \xi_k}{\partial z_b}h_{cd}\frac{\partial^2 \xi_i}{\partial z_a\partial z_c}\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial \xi_k}{\partial z_b}h_{cd}\frac{\partial \xi_i}{\partial z_c}\frac{\partial^2 \xi_j}{\partial z_a\partial z_d})\\
+&\sum_{a,b,c,d=1}^n (h_{ab}\frac{\partial h_{cd}}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial \xi_i}{\partial z_b}h_{cd}\frac{\partial^2 \xi_j}{\partial z_a\partial z_c}\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial \xi_i}{\partial z_b}h_{cd}\frac{\partial \xi_j}{\partial z_c}\frac{\partial^2 \xi_k}{\partial z_a\partial z_d})\\
+&\sum_{a,b,c,d=1}^n(h_{ab}\frac{\partial h_{cd}}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d}+h_{ab}\frac{\partial \xi_j}{\partial z_b}h_{cd}\frac{\partial^2 \xi_k}{\partial z_a\partial z_c}\frac{\partial \xi_i}{\partial z_d}+h_{ab}\frac{\partial \xi_j}{\partial z_b}h_{cd}\frac{\partial \xi_k}{\partial z_c}\frac{\partial^2 \xi_i}{\partial z_a\partial z_d})\\
=&\sum_{a,b,c,d=1}^n(h_{ab}\frac{\partial h_{cd}}{\partial z_a}\frac{\partial \xi_k}{\partial z_b}\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}+h_{ab}\frac{\partial h_{cd}}{\partial z_a}\frac{\partial \xi_i}{\partial z_b}\frac{\partial \xi_j}{\partial z_c}\frac{\partial \xi_k}{\partial z_d}+h_{ab}\frac{\partial h_{cd}}{\partial z_a}\frac{\partial \xi_j}{\partial z_b}\frac{\partial \xi_k}{\partial z_c}\frac{\partial \xi_i}{\partial z_d})\\
=&\sum_{a,b,c,d=1}^n\left(h_{ab}\frac{\partial h_{cd}}{\partial z_a}+h_{ac}\frac{\partial h_{db}}{\partial z_a}+h_{ad}\frac{\partial h_{bc}}{\partial z_a}\right)\frac{\partial \xi_i}{\partial z_c}\frac{\partial \xi_j}{\partial z_d}\frac{\partial \xi_k}{\partial z_b}
\end{align*}
From $(\ref{det})$, we have $\sum_{a=1}^n \left(h_{ab}\frac{\partial h_{cd}}{\partial z_a}+h_{ac}\frac{\partial h_{db}}{\partial z_a}+h_{ad}\frac{\partial h_{bc}}{\partial z_a}\right)=0$ for each $b,c,d$. So by Lemma \ref{formula}, $[\Lambda_j(t),\Lambda_j(t)]=0$.
\end{proof}
\begin{remark}\label{renj}
For the compact holomorphic Poisson manifold $(M_t,\Lambda_t)$ for each $t\in \Delta$ in the Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$, we showed that there exists a bivector field $\Lambda(t)$ on $M=M_0$ with $[\Lambda(t),\Lambda(t)]=0$ for $t\in \Delta$ by Theorem $\ref{1thm}$. Let $J_t:T_{\mathbb{R}}M\to T_{\mathbb{R}}M$ with $J_t^2=-id$ be the almost complex structure associated to the complex structure $M_t$ $($induced by $\varphi(t))$ where $T_{\mathbb{R}}M$ is the real tangent bundle of the underlying differentiable manifold $M$. Then $J_t$ induces a type decomposition of the complexified tangent bundle $T_{\mathbb{C}} M=T^{1,0}_{M_t}\oplus T^{0,1}_{M_t}$ $($see \cite{Kob69} Chapter IX section 2$)$ so that we have $\wedge^2 T_{\mathbb{C}} M=\wedge^2 T_{M_t}^{1,0} \oplus T_{M_t}^{1,0}\otimes T_{M_t}^{0,1} \oplus \wedge^2 T_{M_t}^{0,1}$. If $\Lambda$ is a $C^{\infty}$ section of $\wedge^2 T_{\mathbb{C}} M$ on $M$, then we denote by $\Lambda^{2,0}$ the component of $\wedge^2 T_{M_t}^{1,0}$, by $\Lambda^{1,1}$ the component of $T_{M_t}^{1,0}\otimes T_{M_t}^{0,1}$, and by $\Lambda^{0,2}$ the component of $\wedge^2 T_{M_t}^{0,1}$. So we have $\Lambda=\Lambda^{2,0}+\Lambda^{1,1}+\Lambda^{0,2}$. We call $\Lambda^{2,0}$ the type $(2,0)$-part of $\Lambda$. With this notation, the type $(2,0)$-part of $\Psi_{*} \Lambda(t)$ is $\Lambda_t$ for $t\in \Delta$, where $\Psi_*\Lambda(t)$ is the bivector field induced from $\Lambda(t)$ via the diffeomorphism $\Psi$ in $(\ref{pp00})$. So we can say that $\Lambda(t)^{2,0}=\Lambda_t$.
\end{remark}
\begin{remark}\label{tt45}
Let $\Lambda$ be a $C^{\infty}$-section of $\wedge^k T_\mathbb{C}M$. From $\wedge^k T_\mathbb{C} M=\bigoplus_{p+q=k} \wedge^p T_{M_t}^{1,0}\otimes \wedge^q T_{M_t}^{0,1}$, we can define the type $(p,q)$ part $\Lambda^{p,q}$ of $\Lambda$ in an obvious way as in Remark $\ref{renj}$.
\end{remark}
Next we discuss when a given $C^{\infty}$ bivector field $\Lambda\in A^{0,0}(M, \wedge^2 T_M)$ on $M$ with $[\Lambda,\Lambda]=0$ gives a holomorphic bivector field $\Lambda^{2,0}\in A^{0,0}(M_t, \wedge^2 T_{M_t})$ with respect to the complex structure $M_t$ induced by $\varphi(t)$. Before proceeding with our discussion, we recall the Schouten bracket $[-,-]$ on $\bigoplus_{i\geq 0}\bigoplus_{p+q-1=i,p\geq 0,q\geq 1} A^{0,p}(M,\wedge^q T_M)$ (see (\ref{tt76})), which we need for the computation of the integrability condition (\ref{yghb}). The Schouten bracket $[-,-]$ is defined in the following way:
\begin{align*}
[-,-]:A^{0,p}(M,\wedge^q T_M)\times A^{0,p'}(M,\wedge^{q'} T_M)\to A^{0,p+p'}(M,\wedge^{q+q'-1} T_M)
\end{align*}
In local coordinates it is given by
\begin{align}\label{tt00}
[fd\bar{z}_I\frac{\partial}{\partial z_J},gd\bar{z}_K\frac{\partial}{\partial z_L}]=(-1)^{|K|(|J|+1)} d\bar{z}_I\wedge d\bar{z}_K [f\frac{\partial}{\partial z_J},g\frac{\partial}{\partial z_L}]\end{align}
where $f,g$ are $C^{\infty}$ functions on $M$ and $d\bar{z}_I=d\bar{z}_{i_1}\wedge \cdots \wedge d\bar{z}_{i_{|I|}}$, $\frac{\partial}{\partial z_J}=\frac{\partial}{\partial z_{j_1}}\wedge \cdots \wedge \frac{\partial}{\partial z_{j_{|J|}}}$ (similarly for $d\bar{z}_K, \frac{\partial}{\partial z_L})$. Then
\begin{align}
\mathfrak{g}=(\bigoplus_{i\geq0} g_i, g_i=\bigoplus_{p+q-1=i,p\geq 0, q\geq 1} A^{0,p}(M,\wedge^q T_M),L=\bar{\partial}+[\Lambda,-],[-,-]),
\end{align}
is a differential graded Lie algebra. So we have the following properties: for $a\in A^{0,p}(M,\wedge^q T_M), b\in A^{0,p'}(M,\wedge^{q'} T_M)$, and $c\in A^{0,p''}(M,\wedge^{q''} T_M)$
\begin{enumerate}
\item $[a,b]=-(-1)^{(p+q+1)(p'+q'+1)}[b,a]$
\item $[a,[b,c]]=[[a,b],c]+(-1)^{(p+q+1)(p'+q'+1)}[b,[a,c]]$
\item $\bar{\partial}[a,b]=[\bar{\partial} a,b]+(-1)^{p+q+1}[a,\bar{\partial} b]$
\end{enumerate}
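We record two elementary consequences of these formulas $($added here as a sanity check$)$. First, taking $I=K=\emptyset$ in $(\ref{tt00})$, the bracket of a vector field with a bivector field is the Lie derivative; for instance
\begin{align*}
[f\frac{\partial}{\partial z_1},g\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2}]=\left(f\frac{\partial g}{\partial z_1}-g\frac{\partial f}{\partial z_1}\right)\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2},
\end{align*}
since the term $g\frac{\partial}{\partial z_1}\wedge[f\frac{\partial}{\partial z_1},\frac{\partial}{\partial z_2}]$ is proportional to $\frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_1}=0$. Second, applying $(1)$ with $a=b\in A^{0,p}(M,\wedge^q T_M)$ gives
\begin{align*}
[a,a]=-(-1)^{(p+q+1)^2}[a,a]=-(-1)^{p+q+1}[a,a],
\end{align*}
so $[a,a]=0$ automatically whenever $p+q$ is odd, e.g. $[X,X]=0$ for $X\in A^{0,0}(M,T_M)$; on the other hand, for $\varphi(t)\in A^{0,1}(M,T_M)$ and $\Lambda\in A^{0,0}(M,\wedge^2 T_M)$ the brackets $[\varphi(t),\varphi(t)]$ and $[\Lambda,\Lambda]$ need not vanish, which is why $(\ref{tt07})$ and the condition $[\Lambda(t),\Lambda(t)]=0$ are genuine constraints.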
\begin{theorem}\label{m}
If we take a sufficiently small polydisk $\Delta$ as in subsection $\ref{prill}$, then for $t\in \Delta$, the type $(2,0)$-part $\Lambda^{2,0}$ of a $C^{\infty}$ bivector field $\Lambda=\sum_{r,s=1}^n h_{rs}(z)\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}$ on $M$ is holomorphic with respect to the complex structure $M_t$ induced by $\varphi(t)$ if and only if it satisfies the equation
\begin{align*}
\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0
\end{align*}
Moreover, if $[\Lambda,\Lambda]=0$, then $[\Lambda^{2,0},\Lambda^{2,0}]=0$.
\end{theorem}
\begin{proof}
We note that the type $(2,0)$-part of $\Lambda=\sum_{r,s=1}^n h_{rs}(z)\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}$ with respect to the complex structure $M_t$ is $\sum_{r,s,\alpha,\beta=1}^n h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}\frac{\partial}{\partial \xi_j^\alpha}\wedge \frac{\partial}{\partial \xi_j^\beta}$. Hence by Theorem \ref{text}, it suffices to show that
\begin{align}\label{mm9}
\text{For each $\alpha,\beta$, $(\bar{\partial}-\varphi(t))(\sum_{r,s=1}^n h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}})=0$ if and only if $\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0$}
\end{align}
First we note that from $(\ref{b})$ and $(\ref{tt00})$, we have
\begin{align}\label{tt01}
&\bar{\partial}\Lambda-[\Lambda,\varphi(t)]\\
&=\sum_{r,s, v=1}^n \frac{\partial h_{rs}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}-\sum_{r,s, v,\lambda=1}^n [h_{rs}\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}},\varphi_v^{\lambda} d\bar{z}_v \frac{\partial}{\partial z_{\lambda}}]\notag\\
&=\sum_{r,s, v=1}^n \frac{\partial h_{rs}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}+\sum_{r,s, v,\lambda=1}^n [h_{rs}\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}},\varphi_v^{\lambda} \frac{\partial}{\partial z_{\lambda}}]d\bar{z}_v \notag\\
&=\sum_{r,s, v=1}^n \frac{\partial h_{rs}}{\partial \bar{z}_v}d\bar{z}_v\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}+\sum_{r,s, v,\lambda=1}^n(h_{rs}\frac{\partial \varphi_v^{\lambda}}{\partial z_{r}}\frac{\partial}{\partial z_{\lambda}}\wedge \frac{\partial}{\partial z_{s}}-\varphi_{v}^{\lambda} \frac{\partial h_{rs}}{\partial z_{\lambda}}\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}+h_{rs} \frac{\partial \varphi_v^{\lambda}}{\partial z_{s}}\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{\lambda}})d\bar{z}_v.\notag
\end{align}
By considering the coefficients of $d\bar{z}_v\frac{\partial}{\partial z_{r}}\wedge \frac{\partial}{\partial z_{s}}$ in $(\ref{tt01})$, $\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0$ is equivalent to
\begin{align}\label{mm3}
\frac{\partial h_{rs}}{\partial \bar{z}_v}+\sum_{c=1}^n (h_{cs}\frac{\partial \varphi^{r}_v}{\partial z_c}-\varphi^c_v\frac{\partial h_{rs}}{\partial z_c}+h_{r c}\frac{\partial \varphi^{s}_v}{\partial z_c})=0\,\,\,\,\,\text{for each $r,s,v$.}
\end{align}
On the other hand, from $(\ref{b})$, $(\bar{\partial}-\varphi(t))(\sum_{r,s=1}^n h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}})=0$ for each $\alpha,\beta$ is equivalent to
\begin{align}\label{mm5}
\sum_{r,s=1}^n (\frac{\partial h_{rs}}{\partial \bar{z}_v}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+&h_{rs}\frac{\partial}{\partial z_{r}}\left(\frac{\partial \xi_j^\alpha}{\partial \bar{z}_v}\right)\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial}{\partial z_{s}}\left(\frac{\partial \xi_j^\beta}{\partial \bar{z}_v}\right))\\
&-\sum_{r,s,c=1}^n \varphi^{c}_v(\frac{\partial h_{rs}}{\partial z_{c}}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial^2 \xi_j^\alpha}{\partial z_{r} \partial z_{c}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial^2 \xi_j^\beta}{\partial z_{s}\partial z_{c}})=0\,\,\,\,\,\,\,\text{for each $\alpha,\beta,v$}\notag
\end{align}
From $(\ref{matrix1})$, we have $\frac{\partial \xi_j^\alpha}{\partial \bar{z}_v}=\sum_{c=1}^n \frac{\partial \xi_j^\alpha}{\partial z_c}\varphi^c_v$ and $\frac{\partial \xi_j^\beta}{\partial \bar{z}_v}=\sum_{c=1}^n \frac{\partial \xi_j^\beta}{\partial z_c}\varphi^c_v$.
So $(\ref{mm5})$ is equivalent to
\begin{align}\label{mm8}
&\sum_{r,s=1}^n \frac{\partial h_{rs}}{\partial \bar{z}_v}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+\sum_{r,s,c=1}^n(h_{rs}\left(\frac{\partial^2 \xi_j^\alpha}{\partial z_{r} \partial z_c}\varphi^c_v+\frac{\partial \xi_j^\alpha}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{r}}\right)\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\left(\frac{\partial^2 \xi_j^\beta}{\partial z_{s} \partial z_c}\varphi^c_v+\frac{\partial \xi_j^\beta}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{s}}\right))\\
&-\sum_{r,s,c=1}^n \varphi^{c}_v(\frac{\partial h_{rs}}{\partial z_{c}}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial^2 \xi_j^\alpha}{\partial z_{r} \partial z_{c}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial^2 \xi_j^\beta}{\partial z_{s}\partial z_{c}})\notag\\
&=\sum_{r,s=1}^n \frac{\partial h_{rs}}{\partial \bar{z}_v}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+\sum_{r,s,c=1}^n(h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}+h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_c}\frac{\partial \varphi^c_v}{\partial z_{s}})-\sum_{r,s,c=1}^n \varphi^{c}_v\frac{\partial h_{rs}}{\partial z_{c}}\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}
=0 \notag
\end{align}
So $(\ref{mm8})$ is equivalent to
\begin{align}\label{tt62}
\sum_{r,s=1}^n [\frac{\partial h_{rs}}{\partial \bar{z}_v}+\sum_{c=1}^n (h_{cs}\frac{\partial \varphi^{r}_v}{\partial z_c}-\varphi^c_v\frac{\partial h_{rs}}{\partial z_c}+h_{r c}\frac{\partial \varphi^{s}_v}{\partial z_c})]\frac{\partial \xi_j^\alpha}{\partial z_{r}}\frac{\partial \xi_j^\beta}{\partial z_{s}}=0 \,\,\,\,\,\text{for each $\alpha,\beta,v$.}
\end{align}
From $(\ref{det})$, the equation $(\ref{tt62})$ is equivalent to
\begin{align}\label{mm4}
\frac{\partial h_{rs}}{\partial \bar{z}_v}+\sum_{c=1}^n (h_{cs}\frac{\partial \varphi^{r}_v}{\partial z_c}-\varphi^c_v\frac{\partial h_{rs}}{\partial z_c}+h_{r c}\frac{\partial \varphi^{s}_v}{\partial z_c})=0\,\,\,\,\,\text{for each $r,s,v$}.
\end{align}
Note that $(\ref{mm3})$ is the same as $(\ref{mm4})$, which proves $(\ref{mm9})$.
For the second statement of Theorem $\ref{m}$, we note that
\begin{align*}
\Lambda&=\sum_{r,s=1}^n h_{rs}\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}=\sum_{r,s,\alpha,\beta=1}^n( h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_r}\frac{\partial \xi_j^\beta}{\partial z_s}\frac{\partial}{\partial \xi_j^\alpha}\wedge \frac{\partial}{\partial \xi_j^\beta}+ 2h_{rs}\frac{\partial \xi_j^\alpha}{\partial z_r}\frac{\partial \bar{\xi}_j^\beta}{\partial z_s}\frac{\partial}{\partial \xi_j^\alpha}\wedge \frac{\partial}{\partial \bar{\xi}_j^\beta}+ h_{rs}\frac{\partial \bar{\xi}_j^\alpha}{\partial z_r}\frac{\partial \bar{\xi}_j^\beta}{\partial z_s}\frac{\partial}{\partial \bar{\xi}_j^\alpha}\wedge \frac{\partial}{\partial \bar{\xi}_j^\beta})\\
&=\Lambda^{2,0}+\Lambda^{1,1}+\Lambda^{0,2}.
\end{align*}
Since $[\Lambda,\Lambda]=0$, the type $(3,0)$ part $[\Lambda,\Lambda]^{3,0}=[\Lambda^{2,0},\Lambda^{2,0}]+[\Lambda^{2,0},\Lambda^{1,1}]^{3,0}=0$ (see Remark \ref{tt45}). Since $\Lambda^{2,0}$ is holomorphic with respect to the complex structure induced by $\varphi(t)$, we have $[\Lambda^{2,0},\Lambda^{1,1}]^{3,0}=0$. Hence $[\Lambda^{2,0},\Lambda^{2,0}]=0$.
\end{proof}
\begin{remark}
A $C^{\infty}$ complex bivector field $\Lambda\in A^{0,0}(M,\wedge^2 T_M)$ on $M$ with $[\Lambda,\Lambda]=0$ gives a Poisson bracket $\{-,-\}$ on $C^{\infty}$ complex valued functions on $M$. We point out that when we restrict the Poisson bracket $\{-,-\}$ to holomorphic functions with respect to the complex structure $M_t$ induced by $\varphi(t)$, this is exactly the $($holomorphic$)$ Poisson bracket induced from $\Lambda^{2,0}$ when $\bar{\partial}\Lambda-[\Lambda,\varphi(t)]=0$.
\end{remark}
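To make this concrete $($an added illustration, with the convention $(\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s})(df,dg)=\frac{\partial f}{\partial z_r}\frac{\partial g}{\partial z_s}-\frac{\partial f}{\partial z_s}\frac{\partial g}{\partial z_r}$ fixed for this remark$)$: for $C^{\infty}$ functions $f,g$ the bracket is $\{f,g\}=\Lambda(df,dg)$, and in the decomposition $\Lambda=\Lambda^{2,0}+\Lambda^{1,1}+\Lambda^{0,2}$ the components $\Lambda^{1,1}$ and $\Lambda^{0,2}$ pair with at least one $(0,1)$-component of $df$ or $dg$ with respect to $M_t$. If $f$ and $g$ are holomorphic on $M_t$, these components vanish, so that
\begin{align*}
\{f,g\}=\Lambda(df,dg)=\Lambda^{2,0}(df,dg),
\end{align*}
which is the Poisson bracket induced from $\Lambda^{2,0}$.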
\begin{remark}
By Theorem $\ref{m}$, $\varphi(t)$ in $(\ref{b})$ and $\Lambda(t)$ in $(\ref{f})$ satisfy
\begin{align}\label{ll00}
\bar{\partial}\Lambda(t)-[\Lambda(t),\varphi(t)]=0 \,\,\,\,\,\,\,\, \text{for each $t$}
\end{align}
and $\Lambda(t)^{2,0}=\Lambda_t$ for each $t$ $($see Remark $\ref{renj}$$)$.
\end{remark}
\subsection{Expression of infinitesimal deformations in terms of $\varphi(t)$ and $\Lambda(t)$}\
In this subsection, we study how an infinitesimal deformation of $(M,\Lambda_0)=\omega^{-1}(0)$ in the Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$ (in subsection \ref{prill}) is represented in terms of $\varphi(t)$ (\ref{b}) and $\Lambda(t)$ (\ref{f}). Recall that an infinitesimal deformation at $(M,\Lambda_0)$ is captured by an element $\left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}\in \mathbb{H}^1(M,\Theta_M^\bullet)$, the first hypercohomology group of the complex of sheaves (\ref{complex}), which is computed by using the following \u{C}ech resolution associated with the open covering $\mathcal{U}^0=\{U_j^0:=U_j\times 0\}$ (see Proposition \ref{gg} and Definition \ref{mapping}).
\begin{center}
$\begin{CD}
@A[\Lambda_0,-]AA\\
C^0(\mathcal{U}^0,\wedge^3 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U}^0,\wedge^2 \Theta_M)@>\delta>> C^1(\mathcal{U}^0,\wedge^2 \Theta_M)@>-\delta>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
C^0(\mathcal{U}^0,\Theta_M)@>-\delta>>C^1(\mathcal{U}^0,\Theta_M)@>\delta>>C^2(\mathcal{U}^0,\Theta_M)@>-\delta>>\cdots\\
\end{CD}$
\end{center}
We can also compute the hypercohomology group of the complex of sheaves $(\ref{complex})$ by using the following Dolbeault resolution.
\begin{center}
$\begin{CD}
@A[\Lambda_0,-]AA\\
A^{0,0}(M,\wedge^3 T_M)@>\bar{\partial}>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
A^{0,0}(M,\wedge^2 T_M)@>\bar{\partial}>> A^{0,1}(M,\wedge^2 T_M)@>\bar{\partial}>>\cdots\\
@A[\Lambda_0,-]AA @A[\Lambda_0,-]AA @A[\Lambda_0,-]AA\\
A^{0,0}(M,T_M)@>\bar{\partial}>>A^{0,1}(M,T_M)@>\bar{\partial}>>A^{0,2}(M, T_M)@>\bar{\partial}>>\cdots\\
\end{CD}$
\end{center}
We describe how a $1$-cocycle in the \u{C}ech resolution looks in the Dolbeault resolution.
In the picture below, we connect the two resolutions, depicting only the part of the resolutions that we need in the following diagram. Recall that $\mathscr{A}^{0,p}(\wedge^q T_M)$ is the sheaf of germs of $C^{\infty}$-sections of $\wedge^p \bar{T}_M^*\otimes \wedge^q T_M$ (see (\ref{tt76})).
{\tiny
\[
\xymatrixrowsep{0.2in}
\xymatrixcolsep{0.1in}
\xymatrix{
& & H^0(M,\wedge^3 \Theta_M) \ar[ld] \ar[rr] & & C^0(\wedge^3 \Theta_M) \ar[ld]\\
&A^{0,0}(M,\wedge^3 T_M) \ar[rr] & & C^0(\mathscr{A}^{0,0}(\wedge^3 T_M))\\
&& H^0(M,\wedge^2 \Theta_M) \ar@{.>}[uu] \ar@{.>}[ld] \ar@{.>}[rr] & & C^0(\wedge^2 \Theta_M) \ar[uu] \ar[ld] \ar[rr]^{\delta} && C^1(\wedge^2 \Theta_M) \ar[ld]\\
& A^{0,0}(M,\wedge^2 T_M) \ar[uu] \ar[ld] \ar[rr] && C^0(\mathscr{A}^{0,0}(\wedge^2 T_M)) \ar[uu] \ar[ld] \ar[rr]^{\delta} && C^1(\mathscr{A}^{0,0}(\wedge^2 T_M)) \\
A^{0,1}(M,\wedge^2 T_M) \ar[rr] && C^0(\mathscr{A}^{0,1}(\wedge^2 T_M)) && C^0(\Theta_M) \ar@{.>}[ld]\ar@{.>}[uu] \ar@{.>}[rr]^{-\delta} & & C^1(\Theta_M) \ar[uu] \ar[ld]\\
& A^{0,0}(M,T_M) \ar@{.>}[uu] \ar@{.>}[rr] \ar@{.>}[ld] && C^0(\mathscr{A}^{0,0}(T_M)) \ar[uu]^{[\Lambda_0,-]} \ar[rr]^{-\delta} \ar[ld]^{\bar{\partial}} && C^1(\mathscr{A}^{0,0}(T_M)) \ar[ld]^{\bar{\partial}} \ar[uu]\\
A^{0,1}(M,T_M) \ar[uu] \ar[rr] & & C^0(\mathscr{A}^{0,1}(T_M)) \ar[uu] \ar[rr]^{-\delta} & & C^1(\mathscr{A}^{0,1}(T_M))
}\]}
Note that each horizontal complex is exact except at the edges of the ``real wall''.
Now we explicitly construct the isomorphism from the first hypercohomology group of the \u{C}ech resolution to the first hypercohomology group of the Dolbeault resolution, namely
\begin{align}\label{isomorphism}
&\frac{\ker( \mathcal{C}^0(\mathcal{U}^0, \wedge^2 \Theta_M)\oplus \mathcal{C}^1(\mathcal{U}^0,\Theta_M)\to \mathcal{C}^0(\mathcal{U}^0, \wedge^3 \Theta_M)\oplus \mathcal{C}^1(\mathcal{U}^0,\wedge^2\Theta_M)\oplus \mathcal{C}^2(\mathcal{U}^0, \Theta_M))}{\operatorname{im}(\mathcal{C}^0(\mathcal{U}^0, \Theta_M)\to\mathcal{C}^0(\mathcal{U}^0, \wedge^2 \Theta_M)\oplus \mathcal{C}^1(\mathcal{U}^0,\Theta_M))}\\
&\cong \frac{\ker(A^{0,0}(M,\wedge^2 T_M)\oplus A^{0,1}(M, T_M)\to A^{0,0}(M,\wedge^3 T_M)\oplus A^{0,1}(M,\wedge^2 T_M)\oplus A^{0,2}(M, T_M))}{\operatorname{im}(A^{0,0}(M,T_M)\to A^{0,0}(M,\wedge^2 T_M)\oplus A^{0,1}(M,T_M))}\notag\\
(b,a) &\mapsto ([\Lambda_0,c]-b,\bar{\partial} c)\notag
\end{align}
We define the map in the following way:
let $(b,a) \in \mathcal{C}^0(\mathcal{U}^0, \wedge^2 \Theta_M)\oplus \mathcal{C}^1(\mathcal{U}^0,\Theta_M)$ represent a cohomology class in the \u{C}ech resolution. Since $\delta a=0$, there exists a $c\in C^0(\mathcal{U}^0,\mathscr{A}^{0,0}(T_M))$ such that $-\delta c=a$. Since $a$ is holomorphic $(\bar{\partial}a=0)$, the commutativity of the diagram gives $\bar{\partial} c\in A^{0,1}(M, T_M)$. We claim that $[\Lambda_0,c]-b\in A^{0,0}(M,\wedge^2 T_M)$. Indeed, $\delta([\Lambda_0,c]-b)=-[\Lambda_0,-\delta c]-\delta b=-[\Lambda_0,a]-\delta b=0$. We show that $([\Lambda_0,c]-b, \bar{\partial} c)$ is a cocycle in the Dolbeault resolution. Indeed, $\bar{\partial}(\bar{\partial}c)=0$, $[\Lambda_0,[\Lambda_0,c]-b]=0$, and $\bar{\partial} ([\Lambda_0,c]-b)+[\Lambda_0, \bar{\partial} c]=-[\Lambda_0,\bar{\partial} c]+[\Lambda_0,\bar{\partial} c]=0$. We define the map by $(b,a)\mapsto ([\Lambda_0,c]-b,\bar{\partial} c)$. This map is well-defined. Indeed, let $(b',a')$ define the same class as $(b,a)$. Then there exists $d\in C^0(\mathcal{U}^0,\Theta_M)$ such that $a-a'=-\delta d$ and $b-b'=[\Lambda_0,d]$. Let $-\delta c'=a'$. Then $\bar{\partial} c-\bar{\partial}c'=\bar{\partial}(c-c'-d)$, and $[\Lambda_0,c]-b-([\Lambda_0,c']-b')=[\Lambda_0,c-c']-(b-b')=[\Lambda_0,c-c'-d]$.
For the inverse map, let $(\beta,\alpha) \in A^{0,0}(M,\wedge^2 T_M)\oplus A^{0,1}(M, T_M) $ be a cohomology class from Dolbeault resolution.
Then there exists a $c\in C^0(\mathcal{U}^0,\mathscr{A}^{0,0}(T_M))$ such that $\bar{\partial} c =\alpha$. We define the inverse map $(\beta,\alpha) \mapsto ([\Lambda_0,c]-\beta,-\delta c)$.
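As a consistency check (a sketch, using the notation above), the two maps are mutually inverse: applying the inverse map to $([\Lambda_0,c]-b,\bar{\partial} c)$ with the same choice of $c$ (which is allowed, since $\bar{\partial} c$ is the given Dolbeault component) returns

```latex
\begin{align*}
\big([\Lambda_0,c]-([\Lambda_0,c]-b),\,-\delta c\big)=(b,a),
\end{align*}
```

and similarly, applying the forward map to $([\Lambda_0,c]-\beta,-\delta c)$ with the same $c$ returns $\big([\Lambda_0,c]-([\Lambda_0,c]-\beta),\,\bar{\partial} c\big)=(\beta,\alpha)$; a different choice of $c$ changes the result only by a coboundary, as in the well-definedness argument above.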
\begin{theorem}\label{n}
$\left(- \left(\frac{\partial \Lambda(t)}{\partial t}\right)_{t=0},\left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}\right)\in A^{0,0}(M,\wedge^2 T_M)\oplus A^{0,1}(M,T_M)$ satisfies
$[\Lambda_0, -\left(\frac{\partial \Lambda(t)}{\partial t}\right)_{t=0}]=0$, $\bar{\partial} \left( -(\frac{\partial \Lambda(t)}{\partial t})_{t=0}\right) +[\Lambda_0,\left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}]=0$, $\bar{\partial} \left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}=0$, and under the isomorphism $(\ref{isomorphism})$, $\left(\frac{\partial(M_t,\Lambda_t)}{\partial t}\right)_{t=0}\in \mathbb{H}^1(M, \Theta_M^\bullet)$ corresponds to $\left( -\left(\frac{\partial \Lambda(t)}{\partial t}\right)_{t=0},\left(\frac{\partial \varphi(t)}{\partial t}\right)_{t=0}\right)$.
\end{theorem}
\begin{proof}
By Theorem \ref{1thm} (3), (\ref{ll00}) and (\ref{tt07}), we have $[\Lambda(t),\Lambda(t)]=0$, $\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$ and $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$ with $\varphi(0)=0$, $\Lambda(0)=\Lambda_0$. By taking the derivatives of these equations with respect to $t$ and evaluating at $t=0$, we get the first claim. Next we show the second claim. Put
\begin{align*}
\theta_{jk}=\sum_{\alpha=1}^n \left(\frac{\partial f_{jk}^{\alpha}(\xi_k,t)}{\partial t}\right)_{t=0}\frac{\partial}{\partial \xi_j^{\alpha}},\,\,\,\,\,\,\,\,
\sigma_j=\sum_{r,s=1}^n \left(\frac{\partial g_{rs}^j(\xi_j,t)}{\partial t}\right)_{t=0} \frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}
\end{align*}
The infinitesimal deformation $\left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}\in \mathbb{H}^1(M,\Theta_M^\bullet)$ is the cohomology class of $(\{\sigma_j\},\{\theta_{jk}\})\in C^0(\mathcal{U}^0,\wedge^2 \Theta_M)\oplus C^1(\mathcal{U}^0,\Theta_M)$ (see Proposition \ref{gg} and Definition \ref{mapping}). We fix a tangent vector $\frac{\partial}{\partial t}\in T_0(\Delta)$ and denote $\left(\frac{\partial f(t)}{\partial t}\right)_{t=0}$ by $\dot{f}$ for a $C^{\infty}$ function $f(t)$, $t\in \Delta$. With this notation, we put
\begin{align*}
\xi_j=\sum_{\alpha=1}^n \dot{\xi_j}^{\alpha}\frac{\partial}{\partial \xi_j^{\alpha}},\,\,\,\,\,\text{where}\,\,\,\,\dot{\xi_j}^{\alpha}=\left(\frac{\partial \xi_j^{\alpha}(z,t)}{\partial t}\right)_{t=0}
\end{align*}
for each $j$. Then we have
\begin{align}\label{hh89}
\text{$\delta \{ \xi_j \}=-\{ \theta_{jk} \}$ and $\bar{\partial} \xi_j =\sum_{\lambda=1}^n \left( \frac{\partial \varphi^{\lambda}(z,t)}{\partial t}\right)_{t=0}\frac{\partial}{\partial z_{\lambda}}=\sum_{\lambda=1}^n \dot{\varphi}^{\lambda}\frac{\partial}{\partial z_{\lambda}}=\dot{\varphi}$}
\end{align}
(for details, see \cite{Kod05} Theorem 5.4 p.266). On the other hand,
\begin{lemma}\label{beta}
We have $\dot{\Lambda}-\sigma_j+[\Lambda_0, \xi_j]=0$ on each $U_j$. More precisely,
\begin{align*}
\sum_{r,s=1}^n \left(\frac{\partial h_{rs} (z,t)}{\partial t}\right)_{t=0} \frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}-\sum_{\alpha,\beta=1}^n \left(\frac{\partial g_{\alpha\beta}^j(\xi_j,t)}{\partial t}\right)_{t=0} \frac{\partial}{\partial \xi_j^{\alpha}} \wedge \frac{\partial}{\partial \xi_j^{\beta}}+[\sum_{r,s=1}^n g_{rs}^j(\xi_j,0)\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s},\sum_{c=1}^n \dot{\xi}_j^c\frac{\partial}{\partial \xi_j^c}]=0
\end{align*}
equivalently $($with the notation above$)$,
\begin{align}\label{tt10}
\sum_{r,s=1}^n \dot{h_{rs}} \frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}-\sum_{\alpha,\beta=1}^n \dot{g}_{\alpha\beta}^j \frac{\partial}{\partial \xi_j^{\alpha}} \wedge \frac{\partial}{\partial \xi_j^{\beta}}+[\sum_{r,s=1}^n g_{rs}^j(\xi_j,0)\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s},\sum_{c=1}^n \dot{\xi}_j^c\frac{\partial}{\partial \xi_j^c}]=0
\end{align}
\end{lemma}
\begin{proof}
From $(\ref{holomorphic})$, the first term of $(\ref{tt10})$ is
\begin{align*}
\sum_{r,s=1}^n \dot{h_{rs}}\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}=\sum_{r,s,a,b=1}^n \dot{h_{rs}} \frac{\partial \xi_j^a (z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}\frac{\partial}{\partial \xi_j^a}\wedge \frac{\partial}{\partial \xi_j^b}
\end{align*}
Let's compute the third term of $(\ref{tt10})$:
\begin{align*}
&\sum_{r,s,c=1}^n [g_{rs}^j(\xi_j,0) \frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}, \dot{\xi}_j^c\frac{\partial}{\partial \xi_j^c}]=\sum_{r,s,c=1}^n ([g_{rs}^j(\xi_j,0)\frac{\partial}{\partial \xi_j^r},\dot{\xi}_j^c \frac{\partial}{\partial \xi_j^c}]\wedge \frac{\partial}{\partial \xi_j^s}-g_{rs}^j(\xi_j,0)[\frac{\partial}{\partial \xi_j^s},\dot{\xi}_j^c \frac{\partial}{\partial \xi_j^c}]\wedge\frac{\partial}{\partial \xi_j^r})\\
&=\sum_{r,s,c=1}^n (g_{rs}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^c}{\partial \xi_j^r}\frac{\partial}{\partial \xi_j^c}\wedge \frac{\partial}{\partial \xi_j^s}-\dot{\xi}_j^c\frac{\partial g_{rs}^j(\xi_j,0)}{\partial \xi_j^c}\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}+g_{rs}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^c}{\partial \xi_j^s}\frac{\partial}{\partial \xi_j^r}\wedge\frac{\partial}{\partial \xi_j^c})
\end{align*}
By considering the coefficients of $\frac{\partial}{\partial \xi_j^a}\wedge \frac{\partial}{\partial \xi_j^b}$, $(\ref{tt10})$ is equivalent to
\begin{align}\label{tt11}
\sum_{r,s=1}^n \dot{h_{rs}} \frac{\partial \xi_j^a (z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}-\dot{g}_{ab}^j-\sum_{c=1}^n \dot{\xi}_j^c\frac{\partial g_{ab}^j(\xi_j,0)}{\partial \xi_j^c}+\sum_{c=1}^n (g_{cb}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+g_{ac}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c})=0
\end{align}
On the other hand, from $(\ref{tt304})$, we have
\begin{align}\label{tt14}
g_{ab}^j(\xi_j^1(z,t),...,\xi_j^n(z,t),t_1,...,t_m)=\sum_{r,s=1}^n h_{rs}(z,t)\frac{\partial \xi_j^a(z,t)}{\partial z_r}\frac{\partial \xi_j^b(z,t)}{\partial z_s}
\end{align}
By taking the derivative of $(\ref{tt14})$ with respect to $t$ and putting $t=0$, we have
\begin{align*}
\sum_{c=1}^n \frac{\partial g_{ab}^j(\xi_j,0)}{\partial \xi_j^c}\dot{\xi}_j^c+\dot{g}_{ab}^j=\sum_{r,s=1}^n\dot{h}_{rs}\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+\sum_{r,s=1}^n h_{rs}(z,0)( \frac{\partial \dot{\xi}_j^a}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \dot{\xi}_j^b}{\partial z_s})
\end{align*}
Hence $(\ref{tt11})$ is equivalent to
\begin{align}\label{vv8}
\sum_{c=1}^n g_{cb}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+g_{ac}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c}=\sum_{r,s=1}^n (h_{rs}(z,0)\frac{\partial \dot{\xi}_j^a}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+ h_{rs}(z,0)\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \dot{\xi}_j^b}{\partial z_s})
\end{align}
Indeed, the left-hand side and the right-hand side of $(\ref{vv8})$ coincide: from $(\ref{tt14})$ and $(\ref{holomorphic})$,
\begin{align*}
\sum_{c=1}^n g_{cb}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+g_{ac}^j(\xi_j,0)\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c}&=\sum_{r,s,c=1}^n (h_{rs}(z,0)\frac{\partial \xi_j^c(z,0)}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}\frac{\partial \dot{\xi}_j^a}{\partial \xi_j^c}+ h_{rs}(z,0)\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \xi_j^c(z,0)}{\partial z_s}\frac{\partial \dot{\xi}_j^b}{\partial \xi_j^c})\\
&=\sum_{r,s=1}^n (h_{rs}(z,0)\frac{\partial \dot{\xi}_j^a}{\partial z_r}\frac{\partial \xi_j^b(z,0)}{\partial z_s}+ h_{rs}(z,0)\frac{\partial \xi_j^a(z,0)}{\partial z_r}\frac{\partial \dot{\xi}_j^b}{\partial z_s})
\end{align*}
This completes the proof of Lemma \ref{beta}.
\end{proof}
Going back to the proof of Theorem \ref{n}, recall that the isomorphism $(\ref{isomorphism})$ is given by $(b,a)\mapsto ([\Lambda_0,c]-b,\bar{\partial} c)$, where $-\delta c=a$. We take $(b,a)=(\{\sigma_j\},\{\theta_{jk}\})$ and $c=\{\xi_j\}$. Note that $-\delta \{\xi_j\}=\{\theta_{jk}\}$ by $(\ref{hh89})$. Then under the isomorphism $(\ref{isomorphism})$, $(\{\sigma_j\},\{\theta_{jk}\})$ is mapped to $([\Lambda_0,\{\xi_j\}]-\{\sigma_j\}, \bar{\partial}\{\xi_j\})$, which is $(-\dot{\Lambda}, \dot{\varphi})$ by Lemma \ref{beta} and (\ref{hh89}). This completes the proof of Theorem \ref{n}.
\end{proof}
\subsection{Integrability condition}\label{ss00}\
We have shown that, given a Poisson analytic family $(\mathcal{M},\Lambda,B,\omega)$, the deformations $(M_t,\Lambda_t)$ of $M=M_0$ near $(M_0,\Lambda_0)$ are represented by the $C^{\infty}$ vector $(0,1)$-form $\varphi(t)$ (\ref{b}) and the $C^{\infty}$ bivector field $\Lambda(t)$ of type $(2,0)$ (\ref{f}) on $M$ with $\varphi(0)=0$ and $\Lambda(0)=\Lambda_0$, satisfying, for each $t\in \Delta$, the conditions (1) $[\Lambda(t),\Lambda(t)]=0$, (2) $\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$, and (3) $\bar{\partial} \varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$, by Theorem \ref{1thm} (3), (\ref{ll00}) and (\ref{tt07}).
Conversely, we will show that on a compact holomorphic Poisson manifold $(M,\Lambda_0)$, a $C^{\infty}$ vector $(0,1)$-form $\varphi\in A^{0,1}(M,T_M)$ and a $C^{\infty}$ bivector field $\Lambda\in A^{0,0}(M,\wedge^2 T_M)$ of type $(2,0)$ on $M$ such that $\varphi$ and $\Lambda_0+\Lambda$ satisfy the integrability conditions (1), (2), (3) define another holomorphic Poisson structure on the underlying differentiable manifold $M$. Indeed, let $\varphi=\sum_{\lambda,v=1}^n \varphi^{\lambda}_{v}(z)d\bar{z}_v\frac{\partial}{\partial z_{\lambda}}$ be a $C^{\infty}$ vector $(0,1)$-form and $\Lambda=\sum_{r,s=1}^n h_{rs}(z)\frac{\partial}{\partial z_r}\wedge \frac{\partial}{\partial z_s}$ be a $C^{\infty}$ bivector field of type $(2,0)$ on a compact holomorphic Poisson manifold $(M,\Lambda_0)$. Suppose that $\det(\delta_v^{\lambda}-\sum_{\mu=1}^n \varphi_{v}^{\mu}(z)\overline{\varphi_{\mu}^{\lambda}}(z))_{\lambda,v=1,...,n}\ne 0$, and that $\varphi$ and $\Lambda$ satisfy the integrability condition:
\begin{align}
&[\Lambda_0+\Lambda,\Lambda_0+\Lambda]=0 \label{bb1}\\
&\bar{\partial} (\Lambda_0+\Lambda)-[\Lambda_0+\Lambda,\varphi]=0\label{bb2}\\
&\bar{\partial}\varphi-\frac{1}{2}[\varphi,\varphi]=0\label{bb3}
\end{align}
Then by the Newlander-Nirenberg theorem (\cite{New57},\cite{Kod05}), the condition (\ref{bb3}) gives a finite open covering $\{U_j\}$ of $M$ and $C^{\infty}$-functions $\xi_j^{\alpha}=\xi_j^{\alpha}(z)$, $\alpha=1,...,n$, on each $U_j$ such that $\xi_j:z\mapsto \xi_j(z)=(\xi_j^1(z),...,\xi_j^n(z))$ gives complex coordinates on $U_j$, and $\{\xi_1,...,\xi_j,...\}$ defines another complex structure on $M$, which we denote by $M_{\varphi}$. By Theorem \ref{m}, the conditions $(\ref{bb1})$ and $(\ref{bb2})$ give a holomorphic Poisson structure $(\Lambda_0+\Lambda)^{2,0}$ on $M_\varphi$. Recall that $(\Lambda_0+\Lambda)^{2,0}$ denotes the type $(2,0)$-part of $\Lambda_0+\Lambda$ with respect to the complex structure induced by $\varphi$ (see Remark \ref{renj}).
\begin{remark}
If we replace $\varphi$ by $-\varphi$, then $(\ref{bb1}),(\ref{bb2})$, and $(\ref{bb3})$ are equivalent to
\begin{align}\label{yghb}
L(\Lambda+\varphi)+\frac{1}{2}[\Lambda+\varphi,\Lambda+\varphi]=0 \,\,\,\,\text{where}\,\,L=\bar{\partial}+[\Lambda_0,-]
\end{align}
which is a solution of the Maurer-Cartan equation of the differential graded Lie algebra $(\mathfrak{g}=\bigoplus_{i\geq 0} \mathfrak{g}^i,L,[-,-])$ with $\mathfrak{g}^i=\bigoplus_{p+q-1=i, q \geq1} A^{0,p}(M,\wedge^q T_M)$. This differential graded Lie algebra controls deformations of compact holomorphic Poisson manifolds in the language of functors of Artin rings $($see the second part of the author's Ph.D. thesis \cite{Kim14}$)$.
\end{remark}
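For concreteness, the equivalence asserted in the remark can be checked by decomposing $(\ref{yghb})$ by type. Since $\bar{\partial}\Lambda_0=0$ and $[\Lambda_0,\Lambda_0]=0$, the components of $L(\Lambda+\varphi)+\frac{1}{2}[\Lambda+\varphi,\Lambda+\varphi]$ in $A^{0,0}(M,\wedge^3 T_M)$, $A^{0,1}(M,\wedge^2 T_M)$ and $A^{0,2}(M,T_M)$ are, respectively (a sketch; we use $[\Lambda,\varphi]=[\varphi,\Lambda]$, which holds here since both elements have total degree $1$):

```latex
\begin{align*}
&[\Lambda_0,\Lambda]+\tfrac{1}{2}[\Lambda,\Lambda]=\tfrac{1}{2}[\Lambda_0+\Lambda,\Lambda_0+\Lambda],\\
&\bar{\partial}\Lambda+[\Lambda_0,\varphi]+[\Lambda,\varphi]=\bar{\partial}(\Lambda_0+\Lambda)+[\Lambda_0+\Lambda,\varphi],\\
&\bar{\partial}\varphi+\tfrac{1}{2}[\varphi,\varphi].
\end{align*}
```

Setting these three components to zero and then replacing $\varphi$ by $-\varphi$ recovers $(\ref{bb1})$, $(\ref{bb2})$ and $(\ref{bb3})$.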
\begin{example}[Hitchin-Goto Poisson analytic family]
Let $(M,\sigma)$ be a compact holomorphic Poisson manifold which satisfies the $\partial\bar{\partial}$-lemma. Then any class $\sigma([\omega])\in H^1(M,\Theta_M)$ for $[\omega]\in H^1(M,\Theta_M^*)$ is tangent to a deformation of complex structure induced by $\phi(t)=\sigma(\alpha)$, where $\alpha=t\omega+\partial (t^2\beta_2+t^3\beta_3+\cdots)$ for $(0,1)$-forms $\beta_i$ with respect to the original complex structure $($see \cite{Hit12} Theorem 1$)$. Suppose that $\phi(t)=\sigma(\alpha)$ converges for $t\in \Delta \subset \mathbb{C}$. We can consider $\psi=\psi(t):=-\phi(t)$ as a $C^{\infty}$ vector $(0,1)$-form on $M\times \Delta$, and $\sigma$ as a $C^{\infty}$ type $(2,0)$ bivector on $M\times \Delta$. We note that $(\psi(t),\sigma)$ satisfies $[\sigma,\sigma]=0$, $\bar{\partial}\sigma-[\sigma,\psi(t)]=0$ and $\bar{\partial} \psi(t)-\frac{1}{2}[\psi(t),\psi(t)]=0$. Then by the Newlander-Nirenberg theorem $($\cite{New57},\cite{Kod05} p.268$)$, we can give holomorphic coordinates on $M\times \Delta$ induced by $\psi$. Let us denote the complex manifold induced by $\psi$ by $\mathcal{M}$. On the other hand, the type $(2,0)$ part $\sigma^{2,0}$ of $\sigma$ with respect to the complex structure $\mathcal{M}$ defines a holomorphic Poisson structure on $\mathcal{M}$. Then the natural projection $\pi:(\mathcal{M},\sigma^{2,0})\to \Delta$ defines a Poisson analytic family of deformations of $(M,\sigma)$. Since $\sigma$ does not depend on $t$, the Poisson direction vanishes under the Poisson Kodaira-Spencer map $\varphi_0: T_0\Delta\to \mathbb{H}^1(M, \Theta_M^\bullet)$ by Theorem $\ref{n}$. More precisely, we have $\varphi_0(\frac{\partial}{\partial t})=(0,-\sigma([\omega]))$.
\end{example}
\section{Theorem of existence for holomorphic Poisson structures}\label{section5}
In this section, we prove a `Theorem of existence for holomorphic Poisson structures' as an analogue of the `Theorem of existence for complex analytic structures' of Kodaira-Spencer, under the assumption that the associated Laplacian operator $\Box$ (induced from the operator $\bar{\partial}+[\Lambda_0,-]$) is strongly elliptic and of diagonal type.
\subsection{Statement of Theorem of existence for holomorphic Poisson structures}\
\begin{theorem}[Theorem of existence for holomorphic Poisson structures]\label{theorem of existence}
Let $(M,\Lambda_0)$ be a compact holomorphic Poisson manifold such that the associated Laplacian operator $\Box$ $($induced from the operator $\bar{\partial}+[\Lambda_0,-]$$)$ is strongly elliptic and of diagonal type. Suppose that $\mathbb{H}^2(M,\Theta_M^\bullet)=0$. Then there exists a Poisson analytic family $(
\mathcal{M},\Lambda,B,\omega)$ with $0\in B\subset \mathbb{C}^m$ satisfying the following conditions:
\begin{enumerate}
\item $\omega^{-1}(0)=(M,\Lambda_0)$
\item The Poisson Kodaira-Spencer map $\varphi_0:\frac{\partial}{\partial t}\to \left(\frac{\partial (M_t,\Lambda_t)}{\partial t}\right)_{t=0}$ with $(M_t,\Lambda_t)=\omega^{-1}(t)$ is an isomorphism of $T_0(B)$ onto $\mathbb{H}^1(M,\Theta_M^\bullet):T_0 B\xrightarrow{\varphi_0} \mathbb{H}^1(M,\Theta_M^\bullet)$.
\end{enumerate}
\end{theorem}
Let $\{(\pi_{1},\eta_{1}),...,(\pi_{m},\eta_{m})\}$ be a basis of $\mathbb{H}^1(M,\Theta_M^\bullet)$, where $(\pi_{\lambda},\eta_{\lambda})\in A^{0,0}(M,\wedge^2 T_M)\oplus A^{0,1}(M,T_M)$ for $\lambda=1,...,m$. Let $\Delta_\epsilon =\{t\in \mathbb{C}^m\,|\,|t|<\epsilon\}$ for some $\epsilon>0$. Assume that there is a family $\{(\varphi(t),\Lambda(t))\,|\,t\in \Delta_\epsilon \}$ of $C^{\infty}$ vector $(0,1)$-forms $\varphi(t)=\sum_{\lambda=1}^n\sum_{v=1}^n \varphi^{\lambda}_v(z,t) d\bar{z}_v\frac{\partial}{\partial z_{\lambda}}\in A^{0,1}(M,T_M)$ and $C^{\infty}$ type $(2,0)$ bivectors $\Lambda(t)=\sum_{\alpha,\beta=1}^n \Lambda_{\alpha\beta}(z,t)\frac{\partial}{\partial z_\alpha}\wedge \frac{\partial}{\partial z_\beta}\in A^{0,0}(M,\wedge^2 T_M)$ on $M$ satisfying
\begin{enumerate}
\item $[\Lambda(t),\Lambda(t)]=0$\\
\item $\bar{\partial} \Lambda(t)-[\Lambda(t),\varphi(t)]=0$\\
\item $\bar{\partial}\varphi(t)-\frac{1}{2}[\varphi(t),\varphi(t)]=0$
\end{enumerate}
and the initial conditions
\begin{align*}
\varphi(0)=0,\quad \Lambda(0)=\Lambda_0,\quad \left( -\left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0},\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}\right)=(-\pi_{\lambda},-\eta_{\lambda}),\quad \lambda=1,...,m.
\end{align*}
Since $\varphi(0)=0$, we may assume that $\det(\delta^{\lambda}_v-\sum_{\mu=1}^n\varphi^{\mu}_v(z,t)\overline{\varphi^{\lambda}_{\mu}(z,t)})_{\lambda,v=1,...,n}\ne 0$ if $\Delta_\epsilon$ is sufficiently small. Therefore, as in subsection \ref{ss00}, by the Newlander-Nirenberg theorem (\cite{New57},\cite{Kod05} p.268), each $\varphi(t)$ determines a complex structure $M_{\varphi(t)}$ on $M$. The conditions $(2)$ and $(3)$ imply that the $(2,0)$-part $\Lambda(t)^{2,0}$ of $\Lambda(t)$ with respect to the complex structure induced from $\varphi(t)$ is a holomorphic Poisson structure on $M_{\varphi(t)}$. If the family $\{(M_{\varphi(t)},\Lambda(t)^{2,0})|t\in \Delta_\epsilon\}$ is a Poisson analytic family, it satisfies the conditions (1) and (2) in Theorem \ref{theorem of existence} by Theorem \ref{n}. We will construct such a family $\{(\varphi(t),\Lambda(t))|t\in \Delta_\epsilon \}$ under the assumption $\mathbb{H}^2(M,\Theta_M^\bullet)=0$, and then show in subsection \ref{subsection} that $\{(M_{\varphi(t)},\Lambda(t)^{2,0})|t\in \Delta_\epsilon\}$ is a Poisson analytic family, which completes the proof of Theorem $\ref{theorem of existence}$.
\begin{remark}
By replacing $\varphi(t)$ by $-\varphi(t)$, it is sufficient to construct $\varphi(t)$ and $\Lambda(t)$ satisfying
\begin{enumerate}
\item $[\Lambda(t),\Lambda(t)]=0$\\
\item $\bar{\partial} \Lambda(t)+[\Lambda(t),\varphi(t)]=0$\\
\item $\bar{\partial}\varphi(t)+\frac{1}{2}[\varphi(t),\varphi(t)]=0$
\end{enumerate}
and the initial conditions
\begin{align}\label{initial1}
\varphi(0)=0,\quad \Lambda(0)=\Lambda_0,\quad \left( \left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0},\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}\right)=(\pi_{\lambda},\eta_{\lambda}),\quad \lambda=1,...,m.
\end{align}
We note that $(1),(2),(3)$ are equivalent to
\begin{equation}\label{qet}
\bar{\partial} (\varphi(t)+\Lambda(t))+\frac{1}{2}[\varphi(t)+\Lambda(t),\varphi(t)+\Lambda(t)]=0
\end{equation}
\end{remark}
We construct such $\alpha(t):=\varphi(t)+\Lambda(t)$ in the following subsection.
\subsection{Construction of $\alpha(t)=\varphi(t)+\Lambda(t)$}\
We use Kuranishi's method as presented in \cite{Mor71} to construct $\alpha(t)$. First we note the following: let $A^{p}=A^{0,p}(M,T_M)\oplus \cdots \oplus A^{0,0}(M,\wedge^{p+1} T_M)$ and $L=\bar{\partial} +[\Lambda_0,-]$. Then the sequence
\begin{align*}
A^0\xrightarrow{L} A^1\xrightarrow{L} A^2 \xrightarrow{L} \cdots
\end{align*}
is an elliptic complex. So we have the adjoint operator $L^*$, the Green's operator $G$, the Laplacian operator $\Box:=LL^*+L^*L$, and the orthogonal projection $H$ onto the $\Box$-harmonic subspace $\mathbb{H}$ of $\bigoplus_{p\geq 0} A^p$. In particular we have $H:A^p\to \mathbb{H}^p\cong \mathbb{H}^p(M,\Theta_M^\bullet)$. For details, we refer to \cite{Wel08}.
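For later use, we record the Hodge decomposition for this elliptic complex in the form in which we use it (see \cite{Wel08}):

```latex
% Hodge decomposition with respect to \Box = LL^{*} + L^{*}L:
\begin{align*}
\phi = H\phi + \Box G\phi = H\phi + LL^{*}G\phi + L^{*}LG\phi,
\qquad \phi\in A^{p},
\end{align*}
```

where $G$ commutes with $L$ and $L^{*}$, $HG=GH=0$, and $H\phi=0$ if and only if $\phi$ is orthogonal to the harmonic subspace $\mathbb{H}^{p}$.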
We introduce H\"{o}lder norms on the spaces $A^{p}=A^{0,p}(M,T_M)\oplus \cdots \oplus A^{0,0}(M, \wedge^{p+1} T_M)$ in the following way: we fix a finite open covering $\{U_j\}$ of $M$ such that $z_j=(z_j^1,...,z_j^n)$ are local coordinates on $U_j$. Let $\phi\in A^{p}$ be locally expressed on $U_j$ as
\begin{align*}
\phi=\sum_{r+s=p+1, s\geq 1} \phi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)d\bar{z}_j^{\alpha_1}\wedge \cdots \wedge d\bar{z}_j^{\alpha_r}\wedge \frac{\partial}{\partial z_j^{\beta_1}}\wedge\cdots \wedge\frac{\partial}{\partial z_j^{\beta_s}}
\end{align*}
Let $k\in \mathbb{Z}$, $k\geq 0$, and $\theta\in \mathbb{R}$, $0<\theta<1$. Let $h=(h_1,...,h_{2n})$, $h_i\geq 0$, $|h|:=\sum_{i=1}^{2n} h_i$, where $n=\dim M$. Then denote
\begin{align*}
D_j^h=\left(\frac{\partial}{\partial x_j^1}\right)^{h_1}\cdots \left(\frac{\partial}{\partial x_j^{2n}}\right)^{h_{2n}},\,\,\,\,\,z_j^{\alpha}=x_j^{2\alpha-1}+ix_j^{2\alpha}
\end{align*}
Then the H\"{o}lder norm $||\phi||_{k+\theta}$ is defined as follows:
{\small{\begin{align*}
||\phi||_{k+\theta}=\max_j \{ \sum_{h, |h|\leq k}\left( \sup_{z\in U_j}|D_j^h \phi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)|\right)+\sup_{y,z\in U_j,|h|=k}
\frac{|D_j^h \phi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(y)-D_j^h \phi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)|}{|y-z|^{\theta}} \},
\end{align*}}}
where the sup is over all $\alpha_1,...,\alpha_r,\beta_1,...,\beta_s$.
Now suppose that the associated Laplacian operator $\Box$ induced from $L=\bar{\partial}+[\Lambda_0,-]$ is a strongly elliptic operator whose principal part is of diagonal type. Then by \cite{Kod05} Appendix Theorem 4.3 p.436, we have an \textit{a priori} estimate
\begin{equation}\label{assumption}
||\phi||_{k+\theta}\leq C(|| \Box \phi||_{k-2+\theta}+||\phi||_0)
\end{equation}
where $k\geq 2$, $C$ is a constant which is independent of $\phi$, and
\begin{center}
$||\phi||_0=\max_{j,\alpha_1,...,\beta_s} \sup_{z\in U_j} |\phi_{j \alpha_1\cdots\alpha_r\beta_1\cdots\beta_s}(z)|$.
\end{center}
We will use the following two lemmas.
\begin{lemma}\label{lemma5.2.2}
For $\phi,\psi\in A^1$, we have $||[\phi,\psi]||_{k+\theta}\leq C||\phi||_{k+1+\theta}||\psi||_{k+1+\theta}$, where $C$ is independent of $\phi$ and $\psi$.
\end{lemma}
\begin{lemma}\label{lemma5.2.3}
For $\phi\in A^1$, we have $||G\phi||_{k+\theta}\leq C||\phi||_{k-2+\theta},k\geq 2$, where $C$ depends only on $k$ and $\theta$, not on $\phi$.
\end{lemma}
\begin{proof}
This follows from (\ref{assumption}). See \cite{Mor71} p.160 Proposition 2.3.
\end{proof}
With this preparation, we construct $\alpha(t):=\varphi(t)+\Lambda(t)=\Lambda_0+\sum_{\mu=1}^{\infty} (\varphi_{\mu}(t)+\Lambda_{\mu}(t))$, where
\begin{align*}
\varphi_{\mu}(t)+\Lambda_{\mu}(t)=\sum_{v_1+\cdots+v_m=\mu} (\varphi_{v_1\cdots v_m}+\Lambda_{v_1\cdots v_m})t_1^{v_1}\cdots t_m^{v_m}
\end{align*}
with $\varphi_{v_1\cdots v_m}+\Lambda_{v_1\cdots v_m}\in A^{0,1}(M,T_M)\oplus A^{0,0}(M,\wedge^2 T_M)$ such that
\begin{align}\label{qetyu}
\bar{\partial} \alpha(t)+\frac{1}{2}[\alpha(t),\alpha(t)]=0,\,\,\,\,\,\,\,
\alpha_1(t)=\varphi_1(t)+\Lambda_1(t)=\sum_{v=1}^m (\eta_v+\pi_v)t_v,
\end{align}
where $\{\eta_v+\pi_v\}$ is a basis for $\mathbb{H}^1\cong \mathbb{H}^1(M,\Theta_M^\bullet)$ (see (\ref{initial1})). Let $\beta(t):=\alpha(t)-\Lambda_0=\sum_{\mu=1}^{\infty} (\varphi_{\mu}(t)+\Lambda_{\mu}(t))$. Then $(\ref{qetyu})$ is equivalent to
\begin{align}\label{ghj}
L\beta(t)+\frac{1}{2}[\beta(t),\beta(t)]=0,\,\,\,\,\,\beta_1(t)=\alpha_1(t)
\end{align}
Constructing $\alpha(t)$ is equivalent to constructing $\beta(t)$. We will construct $\beta(t)$ satisfying $(\ref{ghj})$.
Consider the equation
\begin{equation}\label{qety}
\beta(t)=\beta_1(t)-\frac{1}{2}L^*G[\beta(t),\beta(t)],
\end{equation}
where $\beta_1(t)=\alpha_1(t)$.
Then $(\ref{qety})$ has a unique formal power series solution $\beta(t)=\sum_{\mu=1}^{\infty} \beta_{\mu}(t)$, and there exists an $\epsilon>0$ such that for $t\in \Delta_{\epsilon}=\{t\in \mathbb{C}^m\,|\,|t|<\epsilon\}$, the series $\beta(t)=\sum_{\mu=1}^{\infty} \beta_{\mu}(t)$ converges in the norm $||\cdot||_{k+\theta}$ (for details, see \cite{Mor71} p.162 Proposition 2.4; by virtue of the integrability condition (\ref{ghj}), we can formally apply their argument).
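Concretely (a sketch of the standard recursive construction), substituting $\beta(t)=\sum_{\mu\geq 1}\beta_{\mu}(t)$ into $(\ref{qety})$ and comparing the terms of homogeneous degree $\mu$ in $t$ determines the $\beta_{\mu}(t)$ one by one:

```latex
\begin{align*}
\beta_{1}(t)&=\alpha_{1}(t)=\sum_{v=1}^{m}(\eta_{v}+\pi_{v})t_{v},\\
\beta_{\mu}(t)&=-\frac{1}{2}\,L^{*}G\sum_{\nu=1}^{\mu-1}[\beta_{\nu}(t),\beta_{\mu-\nu}(t)],\qquad \mu\geq 2,
\end{align*}
```

and the norms $||\beta_{\mu}||_{k+\theta}$ are controlled term by term by Lemma \ref{lemma5.2.2} and Lemma \ref{lemma5.2.3}, which is what the majorant series argument of \cite{Mor71} uses.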
\begin{proposition}\label{yui}
$\beta(t)$ satisfies $L\beta(t)+\frac{1}{2}[\beta(t),\beta(t)]=0$ if and only if $H[\beta(t),\beta(t)]=0$, where $H:A^2=A^{0,2}(M,T_M)\oplus A^{0,1}(M,\wedge^2 T_M)\oplus A^{0,0}(M,\wedge^3 T_M)\to \mathbb{H}^2\cong \mathbb{H}^2(M,\Theta_M^\bullet)$ is the orthogonal projection to the harmonic subspace of $A^2$.
\end{proposition}
\begin{proof}
We simply note that $(\mathfrak{g}=\bigoplus_{i\geq0} \mathfrak{g}^i,L=\bar{\partial}+[\Lambda_0,-],[-,-])$ with $\mathfrak{g}^i=\bigoplus_{p+q-1=i,p\geq 0, q\geq 1} A^{0,p}(M,\wedge^q T_M)$ is a differential graded Lie algebra, and so the argument in the proof of \cite{Mor71} p.163 Proposition 2.5 applies formally to our case by Lemma \ref{lemma5.2.2} and Lemma \ref{lemma5.2.3}.
\end{proof}
Now suppose that $\mathbb{H}^2(M,\Theta_M^\bullet)=0$. Then by Proposition \ref{yui}, $\beta(t)$ satisfies $(\ref{ghj})$ for $t\in \Delta_{\epsilon}$. Hence $\alpha(t)=\beta(t)+\Lambda_0=\varphi(t)+\Lambda(t)$ is the desired solution satisfying $(\ref{qetyu})$. We note that $\alpha(t)$ has the following property, which we need in the construction of a Poisson analytic family in the next subsection.
\begin{proposition}\label{pr}
$\alpha(t)=\beta(t)+\Lambda_0=\varphi(t)+\Lambda(t)$ is $C^{\infty}$ in $(z,t)$ and holomorphic in $t$.
\end{proposition}
\begin{proof}
We note that $\Box$ is a strongly elliptic differential operator whose principal part is of diagonal type by our assumption. So we can formally apply the argument of \cite{Mor71} p.163 Proposition 2.6. See also \cite{Kod05} Appendix p.452 \S 8.
\end{proof}
\subsection{Construction of a Poisson analytic family}\label{subsection}\
In the previous subsection, we have constructed a family $\{(\varphi(t),\Lambda(t))|t\in \Delta_{\epsilon}\}$ of $C^{\infty}$ vector $(0,1)$-forms $\varphi(t)$ and $C^{\infty}$ type $(2,0)$ bivectors $\Lambda(t)$
\begin{align*}
\varphi(t)=\sum_{\lambda=1}^n\sum_{v=1}^n \varphi_v^{\lambda}(z,t)d\bar{z}_v\frac{\partial}{\partial z_{\lambda}},\,\,\,\,\,
\Lambda(t)=\sum_{\alpha,\beta=1}^n \Lambda_{\alpha\beta}(z,t)\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}
\end{align*}
satisfying the integrability conditions $[\Lambda(t),\Lambda(t)]=0$, $\bar{\partial}\Lambda(t)=[\Lambda(t),\varphi(t)]$, $\bar{\partial}\varphi(t)=\frac{1}{2}[\varphi(t),\varphi(t)]$ and the initial conditions $\varphi(0)=0$, $\Lambda(0)=\Lambda_0$, $\left(-\left(\frac{\partial \Lambda(t)}{\partial t_{\lambda}}\right)_{t=0},\left(\frac{\partial \varphi(t)}{\partial t_{\lambda}}\right)_{t=0}\right)=(-\pi_{\lambda},-\eta_{\lambda})$, $\lambda=1,...,m$, where $\varphi_v^{\lambda}(z,t)$ and $\Lambda_{\alpha\beta}(z,t)$ are $C^{\infty}$ functions of $z^1,...,z^n,t_1,...,t_m$ and holomorphic in $t_1,...,t_m$.
$(\varphi(t),\Lambda(t))$ determines a holomorphic Poisson structure $(M_{\varphi(t)},\Lambda(t)^{2,0})$ on $M$ for each $t\in \Delta_\epsilon$. In order to show that $\{(M_{\varphi(t)},\Lambda(t)^{2,0})|t\in \Delta_{\epsilon}\}$ is a Poisson analytic family, we consider $\varphi:=\varphi(t)$ as a vector $(0,1)$-form on the complex manifold $M\times \Delta_{\epsilon}$, and $\Lambda:=\Lambda(t)$ as a $(2,0)$ bivector on $M\times \Delta_{\epsilon}$. Then since $\varphi_v^{\lambda}=\varphi_v^{\lambda}(z,t)$ are holomorphic in $t_1,...,t_m$ (Proposition \ref{pr}), we have $\frac{\partial \varphi_v^{\lambda}}{\partial \bar{t}_{\mu}}=0$ in
\begin{align*}
\bar{\partial}\varphi=\sum_{\lambda,v=1}^n \left(\sum_{\beta=1}^n\frac{\partial \varphi_v^{\lambda}}{\partial \bar{z}_{\beta}}d\bar{z}_{\beta}+\sum_{\mu=1}^m \frac{\partial \varphi_v^{\lambda}}{\partial \bar{t}_{\mu}}d\bar{t}_{\mu}\right) \wedge d\bar{z}_v\frac{\partial}{\partial z_{\lambda}}
\end{align*}
Similarly since $\Lambda_{\alpha\beta}(z,t)$ is holomorphic in $t_1,...,t_m$ (Proposition \ref{pr}), we have $\frac{\partial \Lambda_{\alpha\beta}}{\partial \bar{t}_{\mu}}=0$ in
\begin{align*}
\bar{\partial} \Lambda=\sum_{\alpha,\beta} \left(\sum_{v=1}^n \frac{\partial \Lambda_{\alpha\beta}}{\partial \bar{z}_v}d\bar{z}_v+\sum_{\mu=1}^m \frac{\partial \Lambda_{\alpha\beta}}{\partial \bar{t}_{\mu}}d\bar{t}_{\mu}\right)\frac{\partial}{\partial z_{\alpha}}\wedge \frac{\partial}{\partial z_{\beta}}
\end{align*}
Hence $\varphi$ and $\Lambda$ satisfy $\bar{\partial}\varphi=\frac{1}{2}[\varphi,\varphi]$, $\bar{\partial}\Lambda=[\Lambda,\varphi]$, and $[\Lambda,\Lambda]=0$. Then by the Newlander-Nirenberg theorem (\cite{New57},\cite{Kod05} p.268), $\varphi$ defines a complex structure $\mathcal{M}$ on $M\times \Delta_{\epsilon}$, and the $(2,0)$-part $\Lambda^{2,0}$ of $\Lambda$ defines a holomorphic Poisson structure on $\mathcal{M}$. Let $\omega: \mathcal{M}\to \Delta_\epsilon$ be the natural projection. Then $\{(M_{\varphi(t)},\Lambda(t)^{2,0})|t\in \Delta_{\epsilon}\}$ forms a Poisson analytic family $(\mathcal{M},\Lambda^{2,0},\Delta_{\epsilon},\omega)$ (for details, see \cite{Kod05} p.282). This completes the proof of Theorem $\ref{theorem of existence}$.
\section{Theorem of completeness for holomorphic Poisson structures}\label{section6}
\subsection{Statement of Theorem of completeness for holomorphic Poisson structures}
\subsubsection{Change of parameters}(compare \cite{Kod05} p.205)
Consider a Poisson analytic family $\{(M_t,\Lambda_t)|(M_t,\Lambda_t)=\omega^{-1}(t),t\in B\}=(\mathcal{M},\Lambda,B,\omega)$ of compact holomorphic Poisson manifolds, where $B$ is a domain of $\mathbb{C}^m$. Let $D$ be a domain of $\mathbb{C}^r$ and $h:s\to t=h(s),s\in D$, a holomorphic map of $D$ into $B$. Then by changing the parameter from $t$ to $s$, we will construct a Poisson analytic family $\{(M_{h(s)},\Lambda_{h(s)})|s\in D\}$ over the parameter space $D$ in the following.
Let $\mathcal{M}\times_B D:=\{(p,s)\in \mathcal{M}\times D|\omega(p)=h(s)\}$. Then we have the following commutative diagram
\begin{center}
$\begin{CD}
\mathcal{M}\times_B D @>p>> \mathcal{M}\\
@V\pi VV @VV\omega V\\
D @>h>> B
\end{CD}$
\end{center}
such that $(\mathcal{M}\times_B D,D,\pi)$ is a complex analytic family in the sense of Kodaira-Spencer and $\pi^{-1}(s)=M_{h(s)}$. We show that $(\mathcal{M}\times_B D,D,\pi)$ is naturally a Poisson analytic family such that $\pi^{-1}(s)=(M_{h(s)},\Lambda_{h(s)})$ and $p$ is a Poisson map. Note that the bivector field $\Lambda$ on $\mathcal{M}$ can be considered as a bivector field on $\mathcal{M}\times D$ which gives a holomorphic Poisson structure on $\mathcal{M}\times D$. So $(\mathcal{M}\times D,\Lambda)$ is a holomorphic Poisson manifold. We show that $\mathcal{M}\times_B D$ is a holomorphic Poisson submanifold of $(\mathcal{M}\times D,\Lambda)$ and defines a Poisson analytic family. Let $(p_0,s_0)\in \mathcal{M}\times_B D$. Taking a sufficiently small coordinate polydisk $\Delta$ with $h(s_0)\in \Delta$, we represent $(\mathcal{M}_{\Delta},\Lambda_{\Delta})=\omega^{-1}(\Delta)$ in the form of
\begin{align*}
(\mathcal{M}_{\Delta},\Lambda_{\Delta})=(\bigcup_{j=1}^l U_j\times \Delta, \sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j,t)\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}})
\end{align*}
where each $U_j$ is a polydisk independent of $t$, and $(z_j,t)\in U_j\times \Delta$ and $(z_k,t)\in U_k\times \Delta$ are the same point on $\mathcal{M}_{\Delta}$ if $z_j^{\alpha}=f_{jk}^{\alpha}(z_k,t), \alpha=1,...,n$. Let $E$ be a sufficiently small polydisk of $D$ such that $s_0\in E$ and $h(E)\subset \Delta$. Then we can represent $(\mathcal{M}\times D,\Lambda)$ around $(p_0,s_0)$ in the form of
\begin{align*}
(\mathcal{M}_{\Delta}\times E,\Lambda|_{\mathcal{M}_{\Delta}\times E})=(\bigcup_{j=1}^l U_j\times \Delta\times E, \sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j,t)\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}})
\end{align*}
where $(z_j,t,s)\in U_j\times \Delta\times E$ and $(z_k,t,s)\in U_k\times \Delta\times E$ are the same point on $\mathcal{M}_{\Delta}\times E$ if $z_j=f_{jk}(z_k,t)$.
Then we can represent $\mathcal{M}\times_B D$ around $(p_0,s_0)$ in the form of $\bigcup_{j=1}^l U_j\times G_E$, where $G_E=\{(h(s),s)|s\in E\}\subset \Delta\times E$, and $(z_j,h(s),s)\in U_j\times G_E$ and $(z_k,h(s),s)\in U_k\times G_E$ are the same point if $z_j=f_{jk}(z_k,h(s))$. We note that at $(p_0,s_0)\in \mathcal{M}\times_B D\subset \mathcal{M}\times D$, we have $\Lambda_{(p_0,s_0)}=\sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(p_0,h(s_0))\frac{\partial}{\partial z_j^{\alpha}}|_{p_0}\wedge\frac{\partial}{\partial z_j^{\beta}}|_{p_0}\in \wedge^2 T_{\mathcal{M}\times_B D}$. Hence $\mathcal{M}\times_B D$ is a holomorphic Poisson submanifold of $(\mathcal{M}\times D,\Lambda)$, and $p:(\mathcal{M}\times_B D,\Lambda|_{\mathcal{M}\times_B D})\to (\mathcal{M},\Lambda)$ is a Poisson map.
Since $G_E$ is biholomorphic to $E$, the holomorphic Poisson manifold $(\mathcal{M}\times_B D,\Lambda|_{\mathcal{M}\times_B D})$ is represented locally in the form
\begin{align*}
(\bigcup_{j=1}^l U_j\times E, \sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j,h(s))\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}})
\end{align*}
where $(z_k,s)\in U_k\times E$ and $(z_j,s)\in U_j\times E$ are the same point if $z_j=f_{jk}(z_k,h(s))$, which shows that $(\mathcal{M}\times_B D,D, \Lambda|_{\mathcal{M}\times_B D},\pi)$ is a Poisson analytic family and $\pi^{-1}(s)=(M_{h(s)},\Lambda_{h(s)})$.
\begin{definition}
The Poisson analytic family $(\mathcal{M}\times_B D,D, \Lambda|_{\mathcal{M}\times_B D},\pi)$ is called the Poisson analytic family induced from $(\mathcal{M},B,\Lambda,\omega)$ by the holomorphic map $h:D\to B$.
\end{definition}
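For instance (an illustrative special case with hypothetical data, not taken from \cite{Kod05}): if $B\subset\mathbb{C}$ is a disk, $D=B$, and $h(s)=s^2$, then the induced Poisson analytic family is given locally by
\begin{align*}
(\bigcup_{j=1}^l U_j\times D,\ \sum_{\alpha,\beta=1}^n g_{\alpha\beta}^j(z_j,s^2)\frac{\partial}{\partial z_j^{\alpha}}\wedge\frac{\partial}{\partial z_j^{\beta}}),
\end{align*}
where $(z_j,s)$ and $(z_k,s)$ are identified if $z_j=f_{jk}(z_k,s^2)$, so that the fiber over $s$ is $(M_{s^2},\Lambda_{s^2})$.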
We point out that the change of parameters formula holds for infinitesimal Poisson deformations just as for infinitesimal deformations of complex structures (\cite{Kod05} Theorem 4.7 p.207).
\begin{theorem}
For any tangent vector $\frac{\partial}{\partial s}=c_1\frac{\partial}{\partial s_1}+\cdots +c_r\frac{\partial}{\partial s_r}\in T_s(D)$, the infinitesimal Poisson deformation of $(M_{h(s)},\Lambda_{h(s)})$ along $\frac{\partial}{\partial s}$ is given by
\begin{align*}
\frac{\partial(M_{h(s)},\Lambda_{h(s)})}{\partial{s}}=(\sum_{\lambda=1}^{m} \frac{\partial t_{\lambda}}{\partial s} \frac{\partial M_t}{\partial t_{\lambda}},\sum_{\lambda=1}^{m} \frac{\partial t_{\lambda}}{\partial s}\frac{\partial{\Lambda_t}}{\partial t_{\lambda}})
\end{align*}
\end{theorem}
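For example (an illustrative special case of the theorem, with the hypothetical choice $r=m=1$ and $h(s)=s^2$): here $\frac{\partial t}{\partial s}=2s$, so the infinitesimal Poisson deformation of $(M_{h(s)},\Lambda_{h(s)})$ along $\frac{\partial}{\partial s}$ is
\begin{align*}
\frac{\partial(M_{h(s)},\Lambda_{h(s)})}{\partial s}=2s\left(\frac{\partial M_t}{\partial t},\frac{\partial \Lambda_t}{\partial t}\right)\bigg|_{t=s^2},
\end{align*}
which vanishes at $s=0$.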
With this preparation, we discuss the concept of completeness and a `Theorem of completeness' in the context of deformations of compact holomorphic Poisson manifolds in the next subsection.
\subsubsection{Statement of `Theorem of completeness for holomorphic Poisson structures'}
\begin{definition}
Let $(\mathcal{M},\Lambda_{\mathcal{M}},B, \omega)$ be a Poisson analytic family of compact holomorphic Poisson manifolds, and $t^0\in B$. Then $(\mathcal{M},\Lambda_{\mathcal{M}},B,\omega)$ is called complete at $t^0\in B$ if for any Poisson analytic family $(\mathcal{N},\Lambda_{\mathcal{N}},D,\pi)$ such that $D$ is a domain of $\mathbb{C}^l$ containing $0$ and that $\pi^{-1}(0)=\omega^{-1}(t^0)$, there is a sufficiently small domain $\Delta$ with $0\in \Delta\subset D$, and a holomorphic map $h:s\to t=h(s)$ with $h(0)=t^0$ such that $(\mathcal{N}_{\Delta},{\Lambda_{\mathcal{N}}}_{\Delta},\Delta,\pi)$ is the Poisson analytic family induced from $(\mathcal{M},\Lambda_{\mathcal{M}},B,\omega)$ by $h$ where $(\mathcal{N}_{\Delta},{\Lambda_{\mathcal{N}}}_{\Delta},\Delta,\pi)$ is the restriction of $(\mathcal{N},\Lambda_{\mathcal{N}},D,\pi)$ to $\Delta$ $($see Remark $\ref{restriction}$$)$.
\end{definition}
We will prove the following theorem which is an analogue of `Theorem of completeness' by Kodaira-Spencer (see Theorem \ref{kodairacomplete}).
\begin{theorem}[Theorem of completeness for holomorphic Poisson structures]\label{theorem of completeness}\label{complete9}
Let $(\mathcal{M},\Lambda_{\mathcal{M}},B,\omega)$ be a Poisson analytic family of deformations of a compact holomorphic Poisson manifold $(M_0,\Lambda_0)=\omega^{-1}(0)$, $B$ a domain of $\mathbb{C}^m$ containing $0$. If the Poisson Kodaira-Spencer map $\varphi_0:T_0 (B)\to \mathbb{H}^1(M_0,\Theta_{M_0}^\bullet)$ is surjective, the Poisson analytic family $(\mathcal{M},\Lambda_{\mathcal{M}}, B,\omega)$ is complete at $0\in B$.
\end{theorem}
\begin{remark}\label{remark55}
In order to prove Theorem $\ref{complete9}$, as in \cite{Kod05} Lemma $6.1$ $p.284$, it suffices to show that for any given Poisson analytic family $(\mathcal{N},\Lambda_{\mathcal{N}},D,\pi)$ with $\pi^{-1}(0)=(M_0,\Lambda_0)$, if we take a sufficiently small domain $\Delta$ with $0\in \Delta \subset D$, we can construct a holomorphic map $h:s\to t=h(s),h(0)=0,$ of $\Delta$ into $B$, and a Poisson holomorphic map $g$ of $(\mathcal{N}_{\Delta},{\Lambda_{\mathcal{N}}}_\Delta)=\pi^{-1}(\Delta)$ into $(\mathcal{M},\Lambda_{\mathcal{M}})$ satisfying the following condition: $g$ is a Poisson holomorphic map extending the identity $g_0:\pi^{-1}(0)=(M_0,\Lambda_0)\to (M_0,\Lambda_0)$, and $g$ maps each $(N_s,\Lambda_{N_s})=\pi^{-1}(s)$ Poisson biholomorphically onto $(M_{h(s)},\Lambda_{M_{h(s)}})$. We will construct such $h$ and $g$ by extending Kodaira's elementary method $($see \cite{Kod05} Chapter $6$$)$.
\end{remark}
\subsection{Preliminaries}\label{preli}\
We extend the argument of \cite{Kod05} p.285-286 (to which we refer for details) to the context of a Poisson analytic family. We keep the notation consistent with \cite{Kod05}.
Since the problem is local with respect to $B$, we may assume that $B=\{t\in \mathbb{C}^m||t|<1\}$, and $(\mathcal{M},\Lambda_{\mathcal{M}}, B,\omega)$ is written in the following form
\begin{align*}
(\mathcal{M},\Lambda_{\mathcal{M}})=\bigcup_j(\mathcal{U}_j,\Lambda_{M_j}),\,\,\,\,\, \mathcal{U}_j=\{(\xi_j,t)\in \mathbb{C}^n\times B||\xi_j|<1\}
\end{align*}
where the Poisson structure $\Lambda_{\mathcal{M}}$ is given by $\Lambda_{M_j}=\sum_{r,s=1}^n \Lambda_{M_j}^{r,s}(\xi_j,t)\frac{\partial}{\partial \xi_j^r}\wedge \frac{\partial}{\partial \xi_j^s}$ on $\mathcal{U}_j$ with $\Lambda_{M_j}^{r,s}(\xi_j,t)=-\Lambda_{M_j}^{s,r}(\xi_j,t)$, and $\omega(\xi_j,t)=t$. For $\mathcal{U}_j\cap \mathcal{U}_k\ne \emptyset,(\xi_j,t)$ and $(\xi_k,t)$ are the same point of $\mathcal{M}$ if
\begin{align}\label{vv2}
\xi_j=g_{jk}(\xi_k,t)=(g_{jk}^1(\xi_k,t),...,g_{jk}^n(\xi_k,t)),
\end{align}
where $g_{jk}^{\alpha}(\xi_k,t)$, $\alpha=1,...,n$, are holomorphic functions on $\mathcal{U}_j\cap \mathcal{U}_k$, and we have the following relations
\begin{align}\label{vv10}
\Lambda_{M_j}^{r,s}(g_{jk}(\xi_k,t),t)=\sum_{p,q=1}^n\Lambda_{M_k}^{p,q}(\xi_k,t)\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_{jk}^s}{\partial \xi_k^q}.
\end{align}
Similarly we assume that $D=\{s\in \mathbb{C}^l||s|<1\}$, and $(\mathcal{N},\Lambda_{\mathcal{N}},D,\pi)$ is written in the following form
\begin{align*}
(\mathcal{N},\Lambda_{\mathcal{N}})=\bigcup_j (\mathcal{W}_j,\Lambda_{N_j}),\,\,\,\,\, \mathcal{W}_j=\{(z_j,s)\in \mathbb{C}^n\times D||z_j|<1\}
\end{align*}
where the Poisson structure $\Lambda_{\mathcal{N}}$ is given by $\Lambda_{N_j}=\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ on $\mathcal{W}_j$ with $\Lambda_{N_j}^{\alpha,\beta}(z_j,s)=-\Lambda_{N_j}^{\beta,\alpha}(z_j,s)$, and $\pi(z_j,s)=s$. For $\mathcal{W}_j\cap \mathcal{W}_k\ne\emptyset, (z_j,s)$ and $(z_k,s)$ are the same point of $\mathcal{N}$ if
\begin{align}\label{vv1}
z_j=f_{jk}(z_k,s)=(f_{jk}^1(z_k,s),...,f_{jk}^n(z_k,s)),
\end{align}
and we have
\begin{align}\label{vv11}
\Lambda_{N_j}^{\alpha,\beta}(f_{jk}(z_k,s),s)=\sum_{a,b=1}^n\Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial f_{jk}^{\alpha}}{\partial z_k^a}\frac{\partial f_{jk}^{\beta}}{\partial z_k^b}.
\end{align}
Since $(N_0,\Lambda_{N_0})=\pi^{-1}(0)=(M_0,\Lambda_0)=\omega^{-1}(0)=(M_0, \Lambda_{M_0})$, we may assume $(\mathcal{W}_j\cap N_0,\Lambda_{N_{0j}})=(\mathcal{U}_j\cap M_0,\Lambda_{M_{0j}})$ where $\Lambda_{N_{0j}}:=\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,0)\frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$, and $\Lambda_{M_{0j}}:=\sum_{r,s=1}^n \Lambda_{M_j}^{r,s}(\xi_j,0)\frac{\partial}{\partial \xi_j^{r}}\wedge \frac{\partial}{\partial \xi_j^{s}}$, and assume that the local coordinates $(\xi_j,0)$ and $(z_j,0)$ coincide on $\mathcal{W}_j \cap N_0=\mathcal{U}_j\cap M_0$. In other words, if $\xi_j^1=z_j^1,...,\xi_j^n=z_j^n$, $(\xi_j,0)$ and $(z_j,0)$ are the same point of $\mathcal{W}_j\cap N_0=\mathcal{U}_j \cap M_0$, and we have $\Lambda_{N_j}^{\alpha,\beta}(z_j,0)=\Lambda_{M_j}^{\alpha,\beta}(\xi_j,0)$. Putting
\begin{align}\label{ii33}
b_{jk}(\xi_k):=g_{jk}(\xi_k,0),\,\,\,\,\,\,\, \Lambda_{M_{0j}}^{\alpha,\beta}(\xi_j):= \Lambda_{M_j}^{\alpha,\beta}(\xi_j,0)
\end{align}
from $(\ref{vv2})$ and $(\ref{vv1})$ we then have
\begin{align}\label{ii34}
b_{jk}(z_k)=f_{jk}(z_k,0),\,\,\,\,\,\,\, \Lambda_{M_{0j}}^{\alpha,\beta}(z_j)=\Lambda_{M_j}^{\alpha,\beta}(z_j,0)=\Lambda_{N_j}^{\alpha,\beta}(z_j,0)
\end{align}
In conclusion, we have
\begin{align}\label{covering}
(N_0,\Lambda_{N_0})=(M_0,\Lambda_{M_0}=\Lambda_0)=\bigcup_j (U_j,\Lambda_{M_{0j}}),\,\,\,\,\,U_j=\mathcal{W}_j\cap N_0=\mathcal{U}_j\cap M_0,
\end{align}
such that $\{z_j\},z_j=(z_j^1,...,z_j^n)$, is a system of local complex coordinates of the complex manifold $N_0=M_0$ with respect to $\{U_j\}$, and the Poisson structure is given by $\Lambda_{M_{0j}}=\sum_{\alpha,\beta=1}^n\Lambda_{M_{0j}}^{\alpha,\beta}(z_j)\frac{\partial}{\partial z_j^{\alpha}}\wedge \frac{\partial}{\partial z_j^{\beta}}$ on $U_j$ with $\Lambda_{M_{0j}}^{\alpha,\beta}(z_j)=-\Lambda_{M_{0j}}^{\beta,\alpha}(z_j)$. The coordinate transformation on $U_j\cap U_k$ is given by $z_j^{\alpha}=b_{jk}^{\alpha}(z_k),\alpha=1,...,n,$ and we have
\begin{align}\label{vv3}
\Lambda_{M_{0j}}^{\alpha,\beta}(b_{jk}(z_k))=\sum_{a,b=1}^n\Lambda_{M_{0k}}^{a,b}(z_k)\frac{\partial b_{jk}^{\alpha}}{\partial z_k^a}\frac{\partial b_{jk}^{\beta}}{\partial z_k^b}.
\end{align}
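In the simplest nontrivial case $n=2$ (a sanity check on the transformation rule, not needed in the sequel), the antisymmetry $\Lambda_{M_{0k}}^{2,1}=-\Lambda_{M_{0k}}^{1,2}$ reduces $(\ref{vv3})$ for $(\alpha,\beta)=(1,2)$ to multiplication by the Jacobian determinant of the coordinate change:
\begin{align*}
\Lambda_{M_{0j}}^{1,2}(b_{jk}(z_k))=\Lambda_{M_{0k}}^{1,2}(z_k)\det\left(\frac{\partial b_{jk}^{\alpha}}{\partial z_k^{\beta}}\right)_{\alpha,\beta=1,2}.
\end{align*}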
\subsection{Construction of Formal Power Series}\
As in Remark \ref{remark55}, we have to define a holomorphic map $h:s\to t=h(s)$ with $h(0)=0$ of $\Delta=\{s\in D||s|<\epsilon\}$ into $B$ for a sufficiently small $\epsilon>0$, and to extend the identity $g_0:(N_0,\Lambda_0)\to (M_0=N_0,\Lambda_0)$ to a Poisson holomorphic map $g:\pi^{-1}(\Delta)=(\mathcal{N}_{\Delta},\Lambda_{N_{\Delta}})\to (\mathcal{M},\Lambda_{\mathcal{M}})$ such that $\omega\circ g=h\circ \pi$.
We begin with constructing a formal power series $h(s)=\sum_{v=1}^\infty h_v(s)$ of $s_1,...,s_l$, where $h_v(s)$ is a homogeneous polynomial of degree $v$ in $s_1,...,s_l$, and formal power series $g_j(z_j,s)=z_j+\sum_{v=1}^\infty g_{j|v}(z_j,s)$ in terms of $s_1,...,s_l$ for each $U_j$ in (\ref{covering}), whose coefficients are vector-valued holomorphic functions on $U_j$, where $g_{j|v}(z_j,s)=\sum_{v_1+\cdots+v_l=v} g_{jv_1\cdots v_l}(z_j)s_1^{v_1}\cdots s_l^{v_l}$
is a homogeneous polynomial of degree $v$ in $s_1,...,s_l$, and each component $g_{jv_1\cdots v_l}^{\alpha}(z_j),\alpha=1,...,n,$ of the coefficient $
g_{jv_1\cdots v_l}(z_j)=(g_{jv_1\cdots v_l}^{1}(z_j),...,g_{jv_1\cdots v_l}^{n}(z_j))$ is a holomorphic function of $z_j^1,...,z_j^n$ defined on $U_j$. The formal power series $h(s)$ and $g_j(z_j,s)$ will satisfy
\begin{align}
g_j(f_{jk}(z_k,s),s)=g_{jk}(g_k(z_k,s),h(s)) \,\,\,\,\,\text{on}\,\,\,U_j\cap U_k\ne \emptyset \label{aa90}\\
\Lambda_{M_j}^{r,s}(g_j(z_j,s),h(s))=\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial g_j^r}{\partial z_j^{\alpha}}\frac{\partial g_j^s}{\partial z_j^{\beta}}\,\,\,\,\,\text{on}\,\,\,U_j.\label{aa91}
\end{align}
For the meaning of (\ref{aa90}), we refer to \cite{Kod05} p.286-288. (\ref{aa90}) is a crucial condition for the proof of `Theorem of completeness for complex analytic structures' (Theorem \ref{kodairacomplete}). However, in order to prove `Theorem of completeness for holomorphic Poisson structures' (Theorem \ref{complete9}), we need to impose the additional condition $(\ref{aa91})$, which means that $g_j(z_j,s)$ is a Poisson map.
We will write
\begin{align*}
h^v(s)&:=h_1(s)+\cdots+h_v(s).\\
g_j^v(z_j,s)&:=z_j+g_{j|1}(z_j,s)+\cdots +g_{j|v}(z_j,s).
\end{align*}
The equalities (\ref{aa90}) and (\ref{aa91}) are equivalent to the following system of infinitely many congruences:
\begin{align}\label{aa11}
g_j^v(f_{jk}(z_k,s),s)&\equiv_v g_{jk}(g_k^v(z_k,s),h^v(s))\\
\Lambda_{M_j}^{r,s}(g_j^v(z_j,s),h^v(s))&\equiv_{v}\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^v}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^v}{\partial z_j^{\beta}}\label{aa12}
\end{align}
for $v=0,1,2,3,...$ where we indicate by $\equiv_v$ that the power series expansions with respect to $s$ of both sides of (\ref{aa11}) and (\ref{aa12}) coincide up to the term of degree $v$.
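For instance (spelling out the notation in the one-variable case $l=1$): for power series $P(s)=\sum_{\mu\geq 0}P_{\mu}s^{\mu}$ and $Q(s)=\sum_{\mu\geq 0}Q_{\mu}s^{\mu}$, the congruence $P(s)\equiv_v Q(s)$ simply means
\begin{align*}
P_{\mu}=Q_{\mu}\,\,\,\,\,\text{for}\,\,\,\mu=0,1,\dots,v.
\end{align*}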
We will construct $h^v(s), g_j^v(z_j,s)$ satisfying $(\ref{aa11})_v$ and $(\ref{aa12})_v$ inductively on $v$. Then the resulting formal power series $h(s)$ and $g_j(z_j,s)$ will satisfy $(\ref{aa90})$ and $(\ref{aa91})$. For $v=0$, since $h^0(s)=0$ and $g_j^0(z_j,s)=z_j$, $(\ref{aa11})_0$ and $(\ref{aa12})_0$ hold by $(\ref{ii33}),(\ref{ii34})$. Now suppose that $h^{v-1}(s)$ and $g_j^{v-1}(z_j,s)$ are already constructed in such a manner that, for each $U_j\cap U_k\ne \emptyset$,
\begin{align}
g_j^{v-1}(f_{jk}(z_k,s),s)\equiv_{v-1} g_{jk}(g_k^{v-1}(z_k,s),h^{v-1}(s))
\end{align}
and for each $U_j$,
\begin{align}\label{aa33}
\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))\equiv_{v-1}\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{\beta}}
\end{align}
hold. We will find $h_{v}(s)$ and $g_{j|v}(z_j,s)$ such that $h^v(s)=h^{v-1}(s)+h_v(s)$, and $g_j^v(z_j,s)=g_j^{v-1}(z_j,s)+g_{j|v}(z_j,s)$ satisfy $(\ref{aa11})_v$ on each $U_j\cap U_k$ and $(\ref{aa12})_v$ on each $U_j$.
For this purpose, we start from finding the equivalent conditions to $(\ref{aa11})_v$ and $(\ref{aa12})_v$, and then interpret them cohomologically by using the \v{C}ech resolution of the complex of sheaves (\ref{complex}) with respect to the open covering $(\ref{covering})$ of $M_0=N_0$ (see Lemma \ref{lemmai} below).
For the equivalent condition to $(\ref{aa11})_v$, we briefly summarize Kodaira's result as follows: if we let $\Gamma_{jk|v}$ denote the sum of the terms of degree $v$ of $g_j^{v-1}(f_{jk}(z_k,s),s)-g_{jk}(g_k^{v-1}(z_k,s),h^{v-1}(s))$:
\begin{align}\label{tt002}
\Gamma_{jk|v}(z_j,s)\equiv_v g_j^{v-1}(f_{jk}(z_k,s),s)-g_{jk}(g_k^{v-1}(z_k,s),h^{v-1}(s)),
\end{align}
then $(\ref{aa11})_v$ is equivalent to the following:
\begin{align}\label{vv9}
\Gamma_{jk|v}(z_j,s)=\sum_{\beta=1}^n \frac{\partial z_j}{\partial z_k^{\beta}}g_{k|v}^{\beta}(z_k,s)-g_{j|v}(z_j,s)+\sum_{u=1}^m \left(\frac{\partial g_{jk}(z_k,t)}{\partial t_u}\right)_{t=0} h_{u|v}(s)
\end{align}
where $z_k$ and $z_j=b_{jk}(z_k)$ are the local coordinates of the same point of $N_0=M_0$ (for details, see \cite{Kod05} p.289-290).
On the other hand, let's find the equivalent condition to $(\ref{aa12})_v$. We note that
\begin{align}\label{aa13}
\Lambda_{M_j}^{r,s}(g_j^v(z_j,s),h^v(s))=\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s)+g_{j|v}(z_j,s),h^{v-1}(s)+h_v(s))
\end{align}
By expanding $\Lambda_{M_j}^{r,s}(\xi_j+\xi, t+\omega)$ into power series of $\xi^1,...,\xi^n,\omega_1,...,\omega_m$, we obtain
\begin{align}\label{aa14}
\Lambda_{M_j}^{r,s}(\xi_j+\xi,t+\omega)=\Lambda_{M_j}^{r,s}(\xi_j,t)+\sum_{\beta=1}^n\frac{\partial \Lambda_{M_j}^{r,s}}{\partial \xi_j^{\beta}}(\xi_j,t)\xi^{\beta}+\sum_{u=1}^m \frac{\partial \Lambda_{M_j}^{r,s}}{\partial t_u}(\xi_j,t)\omega_u+\cdots
\end{align}
where $\cdots$ denotes the terms of degree $\geq 2$ in $\xi^1,...,\xi^n,\omega_1,...,\omega_m$. Let's consider the left hand side of $(\ref{aa12})_v$. Then from $(\ref{aa13})$, $(\ref{aa14})$, and $(\ref{ii34})$, we have
\begin{align}\label{aa30}
&\Lambda_{M_j}^{r,s}(g_j^v(z_j,s),h^v(s))-\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))\\ &\equiv_v \sum_{\beta=1}^n\frac{\partial \Lambda_{M_j}^{r,s}}{\partial \xi_j^{\beta}}(g_j^{v-1}(z_j,s),h^{v-1}(s))g_{j|v}^{\beta}(z_j,s)+\sum_{u=1}^m \frac{\partial \Lambda_{M_j}^{r,s}}{\partial t_u}(g_j^{v-1}(z_j,s),h^{v-1}(s))h_{u|v}(s)\notag\\
&\equiv_v \sum_{\beta=1}^n\frac{\partial \Lambda_{M_j}^{r,s}}{\partial \xi_j^{\beta}}(g_j^{v-1}(z_j,0),h^{v-1}(0))g_{j|v}^{\beta}(z_j,s)+\sum_{u=1}^m \frac{\partial \Lambda_{M_j}^{r,s}}{\partial t_u}(g_j^{v-1}(z_j,0),h^{v-1}(0))h_{u|v}(s)\notag\\
&=\sum_{\beta=1}^n\frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}g_{j|v}^{\beta}(z_j,s)+\sum_{u=1}^m \left( \frac{\partial\Lambda_{M_{j}}^{r,s}(z_j,t)}{\partial t_u}\right)_{t=0} h_{u|v}(s)\notag
\end{align}
On the other hand, let's consider the right hand side of $(\ref{aa12})_v$. Then from $(\ref{ii34})$, we have
\begin{align}\label{aa31}
&\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v}}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v}}{\partial z_j^{\beta}}=\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial ({g_j^r}^{v-1}+g_{j|v}^r)}{\partial z_j^{\alpha}}\frac{\partial ({g_j^s}^{v-1}+g_{j|v}^s)}{\partial z_j^{\beta}}\\
&\equiv_v \sum_{\alpha,\beta=1}^n\Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{\beta}}+\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial g_{j|v}^s}{\partial z_j^{\beta}}+\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial g_{j|v}^r}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{\beta}}\notag\\
&\equiv_v \sum_{\alpha,\beta=1}^n\Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{\beta}}+\sum_{\beta=1}^n\Lambda_{M_{0j}}^{r,\beta}(z_j)\frac{\partial g_{j|v}^s}{\partial z_j^{\beta}}+\sum_{\alpha=1}^n \Lambda_{M_{0j}}^{\alpha,s} (z_j)\frac{\partial g_{j|v}^r}{\partial z_j^{\alpha}}\notag
\end{align}
Then from $(\ref{aa30})$ and $(\ref{aa31})$, the congruence $(\ref{aa12})_v$ is equivalent to the following:
\begin{align}\label{aa32}
&-\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))+\sum_{\alpha,\beta=1}^n\Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{\beta}}\\
&\equiv_v \sum_{\beta=1}^n\frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}g_{j|v}^{\beta}(z_j,s)+\sum_{u=1}^m \left( \frac{\partial\Lambda_{M_{j}}^{r,s}(z_j,t)}{\partial t_u}\right)_{t=0} h_{u|v}(s)-\sum_{\beta=1}^n\Lambda_{M_{0j}}^{r,\beta}(z_j)\frac{\partial g_{j|v}^s}{\partial z_j^{\beta}}-\sum_{\alpha=1}^n \Lambda_{M_{0j}}^{\alpha,s} (z_j)\frac{\partial g_{j|v}^r}{\partial z_j^{\alpha}}\notag
\end{align}
By induction hypothesis (\ref{aa33}), the left hand side of (\ref{aa32}) $\equiv_{v-1} 0$. Hence if we let $\lambda_{j|v}^{r,s}$ denote the terms of degree $v$ of the left hand side of (\ref{aa32}), we have
\begin{align}\label{aa35}
\lambda_{j|v}^{r,s} (z_j,s) \equiv_v -\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))+\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{\beta}}
\end{align}
Hence from $(\ref{aa32})$ and $(\ref{aa35})$, the congruence $(\ref{aa12})_v$ is equivalent to the following:
\begin{align}\label{aa34}
\lambda_{j|v}^{r,s}(z_j,s)=\sum_{\beta=1}^n\frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}g_{j|v}^{\beta}(z_j,s)+\sum_{u=1}^m \left( \frac{\partial\Lambda_{M_{j}}^{r,s}(z_j,t)}{\partial t_u}\right)_{t=0} h_{u|v}(s)-\sum_{\beta=1}^n \Lambda_{M_{0j}}^{r,\beta}(z_j)\frac{\partial g_{j|v}^s}{\partial z_j^{\beta}}-\sum_{\alpha=1}^n \Lambda_{M_{0j}}^{\alpha,s} (z_j)\frac{\partial g_{j|v}^r}{\partial z_j^{\alpha}}
\end{align}
where $z_k$ and $z_j=b_{jk}(z_k)$ are the local coordinates of the same point of $N_0=M_0$. We note that $\lambda_{j|v}^{r,s}(z_j,s)=-\lambda_{j|v}^{s,r}(z_j,s)$.
As in \cite{Kod05} p.291, to interpret the meaning of $(\ref{vv9})_v$ and $(\ref{aa34})_v$ in terms of the \v{C}ech resolution of the complex of sheaves $(\ref{complex})$ with respect to the open covering $(\ref{covering})$ of $M_0=N_0$, we introduce holomorphic vector fields and bivector fields as follows:
\begin{align}
\theta_{ujk}&=\sum_{\alpha=1}^n \theta_{ujk}^{\alpha}(z_j)\frac{\partial}{\partial z_j^{\alpha}}=\sum_{\alpha=1}^n \left(\frac{\partial g_{jk}^{\alpha}(z_k,t)}{\partial t_u}\right)_{t=0}\frac{\partial}{\partial z_j^{\alpha}},\,\,\,\,\, z_j=b_{jk}(z_k)\label{p1}\\
\Lambda_{uj}'&=\sum_{r,s=1}^n \Lambda_{uj}'^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge\frac{\partial}{\partial z_j^s}:=\sum_{r,s=1}^n \left( \frac{\partial\Lambda_{M_{j}}^{r,s}(z_j,t)}{\partial t_u}\right)_{t=0} \frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}\label{p2}\\
\Gamma_{jk|v}(s)&=\sum_{\alpha=1}^n \Gamma_{jk|v}^{\alpha}(z_j,s)\frac{\partial}{\partial z_j^{\alpha}}\label{p3}\\
g_{k|v}(s)&=\sum_{\beta=1}^n g_{k|v}^{\beta}(z_k,s)\frac{\partial}{\partial z_k^{\beta}}\label{p4}\\
\lambda_{j|v}(s)&=\sum_{r,s=1}^n \lambda_{j|v}^{r,s}(z_j,s)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}\label{p5}
\end{align}
By $(\ref{covering})$, $\mathcal{U}:=\{U_j\}$ is a finite open covering of $M_0=N_0$. Since we assume that $\xi_j^{\alpha}=z_j^{\alpha},\alpha=1,...,n$ in subsection \ref{preli}, the 1-cocycle $(\{\Lambda_{uj}'\},\{\theta_{ujk}\})\in C^0(\mathcal{U},\wedge^2\Theta_{M_0})\oplus C^1(\mathcal{U},\Theta_{M_0})$ in $(\ref{p1})$ and $(\ref{p2})$ represents the infinitesimal Poisson deformation $(\Lambda_u',\theta_u)=\varphi_0(\frac{\partial}{\partial t_u}) \in \mathbb{H}^1(M_0,\Theta_{M_0}^\bullet)$, where $\varphi_0$ is the Poisson Kodaira-Spencer map of the Poisson analytic family $(\mathcal{M},\Lambda_{\mathcal{M}},B, \omega)$ (see Proposition \ref{gg} and Definition \ref{mapping}). Since the coefficients $\Gamma_{jkv_1\cdots v_l}$ of the homogeneous polynomial $\Gamma_{jk|v}(s)=\sum_{v_1+\cdots +v_l=v} \Gamma_{jkv_1\cdots v_l} s_1^{v_1}\cdots s_l^{v_l}$ are holomorphic vector fields on $U_j \cap U_k$, $\{\Gamma_{jk|v}(s)\}=\sum_{v_1+\cdots +v_l=v} \{\Gamma_{jkv_1\cdots v_l} \} s_1^{v_1}\cdots s_l^{v_l}$ is a homogeneous polynomial of degree $v$ whose coefficients are $\{\Gamma_{jkv_1\cdots v_l}\}\in C^1(\mathcal{U},\Theta_{M_0})$. Since the coefficients $\lambda_{jv_1\cdots v_l}$ of the homogeneous polynomial $\lambda_{j|v}(s)=\sum_{v_1+\cdots +v_l=v} \lambda_{jv_1\cdots v_l} s_1^{v_1}\cdots s_l^{v_l}$ are holomorphic bivector fields on $U_j$, $\{\lambda_{j|v}(s)\}=\sum_{v_1+\cdots +v_l=v} \{\lambda_{jv_1\cdots v_l}\}s_1^{v_1}\cdots s_l^{v_l}$ is a homogeneous polynomial of degree $v$ whose coefficients are $\{\lambda_{jv_1\cdots v_l}\}\in C^0(\mathcal{U},\wedge^2 \Theta_{M_0})$. Similarly $\{g_{j|v}(s)\}=\sum_{v_1+\cdots +v_l=v} \{g_{jv_1\cdots v_l} \} s_1^{v_1}\cdots s_l^{v_l}$ is a homogeneous polynomial of degree $v$ whose coefficients are $\{g_{jv_1\cdots v_l}\}\in C^0(\mathcal{U},\Theta_{M_0})$. We claim that
\begin{lemma}\label{lemmai}
The following equation holds
\begin{align}\label{hu7}
(\{\lambda_{j|v}(s)\},\{\Gamma_{jk|v}(s)\})=\sum_{u=1}^m h_{u|v}(s)(\{\Lambda_{uj}'\},\{\theta_{ujk}\})-\delta_{HP}(\{g_{j|v}(s)\})
\end{align}
where $\delta_{HP}(\{g_{j|v}(s)\}):=\left(\{[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\,g_{j|v}(s)]\},\,-\delta(\{g_{j|v}(s)\})\right)$ with $-\delta(\{g_{j|v}(s)\})=\{g_{j|v}(s)-g_{k|v}(s)\}$. Here $\delta$ is the \v{C}ech coboundary map.
\end{lemma}
\begin{proof}
First, we have $\{\Gamma_{jk|v}(s)\}=\sum_{u=1}^m h_{u|v}(s)\{\theta_{ujk}\}+\delta\{g_{j|v}(s)\}$ (see \cite{Kod05} p.291).
It remains to show that $\{\lambda_{j|v}(s)\}=\sum_{u=1}^m h_{u|v}(s)\{\Lambda'_{uj}\}-\{[\sum_{r,s} \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},g_{j|v}(s)]\}$.
Indeed,
\begin{align*}
&\sum_{u=1}^m h_{u|v}(s)\Lambda'_{uj}-\sum_{r,s,\beta=1}^n[\Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}, g_{j|v}^{\beta}(z_j,s)\frac{\partial}{\partial z_j^{\beta}}]\\
&=\sum_{u=1}^m h_{u|v}(s)\Lambda'_{uj}-\sum_{r,s,\beta=1}^n \Lambda_{M_{0j}}^{r,s}\frac{\partial g_{j|v}^{\beta}}{\partial z_j^r}\frac{\partial}{\partial z_j^{\beta}}\wedge \frac{\partial}{\partial z_j^s}+\sum_{r,s,\beta=1}^n g_{j|v}^{\beta}\frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}-\sum_{r,s,\beta=1}^n \Lambda_{M_{0j}}^{r,s}\frac{\partial g_{j|v}^{\beta}}{\partial z_j^s}\frac{\partial }{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^{\beta}}\\
&=\sum_{u=1}^m h_{u|v}(s)\Lambda'_{uj}-\sum_{r,s,\beta=1}^n \Lambda_{M_{0j}}^{\beta,s}\frac{\partial g_{j|v}^{r}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^{r}}\wedge \frac{\partial}{\partial z_j^s}+\sum_{r,s,\beta=1}^n g_{j|v}^{\beta}\frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}-\sum_{r,s,\beta=1}^n \Lambda_{M_{0j}}^{r,\beta}\frac{\partial g_{j|v}^{s}}{\partial z_j^{\beta}}\frac{\partial }{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}\\
&=\lambda_{j|v}(s)
\end{align*}
by $(\ref{aa34})$, $(\ref{p2})$ and $(\ref{p5})$.
\end{proof}
Thus in order to construct $h^v(s)=h^{v-1}(s)+h_v(s)$ and $g_j^v(z_j,s)=g_j^{v-1}(z_j,s)+g_{j|v}(z_j,s)$ so that $(\ref{aa11})_v$ and $(\ref{aa12})_v$ hold, it suffices to obtain solutions $h_{u|v}(s),u=1,...,m$, and $\{g_{j|v}(s)\}$ of the equation (\ref{hu7}).
If solutions $h_{u|v}(s),u=1,...,m,\{g_{j|v}(s)\}$ exist, from (\ref{hu7}), we have
\begin{equation}\label{equ5}
\begin{cases}
[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}, \lambda_{j|v}(s)]=0\\
\lambda_{k|v}(s)-\lambda_{j|v}(s)+[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\Gamma_{jk|v}(s)]=0\\
\Gamma_{jk|v}(s)-\Gamma_{ik|v}(s)+\Gamma_{ij|v}(s)=0
\end{cases}
\end{equation}
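In other words, the system $(\ref{equ5})$ says that $(\{\lambda_{j|v}(s)\},\{\Gamma_{jk|v}(s)\})$ is a $1$-cocycle: writing $\delta$ for the \v{C}ech coboundary map and $\Lambda_0$ for the holomorphic Poisson structure of $M_0$, the three equations read
\begin{align*}
[\Lambda_0,\{\lambda_{j|v}(s)\}]=0,\,\,\,\,\,\delta(\{\lambda_{j|v}(s)\})+[\Lambda_0,\{\Gamma_{jk|v}(s)\}]=0,\,\,\,\,\,\delta(\{\Gamma_{jk|v}(s)\})=0,
\end{align*}
so that each coefficient of $(\{\lambda_{j|v}(s)\},\{\Gamma_{jk|v}(s)\})$ defines a cohomology class in $\mathbb{H}^1(M_0,\Theta_{M_0}^\bullet)$.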
Conversely,
\begin{lemma}\label{lemma10}
If $(\{\lambda_{j|v}(s)\},\{\Gamma_{jk|v}(s)\})$ satisfies $(\ref{equ5})$, then
\begin{equation}\label{eq22}
(\{\lambda_{j|v}(s)\},\{\Gamma_{jk|v}(s)\})=\sum_{u=1}^m h_{u|v}(s)(\{\Lambda_{uj}'\},\{\theta_{ujk}\})-\delta_{HP}(\{g_{j|v}(s)\})
\end{equation}
has solutions $h_{u|v}(s),u=1,...,m,\{g_{j|v}(s)\}$ when the Poisson Kodaira-Spencer map $\varphi_0:T_0(B)\to \mathbb{H}^1(M_0,\Theta_{M_0}^\bullet)$ is surjective.
\end{lemma}
\begin{proof}
Let $h_{u|v}(s)=\sum_{v_1+\cdots +v_l=v} h_{uv_1\cdots v_l} s_1^{v_1}\cdots s_l^{v_l}$. Then by considering the coefficients of $s_1^{v_1}\cdots s_l^{v_l}$, $(\ref{eq22})$ can be written as
\begin{align*}
(\{\lambda_{jv_1\cdots v_l}\},\{\Gamma_{jkv_1\cdots v_l}\})=\sum_{u=1}^m h_{uv_1\cdots v_l}(\{\Lambda_{uj}'\},\{\theta_{ujk}\})-\delta_{HP}(\{g_{jv_1\cdots v_l}\}).
\end{align*}
Thus it suffices to prove that any $1$-cocycle $(\{\lambda_j\},\{\Gamma_{jk}\})\in C^0(\mathcal{U},\wedge^2 \Theta_{M_0})\oplus C^1(\mathcal{U},\Theta_{M_0})$ such that $[\Lambda_0,\lambda_j]=0, \lambda_k-\lambda_j+[\Lambda_0,\Gamma_{jk}]=0,\Gamma_{jk}-\Gamma_{ik}+\Gamma_{ij}=0$ can be written in the form
\begin{align*}
(\{\lambda_{j}\},\{\Gamma_{jk}\})=\sum_{u=1}^m h_{u}(\{\Lambda_{uj}'\},\{\theta_{ujk}\})-\delta_{HP}(\{g_{j}\}),\,\,\,\,\,\text{for some}\,\,\, h_u\in \mathbb{C}, \{g_j\}\in C^0(\mathcal{U},\Theta_{M_0}).
\end{align*}
Let $(\eta,\gamma)\in \mathbb{H}^1(M_0,\Theta_{M_0}^\bullet)$ be the cohomology class of $(\{\lambda_j\},\{\Gamma_{jk}\})$. Since $\varphi_0:T_0(B)\to \mathbb{H}^1(M_0,\Theta_{M_0}^\bullet)$ is surjective, $(\eta,\gamma)$ is written as a linear combination of the $(\Lambda'_u,\theta_u)$ $($the cohomology classes of $(\{\Lambda'_{uj}\},\{\theta_{ujk}\})$, $u=1,...,m$, in $(\ref{p1})$, $(\ref{p2})$$)$ as
\begin{align*}
(\eta,\gamma)=\sum_{u=1}^m h_u(\Lambda_u',\theta_u),\,\,\,\,\, h_u\in \mathbb{C}
\end{align*}
So $\sum_{u=1}^m h_u(\{\Lambda'_{uj}\},\{\theta_{ujk}\})$ is cohomologous to $(\{\lambda_j\},\{\Gamma_{jk}\})$. Therefore there exists $\{g_j\}\in C^0(\mathcal{U},\Theta_{M_0})$ such that $\delta_{HP}(\{g_j\})=\sum_{u=1}^m h_u(\{\Lambda'_{uj}\},\{\theta_{ujk}\})-(\{\lambda_j\},\{\Gamma_{jk}\}).$
\end{proof}
Next we will prove that
\begin{lemma}\label{lemma11}
$(\{\lambda_{j|v}(s)\},\{\Gamma_{jk|v}(s)\})$ satisfies $(\ref{equ5})$.
\end{lemma}
\begin{proof}
First, we have $\Gamma_{jk|v}(s)-\Gamma_{ik|v}(s)+\Gamma_{ij|v}(s)=0$ (see \cite{Kod05} p.292). Second, we show that $[\Lambda_0, \lambda_{j|v}(s)]=0$. We note that for $\Pi\in \Gamma(U_j,\wedge^3 \Theta_{M_0})$, we have $\Pi=0$ if and only if $\Pi(z_j^a,z_j^b,z_j^c):=\Pi(dz_j^a\wedge dz_j^b\wedge dz_j^c)=0$ for any $a,b,c$. Then from $[\Lambda_{N_j},\Lambda_{N_j}]=0$, $[\Lambda_{M_j},\Lambda_{M_j}]=0$, and Lemma \ref{formula},
{\small{\begin{align*}
&[\Lambda_0, \lambda_{j|v}](z_j^a,z_j^b,z_j^c)\\
&=\Lambda_0(\lambda_{j|v}(z_j^a,z_j^b), z_j^c)-\Lambda_0(\lambda_{j|v}(z_j^a,z_j^c), z_j^b)+\Lambda_0(\lambda_{j|v}(z_j^b,z_j^c), z_j^a)\\
&+\lambda_{j|v}(\Lambda_0(z_j^a,z_j^b),z_j^c)-\lambda_{j|v}(\Lambda_0(z_j^a,z_j^c),z_j^b)+\lambda_{j|v}(\Lambda_0(z_j^b,z_j^c),z_j^a)\\
&\equiv_v \Lambda_{N_j}(\lambda_{j|v}(z_j^a,z_j^b), g_j^{cv-1})-\Lambda_{N_j}(\lambda_{j|v}(z_j^a,z_j^c), g_j^{bv-1})+\Lambda_{N_j}(\lambda_{j|v}(z_j^b,z_j^c), g_j^{av-1})\\
&+\lambda_{j|v}(\Lambda_{M_j}(z_j^a,z_j^b),z_j^c)-\lambda_{j|v}(\Lambda_{M_j}(z_j^a,z_j^c),z_j^b)+\lambda_{j|v}(\Lambda_{M_j}(z_j^b,z_j^c),z_j^a)\\
&\equiv_v -2\Lambda_{N_j}(\Lambda_{M_j}^{a,b}(g_j^{v-1},h^{v-1}),g_j^{cv-1})+\Lambda_{N_j}(\Lambda_{N_j}(g_j^{av-1},g_j^{bv-1}),g_j^{cv-1})\\
&+2\Lambda_{N_j}(\Lambda_{M_j}^{a,c}(g_j^{v-1},h^{v-1}),g_j^{bv-1})-\Lambda_{N_j}(\Lambda_{N_j}(g_j^{av-1},g_j^{cv-1}),g_j^{bv-1})\\
&-2\Lambda_{N_j}(\Lambda_{M_j}^{b,c}(g_j^{v-1},h^{v-1}),g_j^{av-1})+\Lambda_{N_j}(\Lambda_{N_j}(g_j^{bv-1},g_j^{cv-1}),g_j^{av-1})\\
&+2\lambda_{j|v}(\Lambda_{M_j}^{a,b}(z_j,s),z_j^c)-2\lambda_{j|v}(\Lambda_{M_j}^{a,c}(z_j,s),z_j^b)+2\lambda_{j|v}(\Lambda_{M_j}^{b,c}(z_j,s),z_j^a)\\
&\equiv_v -2\Lambda_{N_j}(\Lambda_{M_j}^{a,b}(g_j^{v-1},h^{v-1}),g_j^{cv-1})
+2\Lambda_{N_j}(\Lambda_{M_j}^{a,c}(g_j^{v-1},h^{v-1}),g_j^{bv-1})
-2\Lambda_{N_j}(\Lambda_{M_j}^{b,c}(g_j^{v-1},h^{v-1}),g_j^{av-1})\\
&+4\sum_{r=1}^n \left( -\Lambda_{M_j}^{r,c}(g_j^{v-1},h^{v-1})+\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial g_j^{rv-1}}{\partial z_j^\alpha}\frac{\partial g_j^{cv-1}}{\partial z_j^\beta} \right)\frac{\partial \Lambda_{M_j}^{a,b}}{\partial z_j^r}(g_j^{v-1},h^{v-1})\\
&-4\sum_{r=1}^n \left( -\Lambda_{M_j}^{r,b}(g_j^{v-1},h^{v-1})+\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial g_j^{rv-1}}{\partial z_j^\alpha}\frac{\partial g_j^{bv-1}}{\partial z_j^\beta} \right)\frac{\partial \Lambda_{M_j}^{a,c}}{\partial z_j^r}(g_j^{v-1},h^{v-1})\\
&+4\sum_{r=1}^n \left( -\Lambda_{M_j}^{r,a}(g_j^{v-1},h^{v-1})+\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial g_j^{rv-1}}{\partial z_j^\alpha}\frac{\partial g_j^{av-1}}{\partial z_j^\beta} \right)\frac{\partial \Lambda_{M_j}^{b,c}}{\partial z_j^r}(g_j^{v-1},h^{v-1})\\
&\equiv_v -2\Lambda_{N_j}(\Lambda_{M_j}^{a,b}(g_j^{v-1},h^{v-1}),g_j^{cv-1})
+2\Lambda_{N_j}(\Lambda_{M_j}^{a,c}(g_j^{v-1},h^{v-1}),g_j^{bv-1})
-2\Lambda_{N_j}(\Lambda_{M_j}^{b,c}(g_j^{v-1},h^{v-1}),g_j^{av-1})\\
&-4\sum_{r=1}^n \left( \Lambda_{M_j}^{r,c}(g_j^{v-1},h^{v-1}) \frac{\partial \Lambda_{M_j}^{a,b}}{\partial z_j^r}(g_j^{v-1},h^{v-1}) - \Lambda_{M_j}^{r,b}(g_j^{v-1},h^{v-1}) \frac{\partial \Lambda_{M_j}^{a,c}}{\partial z_j^r}(g_j^{v-1},h^{v-1}) + \Lambda_{M_j}^{r,a}(g_j^{v-1},h^{v-1}) \frac{\partial \Lambda_{M_j}^{b,c}}{\partial z_j^r}(g_j^{v-1},h^{v-1}) \right)\\
&+4\sum_{\alpha,\beta=1}^n\Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial \Lambda_{M_j}^{a,b}(g_j^{v-1},h^{v-1})}{\partial z_j^\alpha}\frac{\partial g_j^{cv-1}}{\partial z_j^\beta}-4\sum_{\alpha,\beta=1}^n\Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial \Lambda_{M_j}^{a,c}(g_j^{v-1},h^{v-1})}{\partial z_j^\alpha}\frac{\partial g_j^{bv-1}}{\partial z_j^\beta}\\
&+4\sum_{\alpha,\beta=1}^n\Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial \Lambda_{M_j}^{b,c}(g_j^{v-1},h^{v-1})}{\partial z_j^\alpha}\frac{\partial g_j^{av-1}}{\partial z_j^\beta}\\
&=0
\end{align*}}}
Next we will show that
\begin{align}\label{eq45}
\lambda_{k|v}(s)-\lambda_{j|v}(s)+[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\Gamma_{jk|v}(s)]=0.
\end{align}
First we compute the third term of (\ref{eq45})
\begin{align}\label{equ46}
&[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\Gamma_{jk|v}(s)]=\sum_{r,s,\beta=1}^n [\Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\Gamma_{jk|v}^{\beta}\frac{\partial}{\partial z_j^{\beta}}]\\
&=\sum_{r,s,\beta=1}^n \left(\Lambda_{M_{0j}}^{r,s}\frac{\partial \Gamma_{jk|v}^{\beta}}{\partial z_j^{r}}\frac{\partial}{\partial z_j^{\beta}}\wedge \frac{\partial}{\partial z_j^s}
-\Gamma_{jk|v}^{\beta} \frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}+\Lambda_{M_{0j}}^{r,s}\frac{\partial \Gamma_{jk|v}^{\beta}}{\partial z_j^s}\frac{\partial}{\partial z_j^{r}}\wedge \frac{\partial}{\partial z_j^{\beta}}\right)\notag\\
&=\sum_{r,s,\beta=1}^n \left( \Lambda_{M_{0j}}^{\beta,s}\frac{\partial \Gamma_{jk|v}^{r}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^{r}}\wedge \frac{\partial}{\partial z_j^s}
-\Gamma_{jk|v}^{\beta} \frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}+\Lambda_{M_{0j}}^{r,\beta}\frac{\partial \Gamma_{jk|v}^{s}}{\partial z_j^{\beta}}\frac{\partial}{\partial z_j^{r}}\wedge \frac{\partial}{\partial z_j^s}\right)
\notag
\end{align}
We consider the first term $\lambda_{k|v}(s)$ of ($\ref{eq45}$). From $(\ref{aa35})$ and $(\ref{ii34})$, we have
\begin{align}\label{eq47}
\lambda_{k|v}(s)&\equiv_v \sum_{p,q=1}^n\left(-\Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s), h^{v-1}(s))+\sum_{a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial {g_k^p}^{v-1}}{\partial z_k^a}\frac{\partial {g_k^q}^{v-1}}{\partial z_k^b}\right)\frac{\partial}{\partial z_k^p}\wedge\frac{\partial}{\partial z_k^q}\\
&\equiv_v \sum_{r,s=1}^n\left(\sum_{p,q=1}^n\left(-\Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s), h^{v-1}(s))+\sum_{a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial {g_k^p}^{v-1}}{\partial z_k^a}\frac{\partial {g_k^q}^{v-1}}{\partial z_k^b}\right)\frac{\partial b_{jk}^r}{\partial z_k^p} \frac{\partial b_{jk}^s}{\partial z_k^q}\right)\frac{\partial}{\partial z_j^r}\wedge\frac{\partial}{\partial z_j^s}\notag
\end{align}
We consider the second term $-\lambda_{j|v}(s)$ of (\ref{eq45}). We note that since $z_j=b_{jk}(z_k)=f_{jk}(z_k,0)$ from $(\ref{ii34})$, we have $\lambda_{j|v}(z_j,s)\equiv_v \lambda_{j|v}(f_{jk}(z_k,s),s)$. Then from $(\ref{aa35})$ and by induction hypothesis $(\ref{aa33})$, we have
{\small{\begin{align}\label{eq48}
&-\lambda_{j|v}(z_j,s)\equiv_v\sum_{r,s=1}^n\left(\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))-\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(z_j,s)\frac{\partial {g_j^{r}}^{v-1}}{\partial z_j^{\alpha}}\frac{\partial {g_j^{s}}^{v-1}}{\partial z_j^{\beta}}\right)\frac{\partial}{\partial z_j^{r}}\wedge \frac{\partial}{\partial z_j^s}\\
&\equiv_v \sum_{r,s=1}^n\left(\Lambda_{M_j}^{r,s}(g_j^{v-1}(f_{jk}(z_k,s),s),h^{v-1}(s))-\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(f_{jk}(z_k,s),s)\frac{\partial {g_j^{r}}^{v-1}}{\partial z_j^{\alpha}}(f_{jk}(z_k,s),s)\frac{\partial {g_j^{s}}^{v-1}}{\partial z_j^{\beta}}(f_{jk}(z_k,s),s)\right) \frac{\partial}{\partial z_j^{r}}\wedge \frac{\partial}{\partial z_j^s}\notag
\end{align}}}
We consider the first term of $(\ref{eq48})$. From $(\ref{tt002})$ and $(\ref{vv10})$, we have
\begin{align}\label{eq49}
&\Lambda_{M_j}^{r,s}(g_j^{v-1}(f_{jk}(z_k,s),s),h^{v-1}(s))\equiv_v \Lambda_{M_j}^{r,s}(g_{jk}(g_k^{v-1}(z_k,s),h^{v-1}(s))+\Gamma_{jk|v}(z_j,s),h^{v-1}(s))\\
&\equiv_v \Lambda_{M_j}^{r,s}(g_{jk}(g_k^{v-1}(z_k,s),h^{v-1}(s)),h^{v-1}(s))+\sum_{\beta=1}^n \frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\Gamma_{jk|v}^{\beta}(z_j,s)\notag\\
&=\sum_{p,q=1}^n \Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s),h^{v-1}(s))\frac{\partial g_{jk}^r}{\partial \xi_k^p}(g_k^{v-1}(z_k,s),h^{v-1}(s))\frac{\partial g_{jk}^s}{\partial \xi_k^q}(g_k^{v-1}(z_k,s),h^{v-1}(s))+\sum_{\beta=1}^n \frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\Gamma_{jk|v}^{\beta}(z_j,s)\notag
\end{align}
On the other hand, we consider the second term of $(\ref{eq48})$. We note that from $(\ref{vv11})$, $(\ref{tt002})$ and $(\ref{vv3})$, we have
\begin{align}\label{eq50}
&\sum_{\alpha,\beta=1}^n \Lambda_{N_j}^{\alpha,\beta}(f_{jk}(z_k,s),s)\frac{\partial {g_j^{r}}^{v-1}}{\partial z_j^{\alpha}}(f_{jk}(z_k,s),s)\frac{\partial {g_j^{s}}^{v-1}}{\partial z_j^{\beta}}(f_{jk}(z_k,s),s)\\
& =\sum_{\alpha,\beta,a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial f_{jk}^\alpha(z_k,s)}{\partial z_k^a}\frac{\partial f_{jk}^\beta(z_k,s)}{\partial z_k^b }\frac{\partial {g_j^{r}}^{v-1}}{\partial z_j^{\alpha}}(f_{jk}(z_k,s),s)\frac{\partial {g_j^{s}}^{v-1}}{\partial z_j^{\beta}}(f_{jk}(z_k,s),s)\notag\\
&=\sum_{a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial (g_j^{rv-1}(f_{jk}(z_k,s),s))}{\partial z_k^a}\frac{\partial (g_j^{sv-1}(f_{jk}(z_k,s),s))}{\partial z_k^b}\notag\\
&\equiv_v \sum_{a,b=1}^n\Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial(g_{jk}^r(g_k^{v-1}(z_k,s),h^{v-1}(s))+\Gamma_{jk|v}^r(z_j,s))}{\partial z_k^a}\frac{\partial (g_{jk}^s(g_k^{v-1}(z_k,s),h^{v-1}(s))+\Gamma_{jk|v}^s(z_j,s))}{\partial z_k^b}\notag\\
&\equiv_v\sum_{a,b,p,q=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_k^{pv-1}}{\partial z_k^a}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\frac{\partial g_{k}^{qv-1}}{\partial z_k^b}+\sum_{a,b=1}^n \Lambda_{M_{0k}}^{a,b}(z_k)\frac{\partial \Gamma_{jk|v}^r}{\partial z_k^a}\frac{\partial z_j^s}{\partial z_k^b}+\sum_{a,b=1}^n \Lambda_{M_{0k}}^{a,b}(z_k)\frac{\partial z_j^r}{\partial z_k^a}\frac{\partial \Gamma_{jk|v}^s}{\partial z_k^b}\notag\\
&\equiv_v\sum_{a,b,p,q=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_k^{pv-1}}{\partial z_k^a}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\frac{\partial g_{k}^{qv-1}}{\partial z_k^b}+\sum_{a,b,\beta=1}^n \Lambda_{M_{0k}}^{a,b}(z_k)\frac{\partial \Gamma_{jk|v}^r}{\partial z_j^\beta}\frac{\partial z_j^\beta}{\partial z_k^a}\frac{\partial z_j^s}{\partial z_k^b}+\sum_{a,b,\beta=1}^n\Lambda_{M_{0k}}^{a,b}(z_k)\frac{\partial z_j^r}{\partial z_k^a}\frac{\partial \Gamma_{jk|v}^s}{\partial z_j^\beta}\frac{\partial z_j^{\beta}}{\partial z_k^b}\notag\\
&\equiv_v\sum_{a,b,p,q=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_k^{pv-1}}{\partial z_k^a}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\frac{\partial g_{k}^{qv-1}}{\partial z_k^b}+\sum_{\beta=1}^n \Lambda_{M_{0j}}^{\beta,s}(z_j)\frac{\partial \Gamma_{jk|v}^r}{\partial z_j^\beta}+\sum_{\beta=1}^n \Lambda_{M_{0j}}^{r,\beta}(z_j)\frac{\partial \Gamma_{jk|v}^s}{\partial z_j^\beta}\notag
\end{align}
where $\frac{\partial g_{jk}^r}{\partial \xi_k^p}$ and $\frac{\partial g_{jk}^s}{\partial \xi_k^q}$ stand for
\begin{align}\label{tt89}
\frac{\partial g_{jk}^r}{\partial \xi_k^p}:=\frac{\partial g_{jk}^r}{\partial \xi_k^p}(g_k^{v-1}(z_k,s),h^{v-1}(s)),\,\,\,\,\,\frac{\partial g_{jk}^s}{\partial \xi_k^q}:=\frac{\partial g_{jk}^s}{\partial \xi_k^q}(g_k^{v-1}(z_k,s),h^{v-1}(s))
\end{align}
Hence from (\ref{eq48}),(\ref{eq49}), and (\ref{eq50}), we have
\begin{align}\label{eq51}
&-\lambda_{j|v}(s)\equiv_v \sum_{r,s=1}^n\left(\sum_{p,q=1}^n\Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s), h^{v-1}(s))\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_{jk}^s}{\partial \xi_k^q}+\sum_{\beta=1}^n \frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}\Gamma_{jk|v}^{\beta}(z_j,s)\right)\frac{\partial}{\partial z_j^r}\wedge\frac{\partial}{\partial z_j^s}\\
&+\sum_{r,s=1}^n\left(-\sum_{a,b,p,q=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_k^{pv-1}}{\partial z_k^a}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\frac{\partial g_{k}^{qv-1}}{\partial z_k^b}-\sum_{\beta=1}^n \Lambda_{M_{0j}}^{\beta,s}(z_j)\frac{\partial \Gamma_{jk|v}^r}{\partial z_j^\beta}-\sum_{\beta=1}^n\Lambda_{M_{0j}}^{r,\beta}(z_j)\frac{\partial \Gamma_{jk|v}^s}{\partial z_j^\beta}\right)\frac{\partial}{\partial z_j^r}\wedge\frac{\partial}{\partial z_j^s}\notag
\end{align}
From (\ref{equ46}), (\ref{eq47}) and $(\ref{eq51})$, showing (\ref{eq45}) is equivalent to showing that for each pair $(r,s)$,
\begin{align}\label{eq34}
&\sum_{p,q=1}^n\left(-\Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s), h^{v-1}(s))+\sum_{a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial {g_k^p}^{v-1}}{\partial z_k^a}\frac{\partial {g_k^q}^{v-1}}{\partial z_k^b}\right) \frac{\partial b_{jk}^r}{\partial z_k^p} \frac{\partial b_{jk}^s}{\partial z_k^q}\\
&+\sum_{p,q=1}^n \Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s),h^{v-1}(s))\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_{jk}^s}{\partial \xi_k^q}-\sum_{a,b,p,q=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_k^{pv-1}}{\partial z_k^a}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\frac{\partial g_{k}^{qv-1}}{\partial z_k^b}\equiv_v 0
\notag
\end{align}
$(\ref{eq34})$ is equivalent to
\begin{align}\label{poi}
\sum_{p,q=1}^n\left(-\Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s), h^{v-1}(s))+\sum_{a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial {g_k^p}^{v-1}}{\partial z_k^a}\frac{\partial {g_k^q}^{v-1}}{\partial z_k^b}\right)\left( \frac{\partial b_{jk}^r}{\partial z_k^p} \frac{\partial b_{jk}^s}{\partial z_k^q}-\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\right)\equiv_v 0
\end{align}
By induction hypothesis $(\ref{aa33})$, we have $\Lambda_{M_k}^{p,q}(g_k^{v-1}(z_k,s),h^{v-1}(s))\equiv_{v-1}\sum_{a,b=1}^n \Lambda_{N_k}^{a,b}(z_k,s)\frac{\partial {g_k^p}^{v-1}}{\partial z_k^{a}}\frac{\partial {g_k^q}^{v-1}}{\partial z_k^{b}}$. On the other hand, $\left( \frac{\partial b_{jk}^r}{\partial z_k^p} \frac{\partial b_{jk}^s}{\partial z_k^q}-\frac{\partial g_{jk}^r}{\partial \xi_k^p}\frac{\partial g_{jk}^s}{\partial \xi_k^q}\right)\equiv_0 0$: indeed, from $(\ref{tt89})$ we have $\frac{\partial g_{jk}^r}{\partial \xi_k^p}(g_k^{v-1}(z_k,0), h^{v-1}(0))=\frac{\partial g_{jk}^r}{\partial \xi_k^p}(z_k,0)=\frac{\partial b_{jk}^r}{\partial z_k^p}$, and similarly for $\frac{\partial g_{jk}^s}{\partial \xi_k^q}$. Hence we have (\ref{poi}). This completes the proof of Lemma \ref{lemma11}.
\end{proof}
\subsection{Proof of Convergence}\
By Lemma \ref{lemma10} and Lemma \ref{lemma11}, we can inductively find $h_{u|v}(s)$, $u=1,...,m$, and $\{g_{j|v}(z_j,s)\}$ such that $h_u^v(s)=h_u^{v-1}(s)+h_{u|v}(s)$ and $g_j^v(z_j,s)=g_j^{v-1}(z_j,s)+g_{j|v}(z_j,s)$ satisfy $(\ref{aa11})_v$ and $(\ref{aa12})_v$, so that we obtain formal power series $h(s)$ and $g_j(z_j,s)$ satisfying (\ref{aa90}) and (\ref{aa91}). In this subsection, we will prove that we can choose appropriate solutions $h_{u|v}(s)$ and $\{g_{j|v}(z_j,s)\}$ in each inductive step so that $h(s)$ and $g_j(z_j,s)$ converge absolutely in $|s|<\epsilon$ if $\epsilon>0$ is sufficiently small. As in \cite{Kod05} p.294-302, our approach is to estimate $\Gamma_{jk|v}(z_j,s)$ and $\lambda_{j|v}(s)$, and to use Lemma \ref{lemma3} below concerning the ``magnitude" of the solutions $h_{u|v}(s)$, $u=1,...,m$, and $\{g_{j|v}(z_j,s)\}$ of the equation $(\ref{hu7})$.
\begin{definition}
Let $\mathcal{U}:=\{U_j\}$ be a finite open covering of $M_0$ in $(\ref{covering})$. We may assume that $U_j=\{z_j\in\mathbb{C}^n\,|\,|z_j|<1\}$ and $M_0=\bigcup_j U_j^\delta$, where $U_j^\delta=\{z_j\in U_j\,|\,|z_j|<1-\delta\}$ for a sufficiently small number $\delta>0$. We denote by $(\lambda,\Gamma)$ a $1$-cocycle $(\{\lambda_j\},\{\Gamma_{jk}\})\in C^{0}(\mathcal{U},\wedge^2 \Theta_{M_0})\oplus C^1(\mathcal{U},\Theta_{M_0})$ in the \u{C}ech resolution of the complex of sheaves $(\ref{complex})$, and define its norm by
\begin{align}\label{yy3}
|(\lambda,\Gamma)|:=\max_j \sup_{z_j\in U_j^\delta} |\lambda_j(z_j)|+\max_{j,k}\sup_{z_j\in U_j\cap U_k}|\Gamma_{jk}(z_j)|
\end{align}
\end{definition}
\begin{remark}
We explain the meaning of $|\lambda_j(z_j)|$ and $|\Gamma_{jk}(z_j)|$ in $(\ref{yy3})$. We regard the holomorphic vector field $\Gamma_{jk}(z_j)=\sum_{\alpha=1}^n \Gamma_{jk}^{\alpha}(z_j) \frac{\partial}{\partial z_j^{\alpha}}$ as the vector-valued holomorphic function $(\Gamma_{jk}^1(z_j),...,\Gamma_{jk}^n(z_j))$, and the holomorphic bivector field $\lambda_j(z_j)=\sum_{r,s=1}^n \lambda_j^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}$ as the vector-valued holomorphic function $(\lambda_j^{1,1}(z_j),\cdots,\lambda_j^{r,s}(z_j),\cdots, \lambda_j^{n,n}(z_j))$. For $z_j\in U_j\cap U_k$, we define $|\Gamma_{jk}(z_j)|:=\max_\alpha |\Gamma_{jk}^\alpha(z_j)|$. On the other hand, for $z_j\in U_j^\delta$ we define $|\lambda_j(z_j)|:=\max_{r,s} |\lambda_j^{r,s}(z_j)|$.
\end{remark}
\begin{remark}
Since each $U_j=\{z_j\in \mathbb{C}^n||z_j|<1\}$ in $(\ref{covering})$ is a coordinate polydisk, we may assume that the coordinate function $z_j$ is defined on a domain of $M_0$ containing $\overline{U}_{j}$ $($the closure of $U_j$$)$. Hence there exists a constant $L_1>0$ such that for all $\alpha,\beta=1,...,n$, and for all $U_k\cap U_j\ne \emptyset$,
\begin{align}\label{hy1}
\left|\frac{\partial z_j^{\alpha}}{\partial z_k^{\beta}}(z_k)\right|=\left|\frac{\partial b_{jk}^\alpha}{\partial z_k^\beta}(z_k) \right|< L_1,\,\,\,\,\,z_k\in U_k\cap U_j,
\end{align}
and there exist constants $C,C'>0$ such that for all $r,s,\beta=1,...,n$ and for all $U_j$,
\begin{align}\label{ii99}
\left|\Lambda_{M_{0j}}^{r,s}(z_j)\right| < C,\,\,\,\,\,\, \left|\frac{\partial \Lambda_{M_{0j}}^{r,s}}{\partial z_j^{\beta}}(z_j)\right|<C',\,\,\,\,\,z_j\in U_j.
\end{align}
We define the norm of the matrix $B_{jk}(z_k):=\left(\frac{\partial z_j^{\alpha}}{\partial z_k^{\beta}}(z_k)\right)_{\alpha,\beta=1,...,n}$ by $|B_{jk}(z_k)|=\max_{\alpha}\sum_{\beta}\left|\frac{\partial z_j^{\alpha}}{\partial z_k^{\beta}}(z_k)\right|$. Then there exists a constant $K_1>1$ such that for all $U_j\cap U_k\ne \emptyset$,
\begin{align}\label{yt2}
|B_{jk}(z_k)|<K_1,\,\,\,\,\,\text{ $z_k\in U_k\cap U_j$}
\end{align}
Since $\theta_{ujk}^{\alpha}(z_j)=\left(\frac{\partial g_{jk}^{\alpha}(z_k,t)}{\partial t_u}\right)_{t=0}$ in $(\ref{p1})$ are bounded on $U_j\cap U_k\ne \emptyset$, there exists a constant $K_2$ such that
\begin{align}\label{yt1}
|\theta_{ujk}(z_j)|=|\sum_{\alpha=1}^n \theta_{ujk}^{\alpha}(z_j)\frac{\partial}{\partial z_j^{\alpha}}|:=\max_{\alpha}|\theta_{ujk}^{\alpha}(z_j)|<K_2
\end{align}
Since $\Lambda_{uj}'^{r,s}(z_j)=\left(\frac{\partial \Lambda_{M_j}^{r,s}(z_j,t)}{\partial t_u}\right)_{t=0}$ in $(\ref{p2})$ are bounded on $U_j$, there exists a constant $L_2$ such that
\begin{align}\label{yt3}
|\Lambda_{uj}'(z_j)|=|\sum_{r,s=1}^n \Lambda_{uj}'^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s}|:=\max_{r,s} |\Lambda_{uj}'^{r,s}(z_j)|<L_2
\end{align}
\end{remark}
\begin{lemma}[compare \cite{Kod05} Lemma 6.2 p.295]\label{lemma3}
There exist solutions $h_u,u=1,...,m$, and $\{g_j(z_j)=\sum_{\alpha=1}^n g_j^\alpha(z_j)\frac{\partial}{\partial z_j^\alpha}\}$ of the equation
\begin{align}\label{p55}
(\{\lambda_j\},\{\Gamma_{jk}\})=\sum_{u=1}^m h_u(\{\Lambda'_{uj}\}, \{\theta_{ujk}\})-\delta_{HP}\{g_j(z_j)\}
\end{align}
which satisfy
\begin{align*}
|h_u|\leq M|(\lambda,\Gamma)|,\,\,\,\,\,\,\, |g_j(z_j)|:=\max_\alpha |g_j^{\alpha}(z_j)| \leq M|(\lambda,\Gamma)| \,\,\,\,\text{for $z_j\in U_j$}
\end{align*}
where $M$ is a constant independent of a $1$-cocycle $(\lambda,\Gamma)=(\{\lambda_j\},\{\Gamma_{jk}\})$.
\end{lemma}
\begin{proof}
The proof is similar to that of \cite{Kod05} Lemma 6.2 p.295, to which we refer for the details. For a $1$-cocycle $(\lambda,\Gamma)=(\{\lambda_j\},\{\Gamma_{jk}\})$ with $|(\lambda,\Gamma)|<\infty$, we define $\iota(\lambda,\Gamma)$ by
\begin{align*}
\iota(\lambda,\Gamma)=\inf \max_{u,j}\{|h_u|,\sup_{z_j\in U_j}|g_j(z_j)|\},
\end{align*}
where $\inf$ is taken with respect to all the solutions $h_u,u=1,...,m$, and $g_j(z_j)$ of (\ref{p55}). We will show that there exists a constant $M$ such that for all $1$-cocycles $(\lambda,\Gamma)\in \mathcal{C}^0(\mathcal{U},\wedge^2 \Theta_{M_0})\oplus \mathcal{C}^1(\mathcal{U},\Theta_{M_0})$, we have
\begin{align*}
\iota(\lambda,\Gamma)\leq M|(\lambda,\Gamma)|
\end{align*}
Suppose there is no such constant $M$. Then we can find a sequence of $1$-cocycles $(\lambda^{(v)},\Gamma^{(v)})=(\{\lambda_j^{(v)}\},\{\Gamma_{jk}^{(v)}\})\in \mathcal{C}^0(\mathcal{U},\wedge^2 \Theta_{M_0})\oplus \mathcal{C}^1(\mathcal{U},\Theta_{M_0})$, $v=1,2,3,\cdots$, and their solutions $g_j^{(v)}(z_j), h_u^{(v)},u=1,...,m$ such that
\begin{align}\label{yt7}
\iota(\lambda^{(v)},\Gamma^{(v)})=1,\,\,\,\,\,\,\,\,\, |(\lambda^{(v)},\Gamma^{(v)})|<\frac{1}{v}.
\end{align}
Since $\iota(\lambda^{(v)},\Gamma^{(v)})=1$, we may choose these solutions so that $\max_{u,j}\{|h_u^{(v)}|,\sup_{z_j\in U_j}|g_j^{(v)}(z_j)|\}\leq 2$; hence, passing to a subsequence if necessary, we may assume that the sequences $\{h_u^{(v)}\}$, $u=1,...,m$, converge and that $\{g_j^{(v)}(z_j)\}$ converges uniformly on $U_j$.
Put $h_u=\lim_{v\to \infty} h_u^{(v)}$, and $g_j(z_j)=\lim_{v\to \infty}g_j^{(v)}(z_j)$ and note that
\begin{align*}
\Gamma_{jk}^{(v)}(z_j)&=\sum_{u=1}^m h_u^{(v)}\theta_{ujk}(z_j)+B_{jk}(z_k)g_k^{(v)}(z_k)-g_j^{(v)}(z_j),\\
\lambda_j^{(v)}(z_j)&=\sum_{u=1}^mh_u^{(v)}\Lambda_{uj}'(z_j)-[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\sum_{\alpha=1}^n g_j^{\alpha(v)}(z_j)\frac{\partial}{\partial z_j^{\alpha}}]
\end{align*}
Since $|\Gamma_{jk}^{(v)}(z_j)|\leq |(\lambda^{(v)},\Gamma^{(v)})|\to 0$ for $z_j\in U_j\cap U_k$, and $|\lambda_j^{(v)}(z_j)|\leq |(\lambda^{(v)},\Gamma^{(v)})|\to 0$ for $z_j\in U_j^\delta$ as $v\to \infty$, we have
\begin{align*}
0&=\sum_{u=1}^m h_u\theta_{ujk}(z_j)+B_{jk}(z_k)g_k(z_k)-g_j(z_j),\,\,\,\,\,\text{$z_j\in U_j\cap U_k$}\\
0&=\sum_{u=1}^mh_u\Lambda_{uj}'(z_j)-[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\sum_{\alpha=1}^n g_j^{\alpha}(z_j)\frac{\partial}{\partial z_j^{\alpha}}],\,\,\,\,\,\text{$z_j\in U_j^\delta$}
\end{align*}
By the identity theorem, we have
\begin{align*}
0=\sum_{u=1}^mh_u\Lambda_{uj}'(z_j)-[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\sum_{\alpha=1}^n g_j^{\alpha}(z_j)\frac{\partial}{\partial z_j^{\alpha}}],\,\,\,\,\,\text{$z_j\in U_j$}
\end{align*}
By putting $\tilde{h}_u^{(v)}=h_u^{(v)}-h_u$, and $\tilde{g}_j^{(v)}(z_j)=g_j^{(v)}(z_j)-g_j(z_j)$, we obtain
\begin{align*}
\Gamma_{jk}^{(v)}(z_j)&=\sum_{u=1}^m \tilde{h}_u^{(v)}\theta_{ujk}(z_j)+B_{jk}(z_k)\tilde{g}_k^{(v)}(z_k)-\tilde{g}_j^{(v)}(z_j),\\
\lambda_j^{(v)}(z_j)&=\sum_{u=1}^m\tilde{h}_u^{(v)}\Lambda_{uj}'(z_j)-[\sum_{r,s=1}^n \Lambda_{M_{0j}}^{r,s}(z_j)\frac{\partial}{\partial z_j^r}\wedge \frac{\partial}{\partial z_j^s},\sum_{\alpha=1}^n \tilde{g}_j^{\alpha(v)}(z_j)\frac{\partial}{\partial z_j^{\alpha}}]
\end{align*}
Hence $\tilde{h}_u^{(v)},u=1,...,m$, and $\{\tilde{g}_j^{(v)}(z_j)\}$ satisfy the equation (\ref{p55}) for $(\lambda,\Gamma)=(\{\lambda_j^{(v)}\},\{\Gamma_{jk}^{(v)}\})$. Since $\tilde{h}_u^{(v)}\to 0$ and $\sup_{z_j\in U_j}|\tilde{g}_j^{(v)}(z_j)|\to 0$, we get $\iota(\lambda^{(v)},\Gamma^{(v)})<1$ for sufficiently large $v$, which contradicts $(\ref{yt7})$.
\end{proof}
Next we will prove that we can choose appropriate solutions $h_{u|v}(s),u=1,...,m$, and $\{g_{j|v}(z_j,s)\}$ in each inductive step by estimating $\Gamma_{jk|v}(z_j,s)$ and $\lambda_{j|v}(z_j,s)$ and using Lemma \ref{lemma3}, so that the formal power series $h(s)$ and $g_j(z_j,s)$ converge absolutely in $|s|<\epsilon$ if $\epsilon>0$ is sufficiently small. Before the proof, we remark the following.
\begin{remark}\label{remark123}\
\begin{enumerate}
\item For two power series of $s_1,...,s_l$,
\begin{align*}
P(s)&=\sum_{v_1,...,v_l=0}^\infty P_{v_1,...,v_l}s_1^{v_1}\cdots s_l^{v_l},\,\,\,\,\, P_{v_1,...,v_l}\in \mathbb{C}^n,\\
a(s)&=\sum_{v_1,...,v_l=0}^{\infty} a_{v_1,...,v_l}s_1^{v_1}\cdots s_l^{v_l},\,\,\,\,\, a_{v_1,...,v_l}\geq 0,
\end{align*}
we write $P(s)\ll a(s)$ if $|P_{v_1,...,v_l}|\leq a_{v_1,...,v_l},\,\,\,\,\,v_1,...,v_l=0,1,2,...$.
\item For a power series $P(s)$, we denote by $[P(s)]_v$ the term of homogeneous part of degree $v$ with respect to $s$.
\item For $A(s)=\frac{b}{16c}\sum_{v=1}^{\infty}\frac{c^v(s_1+\cdots +s_l)^v}{v^2}$, $b>0$, $c>0$, we have $A(s)^v\ll \left(\frac{b}{c}\right)^{v-1}A(s)$, $v=2,3,...$
\item Recall that for each $U_j=\{z_j\in \mathbb{C}^n||z_j|<1\}$, we set $U_j^\delta=\{z_j\in U_j||z_j|<1-\delta\}$ for a given $\delta$. Then $M_0=\bigcup_j U_j^\delta$ for a sufficiently small $\delta$.
\end{enumerate}
\end{remark}
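For completeness, we sketch the verification of Remark \ref{remark123} (3) for $v=2$; the general case follows by iterating the estimate. Comparing coefficients, we have
\begin{align*}
A(s)^2=\left(\frac{b}{16c}\right)^2\sum_{v=2}^\infty c^v(s_1+\cdots+s_l)^v\sum_{\substack{\mu+\nu=v\\ \mu,\nu\geq 1}}\frac{1}{\mu^2\nu^2},
\end{align*}
and since $\mu\leq \frac{v}{2}$ implies $\frac{1}{(v-\mu)^2}\leq \frac{4}{v^2}$,
\begin{align*}
\sum_{\substack{\mu+\nu=v\\ \mu,\nu\geq 1}}\frac{1}{\mu^2\nu^2}\leq \frac{8}{v^2}\sum_{\mu=1}^\infty\frac{1}{\mu^2}=\frac{8}{v^2}\cdot\frac{\pi^2}{6}<\frac{16}{v^2}.
\end{align*}
Hence $A(s)^2\ll \frac{b^2}{16c^2}\sum_{v=2}^\infty\frac{c^v(s_1+\cdots+s_l)^v}{v^2}\ll \frac{b}{c}A(s)$, and $A(s)^v\ll \left(\frac{b}{c}\right)^{v-1}A(s)$ follows by induction on $v$.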
To prove the convergence of $h(s)$ and $g_j(z_j,s)$, we will show the estimates $h(s)\ll A(s),\,\,\,g_j(z_j,s)-z_j\ll A(s)$ for suitable constants $b$ and $c$ in Remark \ref{remark123} (3), equivalently
\begin{align}\label{induction}
h^v(s)\ll A(s),\,\,\,\,\, g_j^v(z_j,s)-z_j\ll A(s)
\end{align}
for $v=1,2,3,...$, which we prove by induction on $v$. For $v=1$, since the linear term of $A(s)$ is $\frac{b}{16}(s_1+\cdots +s_l)$, the estimate holds if $b$ is sufficiently large. Let $v\geq 2$ and assume that the estimates hold for $v-1$. In other words,
\begin{align}\label{induction2}
h^{v-1}(s)\ll A(s),\,\,\,\,\, g_j^{v-1}(z_j,s)-z_j\ll A(s)
\end{align}
We will prove that (\ref{induction}) holds. For this, we estimate $\Gamma_{jk|v}(z_j,s)$ and $\lambda_{j|v}(z_j,s)$. For the estimation of $\Gamma_{jk|v}(z_j,s)$, we briefly summarize Kodaira's estimation presented in \cite{Kod05} p.298-302 in the following: since $f_{jk}(z_k,s)=b_{jk}(z_k)+\sum_{v=1}^\infty f_{jk|v}(z_k,s)$ are given vector-valued holomorphic functions, we may assume that
\begin{align*}
f_{jk}(z_k,s)-b_{jk}(z_k)\ll A_0(s),\,\,\,\,\,\, A_0(s)=\frac{b_0}{16c_0}\sum_{v=1}^\infty \frac{c_0^v(s_1+\cdots+s_l)^v}{v^2}
\end{align*}
holds for $z_k\in U_k\cap U_j$ with $b_0>0$ and $c_0>0$ such that $\frac{b_0}{c_0\delta}<\frac{1}{2}$, where $\delta$ is as in Remark \ref{remark123} (4). If we take $b$ and $c$ such that $b>b_0$, $c>c_0$, and $\frac{ba_0(m+n)}{c}<\frac{1}{2}$, we can estimate
\begin{align}\label{yt6}
\Gamma_{jk|v}(z_j,s)\ll 2K_1K^*A(s),\,\,\,\,\, z_j\in U_j\cap U_k.
\end{align}
where $K^*=\frac{2^{n+1}b_0}{c\delta}+\frac{b_0}{b}+\frac{2ba_0^2(m+n)^2}{c}$ and $K_1$ is the constant from (\ref{yt2}) (for the details, see \cite{Kod05} p.298-302).
Next we estimate $\lambda_{j|v}(z_j,s)$ (see (\ref{p5})). To estimate it, we estimate $\lambda_{j|v}^{r,s}(z_j,s)$ for each pair $(r,s)$ where $r,s=1,...,n$. We note that from (\ref{aa35}), we have
\begin{align}\label{jj11}
\lambda_{j|v}^{r,s} (z_j,s) =[ -\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))]_v+[\sum_{p,q=1}^n\Lambda_{N_j}^{p,q}(z_j,s)\frac{\partial {g_j^r}^{v-1}}{\partial z_j^{p}}\frac{\partial {g_j^s}^{v-1}}{\partial z_j^{q}}]_v
\end{align}
First we estimate $[\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))]_v$ in $(\ref{jj11})$. We expand $\Lambda_{M_{j}}^{r,s}(z_j+\xi,t)$ into power series in $\xi_1,...,\xi_n,t_1,...,t_m$, and let $L(\xi,t)$ be its linear term. Since $\Lambda_{M_j}^{r,s}(z_j,0)=\Lambda_{M_{0j}}^{r,s}(z_j)$ from $(\ref{ii34})$, we may assume that for all the pairs $(r,s)$,
\begin{align*}
\Lambda_{M_j}^{r,s}(z_j+\xi,t)-\Lambda_{M_{0j}}^{r,s}(z_j)-L(\xi,t)\ll \sum_{\mu=2}^{\infty} d_0^\mu(\xi_1+\cdots+\xi_n+t_1+\cdots +t_m)^\mu\,\,\,\,\,\text{for some constant $d_0>0$}
\end{align*}
Set $\xi=g_j^{v-1}(z_j,s)-z_j$, and $t=h^{v-1}(s)$. Since $\xi\ll A(s)$ and $t\ll A(s)$ by induction hypothesis (\ref{induction2}), we have from Remark \ref{remark123} (3)
\begin{align*}
&\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))-\Lambda_{M_{0j}}^{r,s}(z_j)-L(g_j^{v-1}(z_j,s)-z_j,h^{v-1}(s))\\
&\ll \sum_{\mu=2}^\infty {d_0}^\mu(n+m)^\mu A(s)^\mu\ll \sum_{\mu=2}^\infty d_0^{\mu}(m+n)^\mu\left(\frac{b}{c}\right)^{\mu-1}A(s)= \frac{bd_0^2(m+n)^2}{c}\sum_{\mu=2}^\infty\left(\frac{bd_0(m+n)}{c}\right)^{\mu-2} A(s)
\end{align*}
Since $\Lambda_{M_{0j}}^{r,s}(z_j)$ does not involve $s$, and $L(g_j^{v-1}(z_j,s)-z_j,h^{v-1}(s))$ is a polynomial of degree at most $v-1$ in $s$, taking the homogeneous part of degree $v$ yields
\begin{align*}
[\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))]_v\ll \frac{bd_0^2(m+n)^2}{c}\sum_{\mu=0}^\infty\left(\frac{bd_0(m+n)}{c}\right)^\mu A(s)
\end{align*}
Choose a constant $c$ such that $\frac{bd_0(m+n)}{c}<\frac{1}{2}$. Then we have
\begin{align}\label{jj16}
[\Lambda_{M_j}^{r,s}(g_j^{v-1}(z_j,s),h^{v-1}(s))]_v\ll\frac{2bd_0^2(m+n)^2}{c}A(s),\,\,\,\,\,\,\,\, z_j\in U_j
\end{align}
Next we estimate $[\sum_{p,q=1}^n\Lambda_{N_j}^{p,q}(z_j,s)\frac{\partial g_j^{rv-1}(z_j,s)}{\partial z_j^p}\frac{\partial g_j^{sv-1}(z_j,s)}{\partial z_j^q}]_v$ in $(\ref{jj11})$. By induction hypothesis (\ref{induction2}), set
\begin{align}\label{jj14}
\alpha_j^{rv-1}(z_j,s):=g_j^{rv-1}(z_j,s)-z_j^r \ll A(s)\,\,\,\,\,\text{for all $r=1,...,n$}
\end{align}
Since $\Lambda_{N_j}^{p,q}(z_j,s)$ is holomorphic, and $\Lambda_{N_j}^{p,q}(z_j,0)=\Lambda_{M_{0j}}^{p,q}(z_j)$ from $(\ref{ii34})$, we may assume that for all the pairs $(p,q)$,
\begin{align} \label{jj13}
\Pi_j^{p,q}(z_j,s):=\Lambda_{N_j}^{p,q}(z_j,s)-\Lambda_{M_{0j}}^{p,q}(z_j)\ll A_1(s)=\frac{b_1}{16c_1}\sum_{v=1}^\infty\frac{c_1^v(s_1+\cdots +s_l)^v}{v^2}
\end{align}
for some constants $b_1,c_1>0$. If we choose $b>b_1$ and $c>c_1$, then we have
\begin{align}\label{jj123}
\Pi_j^{p,q}(z_j,s)\ll \frac{b_1}{b}A(s).
\end{align}
Now assume that $z_j=(z_j^1,...,z_j^n) \in U_j^{\delta}$, with $U_j^\delta$ as in Remark \ref{remark123} (4). Then by Cauchy's integral formula and $(\ref{jj14})$, we have, for $p=1,...,n$,
\begin{align*}
\frac{\partial \alpha_j^{rv-1}(z_j,s)}{\partial z_j^p}=\frac{1}{2\pi i}\int_{|\xi-z_j^p|=\delta}\frac{\alpha_j^{rv-1}(z_j^1,...,\overset{\text{$p$-th}}{\xi},...,z_j^n,s)}{(\xi-z_j^p)^2}d\xi
\end{align*}
Hence we have, for $p=1,...,n,$
\begin{align}\label{jj12}
\frac{\partial \alpha_j^{rv-1}(z_j,s)}{\partial z_j^p}\ll \frac{A(s)}{\delta}
\end{align}
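To justify $(\ref{jj12})$ in detail, write $\alpha_j^{rv-1}(z_j,s)=\sum_{v_1,...,v_l}\alpha_{v_1\cdots v_l}(z_j)s_1^{v_1}\cdots s_l^{v_l}$ and $A(s)=\sum_{v_1,...,v_l}a_{v_1\cdots v_l}s_1^{v_1}\cdots s_l^{v_l}$. For $z_j\in U_j^\delta$ the circle of integration $|\xi-z_j^p|=\delta$ lies in $U_j$, so $(\ref{jj14})$ gives $|\alpha_{v_1\cdots v_l}|\leq a_{v_1\cdots v_l}$ on it, and Cauchy's formula applied to each coefficient yields
\begin{align*}
\left|\frac{\partial \alpha_{v_1\cdots v_l}}{\partial z_j^p}(z_j)\right|\leq \frac{1}{\delta}\sup_{|\xi-z_j^p|=\delta}\left|\alpha_{v_1\cdots v_l}\right|\leq \frac{a_{v_1\cdots v_l}}{\delta},
\end{align*}
which is exactly the coefficient-wise statement of $(\ref{jj12})$.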
Then from (\ref{jj14}),(\ref{jj13}), (\ref{jj123}), (\ref{jj12}) and $(\ref{ii99})$, we have
\begin{align}\label{jj15}
&[\sum_{p,q=1}^n\Lambda_{N_j}^{p,q}(z_j,s)\frac{\partial g_j^{rv-1}(z_j,s)}{\partial z_j^p}\frac{\partial g_j^{sv-1}(z_j,s)}{\partial z_j^q}]_v\\
&=[\sum_{p,q=1}^n(\Pi_j^{p,q}(z_j,s)+\Lambda_{M_{0j}}^{p,q}(z_j))\frac{\partial (\alpha_j^{rv-1}(z_j,s)+z_j^r)}{\partial z_j^p}\frac{\partial (\alpha_j^{sv-1}(z_j,s)+z_j^s)}{\partial z_j^q}]_v \notag\\
&=[\sum_{p,q=1}^n\Pi_j^{p,q}(z_j,s)\frac{\partial \alpha_j^{rv-1}(z_j,s)}{\partial z_j^p}\frac{\partial \alpha_j^{sv-1}(z_j,s)}{\partial z_j^q}]_v+[\sum_{p,q=1}^n\Pi_j^{p,q}(z,s)\frac{\partial \alpha_j^{rv-1}(z,s)}{\partial z_j^p}\frac{\partial z_j^s}{\partial z_j^q}]_v \notag\\
&+[\sum_{p,q=1}^n\Pi_j^{p,q}(z_j,s)\frac{\partial z_j^r}{\partial z_j^p}\frac{\partial \alpha_j^{sv-1}(z_j,s)}{\partial z_j^q}]_v+[\sum_{p,q=1}^n\Pi_j^{p,q}(z_j,s)\frac{\partial z_j^r}{\partial z_j^p}\frac{\partial z_j^s}{\partial z_j^q}]_v+[\sum_{p,q=1}^n\Lambda_{M_{0j}}^{p,q}(z_j)\frac{\partial \alpha_j^{rv-1}(z,s)}{\partial z_j^p}\frac{\partial \alpha_j^{sv-1}(z_j,s)}{\partial z_j^q}]_v\notag\\
&\ll \frac{n^2b_1}{b\delta^2}A(s)^3+\frac{2nb_1}{b\delta} A(s)^2+\frac{b_1}{b}A(s)+\frac{Cn^2}{\delta^2}A(s)^2, \,\,\,\, \text{where $C>0$ is the constant from (\ref{ii99}), which does not depend on $b,c$,}\notag\\
& \ll \left(\frac{n^2b_1b^2}{bc^2\delta^2}+\frac{2nb_1b}{bc\delta}+\frac{b_1}{b}+\frac{Cn^2b}{c\delta^2}\right)A(s)=\left(\frac{n^2b_1b}{c^2\delta^2}+\frac{2nb_1}{c\delta}+\frac{b_1}{b}+\frac{Cn^2b}{c\delta^2}\right)A(s)\,\,\,\,\,\text{from Remark \ref{remark123} (3)}\notag
\end{align}
Hence from (\ref{jj11}),(\ref{jj16}), (\ref{jj15}), we have
\begin{align}\label{jj19}
\lambda^{r,s}_{j|v}(z_j,s)\ll LA(s),\,\,\,\,\,z_j\in U_j^\delta
\end{align}
where $L=\frac{2bd_0^2(m+n)^2}{c}+\frac{n^2b_1b}{c^2\delta^2}+\frac{2nb_1}{c\delta}+\frac{b_1}{b}+\frac{Cn^2b}{c\delta^2}$.
Then by Lemma \ref{lemma3}, $(\ref{yt6})$ and $(\ref{jj19})$, we can choose solutions $h_{u|v}(s)$, $u=1,...,m$, $\{g_{j|v}(s)\}$ such that
\begin{align*}
h_{u|v}(s)\ll NA(s),\,\,\,\,\, g_{j|v}(s) \ll NA(s), \,\,\,\text{where}\,\,\, N=M\left(2K_1K^*+ L\right)
\end{align*}
Note that $N$ is independent of $v$, where $K^*=\frac{2^{n+1}b_0}{c\delta}+\frac{b_0}{b}+\frac{2ba_0^2(m+n)^2}{c}$ and $L=\frac{2bd_0^2(m+n)^2}{c}+\frac{n^2b_1b}{c^2\delta^2}+\frac{2nb_1}{c\delta}+\frac{b_1}{b}+\frac{Cn^2b}{c\delta^2}$. If we first choose a sufficiently large $b$, and then choose $c$ so that $\frac{c}{b}$ is sufficiently large (so that $\frac{b}{c}$ and $\frac{b}{c^2}$ are sufficiently small), then we obtain $N \leq 1$. Note that $b$ and $c$ satisfy $b>\max \{b_0,b_1\}$, $c> \max \{c_0,c_1\}$, $\frac{ba_0(m+n)}{c}< \frac{1}{2}$ and $\frac{bd_0(m+n)}{c}<\frac{1}{2}$.
Hence the above solution $h_{u|v}(s),u=1,...,m, \{ g_{j|v}(s) \}$ satisfy the inequalities
\begin{align*}
h_{u|v}(s)\ll A(s),\,\,\,\,\, g_{j|v}(s)\ll A(s)
\end{align*}
Since $h_u^v(s)=h_u^{v-1}(s)+h_{u|v}(s)$ and $g_j^v(z_j,s)=g_j^{v-1}(z_j,s)+g_{j|v}(z_j,s)$, we have $h^v(s)\ll A(s)$ and $g_j^v(z_j,s)-z_j\ll A(s)$. This completes the induction, and so we have $h(s)\ll A(s)$ and $g_j(z_j,s)-z_j\ll A(s)$. These inequalities imply that, if $|s|<\frac{1}{lc}$, $h(s)$ converges absolutely, and $g_j(z_j,s)$ converges absolutely and uniformly for $z_j\in U_j$.
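Concretely, writing $|s|=\max_{1\leq i\leq l}|s_i|$, the majorant series converges whenever $|s|<\frac{1}{lc}$:
\begin{align*}
A(|s_1|,...,|s_l|)=\frac{b}{16c}\sum_{v=1}^\infty \frac{c^v(|s_1|+\cdots+|s_l|)^v}{v^2}\leq \frac{b}{16c}\sum_{v=1}^\infty \frac{(lc|s|)^v}{v^2}<\infty,
\end{align*}
so $h(s)\ll A(s)$ and $g_j(z_j,s)-z_j\ll A(s)$ yield the asserted absolute and uniform convergence.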
\subsection{Proof of Theorem \ref{complete9}}\
By the same argument as that presented in \cite{Kod05} p.303-304, we can glue together each $g_j$ on $U_j^{\delta}\times \Delta_{\epsilon}$ to construct a Poisson holomorphic map $g:\pi^{-1}(\Delta_{\epsilon})=(\bigcup_j U_j^{\delta}\times \Delta_{\epsilon},\Lambda_{\mathcal{N}}|_{\Delta_{\epsilon}} )\to (\mathcal{M},\Lambda_{\mathcal{M}})$ which extends the identity map $g_0:(N_0,\Lambda_0)\to (M_0=N_0,\Lambda_0)$ (see \cite{Kod05} p.303-304 for the details and notations). This completes the proof of Theorem $\ref{complete9}$.
\begin{example}
Let $U_i=\{[z_0,z_1,z_2]|z_i\ne0\}$ $i=0,1,2$ be an open cover of complex projective plane $\mathbb{P}_{\mathbb{C}}^2$. Let $x=\frac{z_1}{z_0}$ and $w=\frac{z_2}{z_0}$ be coordinates on $U_0$. Then the holomorphic Poisson structures on $U_0$ are parametrized by $t=(t_1,...,t_{10})\in \mathbb{C}^{10}$
\begin{align*}
(t_1+t_2x+t_3w+t_4x^2+t_5xw+t_6w^2+t_7x^3+t_8x^2w+t_9xw^2+t_{10}w^3)\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial w}
\end{align*}
This parametrizes all holomorphic Poisson structures on $\mathbb{P}_{\mathbb{C}}^2$ $($see \cite{Pin11} Proposition 2.2$)$. Let $\Lambda_0=x\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial w}$ be the holomorphic Poisson structure on $\mathbb{P}_{\mathbb{C}}^2$. Then $\dim\mathbb{H}^1(\mathbb{P}_{\mathbb{C}}^2,\Theta_{\mathbb{P}_\mathbb{C}^2}^\bullet)=5$ and $\dim\mathbb{H}^2(\mathbb{P}_{\mathbb{C}}^2,\Theta_{\mathbb{P}_\mathbb{C}^2}^\bullet)=0$ $($see \cite{Pin11} Example $3.5)$. The bivector fields $w^2\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$, $x^3\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$, $x^2w\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$, $xw^2\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$ and $w^3\frac{\partial}{\partial x}\wedge\frac{\partial}{\partial w}$ are representatives of the cohomology classes constituting a basis of $\mathbb{H}^1(\mathbb{P}_{\mathbb{C}}^2,\Theta_{\mathbb{P}_\mathbb{C}^2}^\bullet)$. Let $t=(t_1,t_2,t_3,t_4,t_5)\in \mathbb{C}^5$, and let $\Lambda(t)=(x+t_1w^2+t_2x^3+t_3x^2w+t_4xw^2+t_5w^3)\frac{\partial}{\partial x}\wedge \frac{\partial}{\partial w}$ be the holomorphic Poisson structure on $\mathbb{P}_{\mathbb{C}}^2\times \mathbb{C}^5$. Then $(\mathbb{P}_{\mathbb{C}}^2\times \mathbb{C}^5,\Lambda(t),\mathbb{C}^5, \omega)$, where $\omega$ is the natural projection, is a Poisson analytic family with $\omega^{-1}(0)=(\mathbb{P}_{\mathbb{C}}^2,\Lambda_0)$. Since the complex structure does not change in the family, the Poisson Kodaira-Spencer map is an isomorphism. Hence the Poisson analytic family is complete at $0$. \end{example}
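As an elementary sanity check of the parameter count in this example, the Poisson structures on $U_0$ are labelled by the coefficients of a polynomial of degree at most three in $x$ and $w$, and there are exactly ten such monomials. This can be confirmed with a short script (an illustrative sketch in Python, not part of the argument):

```python
from itertools import product

# monomials x^i * w^j with total degree i + j <= 3,
# matching the ten parameters t_1, ..., t_10 above
monomials = [(i, j) for i, j in product(range(4), repeat=2) if i + j <= 3]
assert len(monomials) == 10
```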
\bibliographystyle{amsalpha}
\section{Introduction}
Nowadays, a lot of attention is drawn to plasma acceleration methods. \cite{Esarey2009RevModPhys, Kostyukov2015UFN}
Compared to conventional radio-frequency linacs, plasma accelerators can provide orders of magnitude higher acceleration gradients.
The main idea of these methods is to use a driver to excite a plasma wake wave whose longitudinal electric field can be used to efficiently accelerate co-propagating charged particles.
A short intense laser pulse \cite{Tajima1979laser} or a relativistic electron bunch \cite{Rosenzweig1988experimental} can be used as a driver, corresponding to laser-wakefield acceleration (LWFA) and plasma-wakefield acceleration (PWFA), respectively.
The experiments on plasma acceleration demonstrate acceleration gradients of tens of gigavolts per meter.
For example, in the leading LWFA experiments, electrons accelerated to an energy of \SI{4.2}{\GeV} over a distance of \SI{9}{cm} have been obtained.\cite{Leemans2014PRL}
For PWFA, the energy increase from \SI{42}{\GeV} to more than \SI{80}{\GeV} over the distance of \SI{85}{cm} has been observed.\cite{Blumenfeld2007Nature}
For sufficiently intense laser pulses or sufficiently dense electron bunches, the driver interacts with plasma in the strongly nonlinear regime, leading to the formation of a near-spherical plasma cavity (a bubble) free of plasma electrons. \cite{Pukhov2002Bubble}
On the boundary of this bubble, a thin electron sheath shielding the cavity from the surrounding plasma is formed.
In this regime, self-injection is possible,\cite{Froula_2009_PRL_103_215006} i.\,e. electrons from the background plasma are trapped and accelerated in the bubble, which is commonly used in experiments.
There have been significant advancements in the theoretical description of the bubble regime over the recent years.
A simple model in which the bubble is assumed ideally spherical can be used to qualitatively describe the bubble regime. \cite{Kostyukov_2004_PoP_11_115256}
A more detailed phenomenological model makes it possible to describe the boundary of the bubble with a differential equation. \cite{Lu_2006_PoP_13_056709}
This phenomenological model has also been generalized for plasmas with non-uniform transverse profiles \cite{Thomas_2016_PoP_23_053108, Golovanov_2016_QE_46_295}, and it is also capable of describing beam loading effects \cite{Tzoufras_2009_PoP_16_056705, Golovanov_2016_PoP_23_093114} (i.\,e. the influence of accelerated electron bunches on the bubble).
In the scope of the model, explicit expressions for the electromagnetic field components both inside and outside the bubble can be obtained. \cite{Golovanov_2017_PoP_24_103104}
Furthermore, scaling laws based on the similarity theory have been obtained for the bubble regime both in uniform plasmas \cite{Gordienko_2005_PoP_12_043109} and plasmas with channels. \cite{Pukhov2014Channel}
Despite the achievements in the theoretical description, the phenomenological and not self-consistent nature of current models limits their use for the description of LWFA and PWFA.
Numerical simulations with the particle-in-cell (PIC) method remain the most general way of studying laser--plasma and beam--plasma interactions. \cite{Pukhov2016PIC}
Being based on fundamental equations, such simulations can self-consistently capture most of the relevant physical effects and can be used as a tool for ``numerical experiments.''
However, due to their nature, full 3D PIC simulations often require immense computational resources, which can be prohibitive for many problems.
For simulations of laser--plasma interaction, distributed machines with hundreds of gigabytes of RAM are often necessary.
It is also not unusual for full LWFA and PWFA simulations to take weeks of time on modern multi-processor systems.
This can significantly limit the possibility of performing series of simulations for a wide range of parameters, and that is why simpler simulation methods are often used.
One of them is 2D PIC simulations in which a two-dimensional grid is used instead of a realistic 3D grid, which significantly reduces the amount of required resources.
From the physics point of view, it corresponds to a driver and a wakefield which are infinite and completely uniform in one transverse direction.
Despite the fact that this geometry is different from the realistic one, such simulations are actively used in theoretical studies, e.\,g. in Refs. \onlinecite{Zhang_2015_PRL_114_184801, Sahai_2017_arxiv_1704.02913, Shaw_2014_PPCF_56_084006, Petrillo_2008_PRSTAaB_11_070703, Papp_2018_arxiv_1801.04093, Williamson_2017_arxiv_1712.00255}.
Because of that, understanding the difference in the structure of the wakefield between 2D and 3D geometries is important.
In this paper, we develop a model of strongly nonlinear wakefield in the 2D Cartesian geometry.
The developed model is similar to the model of the bubble in the 3D geometry. \cite{Lu_2006_PoP_13_056709, Thomas_2016_PoP_23_053108, Golovanov_2016_QE_46_295}
The paper is structured as follows.
In Sec.~\ref{sec:wakefieldGeneral}, we provide basic equations for the description of the wakefield.
Then, in Sec.~\ref{sec:motion}, we describe the trajectories of electrons in the wakefield.
The model of the bubble in the 2D geometry is introduced in Sec.~\ref{sec:bubbleModel}.
Based on this model, an equation for the bubble boundary is obtained and solved analytically in Sec.~\ref{sec:bubbleEquation}.
The theoretical results are compared to the results of 2D PIC simulations.
Finally, in Sec.~\ref{sec:quasi2d}, the possibility of creating a bubble similar to the 2D bubble in the realistic 3D geometry is considered.
\section{Equations for the wakefield}
\label{sec:wakefieldGeneral}
Let us consider a driver (an electron bunch or a laser pulse) propagating in fully ionized plasma along the $x$ axis and exciting wakefield in the strongly nonlinear regime.
We assume the 2D geometry in which the driver is infinite in the $z$ direction, and therefore all values are independent of $z$.
The plasma density $n(y)$ depends only on the transverse coordinate $y$.
This allows us to consider plasmas with different types of channels in addition to uniform plasma.
Both the driver and the plasma density distributions are assumed to be symmetric about the $y=0$ plane.
In this paper, we use unitless values in which charges are normalized to $e$, masses to $m$, time to $\omega_\textup{p}^{-1}$, coordinates to $c/\omega_\textup{p}$, densities to $n_\textup{p}$, electric and magnetic fields to $mc\omega_\textup{p}/e$.
Here, $e>0$ is the elementary charge, $m$ is the electron mass, $n_\textup{p}$ is the typical electron number density (for example, for plasma channels, it could be the density far outside the channel), $\omega_\textup{p} = (4\pi e^2 n_\textup{p}/m)^{1/2}$ is the corresponding typical plasma frequency.
It is convenient to describe the electromagnetic field with the scalar potential $\varphi$ and the vector potential $\vb{A}$.
If we take into account the 2D geometry and the symmetry with respect to $y=0$, only three non-zero components of the electromagnetic field exist
\begin{align}
&E_x = -\pdv{A_x}{t} - \pdv{\varphi}{x}, \quad E_y = -\pdv{A_y}{t} - \pdv{\varphi}{y},\\
&B_z = \pdv{A_y}{x} - \pdv{A_x}{y}.
\end{align}
Both the fields and the potentials depend on time $t$ and coordinates $x$ and $y$.
However, it is typical that the structure of the wakefield changes slowly during its propagation through plasma, so the dependence on $t$ and $x$ can be replaced with the dependence on $\xi = t - x$, which is called ``the quasistatic approximation''.
In this case, the phase velocity of the wakefield is assumed to be equal to the speed of light (1 in unitless values).
Under this approximation, all derivatives with respect to $x$ and $t$ are replaced with derivatives with respect to $\xi$
\begin{align}
&E_x = \pdv{\Psi}{\xi}, \quad E_y = -\pdv{\Psi}{y} + B_z,\\
&B_z = -\pdv{A_y}{\xi} - \pdv{A_x}{y}.
\end{align}
Here, we have introduced the wakefield potential $\Psi = \varphi - A_x$.
For the potentials, we use the Lorenz gauge
\begin{equation}
\pdv{A_y}{y} = - \pdv{\Psi}{\xi}.
\end{equation}
Under the symmetry constraints, it leads to
\begin{equation}
A_y = - \int_0^y {\pdv{\Psi}{\xi} \dd{y'}},
\end{equation}
thus leaving only $\Psi(\xi, y)$ and $A_x(\xi, y)$ as independent potentials.
The Maxwell's equations for these potentials in coordinates $(\xi, y)$ reduce to
\begin{equation}
\pdv[2]{\Psi}{y} = J_x - \rho,\quad \pdv[2]{A_x}{y} = - J_x.
\end{equation}
Their solutions are
\begin{align}
&\Psi = -\int_y^\infty \dd{y'} \int_0^{y'} {(J_x - \rho)\dd{y''}},\label{eq:PsiGeneral}\\
&B_z = \int_0^y {\left(\pdv[2]{\Psi}{\xi} + J_x\right) \dd{y'}}.\label{eq:BzGeneral}
\end{align}
These equations allow us to calculate the distributions of $\Psi$ and $B_z$ if we know the distributions of sources $J_x$ and $J_x - \rho$.
Knowing the wakefield potential $\Psi$ is extremely important for studying the acceleration of particles in the wakefield.
If we consider a relativistic particle moving predominantly along the $x$-axis ($\abs{p_y} \ll p_x$), then the forces acting on such a particle are
\begin{align}
&F_x \approx - E_x = - \pdv{\Psi}{\xi}, \label{eq:FxGeneral} \\
&F_y \approx - E_y + B_z = \pdv{\Psi}{y}. \label{eq:FyGeneral}
\end{align}
These forces depend only on the wakefield potential, therefore its distribution fully determines the motion of accelerated relativistic particles.
In order to calculate this distribution, dynamics of plasma have to be considered.
\section{Motion of plasma electrons}
\label{sec:motion}
The most general description of collisionless plasmas in the electromagnetic field is given by the kinetic Vlasov equations for plasma components in which the electromagnetic field is treated self-consistently and depends on the plasma distribution. \cite{vlasov1968vibrational}
According to the method of characteristics, this kinetic approach is equivalent to the solution of motion equations for test particles in the self-consistent fields.
In the $(\xi, y)$ coordinates, the equations of motion for electrons are
\begin{align}
&\dv{p_x}{t} = - E_x - \frac{p_y B_z}{\gamma} - \frac{1}{2\gamma} \pdv{}{x} {\left\langle \vb{a}^2 \right\rangle},\label{eq:pxEquation}\\
&\dv{p_y}{t} = - E_y + \frac{p_x B_z}{\gamma} - \frac{1}{2\gamma} \pdv{}{y} {\left\langle \vb{a}^2 \right\rangle},\\
&\dv{\xi}{t} = 1 - \frac{p_x}{\gamma}, \quad \dv{y}{t} = \frac{p_y}{\gamma}, \label{eq:coordinateEquations}
\end{align}
where $\vb{a} = e\vb{E}_\textup{L}/(mc\omega_\textup{L})$ is the dimensionless amplitude of the laser electric field, $\omega_\textup{L}$ is the laser frequency,
\begin{equation}
\gamma = \sqrt{1 + \vb{p}^2 + {\left\langle \vb{a}^2 \right\rangle}}
\label{eq:gammaDef}
\end{equation}
is the Lorentz factor of an electron.
Here, we use the ponderomotive description of the laser pulse. \cite{Mora_1997_PoP_4_010217}
In this case, the field of the laser pulse is not taken into account in the Maxwell's equations and vectors $\vb{E}$ and $\vb{B}$, and the influence of the laser pulse on plasma electrons is determined by the ponderomotive force.
System of equations \eqref{eq:pxEquation}--\eqref{eq:coordinateEquations} can be described by a Hamiltonian
\begin{equation}
H(-\xi, y, P_x, P_y) = \gamma - P_x - \varphi,
\end{equation}
where $\vb{P} = \vb{p} - \vb{A}$ are canonical momenta.
As $\varphi$ and $\vb{A}$ do not depend explicitly on time in the $(\xi, y)$ coordinates, the value of the Hamiltonian is conserved on trajectories.
For electrons initially at rest (thermal motion is neglected), this value is $H = 1$.
Hence, on the electron trajectories,
\begin{equation}
\gamma - P_x - \varphi = \gamma - p_x - \Psi = 1.
\label{eq:constantOfMotion}
\end{equation}
Therefore,
\begin{equation}
\dv{\xi}{t} = \frac{\gamma - p_x}{\gamma} = \frac{1 + \Psi}{\gamma}.
\end{equation}
As $\dv*{\xi}{t}$ is always positive, $\xi(t)$ is a monotonically increasing function.
Therefore, $\xi$ can be used instead of $t$ as a parameter for the electron trajectories.
Then, the equations for the transverse motion become
\begin{align}
&\dv{y}{\xi} = \frac{p_y}{1+\Psi},\\
&\frac{1 + \Psi}{\gamma}\dv{p_y}{\xi} = - \pdv{\Psi}{y} - \frac{1 + \Psi}{\gamma} B_z - \frac{1}{2\gamma} \pdv{}{y} {\left\langle \vb{a}^2 \right\rangle}.
\end{align}
Using Eqs.~\eqref{eq:gammaDef} and \eqref{eq:constantOfMotion}, we can find $\gamma$ through the other values as well,
\begin{equation}
\gamma = \frac{1 + (1+\Psi)^2 + p_y^2 + {\left\langle \vb{a}^2 \right\rangle}}{2(1+\Psi)}.
\end{equation}
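This expression follows by substituting $p_x = \gamma - (1 + \Psi)$, which is Eq.~\eqref{eq:constantOfMotion}, into the definition \eqref{eq:gammaDef} and solving for $\gamma$. The algebra can be verified symbolically, for example with the following sketch based on the \texttt{sympy} library:

```python
import sympy as sp

Psi, p_y, a2 = sp.symbols('Psi p_y a2', nonnegative=True)
gamma = sp.Symbol('gamma', positive=True)

# constant of motion (gamma - p_x - Psi = 1) expressed as p_x = gamma - (1 + Psi)
p_x = gamma - (1 + Psi)

# definition of the Lorentz factor: gamma^2 = 1 + p_x^2 + p_y^2 + <a^2>
sol = sp.solve(sp.Eq(gamma**2, 1 + p_x**2 + p_y**2 + a2), gamma)

expected = (1 + (1 + Psi)**2 + p_y**2 + a2) / (2 * (1 + Psi))
assert len(sol) == 1 and sp.simplify(sol[0] - expected) == 0
```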
Finally, the following second-order equation for an electron trajectory $y(\xi)$ can be obtained
\begin{multline}
\dv{\xi} \left[ (1 + \Psi) \dv{y}{\xi} \right] = - \frac{1}{2(1+\Psi)} \pdv{y} {\left\langle \vb{a}^2 \right\rangle} + \\
+ \left[\frac{1 + (1+\Psi)^2}{2(1+\Psi)^2} + \frac{1}{2} \qty(\dv{y}{\xi})^2 \right] \pdv{\Psi}{y} - B_z.
\label{eq:electronTrajectory}
\end{multline}
A similar equation can be obtained for the ion trajectories.
However, as ions are much heavier than electrons, their motion in the bubble regime can usually be neglected.
Because of this, we consider them immobile.
Hence, their charge density $\rho_\textup{i}(y)$ is determined only by the plasma profile $n(y)$, and their current density $\vb{J}_\textup{i} = 0$.
In principle, self-consistent solution of Eqs.~\eqref{eq:PsiGeneral}, \eqref{eq:BzGeneral}, \eqref{eq:electronTrajectory} is required in order to properly describe the excited wakefield.
However, a simpler phenomenological model can be used in the case of strongly nonlinear wakefield.
This model is described in the next section.
\section{Model of the bubble}
\label{sec:bubbleModel}
Based on the properties of the bubble regime observed in particle-in-cell simulations, the model of the bubble in the two-dimensional case can be constructed similarly to the 3D model by \citet{Golovanov_2017_PoP_24_103104}.
We assume that there are no plasma electrons inside the bubble, while on its boundary determined by a function $y_\mathrm{b}(\xi)$ there is a thin electron sheath of constant width $\Delta$.
Under this assumption, the source $J_x - \rho$ for the bubble is modeled as
\begin{equation}
J_x - \rho = \begin{dcases}
-\rho_\textup{i}(y),& \abs{y} < y_\mathrm{b}(\xi),\\
S_0(\xi) g\left(\frac{\abs{y} - y_\mathrm{b}(\xi)}{\Delta}\right), &\abs{y} > y_\mathrm{b}(\xi).
\end{dcases}
\end{equation}
In this model, the space is split into two regions by curves $\pm y_\mathrm{b}(\xi)$ corresponding to the boundary of the bubble.
Inside the bubble, only plasma ions contribute to $J_x - \rho$, as there are no plasma electrons inside.
Relativistic electron bunches (either a driver or a witness) do not contribute to $J_x - \rho$ either, because their velocity $v_x \approx 1$, and thus
\begin{equation}
J_{x,\textup{B}} - \rho_\textup{B} = (v_x-1) \rho_\textup{B} \approx 0.
\end{equation}
An arbitrary function $g(X)$ describes the shape of the electron sheath on the boundary of the bubble.
Far outside the bubble, for $\abs{y} \gg y_\mathrm{b}$, plasma should remain unperturbed, therefore $g(X)$ must tend to zero.
For example, exponential $g(X) = \exp(-X)$ and rectangular $g(X) = \theta(1-X)$ profiles have been used in previous 3D models. \cite{Lu_2006_PoP_13_056709, Tzoufras_2009_PoP_16_056705}
By multiplying $\Delta$ and $S_0(\xi)$ by constants, we can always normalize this function so that
its moments satisfy $M_0(0) = M_1(0) = 1$, where the moments are defined as
\begin{equation}
M_n(X) = \int_X^\infty {X'^n g(X')\dd{X'}}.
\end{equation}
To simplify the calculations, we assume that $g(X)$ is normalized.
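For instance, the exponential profile $g(X) = \exp(-X)$ is normalized in this sense, since its zeroth and first moments both equal unity; this is easy to check numerically (a sketch evaluating the two moments by quadrature):

```python
import numpy as np
from scipy.integrate import quad

g = lambda X: np.exp(-X)  # exponential profile of the electron sheath

# zeroth and first moments of g over [0, infinity)
M0, _ = quad(lambda X: g(X), 0.0, np.inf)
M1, _ = quad(lambda X: X * g(X), 0.0, np.inf)

assert np.isclose(M0, 1.0) and np.isclose(M1, 1.0)
```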
In order for the improper integral in Eq.~\eqref{eq:PsiGeneral} to converge, $\int_0^\infty (J_x - \rho) \dd{y} = 0$ is required, which allows us to find
\begin{equation}
S_0(\xi) = \frac{S_\textup{i}(y_\mathrm{b}(\xi))}{\Delta},
\end{equation}
where the function
\begin{equation}
S_\textup{i}(y) = \int_0^y {\rho_\textup{i}(y') \dd{y'}}
\end{equation}
is determined by the plasma profile.
Therefore, the function $y_\mathrm{b}(\xi)$ fully determines the source $J_x - \rho$ if the properties of plasma and the electron sheath are postulated.
Knowing $J_x - \rho$, we can calculate $\Psi$ using Eq.~\eqref{eq:PsiGeneral}.
For $\abs{y} < y_\mathrm{b}$, the resulting wakefield potential is
\begin{equation}
\Psi(\xi, y) = \int_y^{y_\mathrm{b}} S_\textup{i}(y') \dd{y'} + \Delta S_\textup{i}(y_\mathrm{b}).
\end{equation}
According to Eqs.~\eqref{eq:FxGeneral}, \eqref{eq:FyGeneral}, the forces acting on relativistic particles in this potential are
\begin{align}
&F_x(\xi) = -\left(S_\textup{i}(y_\mathrm{b}) + \Delta \rho_\textup{i}(y_\mathrm{b}) \right)\dv{y_\mathrm{b}}{\xi},\\
&F_y(y) = -S_\textup{i}(y).
\end{align}
Similarly to the 3D axisymmetric case,\cite{Lu_2006_PoP_13_056709} the longitudinal force depends only on the longitudinal coordinate, while the transverse force depends only on the transverse coordinate.
As expected, the transverse force is always focusing for electrons.
However, the amplitude of this force is different in the 2D case.
For example, if we consider uniform plasma ($S_\textup{i}(y) = y$), the focusing force in the 2D geometry $F_y = -y$ remains linear but is two times larger than the force in the 3D geometry $F_r = - r / 2$.
This means that electrons in 2D simulations will experience a stronger focusing force than in corresponding 3D simulations.
As this force is responsible for the transverse betatron oscillations and resulting betatron radiation of electrons,\cite{Kostyukov_2003_PoP_10_124818} this change may significantly influence the spectrum of betatron radiation observed in simulations.
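The resulting factor-of-$\sqrt{2}$ change in the betatron frequency can be illustrated by directly integrating the transverse motion. The sketch below assumes, purely for illustration, an electron with a constant Lorentz factor $\gamma = 1000$ and neglects its acceleration:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1000.0  # assumed constant electron energy (illustrative)

def rhs(t, s, k):
    y, p_y = s                    # transverse coordinate and momentum
    return [p_y / gamma, -k * y]  # focusing force F_y = -k*y

# k = 1 in the 2D geometry, k = 1/2 in the 3D geometry (uniform plasma)
t = np.linspace(0.0, 100.0, 10001)
y2d = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0], args=(1.0,), t_eval=t, rtol=1e-9).y[0]
y3d = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0], args=(0.5,), t_eval=t, rtol=1e-9).y[0]

def first_zero(y):
    return t[np.argmax(y < 0)]    # time of the first zero crossing (~ T/4)

# the betatron period scales as sqrt(gamma/k), so 2D oscillations are sqrt(2) faster
assert np.isclose(first_zero(y3d) / first_zero(y2d), np.sqrt(2.0), rtol=1e-2)
```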
In order to find the longitudinal field $E_x(\xi)$ and the corresponding longitudinal force $F_x$, we need to know the shape of the bubble's boundary $y_\mathrm{b}(\xi)$.
As it is known from the previous 3D models,\cite{Lu_2006_PoP_13_056709, Thomas_2016_PoP_23_053108} this shape can be self-consistently found.
The corresponding calculations for the 2D case are described next.
\section{Equation for the bubble's boundary}
\label{sec:bubbleEquation}
As electrons move in the electron sheath around the bubble, the boundary of the bubble $y_\mathrm{b}(\xi)$ at the same time serves as the innermost electron trajectory.
Therefore, Eq.~\eqref{eq:electronTrajectory} for an arbitrary electron trajectory is valid for the boundary $y_\mathrm{b}(\xi)$ as well.
In order to use this equation, the values of the wakefield potential and its derivatives at $y = y_\mathrm{b}$ are required.
They are
\begin{align}
&\Psi(\xi, y_\mathrm{b}(\xi)) = \Delta S_\textup{i}(y_\mathrm{b}(\xi)),\\
&\pdv{\Psi}{y} (\xi, y_\mathrm{b}(\xi)) = - S_\textup{i}(y_\mathrm{b}(\xi)),\\
&\pdv{\Psi}{\xi} (\xi, y_\mathrm{b}(\xi)) = \left(S_\textup{i}(y_\mathrm{b}) + \Delta \rho_\textup{i}(y_\mathrm{b})\right) \dv{y_\mathrm{b}}{\xi}.
\end{align}
Also, the magnetic field $B_z(\xi, y_\mathrm{b})$ is needed; it can be calculated from Eq.~\eqref{eq:BzGeneral}
\begin{multline}
B_z(\xi, y_\mathrm{b}) = \int_0^{y_\mathrm{b}} {J_x(\xi, y')\dd{y'}} + \\
+ y_\mathrm{b} \left[(\rho_\textup{i} + \Delta \rho'_\textup{i}) \qty(\dv{y_\mathrm{b}}{\xi})^2 + (S_\textup{i} + \Delta \rho_\textup{i}) \dv[2]{y_\mathrm{b}}{\xi}\right].
\end{multline}
If we substitute all of these functions into Eq.~\eqref{eq:electronTrajectory}, we obtain the equation describing the boundary of the bubble
\begin{equation}
A(y_\mathrm{b}) \dv[2]{y_\mathrm{b}}{\xi} + B(y_\mathrm{b}) \qty(\dv{y_\mathrm{b}}{\xi})^2 + C(y_\mathrm{b}) = \lambda + L. \label{eq:bubbleEquationGeneral}
\end{equation}
This second-order ordinary differential equation shows how the boundary of the bubble $y_\mathrm{b}$ evolves taking into account sources $\lambda$ and $L$.
The coefficients in this equation are
\begin{align}
&A(y_\mathrm{b}) = 1 + S_\textup{i} y_\mathrm{b} + S_\textup{i} \Delta + \rho_\textup{i} y_\mathrm{b} \Delta,\label{eq:coeffA}\\
&B(y_\mathrm{b}) = y_\mathrm{b} \rho_\textup{i} + \frac{S_\textup{i}}{2} + y_\mathrm{b} \rho'_\textup{i} \Delta + \rho_\textup{i} \Delta,\\
&C(y_\mathrm{b}) = \frac{1 + (1 + \Delta S_\textup{i})^2}{2(1+ \Delta S_\textup{i})^2} S_\textup{i}.\label{eq:coeffC}
\end{align}
Here, $S_\textup{i} \equiv S_\textup{i}(y_\mathrm{b})$, $\rho_\textup{i} \equiv \rho_\textup{i}(y_\mathrm{b})$, $\rho_\textup{i}' \equiv \rho_\textup{i}'(y_\mathrm{b})$.
The coefficients are determined solely by the plasma profile $\rho_\textup{i}(r)$ and the width of the electron sheath $\Delta$.
Interestingly enough, the shape of the electron sheath $g(X)$ does not appear in this equation, unlike in the 3D case.
The sources on the right-hand side are
\begin{align}
&\lambda(\xi, y_\mathrm{b}) = -\int_0^{y_\mathrm{b}} {J_x(\xi, y')\dd{y'}},\\
&L(\xi, y_\mathrm{b}) = - \frac{1}{2(1+\Delta S_\textup{i})} \pdv{}{y} {\left\langle \vb{a}^2 \right\rangle} \bigg|_{y = y_\mathrm{b}}.
\end{align}
As relativistic electron bunches are the only source of the electric current $J_x$ inside the bubble, the first term $\lambda$ describes the influence of the electron driver and accelerated electrons on the shape of the bubble.
Correspondingly, the second term $L$ describes the action of the ponderomotive force of the laser pulse.
Therefore, Eq.~\eqref{eq:bubbleEquationGeneral} allows us to take into account both the driver (either a laser or an electron bunch) and the accelerated electrons when calculating the shape of the bubble.
Typically, a bubble is large compared to the width of the sheath $y_\mathrm{b} \gg \Delta$.
However, the width of the sheath is also usually sufficiently large so that $S_\textup{i} \Delta \gg 1$ (see Ref.~\onlinecite{Golovanov_2016_QE_46_295} for additional details for the 3D case).
For example, if uniform plasma is considered, these two conditions correspond to $y_\mathrm{b}^{-1} \ll \Delta \ll y_\mathrm{b}$.
Under those two conditions, the coefficients \eqref{eq:coeffA}--\eqref{eq:coeffC} are simplified, and Eq.~\eqref{eq:bubbleEquationGeneral} becomes
\begin{equation}
S_\textup{i} y_\mathrm{b} \dv[2]{y_\mathrm{b}}{\xi} + \left(\frac{S_\textup{i}}{2} + y_\mathrm{b} \rho_\textup{i} \right) \qty(\dv{y_\mathrm{b}}{\xi})^2 + \frac{S_\textup{i}}{2} = \lambda + L.
\label{eq:bubbleEquation}
\end{equation}
The longitudinal electric field can also be found from the shape of the bubble
\begin{equation}
E_x(\xi) \approx S_\textup{i}(y_\mathrm{b}(\xi)) \dv{y_\mathrm{b}}{\xi} (\xi).
\end{equation}
We assume that the center of the bubble, i.\,e. the point where it reaches its maximum transverse size, is located at $\xi = 0$, so that the initial conditions are
\begin{equation}
y_\mathrm{b}(0) = y_0, \quad \dv{y_\mathrm{b}}{\xi} (0) = 0,
\end{equation}
where $y_0$ is the maximum size of the bubble.
We also assume that there are no sources in the rear part of the bubble ($\xi > 0$), i.\,e. $\lambda = 0$, $L = 0$.
In this case, the solution to Eq.~\eqref{eq:bubbleEquation} for $\xi > 0$ can be found analytically similarly to the 3D case\cite{Golovanov_2016_PoP_23_093114}
\begin{equation}
\xi = \int_{y_\mathrm{b}(\xi)}^{y_0} \frac{\sqrt{y'} S_\textup{i}(y')\dd{y'}}{\sqrt{\int_{y'}^{y_0} S_\textup{i}^2(y'') \dd{y''} }}.
\end{equation}
This solution defines the function $y_\mathrm{b}(\xi)$ implicitly.
It makes it easy to find the half-length of a bubble $\xi_\textup{max}$ by setting $y_\mathrm{b}(\xi = \xi_\textup{max}) = 0$.
The electric field in this bubble is
\begin{equation}
E_x = - \sqrt{\frac{1}{y_\mathrm{b}(\xi)} \int_{y_\mathrm{b}(\xi)}^{y_0} S_\textup{i}^2(y') \dd{y'} }.
\end{equation}
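For uniform plasma ($\rho_\textup{i} = 1$, $S_\textup{i}(y) = y$), both expressions are straightforward to evaluate numerically. The sketch below (the maximum bubble size $y_0 = 5$ is an arbitrary illustrative choice) estimates the half-length of the bubble, which turns out to be about $1.29\,y_0$, and checks that near the center the field grows linearly with slope $1/2$:

```python
import numpy as np
from scipy.integrate import quad

y0 = 5.0  # assumed maximum bubble size, in units of c/omega_p (illustrative)

def xi_of_yb(yb):
    # xi = int_{yb}^{y0} sqrt(y) S_i(y) dy / sqrt(int_y^{y0} S_i^2(y') dy'), S_i(y) = y
    f = lambda y: np.sqrt(y) * y / np.sqrt((y0**3 - y**3) / 3.0)
    return quad(f, yb, y0)[0]

def Ex_of_yb(yb):
    return -np.sqrt((y0**3 - yb**3) / (3.0 * yb))

# half-length of the bubble, defined by y_b(xi_max) = 0
xi_max = xi_of_yb(0.0)
print(xi_max / y0)  # about 1.29

# near the center the field is approximately linear: E_x ~ -xi/2
yb = 4.9
assert np.isclose(Ex_of_yb(yb), -xi_of_yb(yb) / 2.0, rtol=0.05)
```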
If the plasma is uniform ($\rho_\textup{i}(y) = 1$, $S_\textup{i}(y) = y$) and there are no sources, Eq.~\eqref{eq:bubbleEquation} becomes
\begin{equation}
2 y_\mathrm{b} \dv[2]{y_\mathrm{b}}{\xi} + 3 \qty(\dv{y_\mathrm{b}}{\xi})^2 + 1 = 0.
\label{eq:bubbleEquationUniform}
\end{equation}
It can be compared to the equation for the 3D axisymmetric case (see Ref.~\onlinecite{Lu_2006_PoP_13_056709})
\begin{equation}
r_\mathrm{b} \dv[2]{r_\mathrm{b}}{\xi} + 2 \qty(\dv{r_\mathrm{b}}{\xi})^2 + 1 = 0. \label{eq:bubbleEquation3D}
\end{equation}
While the equation in the 3D case is close to the equation of a circle, Eq.~\eqref{eq:bubbleEquationUniform} resembles the equation of an ellipse $\sqrt{2}$ times longer in the longitudinal direction.
This can be shown by finding a solution to Eq.~\eqref{eq:bubbleEquationUniform} near the center of the bubble ($\xi=0$)
\begin{equation}
y_\mathrm{b} \approx y_0\left(1 - \frac{\xi^2}{4y_0^2}\right),
\end{equation}
which corresponds to an ellipse with semi-axes equal to $\sqrt{2}y_0$ and $y_0$.
However, the electric field in the 2D case
\begin{equation}
E_x \approx -\frac{\xi}{2}
\end{equation}
is exactly the same as in the 3D case.
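Both statements can be checked by integrating Eq.~\eqref{eq:bubbleEquationUniform} numerically; the following sketch (again with an arbitrary illustrative $y_0 = 5$) confirms the elliptical shape of the boundary near the center and the slope $-1/2$ of the longitudinal field:

```python
import numpy as np
from scipy.integrate import solve_ivp

y0 = 5.0  # assumed maximum bubble size, in units of c/omega_p (illustrative)

def rhs(xi, s):
    y, dy = s
    # 2*y*y'' + 3*(y')^2 + 1 = 0  =>  y'' = -(3*(y')^2 + 1) / (2*y)
    return [dy, -(3.0 * dy**2 + 1.0) / (2.0 * y)]

xi = np.linspace(0.0, 2.0, 201)
sol = solve_ivp(rhs, (0.0, 2.0), [y0, 0.0], t_eval=xi, rtol=1e-10)
y, dy = sol.y

# boundary close to an ellipse with semi-axes sqrt(2)*y0 and y0
ellipse = y0 * np.sqrt(1.0 - xi**2 / (2.0 * y0**2))
assert np.allclose(y, ellipse, atol=1e-2)

# longitudinal field E_x = S_i(y_b) * y_b' = y_b * y_b' is linear with slope -1/2
assert np.allclose(y * dy, -xi / 2.0, atol=0.05)
```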
\begin{figure}[tb]
\centering
\includegraphics[]{pics/fig1.eps}
\caption{
Electron density distribution in (a) a 2D bubble, (b) a 3D axisymmetric bubble driven by an electron bunch propagating to the right.
The dashed lines show the analytic solutions for the boundaries of the bubbles according to Eqs.~\eqref{eq:bubbleEquation} and \eqref{eq:bubbleEquation3D}, respectively.
The dotted line in (a) shows the analytic solution for a 3D axisymmetric bubble for comparison.
All lengths are normalized to $c/\omega_\textup{p} = \lambda_\textup{p} / 2\pi$.
}
\label{fig:bubble2Dvs3D}
\end{figure}
The behavior described above can be observed in particle-in-cell (PIC) simulations.
To demonstrate that, we carried out two-dimensional simulations using the Smilei PIC code. \cite{Smilei, Derouillat_2018_CPC_222_351}
In these simulations, we used an electron bunch driver with the energy of electrons equal to \SI{2}{\GeV}, the maximum charge density of $25n_\textup{p}$, and the longitudinal and the transverse sizes of $2\lambda_\textup{p}$ and $0.1\lambda_\textup{p}$, respectively.
It excited a wakefield in the strongly nonlinear (bubble) regime in uniform plasma.
In Fig.~\ref{fig:bubble2Dvs3D}(a), the resulting electron density distribution in the wakefield and the analytic solution for the bubble's boundary calculated using Eq.~\eqref{eq:bubbleEquation} for the uniform plasma are shown.
For comparison, the analytic solution for the 3D case is drawn with a dotted line.
It is evident that the shape of the bubble in the 2D geometry is closer to an ellipse stretched in the longitudinal direction than to a circle.
For reference, Fig.~\ref{fig:bubble2Dvs3D}(b) demonstrates a typical spherical bubble of a similar size in the 3D geometry.
The 3D simulations were also performed with the Smilei PIC code.
An electron bunch with the maximum charge density of $40n_\textup{p}$ and longitudinal and transverse sizes of $1.6 \lambda_\textup{p}$ and $0.4 \lambda_\textup{p}$ was used to excite the wakefield in this case.
\begin{figure}[tb]
\centering
\includegraphics[]{pics/fig2.pdf}
\caption{
Longitudinal electric fields $E_x$ in the bubbles shown in Fig.~\ref{fig:bubble2Dvs3D}.
The dashed lines correspond to the analytical solutions.
The dotted line in (a) shows the analytically calculated electric field in the 3D axisymmetric bubble for comparison.
}
\label{fig:Ex2Dvs3D}
\end{figure}
The corresponding longitudinal electric fields in the simulations and their comparison to the respective 2D and 3D analytical models are shown in Fig.~\ref{fig:Ex2Dvs3D}.
The simulations support the analytical finding that the dependence of the electric field on the longitudinal coordinate is predominantly linear, and the coefficient of this dependence for uniform plasma is the same for 2D and 3D geometries and is equal to $1/2$.
Figs.~\ref{fig:bubble2Dvs3D} and \ref{fig:Ex2Dvs3D} both show that the developed analytic model fairly accurately describes the bubble observed in the simulations.
The differences occur only at the front and rear edges of the bubble, where the assumption that the transverse size of the bubble is large becomes invalid.
Compared to the 3D geometry, a 2D bubble is elongated in the longitudinal direction.
However, the properties of the longitudinal electric field remain the same: it does not depend on the transverse coordinate, is predominantly linear close to the center of the bubble, and its gradient in the uniform plasma is the same as in the 3D case.
This similarity is very important, as the dephasing length, the maximum energy, and the spectra of electrons are determined predominantly by this field.
It might indicate that the resulting properties of the accelerated electron bunches should be qualitatively similar in the 2D simulations compared to the full 3D ones.
\section{Quasi-2D bubble in 3D PIC simulations}
\label{sec:quasi2d}
\begin{figure}[tb]
\centering
\includegraphics[]{pics/fig3.eps}
\caption{
Electron density distribution in the $xy$ and $xz$ planes in a bubble excited by a disk-like electron bunch whose transverse sizes along $y$ and $z$ differ significantly.
The dashed line shows the analytic solution for the two-dimensional bubble in uniform plasma according to Eq.~\eqref{eq:bubbleEquation}.
All coordinates are normalized to $\lambda_\textup{p}/2\pi$.
}
\label{fig:bubble2d3d}
\end{figure}
In the 3D space, a 2D bubble corresponds to a driver with an infinite size along one transverse direction.
Therefore, it should be possible to create a quasi-2D bubble in the three-dimensional space by using a disk-like driver with one of the transverse sizes significantly exceeding the other.
As an example, a bubble excited by an electron bunch with the maximum charge density of $25 n_\textup{p}$, the longitudinal size of $\lambda_\textup{p}$, and the transverse sizes of $0.1\lambda_\textup{p}$ and $6.4\lambda_\textup{p}$ along the $y$ and $z$ directions, respectively, is shown in Fig.~\ref{fig:bubble2d3d}.
These parameters correspond to the 2D bubble shown in Fig.~\ref{fig:bubble2Dvs3D}(a).
As the comparison to Fig.~\ref{fig:bubble2Dvs3D}(a) as well as the comparison to the analytical solution (the dashed line in Fig.~\ref{fig:bubble2d3d}) shows, the bubble indeed has the same properties as the 2D bubble in the $xy$ plane.
In the $xz$ plane (corresponding to the plane of the disk-like electron bunch) the bubble has approximately the same size as the driver.
\begin{figure}[tb]
\centering
\includegraphics[]{pics/fig4.pdf}
\caption{
The longitudinal electric field $E_x$ on the axis of the bubble and the transverse forces $F_y$ and $F_z$ at $x = 16.5$ in the bubble shown in Fig.~\ref{fig:bubble2d3d}.
The dashed lines correspond to the analytic solutions.
}
\label{fig:fields2d3d}
\end{figure}
The longitudinal electric field and the transverse forces in this bubble are shown in Fig.~\ref{fig:fields2d3d}.
For comparison, the field and the forces predicted by our 2D model are also plotted with the dashed lines.
The comparison shows that the 2D model correctly describes the fields in the bubble.
Obviously, in a 2D bubble of infinite size in the $z$ direction, the transverse force $F_z = 0$.
However, in a quasi-2D bubble, this component is also present.
It is linear in the $z$ direction and focusing for electrons; its gradient is significantly smaller than the gradient of $F_y$.
Therefore, this force should correspond to long-period betatron oscillations in the $z$ direction.
\section{Discussion and conclusions}
We developed a phenomenological model describing the bubble regime of plasma wakefield in the 2D geometry.
In this regime, the influence of the driver (a laser pulse or a relativistic electron bunch) leads to the formation of a cavity free of plasma electrons behind it.
The model is similar to the previous 3D models and is based on the assumption that no plasma electrons are present inside the bubble.
At the same time, there is a thin electron layer on its boundary.
In the scope of the model, we obtained a differential equation describing the boundary and analytically solved it in the absence of sources.
The predictions of the model were verified by 2D PIC simulations and showed good correspondence to their results.
In addition, we showed that it is possible to generate a quasi-2D bubble using a disk-like electron bunch in the realistic 3D geometry.
The properties of such a bubble correspond to a bubble observed in 2D PIC simulations.
As 2D simulations are sometimes used as a substitute for more computationally expensive full 3D simulations,
the most interesting result of the model is the difference in the accelerating and focusing forces in the 2D model compared to a realistic 3D bubble.
The comparison was done both analytically and numerically.
The results show that a bubble in 2D geometry is elongated in the longitudinal direction compared to an almost spherical bubble in the 3D case.
However, the structure of the forces acting on the electrons inside the bubble remains virtually the same.
The accelerating force is predominantly linear in the longitudinal direction and does not depend on the transverse coordinate; its gradient in uniform plasma is exactly the same as in the 3D case.
The transverse force is also linear and depends only on the transverse coordinate, but its amplitude is two times larger than in the 3D case.
This should significantly affect betatron oscillations and the spectrum of betatron radiation.
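As a back-of-the-envelope illustration (our addition, not part of the original analysis): for a linear focusing force $F_y = -k y$ acting on an electron with Lorentz factor $\gamma$, the betatron frequency scales as $\sqrt{k/\gamma}$, so a twofold larger transverse gradient raises the betatron frequency by a factor of $\sqrt{2}$:

```python
import math

def betatron_frequency(gradient, gamma):
    """Betatron frequency (arbitrary units) for a linear focusing
    force F = -gradient * y on an electron with Lorentz factor gamma:
    omega_beta ~ sqrt(gradient / gamma)."""
    return math.sqrt(gradient / gamma)

gamma = 1000.0      # illustrative Lorentz factor (assumption)
k_3d = 1.0          # transverse force gradient in 3D (arbitrary units)
k_2d = 2.0 * k_3d   # the 2D gradient is twice as large (see text)

ratio = betatron_frequency(k_2d, gamma) / betatron_frequency(k_3d, gamma)
print(ratio)        # approximately 1.414, i.e. sqrt(2)
```

The same scaling shifts the betatron radiation spectrum, which depends on the oscillation frequency.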
Of course, the difference in the wakefield structure is not the only difference introduced by the use of the 2D geometry, as self-focusing of the laser pulse and self-injection and trapping of electrons significantly change as well \cite{Tsung_2006_PoP_13_056708}.
All such differences should be considered when making conclusions from 2D simulations.
\begin{acknowledgements}
The work has been supported by the Russian Science Foundation through Grant No.\,16-12-10383.
\end{acknowledgements}
\section*{References}
\bibliographystyle{aipnum4-1}
\section{Introduction}
\IEEEPARstart{T}{he} role of computers and robots in our modern society keeps expanding and diversifying. As humanity breaks the technological barriers of the past, daily activities become more and more assisted by a growing number of operations based on the interaction between humans and computers. The development of more sophisticated systems frequently leads to complicated interaction schemes that hinder their usage.
In order to democratize decision-making machines, straightforward ways of interaction need to be developed which imitate the relationship between humans \cite{zlotowski2015anthropomorphism}. A convenient way for a human to interact with machines can be achieved by means of natural dialogue. Such examples that are already implemented on virtual assistants are known as interactive conversational systems \cite{kepuska2018next}. Computer vision techniques are often applied in this field, e.g., for face and emotion recognition \cite{kansizoglou2019active}\cite{efremova2019face}, 3D face mesh representation for augmented reality applications \cite{kartynnik2019real}, action recognition \cite{vemulapalli2014human}, and finally, human body pose estimation \cite{cao2018openpose}\cite{zhang2019fast} and hand pose estimation \cite{chen2019pose}\cite{yuan2018depth}.
Human hand pose estimation is a long-standing problem in the computer vision and graphics research fields, with a plethora of applications such as machine control, or augmented and virtual reality \cite{jang20153d}\cite{piumsomboon2013user}\cite{fang2015robotic}. Due to its importance, numerous solutions have been proposed in the related literature, with one of the most common being based on accurate 2D keypoint localization \cite{rehg1994visual}.
\begin{figure*}
\centering
\includegraphics[totalheight=0.22\textwidth ]{fig/dense}
\caption{Dense Block with growth rate \textit{k} \cite{huang2017densely}.}
\captionsetup{justification=centering}
\label{fig:denseblock}
\end{figure*}
Despite the recent advances in the field of deep neural networks, this topic is still considered a challenging problem that remains to be completely solved. Properties such as the hand's morphology, occlusions due to interaction with objects, appearance diversity due to clothing and jewelry, varying lighting conditions and different backgrounds add extra burden to the nature of the problem. Nevertheless, unlike the human body or face, hands have an almost uniform shape and lack local characteristics. In addition, because a hand has several tens of joints, there is a myriad of different arbitrary poses. Thus, it becomes critical to localize more than 20 keypoints in each hand \cite{boukhayma20193d}\cite{iqbal2018hand} in order to accurately estimate its pose and use it as an input device.
In the paper at hand, we propose a computationally inexpensive CNN architecture for direct 2D hand pose estimation. Our contribution lies in the following novelties:
\begin{itemize}
\item The presentation of a single-stage end-to-end 2D hand pose estimation CNN architecture, which directly regresses the coordinates of the hand's keypoints from a single RGB image, without depending on the traditional two-stage pipeline\cite{newell2016stacked}\cite{wei2016convolutional}.
\item The introduction of a novel and notably efficient block, dubbed as \textit{Attention Augmented Inverted Bottleneck Block}, the performance of which is thoroughly assessed.
\item The design of a remarkably lightweight architecture based on the proposed block, robust to input shifts.
\item We show that the exploitation of a self-attention mechanism \cite{bello2019attention}, combined with traditional convolutional layers, outperforms other computationally demanding state-of-the-art methods.
\end{itemize}
We evaluate our approach using various contemporary challenging datasets, which include images in-the-wild, occlusions and hand-object interactions, and compare the achieved performance to other state-of-the-art methods.
The paper is organized as follows. Section \ref{Related Work} reviews related work, Section \ref{Method} describes our method, Section \ref{Evaluation} presents the experimental results and finally, in Section \ref{Conclusions}, conclusions are drawn.
\section{Related Work}\label{Related Work}
Vision-based hand pose estimation has recently made significant progress. A vast amount of approaches uses Convolutional Neural Networks (CNNs) as a basis, due to their profound capability of extracting features from a given input. CNNs successfully perform 2D body pose estimation by classifying whether or not a body's joint is present in each pixel \cite{newell2016stacked}\cite{wei2016convolutional}.
The proposed methods, also known as Convolutional Pose Machines (CPM), force a CNN to generate a set of heat maps, each of which is expected to have its maximum activation value in the pixel that contains the corresponding keypoint. However, to refine the outcome, this procedure is applied iteratively upon the generated heat maps. Furthermore, the majority of hand pose estimation methods are also based upon the same approach \cite{boukhayma20193d}\cite{iqbal2018hand}, which leads to computationally expensive networks and complicated system architectures.\par
Another line of work aims to directly map the input image to the keypoints' coordinates on the plane or to a specific frame of reference for 2D and 3D pose estimation, respectively, known as holistic regression \cite{tekin2016direct}\cite{li20143d}. The abovementioned approach does not have to generate intermediate representations (pixel-wise classification), while also preserving the ability to understand global constraints and correlations between keypoints. However, it is claimed that holistic regression is not able to generalize and that translational variance diminishes the predicted results \cite{wan2018dense}. Despite those reservations, the proposed work is based on such a technique, proving its capabilities when combined with a proper anti-aliasing filter and a robust feature extractor.
\section{Method}\label{Method}
In this section, we describe the structure of the proposed architecture and the key ingredients for estimating a hand's 2D keypoint coordinates, given a single RGB image. Towards a solution for this challenge, we make use of a feed-forward CNN architecture that directly produces the coordinates in a single stage, without intermediate supervision. The network's architecture comprises two parts: the \textit{stem} and the rest, from now on dubbed as the \textit{tail}.
\subsection{Network's architecture}
The presented architecture is based on the successful idea of \textit{DenseNets} \cite{huang2017densely}. In a \textit{DenseNet}, each layer obtains additional inputs from all preceding ones and propagates its own feature-maps to all subsequent layers by a channel-wise concatenation, as shown in \figurename{ \ref{fig:denseblock}}. In such a way, this structure receives a ``collective knowledge'' from all previous layers.
To keep the total number of parameters as low as possible, we were inspired by a popular building unit, viz. the \textit{Inverted residual block}, which is a highly efficient feature extractor, designed especially for mobile use \cite{sandler2018mobilenetv2}. The replacement of the standard convolutional layer by depthwise separable ones offers a computation reduction by a factor:
\begin{equation}
k_{f}^{2}\cdot d_{o}/(k_{f}^{2}+d_{o}),
\end{equation}
where ${k_{f}}$ equals the kernel's size, and \textit{$d_{o}$} equals the output depth size. The first convolutional layer expands the depth size by an \textit{e} factor while the last squeezes it by dividing the input's depth size by the same factor. Here, $\textit{e}= 4$.
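To make the reduction factor concrete, the following sketch (with an illustrative layer shape of our choosing, not taken from the paper) counts multiply-accumulate operations for a standard convolution and for a depthwise separable one, and checks that their ratio equals $k_{f}^{2}\cdot d_{o}/(k_{f}^{2}+d_{o})$:

```python
def standard_conv_macs(h, w, d_in, d_out, k):
    # Multiply-accumulates of a standard k x k convolution.
    return h * w * d_in * d_out * k * k

def separable_conv_macs(h, w, d_in, d_out, k):
    # Depthwise k x k convolution followed by a 1 x 1 pointwise one.
    return h * w * d_in * k * k + h * w * d_in * d_out

# Illustrative layer shape (assumption, not from the paper).
h, w, d_in, d_out, k = 56, 56, 64, 128, 3

measured = standard_conv_macs(h, w, d_in, d_out, k) / separable_conv_macs(h, w, d_in, d_out, k)
predicted = k ** 2 * d_out / (k ** 2 + d_out)
print(measured, predicted)  # the two reduction factors coincide
```

For this shape the saving is roughly a factor of eight, which is what makes the inverted residual block attractive for lightweight models.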
\subsubsection{Stem}
For the \textit{stem}, we use a number of \textit{dense blocks} which, unlike the original design, contain an \textit{inverted residual block}. According to \cite{huang2017densely}, architectures with concatenated skip-connections maintain more information, since they allow subsequent layers to reuse intermediate representations, which in turn leads to increased performance.
A significant difference from the original block regarding its non-linearity is that we use the recently proposed \textit{Mish} activation function \cite{misra2019mish}. \textit{Mish}, unlike ReLU, is a smooth non-monotonic activation function which is defined as:
\begin{equation}
f(x)=x \cdot \tanh(\ln(1+e^{x})).
\end{equation}
As mentioned in \cite{misra2019mish}, \textit{Mish} demonstrates better results than both
Swish \cite{ramachandran2017swish} and ReLU for classification tasks. After extensive experimentation with both Swish and ReLU, we confirmed the above behaviour for the regression task at hand.
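For reference, \textit{Mish} takes only a few lines to implement; the sketch below (ours) evaluates the formula above through a numerically stable softplus:

```python
import math

def softplus(x):
    # Numerically stable ln(1 + e^x), avoiding overflow for large |x|.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def mish(x):
    # Mish activation: f(x) = x * tanh(ln(1 + e^x)).
    return x * math.tanh(softplus(x))

print(mish(0.0))    # 0.0: the function passes through the origin
print(mish(-10.0))  # close to 0: negative inputs are softly gated
print(mish(10.0))   # close to 10: large positive inputs pass almost unchanged
```

Unlike ReLU, small negative inputs yield small negative outputs, which is the source of the non-monotonic dip.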
\subsubsection{Blur Pooling}
As it is widely known, many modern CNNs perform some sort of downsampling. A common practice for sub-sampling feature maps between convolutional layers is using either a pooling operation or a strided convolution. In \cite{simoncelli1992shiftable}, it was explicitly discussed that a system based on both operations of convolution and sub-sampling lacks translation invariance, unless the translation is a multiple of each of the sub-sampling factors. Otherwise, sub-sampling creates aliasing that undermines the output. This property affects CNNs as well, since small spatial image transformations can lead to significant accuracy degradation \cite{engstrom2019exploring}\cite{azulay2018deep}. As stated in \cite{zhang2019making}, a feature extractor function $F \in \mathbb{R}^{H\times W \times C}$ is shift-equivariant when shifting the input equally shifts the output, making shifting and feature extraction commutable:
\begin{equation}
Shift_{\Delta h,\Delta w}(F(X)) = F(Shift_{\Delta h,\Delta w}(X))\qquad \forall(\Delta h, \Delta w).
\end{equation}
Furthermore, a representation is shift-invariant if shifting the inputs results in an identical representation:
\begin{equation}
F(X) = F(Shift_{\Delta h,\Delta w}(X))\qquad \forall(\Delta h, \Delta w).
\end{equation}
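A minimal one-dimensional sketch (our addition, not from the paper) makes the loss of shift-equivariance concrete: strided max pooling of a signal and of its one-sample shift produce different outputs, so pooling and shifting do not commute.

```python
def max_pool_1d(x, kernel=2, stride=2):
    # Plain strided max pooling over a 1D signal.
    return [max(x[i:i + kernel]) for i in range(0, len(x) - kernel + 1, stride)]

def shift_right(x, n=1):
    # Circular shift of the signal by n samples.
    return x[-n:] + x[:-n]

x = [0, 1, 1, 0, 0, 1, 1, 0]
print(max_pool_1d(x))               # [1, 1, 1, 1]
print(max_pool_1d(shift_right(x)))  # [0, 1, 0, 1]: one sample of shift changes the output
```

The same effect occurs in 2D feature maps, which motivates the anti-aliased pooling described next.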
Regular pooling methods break shift-equivariance. To overcome this issue, we propose the adaptation of an anti-aliasing filter, which is convolved with the feature maps \cite{zhang2019making} with stride 2 to reduce the spatial resolution. The method provides the ability to choose between different sizes of kernels, producible by a box filter. The following equations define the anti-aliasing filter $Filt$:
\begin{equation}
B_{n}[x] =\left\{
\begin{array}{ll}
1, & \mbox{for } 0 \leq x < n, \\
0, & \mbox{ } otherwise,
\end{array}
\right.
\end{equation}
\begin{equation}
Box_{m} = B_{n}*B_{n},
\end{equation}
\begin{equation}
Filt = Box_{m} \otimes Box_{m},
\end{equation}
where $\otimes$ denotes the outer product, $n \in \mathbb{N}^{*}$, $x \in \mathbb{Z}$ and \mbox{$m= 2n-1$}.
In our case, the utilized anti-aliasing filter $Filt$ uses $n=2$.
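The construction of $Filt$ takes only a few lines (our sketch; the normalization to unit sum is common practice and our addition): for $n=2$ the result is the familiar $3\times3$ binomial kernel built from $[1,2,1]$.

```python
def box(n):
    # 1D box filter B_n of length n.
    return [1] * n

def convolve(a, b):
    # Full discrete convolution of two 1D sequences.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def blur_filter(n):
    # Filt = (B_n * B_n) outer (B_n * B_n), normalized to unit sum.
    tri = convolve(box(n), box(n))  # e.g. [1, 2, 1] for n = 2
    total = sum(tri) ** 2
    return [[u * v / total for v in tri] for u in tri]

print(blur_filter(2))
# [[0.0625, 0.125, 0.0625], [0.125, 0.25, 0.125], [0.0625, 0.125, 0.0625]]
```

Applying this kernel with stride 2 blurs before sub-sampling, suppressing the aliasing discussed above.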
\subsubsection{Attention Augmented Inverted Bottleneck Block}
Attention mechanisms enable a neural network to focus more on relevant elements of the input than on irrelevant parts. Visual attention is one of the most influential ideas in the deep learning research field. Attention mechanisms, and especially self-attention, are powerful building blocks for processing not only text but also images. Many visual attention mechanisms have been proposed to enhance the convolutions' already proven performance \cite{vaswani2017attention}\cite{bello2019attention}.
The general idea is that, given a query and a set of key elements, the attention mechanism aggregates, w.r.t.\ the trainable parameters, the resemblance between key-query pairs. Multiple attention functions provide the ability to attend to multiple representation subspaces and spatial positions. Finally, each head's output is linearly aggregated with learnable weights \cite{zhu2019empirical}. Our work was inspired by a design proposed in \cite{zhu2019empirical}, in which a self-attention mechanism enfolds a standard residual block. More specifically, we implement an Attention Augmented Convolutional layer \cite{bello2019attention}, which embeds an \textit{inverted bottleneck block}, by adding its output to the product of the Depthwise Separable Convolutional layer, as shown in \figurename{ \ref{fig:aaibl}}.
A self-attention mechanism achieves better results when combined with convolutional layers \cite{bello2019attention}. In practice, a self-attention module uses three sets of learnable parameters $W^{Q}, W^{K}, W^{V},$ where $Q,K,V$ stand for \textit{Query}, \textit{Key} and \textit{Value}, respectively. According to \cite{vaswani2017attention}, an input tensor $ T \in \mathbb{R}^{H\times W \times F_{in}} $, is flattened to a matrix $X \in \mathbb{R}^{HW\times F_{in}}$ and then forwarded to the Transformer attention architecture. Since it has been found beneficial to apply self-attention multiple times, Eq. \ref{eq:1} is applied once for each attention head, producing $O_{[1,.,h]}$ outputs, where $ h\in \mathbb{N}^{*}$.
\begin{figure}
\centering
\includegraphics[totalheight=0.7\linewidth ]{fig/inv_aa}
\caption{Our proposed Attention Augmented Inverted Bottleneck Layer.}
\label{fig:aaibl}
\end{figure}
\begin{equation}\label{eq:1}
O_{h}=Softmax\left(\frac{(XW^{Q})(XW^{K})^{T}}{\sqrt{d_{k}^{h}}}\right)(XW^{V}).
\end{equation}
Here, $W^{Q}, W^{K} \in \mathbb{R}^{F_{in} \times d_{k}^{h}}$ and $W^{V} \in \mathbb{R}^{F_{in}\times d_{v}^{h}}$ are learnable projection matrices.
The output of each head is then concatenated with the remaining, forming the Multihead Attention mechanism.
\begin{equation}
MHA(X)=Concat\left[ O_{1},..,O_{h}\right] W^{O},
\end{equation}\\
where $W^{O} \in \mathbb{R}^{d_{v}\times d_{x}}$ is a trainable matrix which linearly transforms the aggregated output of each head. We refer to the \textit{Values'} depth as $d_{v}$, \textit{Queries'} depth as $d_{k}$ and the number of heads as $N_{h}$.
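As an aside, the attention computation of Eq.~\eqref{eq:1} is compact enough to sketch in plain Python (our toy example with made-up matrices; real implementations operate on batched tensors):

```python
import math

def matmul(a, b):
    # Naive matrix product, sufficient for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax_rows(m):
    # Row-wise softmax with the usual max-subtraction for stability.
    out = []
    for row in m:
        mx = max(row)
        e = [math.exp(v - mx) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

def attention_head(Q, K, V, d_k):
    # O_h = softmax(Q K^T / sqrt(d_k)) V, the single-head attention of Eq. (eq:1).
    Kt = [list(col) for col in zip(*K)]
    scores = [[v / math.sqrt(d_k) for v in row] for row in matmul(Q, Kt)]
    return matmul(softmax_rows(scores), V)

# Toy inputs: 3 "pixels", key/value depth 2 (illustrative numbers).
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
O = attention_head(Q, K, V, d_k=2)
print([len(O), len(O[0])])  # [3, 2]: one value-depth vector per pixel
```

In the multi-head case, several such outputs are concatenated and linearly mixed by $W^{O}$, as in the equation above.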
\begin{table}
\renewcommand*{\arraystretch}{1.4}
\caption{{{Network's Architecture}. The growth rate is \textit{k}=10}.}
\label{table:arch}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc}
\hline
\textbf{Layers} & \textbf{Output Size} & \textbf{Architecture} \\
\hline
\multicolumn{1}{l}{Dense Block (1)} & 224$\times$224 & {[}Inverted bottleneck layer] $\times$8 \\
\multicolumn{1}{l}{Transition Layer} & 112 $\times$ 112 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 64\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
\multicolumn{1}{l}{Dense Block (2)~} & 112 $\times$ 112 & {[}Inverted bottleneck layer] $\times$8 \\
\multicolumn{1}{l}{Transition Layer} & 56 $\times$ 56 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 64\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
Dense Block (3) & 56 $\times$ 56 & {[}Attention Augmented Inverted bottleneck layer] $\times$6 \\
Transition Layer & 28 $\times$ 28 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 64\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
Dense Block (4) & 28 $\times$ 28 & {[}Attention Augmented Inverted bottleneck layer] $\times$8 \\
Transition Layer & 14 $\times$ 14 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 64\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
Dense Block (5) & 14 $\times$ 14 & {[}Attention Augmented Inverted bottleneck layer] $\times$10 \\
Transition Layer & 7 $\times$ 7 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 64\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
Dense Block (6) & 7 $\times$ 7 & {[}Attention Augmented Inverted bottleneck layer] $\times$12 \\
Transition Layer & 4 $\times$ 4 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 128\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
Dense Block (7) & 4 $\times$ 4 & {[}Attention Augmented Inverted bottleneck layer] $\times$14 \\
Transition Layer & 2 $\times$ 2 & \begin{tabular}[c]{@{}c@{}}1 $\times$ 1 conv $\times$ 128\\3 $\times$ 3 BlurPool, s2\end{tabular} \\
Dense Block (8) & 2 $\times$ 2 & {[}Attention Augmented Inverted bottleneck layer] $\times$32 \\
AA-Bottleneck & 2 $\times$ 2 & {[}Attention Augmented Inverted bottleneck layer] $\times$1 \\
& 1 $\times$ 1 & 2 $\times$ 2 Average Pooling, s2 \\
& 1 $\times$ 1 $\times$ 42 & 1 $\times$ 1 conv $\times$ 42 \\
\hline
\end{tabular}}
\end{table}
\begin{figure}[!t]
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{fig/pck.pdf}
\caption{{ PCK curves on MPII+NZSL testing set}.}
\label{grapha}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{fig/pckh.pdf}
\caption{{ PCKh curves on MPII+NZSL testing set}.}
\label{graphb}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{fig/pck_all.pdf}
\caption{{ PCK curves of our method on different datasets}.}
\label{graphc}
\end{subfigure}
\caption{Performance Evaluation.}
\label{evaluation}
\end{figure}
\begin{figure*}[!th]
\centering
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/1/1.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/1/2.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/1/3.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/1/4.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/1/5.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/1/6.jpg}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/2/1.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/2/2.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/2/3.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/2/4.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/2/5.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/2/6.jpg}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/3/1.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/3/2.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/3/3.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/3/4.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/3/5.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/3/6.jpg}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/4/1.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/4/2.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/4/3.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/4/4.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/4/5.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/4/6.jpg}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/5/1.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/5/2.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/5/3.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/5/4.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/5/5.jpg}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/heatmaps/5/6.jpg}
\end{subfigure}
\caption{Representative feature maps of our proposed Attention Augmented Inverted Bottleneck Layer.}
\label{heatmaps}
\end{figure*}
An inherent characteristic of self-attention is that it is equivariant to an input's reordering. This essentially means that any spatial information is not maintained, which is prohibitive for vision tasks due to the structured nature of the images. To alleviate the limitation, a trainable \textit{positional encoding} is assigned to each pixel of the image. The relative position of both width and height, between each \textit{Query} and \textit{Key} pixel, is represented by two matrices that contain a \textit{relative position embedding} for every pixel pair. The relationship's strength between two pixels $i,j$ is computed as:
\begin{equation}
l_{i,j}= \frac{q_{i}^{T}} {\sqrt{d_{k}^{h}}} (k_{j}+r_{j_{x}-i_{x}}^{W}+r_{j_{y}-i_{y}}^{H}),
\end{equation}
where $q_{i}$ and $k_{j}$ are the \textit{Query} and \textit{Key} vectors for pixels $i,j$, while $r_{j_{x}-i_{x}}^{W}$ and $r_{j_{y}-i_{y}}^{H}$ are learned embeddings for relative width and height, respectively.
Each attention head enhanced by \textit{relative position embeddings} becomes:
\begin{equation}
O_{h}=Softmax\left(\frac{(XW^{Q})(XW^{K})^{T}+S^{rel}_{H}+S^{rel}_{W}}{\sqrt{d_{k}^{h}}}\right)(XW^{V}),
\end{equation}
where $S^{rel}_{H}$, $S^{rel}_{W} \in \mathbb{R}^{HW\times HW}$ are matrices of \textit{relative positional embeddings} for every pixel pair.
As previously mentioned, this type of visual attention has the ability to attend to feature subspaces and spatial positions simultaneously, both due to the attention mechanism that introduces additional feature maps and due to the convolution operator. The last part of the Attention Augmented Convolution integration is the concatenation of the convolutional operator's and the Multihead Attention's outputs.
\begin{equation}
AAC(X)=concat\left[ Conv(X),MHA(X)\right].
\end{equation}
We denote by $u=\frac{d_v}{F_{out}}$ the ratio between the attention depth size and the output depth size, and by $\kappa = \frac{d_{k}}{F_{out}}$ the ratio of the key depth over the output depth.
For the network's \textit{tail}, we recurrently employ \textit{dense blocks} that contain the \textit{Attention Augmented Inverted Bottleneck layer}, in a manner similar to the one proposed for the \textit{stem}.
\subsubsection{Downsampling}
To downsample the feature maps between \textit{dense blocks}, a \textit{transition layer} is used, which comprises a pointwise convolutional layer for depth reduction, a Blur Pooling filter with stride 2 and finally, batch normalization.
\subsection{Training}
During training, \textit{Cyclical Learning Rate} \cite{smith2017cyclical} with the triangular policy was used with the \textit{Stochastic Gradient Descent} optimizer. The selected hyper-parameters are $stepsize=6$, a minimum learning rate of $10^{-4}$ and a maximum learning rate of $10^{-1}$. The \textit{batch size} equals 256, and the training was executed using \textit{Tensor Processing Units} (TPUs) on the cloud, provided by Google. Finally, a mixed-precision training policy was used by exploiting both 16-bit (bfloat16) and 32-bit (float32) floating-point types \cite{micikevicius2017mixed}. This practice resulted in memory savings, which in turn led to a greater batch size, a smaller model size and faster execution time. Table \ref{table:arch} explicitly shows the model's architecture, totaling just \textit{1.9M} parameters and \textit{7.1 Million} FLOPs in terms of computational demands; the model was developed using the TensorFlow library \cite{abadi2016tensorflow}.
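For reproducibility, the triangular cyclical learning rate policy can be sketched as follows (our rendition of \cite{smith2017cyclical}; interpreting $stepsize$ directly in schedule iterations is an assumption):

```python
import math

def triangular_lr(iteration, step_size, base_lr, max_lr):
    # Triangular cyclical learning rate policy (Smith, 2017):
    # the rate ramps linearly from base_lr to max_lr and back,
    # completing one full cycle every 2 * step_size iterations.
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# Hyper-parameters from the text: step_size = 6, lr in [1e-4, 1e-1].
schedule = [triangular_lr(i, 6, 1e-4, 1e-1) for i in range(13)]
print(schedule[0], schedule[6], schedule[12])  # base -> max -> base
```

One cycle therefore spans twelve schedule steps with the chosen $stepsize$ of 6.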
\begin{table}[!b]
\renewcommand*{\arraystretch}{1.3}
\centering
\small
\caption{{Quantitative performance results}.}
\label{table:results}
\begin{tabular}{cccc}
\hline
& \multirow{2}{*}{\textbf{AUC }} & \multicolumn{2}{c}{\textbf{EPE(px) }} \\
& & \textbf{Mean } & \textbf{Median } \\
\hline
\multicolumn{4}{c}{MPII+NZSL Dataset} \\
\hline
Zimm. et al. (ICCV 2017) & 0.17 & 59.4 & - \\
Bouk. et al. (CVPR 2019) & 0.50 & 18.95 & - \\
\textbf{Ours } & \textbf{0.55 } & \textbf{16.1 } & \textbf{11 } \\
\hline
\multicolumn{4}{c}{LSMV Dataset} \\
\hline
Gomez-Donoso et al. & - & 10 & - \\
Li et al. & - & 8 & - \\
\textbf{Ours } & 0.89 & \textbf{3.3 } & 2.5 \\
\hline
\multicolumn{4}{c}{Stereo Hand Pose Dataset} \\
\hline
Zimm et al. (ICCV 2017) & 0.81 & 5 & 5.5 \\
\textbf{Ours } & \textbf{0.92 } & \textbf{2.2 } & \textbf{1.8 } \\
\hline
\multicolumn{4}{c}{HO-3D Dataset} \\
\hline
Ours & 0.87 & 3.9 & 3.3 \\
\hline
\multicolumn{4}{c}{FreiHand Dataset} \\
\hline
Ours & 0.87 & 4 & 3.1 \\
\hline
\end{tabular}
\end{table}
\begin{table*}[!h]
\renewcommand*{\arraystretch}{1.3}
\centering
\caption{{Different architectures utilized in the ablation study}.}
\label{table:diff_archs}
\begin{tabular}{ccccccccccccc}
\hline
& \textbf{Arch 1} & \textbf{Arch 2} & \textbf{Arch 3} & \textbf{Arch 4} & \textbf{Arch 5} & \textbf{Arch 6} & \textbf{Arch 7} & \textbf{Arch 8} & \textbf{Arch 9} & \textbf{Arch 10} & \textbf{Arch 11} & \textbf{Arch 12} \\
\hline
\textbf{Attention module} & \checkmark & - & - & \checkmark & \checkmark & - & \checkmark & - & - & \checkmark & - & \checkmark \\
\textbf{Pooling Method} & Blur & Blur & Average & Average & Blur & Average & Average & Blur & Max & Max & Max & Max \\
\textbf{Activation Function} & Mish & Mish & Mish & Mish & ReLU & ReLU & ReLU & ReLU & Mish & Mish & ReLU & ReLU \\
\hline
\end{tabular}
\end{table*}
\section{Evaluation}\label{Evaluation}
We evaluate our method's 2D pose estimation performance on a number of contemporary datasets and with respect to state-of-the-art methods. We show that our exceptionally lightweight and straightforward technique outperforms other notably larger and more complex deep learning architectures, which are computationally expensive. Our experiments were performed on five different datasets, the characteristics of which are presented below.
\subsection{Datasets}
\textbf{PANOPTIC} \cite{Simon_2017_CVPR} is an accurate large-scale human posture dataset with many instances of occluded subjects. We based our training set on three dataset sessions, \textit{office1, cello3} and \textit{tools1}. In accordance with the literature \cite{boukhayma20193d}, the training set of MPII+NZSL was also included \cite{Simon_2017_CVPR}, resulting in a total of 165000 training images. The evaluation was made on the testing set of MPII+NZSL.
The \textbf{HO-3D} \cite{hampali2019ho} is a newly released markerless dataset, consisting of 10505 images in the training set. We augmented the dataset's images by flipping and rotating them by 0-90-180 degrees.
The \textbf{FreiHAND} \cite{zimmermann2019freihand} provides a multi-view hands dataset, recorded in front of a green screen and augmented with artificial backgrounds, resulting in a total of 130240 image instances.
The \textbf{LSMV} \cite{gomez2019large} provides images of hands from multiple points of view. The total number of frames is 80000.
The \textbf{SHP} \cite{zhang20163d} provides 3D pose annotations of a person's hand, performing various gestures in 18000 frames.
\begin{figure*}[!h]
\centering
\begin{adjustbox}{minipage=\linewidth,scale=1.02}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/MPII+NZSL_Dataset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/FreiHand_Dataset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/LSMV_Dataset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/SHP_Dataset}
\end{subfigure}
\caption{Results under different architecture configurations.}
\label{fig:diff_radars}
\end{adjustbox}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{adjustbox}{minipage=\linewidth,scale=1.02}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/t_MPII+NZSL_Dataset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/t_FreiHand_Dataset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/t_LSMV_Dataset}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig/radars/t_SHP_Dataset}
\end{subfigure}
\caption{Results under different architecture configurations with randomly shifted input.}
\label{fig:diff_radars_t}
\end{adjustbox}
\end{figure*}
Each dataset was separately evaluated and split by a rule of 80\%-10\%-10\% for training, validation and testing, respectively. Every image was cropped to the resolution of $224\times 224$.
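The split and cropping described above can be sketched as follows (an illustrative snippet; the function names and the choice of a center crop are our own assumptions, since the text does not specify the crop placement):

```python
import random

def split_dataset(items, seed=0):
    """Shuffle and split samples by the 80%-10%-10% rule (train/val/test)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

def center_crop(image, size=224):
    """Center-crop a 2D grid (list of rows) to size x size."""
    h, w = len(image), len(image[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in image[top:top + size]]
```

Each dataset is split independently with its own seed, so the per-dataset evaluation described above remains reproducible.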
\subsection{Ablation studies}
In this subsection, we wish to justify the architectural blocks incorporated into our system. We evaluate the proposed method under different settings by: i) optionally excluding the attention augmented convolution module, ii) using different pooling methods and iii) trying different activation functions, as presented in Table {{\ref{table:diff_archs}}}. Regarding computational efficiency, adding the attention augmented convolution module leads to a slight increase of the overall FLOPs to \textit{7.1 Million}. Each of these twelve configurations was trained and tested on the same four datasets used for evaluation, namely {PANOPTIC} \cite{Simon_2017_CVPR}, {FreiHAND} \cite{zimmermann2019freihand}, {LSMV} \cite{gomez2019large} and {SHP} \cite{zhang20163d}.
The results for each combination are presented in \figurename{ \ref{fig:diff_radars}}, where the supremacy of \textit{Architecture 1} over every other one becomes apparent, as it is the one occupying the minimum area in all radar charts. In order to demonstrate the robustness of the proposed method to input translations, we also applied random shifts to each of the corresponding datasets during evaluation only, along both the vertical and horizontal axes, within an interval of 20 pixels. As one can observe from the results presented in \figurename{ \ref{fig:diff_radars_t}}, translating the input degrades the accuracy of every tested architecture. However, \textit{Architecture 1} exhibits the most modest decrease, justifying its employment in our final architecture. We attribute this behavior to the \textit{Blur Pooling layer}, which acts as an anti-aliasing filter during sub-sampling. This allows the network to propagate as much information as possible to the deeper layers, improving the regression results. It is also worth mentioning that the combination of \textit{Blur Pooling} with the \textit{Mish} activation function, instead of the typical \textit{ReLU} non-linearity, yields the highest overall system performance.
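The random-shift perturbation used in this robustness test can be sketched as follows (an illustrative implementation; the fill value for vacated pixels and the uniform sampling of offsets are assumptions not specified in the text):

```python
import random

def random_shift(image, max_shift=20, fill=0, rng=random):
    """Translate a 2D grid by a random offset drawn from
    [-max_shift, max_shift] on each axis, filling vacated cells."""
    dy = rng.randint(-max_shift, max_shift)
    dx = rng.randint(-max_shift, max_shift)
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = image[y][x]
    return out, (dx, dy)
```

The returned offset can be applied to the keypoint annotations as well, so that accuracy is still measured against correctly shifted ground truth.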
\subsection{Comparative results}
We compare our results with other state-of-the-art methods in \figurename{ \ref{evaluation}}, according to the protocol proposed in \cite{Simon_2017_CVPR}, showing that our method outperforms the other approaches. More specifically, in \figurename{ \ref{grapha}} and \figurename{ \ref{graphb}}, the percentage of correct keypoints is visualized for different absolute and normalized thresholds, respectively, and compared to other techniques. Figure {\ref{graphc}} depicts our method's performance when trained on different datasets. The aforementioned results are also summarized in Table \ref{table:results}.
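The percentage of correct keypoints (PCK) underlying these curves can be computed as follows (a minimal sketch of the standard metric; the exact threshold normalization follows the protocol of \cite{Simon_2017_CVPR} and is omitted here):

```python
def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: the fraction of predicted
    keypoints lying within `threshold` of their ground-truth positions."""
    correct = sum(
        1 for (px, py), (gx, gy) in zip(pred, gt)
        if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= threshold
    )
    return correct / len(gt)
```

Sweeping `threshold` over a range of pixel (or normalized) values produces the curves shown in the figures.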
\figurename{ \ref{heatmaps}}, contains representative examples of the feature maps, as computed by our proposed \textit{Attention Augmented Inverted Bottleneck Block} for a sample of images, during different network's stages. Even if the network was not explicitly trained for the generation of a heat map, it is easily perceptible the effectiveness of the specific block to detect the points of interest, which in our case are the hand's keypoints. Finally, in \figurename{ \ref{fig:photo}}, some qualitative results of our method are presented for different dataset instances.
\begin{figure*}[!t]
\centering
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand1.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand2.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand3.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand4.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand5.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand6.png}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand9.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand10.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand11.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand12.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand13.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand14.png}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand18.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand19.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand20.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand21.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand22.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand23.png}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand26.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand27.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand28.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand29.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand30.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand31.png}
\end{subfigure}
\vspace{7pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand34.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand35.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand36.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand37.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand38.png}
\end{subfigure}
\hspace{5pt}
\begin{subfigure}[c]{0.12\linewidth}
\includegraphics[width=\linewidth]{fig/images/hand39.png}
\end{subfigure}
\caption{Our 2D hand pose estimations on different testing sets.}
\label{fig:photo}
\end{figure*}
\section{Conclusions}\label{Conclusions}
We presented an alternative to the majority of pose estimation methods, which incorporate complex and computationally inefficient architectures. Our proposed single-stage, end-to-end CNN model exhibits competitive results with just \textit{1.9M} parameters and a model size of 11 MBytes, achieved by directly predicting the joints' coordinates. This property allows for the deployment of our system on low-processing-power devices, accommodating the operational guidelines of modern mobile systems.
The method's success is mainly based on the effectiveness of the proposed \textit{Attention Augmented Inverted Bottleneck Block} in understanding global constraints and correlations between keypoints, as well as on the architecture's ability to share a ``collective knowledge'' among the subsequent layers. In closing, our proposed architecture may well be suitable for other tasks too, such as 3D pose estimation, human body pose estimation or classification, which we intend to explore as future work.
\section{Acknowledgments}
This work was supported by Google's TensorFlow Research Cloud and Google's Research Credits programme.
\bibliographystyle{IEEEtran}
In this paper we study some local and global deformation properties of spaces of uniform embeddings and
groups of uniform homeomorphisms of metric covering spaces over compact manifolds and metric spaces with Euclidean ends.
Suppose $(X,d)$ and $(Y, \rho)$ are metric spaces.
A map $h : (X,d) \to (Y, \rho)$ is said to be uniformly continuous if for each $\varepsilon > 0$ there is a $\delta > 0$ such that
if $x,x' \in X$ and $d(x,x') < \delta$ then $\rho(h(x), h(x')) < \varepsilon$.
The map $h$ is called a uniform homeomorphism if $h$ is bijective and both $h$ and $h^{-1}$ are uniformly continuous.
A uniform embedding is a uniform homeomorphism onto its image.
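For instance, a homeomorphism onto its image need not be a uniform embedding:

```latex
h : \mathbb{R} \longrightarrow \Big( -\frac{\pi}{2}, \frac{\pi}{2} \Big), \qquad h(x) = \arctan x .
```

The map $h$ is uniformly continuous and bijective onto its image, but $h^{-1} = \tan$ is not uniformly continuous: the points $\arctan n$ and $\arctan (n+1)$ become arbitrarily close as $n \to \infty$, while their images stay at distance $1$. Hence $h$ is a topological, but not a uniform, embedding.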
In \cite{EK} R.\,D.~Edwards and R.\,C.~Kirby obtained a fundamental local deformation theorem for embeddings of a compact subspace in a manifold.
Based upon this theorem, in this article we deduce a local deformation lemma for uniform embeddings in a metric covering space over a compact manifold.
Here, the Arzel\`a--Ascoli theorem (\cite[Theorem 6.4]{Du}) plays an essential role in passing from the compact case to the uniform case.
Suppose $(M, d)$ is a topological manifold possibly with boundary with a fixed metric $d$ and $X$, $C$ are subspaces of $M$.
Let $\mathcal E^u_\ast(X, M; C)$ denote the space of uniform proper embeddings $f : (X, d|_X) \to (M, d)$ such that $f = \mathrm{id}$ on $X \cap C$.
This space is endowed with the uniform topology induced from the sup-metric
$$d(f,g) = \sup \big\{ d(f(x), g(x)) \mid x \in X \big\} \in [0, \infty] \hspace{8mm} (f, g \in \mathcal E^u_\ast(X, M; C)).$$
Since the notion of uniform continuity depends on the choice of metric $d$ on the manifold $M$,
it is necessary to select a reasonable class of metrics.
In \cite{Ce} (cf.\ \cite[Section 5.6]{Ru}) A.V.~{\v C}ernavski\u\i\ considered the case where $M$ is the interior of a compact manifold $N$ and the metric $d$ is a restriction of some metric on $N$.
In this article we consider the case where $M$ is a covering space over a compact manifold $N$ and the metric $d$ is the pull-back of some metric on $N$.
The natural model is the class of Riemannian coverings in the smooth category.
In order to remove the extra requirements in the smooth setting, here we introduce the notion of metric covering projection.
Its definition and basic properties are included in Section 2.2 below.
The following is our main theorem.
\begin{theorem}\label{thm_local_deformation}
Suppose $\pi : (M, d) \to (N, \rho)$ is a metric covering projection, $N$ is a compact topological $n$-manifold possibly with boundary,
$X$ is a closed subset of $M$, $W' \subset W$ are uniform neighborhoods of $X$ in $(M, d)$ and
$Z$, $Y$ are closed subsets of $M$ such that $Y$ is a uniform neighborhood of $Z$.
Then there exists a neighborhood $\mathcal W$ of the inclusion map $i_W : W \subset M$ in $\mathcal E^u_\ast(W, M; Y)$ and
a homotopy $\phi : \mathcal W \times [0,1] \longrightarrow \mathcal E^u_\ast(W, M; Z)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi_0(h) = h$, \hspace{3mm}
{\rm (ii)} $\phi_1(h) = \mathrm{id}$ \ on \ $X$, \\[2mm]
{\rm (iii)} & $\phi_t(h) = h$ \ on \ $W - W'$ \ \ and \ \ $\phi_t(h)(W) = h(W)$ \ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $h = \mathrm{id}$ on $W \cap \partial M$, then $\phi_t(h) = \mathrm{id}$ on $W \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[(2)] $\phi_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
\end{theorem}
This theorem induces some consequences on the theory of uniform homeomorphisms.
Suppose $(X,d)$ is a metric space and $A$ is a subset of $X$.
Let ${\mathcal H}^u_A(X,d)$ denote the group of uniform homeomorphisms of $(X, d)$ onto itself which fix $A$ pointwise,
endowed with the uniform topology.
Let ${\mathcal H}^u_A(X, d)_0$ denote the connected component of the identity map $\mathrm{id}_X$ of $X$ in ${\mathcal H}_A^u(X, d)$.
We are also concerned with the subgroup
$${\mathcal H}^u_A(X, d)_b = \{ h \in {\mathcal H}_A^u(X, d) \mid d(h, \mathrm{id}_X) < \infty \}.$$
It is easily seen that ${\mathcal H}^u_A(X, d)_0 \subset {\mathcal H}^u_A(X, d)_b$ since ${\mathcal H}^u_A(X, d)_b$ is both closed and open in ${\mathcal H}^u_A(X, d)$.
When $X - A$ is relatively compact in $X$, the group
${\mathcal H}^u_A(X,d)$ coincides with the whole group of homeomorphisms of $X$ onto itself which fix $A$ pointwise endowed with the compact-open topology.
In this case we delete the script ``$u$'' from the notation. As usual, the symbol $A$ is suppressed when it is an empty set.
In \cite{Ce} it is shown that ${\mathcal H}^u(M, d)$ is locally contractible in the case where
$M$ is the interior of a compact manifold $N$ and the metric $d$ is a restriction of some metric on $N$.
The next corollary is a direct consequence of Theorem~\ref{thm_local_deformation}.
\begin{corollary}\label{cor_local_contractibility}
Suppose $\pi : (M, d) \to (N, \rho)$ is a metric covering projection onto a compact topological $n$-manifold $N$ possibly with boundary.
Then ${\mathcal H}^u(M, d)$ is locally contractible.
\end{corollary}
Next we study a global deformation property of the group ${\mathcal H}^u(X,d)$.
The most standard example is the $n$-dimensional Euclidean space $\mathbb R^n$ with the standard Euclidean metric.
The relevant feature in this scenario is the existence of similarity transformations.
This enables us to deduce a global deformation of uniform embeddings from a local one.
To be more general,
we treat metric spaces with bi-Lipschitz Euclidean ends.
Recall that a map $h : (X,d) \to (Y, \rho)$ between metric spaces is said to be Lipschitz if there exists a constant $C > 0$
such that $\rho(h(x), h(x')) \leq Cd(x, x')$ for any $x, x' \in X$.
The map $h$ is called a bi-Lipschitz homeomorphism if $h$ is bijective and both $h$ and $h^{-1}$ are Lipschitz maps.
The model of Euclidean end is the complement $\mathbb R^n_r = \mathbb R^n - O(r)$ of the round open $r$-ball $O(r)$ centered at the origin.
These complements $\mathbb R^n_r$ $(r > 0)$ are bi-Lipschitz homeomorphic to each other under similarity transformations.
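Explicitly, the similarity $\sigma_{r,s}(x) = (s/r)\,x$ maps $\mathbb R^n_r$ onto $\mathbb R^n_s$ and satisfies

```latex
|\sigma_{r,s}(x) - \sigma_{r,s}(y)| \;=\; \frac{s}{r}\,|x - y| \qquad (x, y \in \mathbb R^n_r),
```

so that $\sigma_{r,s}$ and its inverse $\sigma_{s,r}$ are Lipschitz with constants $s/r$ and $r/s$ respectively.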
A bi-Lipschitz $n$-dimensional Euclidean end of a metric space $(X, d)$ means a closed subset $L$ of $X$
which admits a bi-Lipschitz homeomorphism of pairs
$\theta : (\mathbb R^n_1, \partial \mathbb R^n_1) \approx ((L, {\rm Fr}_X L), d|_L)$ and satisfies $d(X - L, L_r) \to \infty$ as $r \to \infty$, where
${\rm Fr}_X L$ is the topological frontier of $L$ in $X$ and
$L_r = \theta(\mathbb R^n_r)$ for $r \geq 0$.
We set $L' = \theta(\mathbb R^n_2)$ and $L'' = \theta(\mathbb R^n_3)$.
Using similarity transformations, we can deduce the following result from the local deformation theorem, Theorem~\ref{thm_local_deformation}.
\begin{theorem}\label{thm_Euclid-end}
Suppose $X$ is a metric space and $L_1, \cdots, L_m$ are mutually disjoint bi-Lipschitz Euclidean ends of $X$.
Let $L' = L_1' \cup \cdots \cup L_m'$ and $L'' = L_1'' \cup \cdots \cup L_m''$.
Then there exists a strong deformation retraction $\phi$ of ${\mathcal H}^u(X)_b$ onto ${\mathcal H}^u_{L''}(X)$ such that
$$\mbox{$\phi_t(h) = h$ \ on \ $h^{-1}(X - L') - L'$ \ \ for any \ $(h,t ) \in {\mathcal H}^u(X)_b \times [0,1]$.}$$
\end{theorem}
\begin{example}\label{example}
(1) ${\mathcal H}^u(\mathbb R^n)_b$ is contractible for every $n \geq 0$.
In fact, $\mathbb R^n$ has the model Euclidean end $\mathbb R^n_1$ and hence
there exists a strong deformation retraction of ${\mathcal H}^u(\mathbb R^n)_b$ onto
${\mathcal H}^u_{\mathbb R^n_3}(\mathbb R^n)$.
The latter is contractible by Alexander's trick.
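Explicitly, ${\mathcal H}^u_{\mathbb R^n_3}(\mathbb R^n)$ is identified with the group ${\mathcal H}_\partial(B(1))$ of homeomorphisms of the closed unit ball fixing the boundary (by restricting to the ball $B(3)$ and rescaling), and Alexander's trick is the standard contraction

```latex
A_t(h)(x) =
\begin{cases}
  t\, h(x/t), & 0 \le |x| \le t, \\[1mm]
  x, & t \le |x| \le 1,
\end{cases}
\qquad A_1(h) = h, \quad A_0(h) = \mathrm{id} .
```

Continuity of $A_t(h)$ at $|x| = t$ follows since $h$ fixes the boundary sphere, and continuity of the contraction in $(h, t)$ is verified directly.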
(2) The $n$-dimensional cylinder $M = {\Bbb S}^{n-1} \times \mathbb R$ is the product of the $(n-1)$-sphere ${\Bbb S}^{n-1}$ and the real line $\mathbb R$. If $M$ is assigned a metric so that ${\Bbb S}^{n-1} \times (-\infty, -1]$ and
${\Bbb S}^{n-1} \times [1, \infty)$ are two bi-Lipschitz Euclidean ends of $M$, then
${\mathcal H}^u(M)_b$ includes the subgroup
${\mathcal H}_{{\Bbb S}^{n-1} \times \mathbb R_1}(M) \approx {\mathcal H}_\partial({\Bbb S}^{n-1} \times [-1,1])$ as a strong deformation retract.
In particular, ${\mathcal H}^u(M)_0$ admits a strong deformation retraction onto ${\mathcal H}_{{\Bbb S}^{n-1} \times \mathbb R_1}(M)_0 \approx {\mathcal H}_\partial({\Bbb S}^{n-1} \times [-1,1])_0$.
(3) In dimension 2, we have a more explicit conclusion. Suppose $N$ is a compact connected 2-manifold with a nonempty boundary and $C = \cup_{i=1}^m C_i$ is a nonempty union of some boundary circles of $N$.
If the noncompact 2-manifold $M = N - C$ is assigned a metric $d$ such that for each $i = 1, \cdots, m$ the end $L_i$ of $M$ corresponding to the boundary circle $C_i$ is a bi-Lipschitz Euclidean end of $(M, d)$, then ${\mathcal H}^u(M, d)_0 \simeq {\mathcal H}^u_{L''}(M)_0 \approx {\mathcal H}_{C}(N)_0 \simeq \ast$.
\end{example}
\begin{remark} In Example~\ref{example} (1),
one might expect that conjugation by a suitable shrinking homeomorphism $\mathbb R^n \approx O(1)$ and
extension by the identity on the boundary would directly reduce the problem to the case of
${\mathcal H}_\partial(B(1))$, the group of homeomorphisms of the closed unit ball relative to the boundary,
since this group is contractible by Alexander's trick.
However, the contraction of ${\mathcal H}^u(\mathbb R^n)_b$ obtained in this way is not continuous.
In fact,
it would mean that any $h \in {\mathcal H}^u(\mathbb R^n)_b$ could be approximated by compactly supported homeomorphisms in the sup-metric.
But this does not hold, for example, for any translation $h(x) = x +a$ $(a \neq 0)$.
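Indeed, if $g$ coincides with the identity outside a compact set $K$, then for any $x \notin K$

```latex
d(g, h) \;\geq\; |g(x) - h(x)| \;=\; |x - (x + a)| \;=\; |a| \;>\; 0,
```

so no sequence of compactly supported homeomorphisms can converge to the translation $h$ in the sup-metric.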
\end{remark}
In \cite{MSYY} we studied the topological type of ${\mathcal H}^u(\mathbb R)_b$ as an infinite-dimensional manifold
and showed that it is homeomorphic to $\ell_\infty$.
Example~\ref{example}\,(1) leads to the following conjecture.
\begin{conjecture} ${\mathcal H}^u(\mathbb R^n)_b$ is homeomorphic to $\ell_\infty$ for any $n \geq 1$.
\end{conjecture}
This paper is organized as follows. Section 2 includes some preliminary results on
metric covering projections and spaces of uniform embeddings.
Section 3 is devoted to the proof of Theorem~\ref{thm_local_deformation} and
the final section, Section 4, includes the proof of Theorem~\ref{thm_Euclid-end}.
\section{Preliminaries}
\subsection{Conventions} \mbox{}
Maps between topological spaces are assumed to be continuous.
The word ``function'' means a correspondence not assumed to be continuous.
For a topological space $X$ and a subset $A$ of $X$,
the symbols ${\rm Int}_X A$, $cl_X A$ and ${\rm Fr}_X A$ denote the topological
interior, closure and frontier of $A$ in $X$.
The identity map on $X$ is denoted by $\mathrm{id}_X$, while the inclusion map $A \subset X$ is denoted by $i_A$, $\iota_A$ or $\mathrm{id}_A$, etc.
When ${\mathcal F}$ is a collection of subsets of $X$,
the union of ${\mathcal F}$ is denoted by $|\mathcal F|$ or $\bigcup \mathcal F$.
For $A \subset X$
the star of $A$ with respect to $\mathcal{F}$ is defined by ${\rm St}(A, \mathcal{F}) = A \cup \big( \cup \{ F \in {\mathcal F} \mid F \cap A \neq \emptyset \}\big) \subset X$.
For an $n$-manifold $M$,
the symbols $\partial M$ and ${\rm Int}\,M$ denote the boundary and interior of $M$ as a manifold.
\subsection{Metric covering projections} \mbox{}
Suppose $(X, d)$ is a metric space.
(Below, when the metric $d$ is implicitly understood, we eliminate the symbol $d$ from the notations.)
The distance between two subsets $A, B$ of $X$ is defined by
$d(A, B) = \inf \{ d(x, y) \mid x \in A, y \in B \}$.
For $\delta \geq 0$
the closed $\delta$-neighborhood of $A$ in $X$ is defined by
$C_\delta(A) = \{ x \in X \mid d(x, A) \leq \delta \}$.
A neighborhood $U$ of $A$ in $X$ is called a uniform neighborhood of $A$ in $(X, d)$
if $C_\delta(A) \subset U$ for some $\delta >0$.
For $\varepsilon > 0$ a subset $A$ of $X$ is said to be $\varepsilon$-discrete if
$d(x,y) \geq \varepsilon$ for any distinct points $x, y \in A$.
More generally, a collection ${\mathcal F}$ of subsets of $X$
is said to be $\varepsilon$-discrete if $d(F, F') \geq \varepsilon$ for any $F, F' \in \mathcal F$ with $F \neq F'$.
We say that $A$ or $\mathcal F$ is uniformly discrete if it is $\varepsilon$-discrete for some $\varepsilon > 0$.
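For a finite point set, $\varepsilon$-discreteness can be checked directly, as the following illustrative sketch shows:

```python
def is_eps_discrete(points, eps, dist):
    """Return True if every pair of distinct points of the (finite)
    set is at distance at least eps."""
    return all(dist(p, q) >= eps
               for i, p in enumerate(points)
               for q in points[i + 1:])
```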
For the basics on covering spaces, one can refer to \cite[Chapter 2, Section 1]{Sp}.
If $p : M \to N$ is a covering projection and $N$ is a topological $n$-manifold possibly with boundary, then so is $M$ and $\partial M = p^{-1}(\partial N)$.
\begin{defn} A covering projection $\pi : (X, d) \to (Y, \rho)$ between metric spaces is called a metric covering projection
if it satisfies the following conditions:
\begin{itemize}
\item[$(\natural)_1$] There exists an open cover ${\mathcal U}$ of $Y$ such that for each $U \in {\mathcal U}$ the inverse $\pi^{-1}(U)$ is the disjoint union of open subsets of $X$ each of which is mapped isometrically onto $U$ by $\pi$.
\item[$(\natural)_2$] For each $y \in Y$ the fiber $\pi^{-1}(y)$ is uniformly discrete in $X$.
\item[$(\natural)_3$] $\rho(\pi(x), \pi(x')) \leq d(x, x')$ for any $x, x' \in X$.
\end{itemize}
\end{defn}
When an open subset $U$ of $Y$ satisfies the condition in $(\natural)_1$,
we say that $U$ is isometrically evenly covered by $\pi$.
In this case, if $U$ is connected, then each connected component of $\pi^{-1}(U)$ is mapped isometrically onto $U$ by $\pi$.
Riemannian covering projections are typical examples of metric covering projections.
\begin{lemma}\label{lemma_covering_proj}
Suppose $\pi : (X, d) \to (Y, \rho)$ is a metric covering projection and $Y$ is compact.
\begin{itemize}
\item[(1)] There exists $\varepsilon > 0$ such that each fiber of $\pi$ is $\varepsilon$-discrete.
\item[(2)] Suppose $U$ is an open subset of $Y$ and $V$ is an open subset of $\pi^{-1}(U)$ which is mapped isometrically onto $U$ by $\pi$,
$E$ is a subset of $V$ and $F = \pi(E) \subset U$.
Then $d(X - V, E) \geq \min \{ \varepsilon/2, \rho(Y - U, F) \}$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) By $(\natural)_1$, $(\natural)_2$ for each $y \in Y$ we can find
\begin{itemize}
\item[(i)\ ] $\varepsilon_y > 0$ such that $\pi^{-1}(y)$ is $3\varepsilon_y$-discrete and
\item[(ii)\,] an open neighborhood $U_y$ of $y$ in $Y$ such that $\operatorname{diam} U_y \leq \varepsilon_y$ and $U_y$ is isometrically evenly covered by $\pi$, that is,
$\pi^{-1}(U_y)$ is the disjoint union of some open subsets $V_y^\lambda$ $(\lambda \in \Lambda_y)$ of $X$ and
each $V_y^\lambda$ is mapped isometrically onto $U_y$ by $\pi$.
\end{itemize}
We show that the family $\{ V_y^\lambda \}_{\lambda \in \Lambda_y}$ is $\varepsilon_y$-discrete. In particular, for any $z \in U_y$
the fiber $\pi^{-1}(z)$ is $\varepsilon_y$-discrete.
To see this claim, take any $\lambda, \mu \in \Lambda_y$ with $\lambda \neq \mu$.
We have to show that $d(V_y^\lambda, V_y^\mu) \geq \varepsilon_y$.
Let $y_\lambda \in V_y^\lambda$ and $y_\mu \in V_y^\mu$ be the points such that
$\pi(y_\lambda) = \pi(y_\mu) = y$.
Then, for any $x_\lambda \in V_y^\lambda$ and $x_\mu \in V_y^\mu$ it follows that
\begin{itemize}
\item[] $d(x_\lambda, y_\lambda) \leq \operatorname{diam} V_y^\lambda = \operatorname{diam} U_y \leq \varepsilon_y$ \ \ and \ \
$d(x_\mu, y_\mu) \leq \operatorname{diam} V_y^\mu = \operatorname{diam} U_y \leq \varepsilon_y$, \ \ so that
\vskip 0.5mm
\item[] $3 \varepsilon_y \leq d(y_\lambda, y_\mu) \leq d(y_\lambda, x_\lambda) + d(x_\lambda, x_\mu) + d(x_\mu, y_\mu) \leq d(x_\lambda, x_\mu) + 2 \varepsilon_y$
\ \ and \ \ $d(x_\lambda, x_\mu) \geq \varepsilon_y$.
\end{itemize}
Since $Y$ is compact, there exist finitely many points $y_1, \cdots, y_n \in Y$ such that $\{ U_{y_1}, \cdots, U_{y_n} \}$ covers $Y$.
Then $\varepsilon = \min \{ \varepsilon_{y_1}, \cdots, \varepsilon_{y_n} \}$ satisfies the required condition.
(2) Take any points $x \in E$ and $x' \in X - V$. Let $y = \pi(x)$ and $y' = \pi(x')$.
\begin{itemize}
\item[(i)\ ] the case that $x' \in \pi^{-1}(U) - V$; Let $x'' \in V$ be the point such that $\pi(x'') = y'$.
Since $\pi : (V, d) \to (U, \rho)$ is an isometry, we have $d(x, x'') = \rho(y, y')$.
From $(\natural)_3$ it follows that $\rho(y,y') \leq d(x,x')$.
Therefore,
$\varepsilon \leq d(x',x'') \leq d(x', x) + d(x, x'') \leq 2d(x', x)$ and $d(x', x) \geq \varepsilon/2$.
\item[(ii)\,] the case that $x' \in X - \pi^{-1}(U)$;
By $(\natural)_3$ we have $d(x,x') \geq \rho(y,y') \geq \rho(F, Y - U)$.
\end{itemize}
This implies the assertion.
\end{proof}
\subsection{Spaces of uniformly continuous maps} \mbox{}
First we list some basic facts on the uniform topology on the space of uniformly continuous maps.
Recall that the definitions of uniformly continuous maps, uniform homeomorphisms and uniform embeddings are included in Section 1.
Below $(X, d)$, $(Y, \rho)$ and $(Z, \eta)$ denote metric spaces. (The metrics $d$, $\rho$ and $\eta$ are also denoted by the symbols $d_X$, $d_Y$ and $d_Z$ respectively.
As usual, when these metrics are implicitly understood, we eliminate them from the notations.)
Let ${\mathcal C}(X, Y)$ and ${\mathcal C}^u((X, d), (Y, \rho))$
denote the space of maps $f : X \to Y$ and
the subspace of uniformly continuous maps $f : (X, d) \to (Y, \rho)$.
The metric $\rho$ on $Y$ induces the sup-metric on ${\mathcal C}(X, Y)$ defined by
$$\rho(f,g) = \sup \{ \rho(f(x), g(x)) \mid x \in X \} \in [0, \infty].$$
The topology on ${\mathcal C}(X, Y)$ induced by this sup-metric $\rho$ is called the uniform topology.
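Restricted to a finite sample of points, the sup-metric can be approximated from below, as in the following illustrative sketch:

```python
def sup_dist(f, g, sample, dist):
    """Sup-metric between two maps, restricted to a finite sample of
    points (a lower bound for the true supremum)."""
    return max(dist(f(x), g(x)) for x in sample)
```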
Below the space ${\mathcal C}(X,Y)$ and its subspaces are endowed with the sup-metric $\rho$ and
the uniform topology, unless otherwise specified. To emphasize this point, sometimes we use the symbol ${\mathcal C}(X, Y)_u$.
On the other hand, when the space ${\mathcal C}(X, Y)$ is endowed with the compact-open topology,
we use the symbol ${\mathcal C}(X, Y)_{co}$.
When $X$ is compact, we have ${\mathcal C}^u((X, d), (Y, \rho))_u = {\mathcal C}(X, Y)_{co}$.
It is important to notice that the composition map
$${\mathcal C}^u((X, d), (Y, \rho))_u \times {\mathcal C}^u((Y, \rho), (Z, \eta))_u \longrightarrow {\mathcal C}^u((X, d), (Z, \eta))_u$$
is continuous, while the composition map ${\mathcal C}(X, Y)_u \times {\mathcal C}(Y, Z)_u \longrightarrow {\mathcal C}(X, Z)_u$ is not necessarily continuous.
Let $\mathcal E(X, Y)$ and $\mathcal E^u((X, d), (Y, \rho))$ denote the space of embeddings $f : X \to Y$
and the subspace of uniform embeddings $f : (X, d) \to (Y, \rho)$ (both with the sup-metric and the uniform topology).
When $X \subset Y \subset Z$, for a subset $C$ of $Z$ we use the symbol
$\mathcal E(X, Y; C)$ to denote the subspace $\{ f \in \mathcal E(X, Y) \mid f = \mathrm{id} \ \text{on} \ X \cap C \}$ and
for $\varepsilon > 0$ let $\mathcal E(i_X, \varepsilon; X, Y; C)$ denote the closed $\varepsilon$-neighborhood of the inclusion $i_X : X \subset Y$ in the space $\mathcal E(X, Y; C)$.
The meaning of the symbols $\mathcal E^u(X, Y; C)$, $\mathcal E^u(i_X, \varepsilon; X, Y; C)$, etc.\ is obvious.
Similarly, for a subset $A$ of $X$
let ${\mathcal H}_A(X)$ denote the group of homeomorphisms $h$ of $X$ onto itself with $h|_A = \mathrm{id}_A$
and ${\mathcal H}^u_A(X, d)$ denote the subgroup of ${\mathcal H}_A(X)$ consisting of uniform homeomorphisms of $(X, d)$
(both with the sup-metric and the uniform topology).
We denote by ${\mathcal H}^u_A(X, d)_0$ the connected component of the identity $\mathrm{id}_X$ in ${\mathcal H}^u_A(X, d)$ and define
the subgroup
$${\mathcal H}^u_A(X, d)_b = \{ h \in {\mathcal H}^u_A(X, d) \mid d(h, \mathrm{id}_X) < \infty \}.$$
Then ${\mathcal H}^u_A(X, d)$ is a topological group and ${\mathcal H}^u_A(X, d)_b$ is an open (and closed) subgroup of ${\mathcal H}^u_A(X, d)$, so that
${\mathcal H}^u_A(X, d)_0 \subset {\mathcal H}^u_A(X, d)_b$.
The next lemma follows directly from the definitions.
\begin{lemma}\label{lemma_unif-emb}
For any $f \in {\mathcal C}(X, Y)$ the following conditions are equivalent:
\begin{enumerate}
\item $f \in \mathcal E(X, Y)$ and $f^{-1} : (f(X), \rho) \to (X, d)$ is uniformly continuous.
\item for any $\varepsilon > 0$ there exists $\delta > 0$ such that if $x, x' \in X$ and $d(x, x') \geq \varepsilon$ then $\rho(f(x), f(x')) \geq \delta$.
\end{enumerate}
\end{lemma}
Recall that a family $f_\lambda \in {\mathcal C}(X, Y)$ $(\lambda \in \Lambda)$ is said to be equi-continuous if
for any $\varepsilon > 0$ there exists $\delta > 0$ such that for any $\lambda \in \Lambda$
if $x, x' \in X$ and $d(x,x')< \delta$ then $\rho(f_\lambda(x), f_\lambda(x')) < \varepsilon$.
More generally, we say that a family of maps $\{ f_\lambda : (X_\lambda, d_\lambda) \to (Y_\lambda, \rho_\lambda) \}_{\lambda \in \Lambda}$ between metric spaces is equi-continuous if
for any $\varepsilon > 0$ there exists $\delta > 0$ such that for any $\lambda \in \Lambda$
if $x, x' \in X_\lambda$ and $d_\lambda(x,x')< \delta$ then $\rho_\lambda(f_\lambda(x), f_\lambda(x')) < \varepsilon$.
For embeddings, we also use the following terminology: a family of embeddings
$\{ h_\lambda : (X_\lambda, d_\lambda) \to (Y_\lambda, \rho_\lambda)\}_{\lambda \in \Lambda}$ is equi-uniform
if both of the families $\{ h_\lambda : (X_\lambda, d_\lambda) \to (Y_\lambda, \rho_\lambda)\}_{\lambda \in \Lambda}$ and $\{(h_\lambda)^{-1} : (h_\lambda(X_\lambda), \rho_\lambda) \to (X_\lambda, d_\lambda)\}_{\lambda \in \Lambda}$ are equi-continuous.
For a subset ${\mathcal C}$ of ${\mathcal C}(X, Y)$, the symbol
$cl_u\,{\mathcal C}$ means the closure of ${\mathcal C}$ in ${\mathcal C}(X, Y)_u$.
\begin{lemma}\label{lemma_equi-conti}
{\rm (1)} $cl_u\,\mathcal E^u(X, Y) \subset {\mathcal C}^u(X, Y)$.
\begin{enumerate}
\item[(2)] Suppose ${\mathcal C} \subset \mathcal E^u(X, Y)$.
If ${\mathcal C}' = \{ f^{-1} : f(X) \to X \mid f \in {\mathcal C} \}$ is equi-continuous, then $cl_u\,{\mathcal C} \subset \mathcal E^u(X, Y)$.
\end{enumerate}
\end{lemma}
\begin{proof} (1) Given $f \in cl_u\,\mathcal E^u(X, Y)$. To see that $f$ is uniformly continuous, take any $\varepsilon > 0$.
Choose $g \in \mathcal E^u(X, Y)$ with $\rho(f,g) < \varepsilon/3$.
Since $g$ is uniformly continuous, there exists $\delta > 0$ such that
if $x,y \in X$ and $d(x,y) < \delta$ then $\rho(g(x), g(y)) < \varepsilon/3$.
It follows that if $x,y \in X$ and $d(x,y) < \delta$ then
$$\rho(f(x), f(y)) \leq \rho(f(x), g(x)) + \rho(g(x), g(y)) + \rho(g(y), f(y)) < \varepsilon.$$
(2) Given $f \in cl_u\,{\mathcal C}$. By (1) $f$ is uniformly continuous. Take any $\varepsilon > 0$.
Since ${\mathcal C}'$ is equi-continuous, there exists $\delta > 0$ such that
if $g \in {\mathcal C}$, $x, y \in X$ and $d(x,y) \geq \varepsilon$ then $\rho(g(x), g(y)) \geq 3\delta$.
Choose $h \in {\mathcal C}$ with $\rho(f, h) < \delta$.
It follows that if $x, y \in X$ and $d(x,y) \geq \varepsilon$ then
$$\rho(f(x), f(y)) \geq \rho(h(x), h(y)) - \rho(f(x), h(x)) - \rho(f(y), h(y)) \geq 3\delta - \delta - \delta = \delta.$$
By Lemma~\ref{lemma_unif-emb} this means that $f \in \mathcal E^u(X, Y)$.
\end{proof}
\begin{lemma}\label{lemma_unif_nbd}
Suppose $A$ is a compact subset of $X$ and $f \in {\mathcal C}(X, Y)$.
Assume that $\varepsilon, \delta >0$ satisfy the following condition: if $x,y \in A$ and $d(x,y) \leq \delta$, then $\rho(f(x), f(y)) < \varepsilon$.
Then there exists an open neighborhood $U$ of $A$ in $X$ such that if $x,y \in U$ and $d(x,y) \leq \delta$, then $\rho(f(x), f(y)) < \varepsilon$.
\end{lemma}
\begin{proof} We proceed by contradiction. Suppose there does not exist such an open neighborhood $U$.
Then for each $n \geq 1$ there exists a pair of points $x_n, y_n \in C_{1/n}(A)$ such that $d(x_n, y_n) \leq \delta$ and $\rho(f(x_n), f(y_n)) \geq \varepsilon$.
Choose points $x_n', y_n' \in A$ with $d(x_n, x_n') \leq 1/n$ and $d(y_n, y_n') \leq 1/n$.
Since $A$ is compact, we can find subsequences $x_{n_i}'$ and $y_{n_i}'$ such that $x_{n_i}' \to x$, $y_{n_i}' \to y$ $(i \to \infty)$ in $A$.
Then $x_{n_i} \to x$, $y_{n_i} \to y$ $(i \to \infty)$ in $X$, and so $d(x,y) \leq \delta$ and $\rho(f(x), f(y)) \geq \varepsilon$.
This contradicts the assumption.
\end{proof}
\begin{lemma}\label{lemma_conti} Suppose $P$ is a topological space,
$f : P \to {\mathcal C}(X, Y)_u$, $g : P \to {\mathcal C}(X, Z)_u$ are continuous maps
and $h : P \to {\mathcal C}^u(Y, Z)_u$ is a function.
If $f_p$ is surjective and $h_pf_p = g_p$ for each $p \in P$, then $h$ is continuous.
\end{lemma}
\begin{proof} Given any point $p \in P$ and any $\varepsilon > 0$.
Since $h_p$ is uniformly continuous, there exists $\delta > 0$ such that
if $y_1, y_2 \in Y$ and $d_Y(y_1, y_2) < \delta$, then $d_Z(h_p(y_1), h_p(y_2)) < \varepsilon/2$.
Since $f, g$ are continuous, there exists a neighborhood $U$ of $p$ in $P$ such that
$d_Y(f_p, f_q) < \delta$ and $d_Z(g_p, g_q) < \varepsilon/2$ for each $q \in U$.
Then for each $q \in U$ it follows that
$$d_Z(h_q, h_p) = d_Z(h_qf_q, h_pf_q) \leq d_Z(g_q, g_p) + d_Z(h_pf_p, h_pf_q) < \varepsilon/2 + \varepsilon/2 = \varepsilon.$$
\vskip -7mm
\end{proof}
\begin{lemma}\label{lemma_collar}
Suppose $S$ is a compact subset of $X$ which has an open collar neighborhood $\theta : (S \times [0, 4), S \times \{ 0 \}) \approx (N, S)$ in $X$. Let $N_a = \theta(S \times [0,a])$ $(a \in [0,4))$. Then there exists a strong deformation retraction
$\phi_t$ $(t \in [0,1])$ of ${\mathcal H}^u_{N_1}(X)_b$ onto ${\mathcal H}^u_{N_2}(X)_b$ such that
$$\mbox{$\phi_t(h) = h$ \ on \ $h^{-1}(X - N_3) - N_3$ \ \ for any \ $(h,t ) \in {\mathcal H}^u_{N_1}(X)_b \times [0,1]$.}$$
\end{lemma}
\begin{proof}
Consider the map $\gamma : [0, 1] \longrightarrow {\mathcal C}([0, 4), [0,4))$ defined by
\vspace*{1mm}
$$\gamma(s)(u)
= \left\{
\begin{array}[c]{ll}
\ \ 2 u & (u \in [0, 1]) \\[1.5mm]
\displaystyle \frac{s}{1+s}(u-1) + 2 \ & (u \in [1, 2+s]) \\[2.5mm]
\ \ u & (u \in [2+s, 4))
\end{array} \right. \hspace{5mm} (s \in [0, 1])$$
\vskip 1mm
\noindent and the homotopy $\lambda : [0, 1] \times [0,1] \longrightarrow {\mathcal C}([0, 4), [0,4))$ defined by
$$\lambda_t(s)(u) = (1-t)u + t \gamma(s)(u) \hspace{6mm} ((s,t) \in [0, 1] \times [0,1], u \in [0, 4)).$$
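As a quick consistency check, the two adjacent pieces of $\gamma(s)$ agree at the breakpoints, so each $\gamma(s)$ is a well-defined continuous surjection of $[0,4)$:

```latex
% At u = 1 the first and second pieces give
$$2 \cdot 1 = 2 = \tfrac{s}{1+s}(1-1) + 2,$$
% and at u = 2+s the second and third pieces give
$$\tfrac{s}{1+s}\,(2+s-1) + 2 = s + 2 = u\big|_{u = 2+s}.$$
% For s > 0 each piece is strictly increasing, so \gamma(s) is a
% homeomorphism of [0,4); for s = 0 the middle piece is constant (= 2)
% on [1,2], so \gamma(0) is onto but not injective.
```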
The homotopy $\lambda$ induces a pseudo-isotopy
\begin{itemize}
\item[] $\xi : [0, 1] \times [0,1] \longrightarrow {\mathcal C}^u(X, X)_u$ :
\vspace{1mm}
$$\xi_t(s)(x) = \left\{
\begin{array}[c]{@{\ }ll}
\theta(z, \lambda_t(s)(u)) & (x = \theta(z,u), (z,u) \in S \times [0,4)) \\[2mm]
\ x & (x \in X - N_3)
\end{array}\right.
\hspace{5mm} ((s,t) \in [0, 1] \times [0,1])$$
\end{itemize}
\vskip 1mm
satisfying the following properties :
\begin{itemize}
\item[(1)]
\begin{itemize}
\item[(i)\ ] $\xi_0(s) = \mathrm{id}_X$, \hspace{5mm} (ii) \ $\xi_t(s) = \mathrm{id}$ on $X - N_{2+s}$ \ \ ($(s,t) \in [0, 1] \times [0,1]$),
\item[(iii)] $\xi_t(s) \in {\mathcal H}^u(X)$ \ \ ($(s,t) \in [0,1] \times [0,1] - \{ (0,1) \}$).
\end{itemize}
\end{itemize}
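Property (1)(iii) excludes exactly the parameter $(s,t) = (0,1)$, as can be verified directly from the formulas:

```latex
% For (s,t) \neq (0,1) the map \lambda_t(s) = (1-t)\,\mathrm{id} + t\,\gamma(s)
% is strictly increasing on [0,4) (either t < 1, or t = 1 and s > 0),
% so \xi_t(s) is a bijection of X, and one checks \xi_t(s) \in {\mathcal H}^u(X).
% At (s,t) = (0,1) we have \lambda_1(0) = \gamma(0), which is constant on [1,2]:
$$\gamma(0)(u) = \tfrac{0}{1+0}(u-1) + 2 = 2 \qquad (u \in [1,2]),$$
% so \xi_1(0) collapses the subcollar \theta(S \times [1,2]) onto
% \theta(S \times \{2\}) and is not injective.
```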
Choose a map $\alpha : {\mathcal H}^u_{N_1}(X)_b \to [0,1]$ such that $\alpha^{-1}(0) = {\mathcal H}^u_{N_2}(X)_b$.
By (1)(iii) we can define the homotopy
\vspace{1mm}
$$\phi : {\mathcal H}^u_{N_1}(X)_b \times [0,1] \longrightarrow {\mathcal H}^u_{N_1}(X)_b : \
\phi_t(h) =
\left\{ \begin{array}[c]{@{\,}ll}
\xi_t(\alpha(h)) \, h \,\xi_t(\alpha(h))^{-1} & (h \in {\mathcal H}^u_{N_1}(X)_b - {\mathcal H}^u_{N_2}(X)_b), \\[2mm]
\ h & (h \in {\mathcal H}^u_{N_2}(X)_b).
\end{array} \right.$$
\vskip 1mm
\noindent If $h \in {\mathcal H}^u_{N_1}(X)_b - {\mathcal H}^u_{N_2}(X)_b$, then
$\xi_t(\alpha(h))^{-1}(N_1) \subset N_1$ and $h = \mathrm{id}$ on $N_1$, so that $\phi_t(h) = \mathrm{id}$ on $N_1$.
Since $\xi_t(0)(N_2) = N_2$ and $\xi_t(0) = \mathrm{id}$ on $X - N_2$ by (1)(ii), it follows that
$h \,\xi_t(0) = \xi_t(0) h$ $((h,t) \in {\mathcal H}^u_{N_2}(X)_b \times [0,1])$ and so
\begin{itemize}
\item[(2)] $\phi_t(h) \, \xi_t(\alpha(h)) = \xi_t(\alpha(h)) \, h$ \ \ $((h,t) \in {\mathcal H}^u_{N_1}(X)_b \times [0,1])$.
\end{itemize}
Hence, the continuity of $\phi$ follows from Lemma~\ref{lemma_conti}
applied to the parameter space $P = {\mathcal H}^u_{N_1}(X)_b \times [0,1]$ and the maps
$$\mbox{$f : P\longrightarrow {\mathcal C}^u(X, X)_u$ : \ $f(h,t) = \xi_t(\alpha(h))$ \ \ and \ \ $g : P\longrightarrow {\mathcal C}^u(X, X)_u$ : \ $g(h, t) = \xi_t(\alpha(h)) h$.}$$
For each $h \in {\mathcal H}^u_{N_1}(X)_b - {\mathcal H}^u_{N_2}(X)_b$ we have
$\xi_1(\alpha(h))^{-1}(N_2) = N_1$, so $\phi_1(h) = \mathrm{id}$ on $N_2$.
These observations imply that $\phi$ is a strong deformation retraction of ${\mathcal H}^u_{N_1}(X)_b$ onto ${\mathcal H}^u_{N_2}(X)_b$.
Finally, since $\xi_t(s) = \mathrm{id}$ on $X - N_3$ by (1)(ii), we obtain the additional property: if $x \in h^{-1}(X - N_3) - N_3$, then $\xi_t(\alpha(h))^{-1}(x) = x$ and $h(x) \in X - N_3$, so that
$\phi_t(h)(x) = \xi_t(\alpha(h))(h(x)) = h(x)$.
This completes the proof.
\end{proof}
\subsection{Basic deformation theorem for topological embeddings in topological manifolds} \mbox{}
Next we recall the basic deformation theorem for embeddings of a compact subset in a topological manifold.
Suppose $M$ is a topological $n$-manifold possibly with boundary and $X$ is a subspace of $M$.
An embedding $f : X \to M$ is said to be
\begin{itemize}
\item[(i)\ ] proper if $f^{-1}(\partial M) = X \cap \partial M$ and
\item[(ii)\,] quasi-proper if $f(X \cap \partial M) \subset \partial M$.
\end{itemize}
For any subset $C \subset M$, let
${\mathcal E}_\ast(X, M; C)$ and ${\mathcal E}_\#(X, M; C)$ denote the subspaces of ${\mathcal E}(X, M; C)$ consisting of proper embeddings and quasi-proper embeddings respectively.
Note that ${\mathcal E}_\#(X, M; C)$ is closed in ${\mathcal E}(X, M; C)$
(while this does not necessarily hold for ${\mathcal E}_\ast(X, M; C)$) and that
for any $f \in {\mathcal E}_\#(X, M; C)$ the restriction of $f$ to ${\rm Int}_M X$ is a proper embedding.
These properties are the reasons why we introduce the space of quasi-proper embeddings.
In fact, in Section 3 we need to consider the closures of
some collections of proper embeddings when we apply the Arzelà-Ascoli theorem.
\begin{theorem}\label{thm_basic_deform} $($\cite[Theorem 5.1]{EK}$)$
Suppose $M$ is a topological $n$-manifold possibly with boundary,
$C$ is a compact subset of $M$,
$U$ is a neighborhood of $C$ in $M$ and
$D$ and $E$ are two closed subsets of $M$ such that $D \subset {\rm Int}_M E$.
Then, for any compact neighborhood $K$ of $C$ in $U$,
there exists a neighborhood $\mathcal U$ of $i_U$ in ${\mathcal E}_\ast(U, M; E)$
and a homotopy \
$\varphi : \mathcal U \times [0, 1] \longrightarrow {\mathcal E}_\ast(U, M; D)$ \ such that
\begin{itemize}
\item[{\rm (1)}] for each $f \in {\mathcal U}$,
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\varphi_0(f) = f$, \hspace{3mm} {\rm (ii)}
$\varphi_1(f)|_C = i_C$, \hspace{3mm} {\rm (iii)}
$\varphi_t(f) = f$ on $U - K$ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $f = \mathrm{id}$ on $U \cap \partial M$, then $\varphi_t(f) = \mathrm{id}$ on $U \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[{\rm (2)}] $\varphi_t(i_U) = i_U$ $(t \in [0,1])$.
\end{itemize}
\end{theorem}
\begin{remark}
In \cite{EK} the spaces ${\mathcal E}_\ast(U, M; E)$ and ${\mathcal E}_\ast(U, M; D)$ are endowed with the compact-open topology. Even if we replace the compact-open topology with the uniform topology, ${\mathcal U}$ is still a neighborhood of $i_U$ and the homotopy $\varphi$ is continuous, since
the deformation is supported in the compact subset $K$ by condition (1)(iii).
\end{remark}
\begin{compl}\label{compl} Theorem~\ref{thm_basic_deform} still holds
if we replace the spaces of proper embeddings,
${\mathcal E}_\ast(U, M; D)$ and ${\mathcal E}_\ast(U, M; E)$,
by the spaces of quasi-proper embeddings,
${\mathcal E}_\#(U, M; D)$ and ${\mathcal E}_\#(U, M; E)$.
\end{compl}
In fact, the quasi-proper case is derived from the proper case by the following observation.
First we apply the proper case to ${\rm Int}_M\,U$ instead of $U$ itself, to obtain a deformation $\varphi_t$ of proper embeddings of ${\rm Int}_M\,U$.
If $h \in {\mathcal E}_\#(U, M; E)$ is close to $i_U$, then
we obtain the deformation $\varphi_t(h|_{{\rm Int}_M\,U})$ of the restriction $h|_{{\rm Int}_M\,U}$.
Since this deformation is supported in the compact set $K \subset {\rm Int}_M\,U$ by condition (1)(iii), it extends by $h$ itself to a deformation of $h$.
\section{Deformation lemma for uniform embeddings}
In this section, from the deformation theorem for embeddings of compact spaces (Theorem~\ref{thm_basic_deform})
we derive Theorem~\ref{thm_local_deformation}, a deformation theorem for uniform embeddings in a metric covering space over a compact manifold. In passing from the compact case to the uniform case,
the Arzelà-Ascoli theorem (\cite[Theorem 6.4]{Du}) plays an essential role.
\subsection{Product covering case} \mbox{}
First we consider the product covering case.
Throughout this subsection we work under the following assumption.
\begin{notation}\label{notation-1}
Suppose $\pi : (M, d) \to (N, \rho)$ is a metric covering projection and $N$ is a compact topological $n$-manifold possibly with boundary. Suppose $U$ is a connected open subset of $N$ isometrically evenly covered by $\pi$,
$C$ is a compact subset of $U$ and
$K$ is a compact neighborhood of $C$ in $U$.
Suppose $W$ is a subset of $M$ and
${\mathcal V}$ is a collection of connected components of $\pi^{-1}(U)$ such that $V \equiv \cup\,{\mathcal V} \subset W$.
Let $X = \pi^{-1}(C) \cap V$ and $P = \pi^{-1}(K) \cap V$.
\end{notation}
The first lemma establishes a fundamental deformation theorem for uniform embeddings in the simplest case.
\begin{lemma}\label{lem_1}
Suppose $D$ and $E$ are closed subsets of $N$ with $D \subset {\rm Int}_N E$ and
$Z \subset Y$ are subsets of $M$ such that
$Y \cap V = \pi^{-1}(E) \cap V$ and $Z \cap V = \pi^{-1}(D) \cap V$.
Then there exists a neighborhood $\mathcal W$ of the inclusion map $i_W : W \subset M$ in $\mathcal E^u_\#(W, M; Y)$ and
a homotopy $\phi : \mathcal W \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi_0(h) = h$, \hspace{3mm}
{\rm (ii)} $\phi_1(h) = \mathrm{id}$ on $X$, \hspace{3mm}
{\rm (iii)} $\phi_t(h) = h$ on $W - P$ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & $\phi_t(h)(P) \subset V$ and \ $\phi_t(h)(V) = h(V)$ \ $(t \in [0,1])$, \\[2mm]
{\rm (v)} & if $h = \mathrm{id}$ on $W \cap \partial M$, then $\phi_t(h) = \mathrm{id}$ on $W \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[(2)] $\phi_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $N$ is compact, by Lemma~\ref{lemma_covering_proj}\,(1) there exists $\lambda > 0$ such that each fiber of $\pi$ is $\lambda$-discrete.
Choose a compact subset $L$ of $U$ such that $K \subset {\rm Int}_N\, L$ and set $Q = \pi^{-1}(L) \cap V$.
We can find $\delta \in (0, \lambda/2)$ such that $C_{\delta}(L) \subset U$ and $C_{\delta}(K) \subset {\rm Int}_N\, L$.
Let $\{ V_i \}_{i \in \Lambda}$ be the collection of connected components of $V$
and set
$$(Q_i, P_i, X_i) = (Q, P, X) \cap V_i$$
for each $i \in \Lambda$.
Then the restriction $\pi_i := \pi|_{V_i} : (V_i, d) \to (U, \rho)$ is an isometry
and $C_{\delta}(Q_i) \subset V_i$ since
$d(M - V_i, Q_i) \geq \min \{ \lambda/2, \rho(N - U, L) \} > \delta$ by Lemma~\ref{lemma_covering_proj}(2).
By Deformation Theorem~\ref{thm_basic_deform} and its Complement~\ref{compl} (with $(M, U)$ replaced by $(U, L)$)
there exists a neighborhood $\mathcal U$ of $i_L$ in ${\mathcal E}_\#(L, U; E)$
and a homotopy
$\psi : \mathcal U \times [0, 1] \longrightarrow {\mathcal E}_\#(L, U; D)$ \ such that
\begin{itemize}
\item[{\rm (1)}] for each $f \in {\mathcal U}$ \
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\psi_0(f) = f$, \hspace{3mm} {\rm (ii)}
$\psi_1(f)|_C = i_C$, \hspace{3mm} {\rm (iii)}
$\psi_t(f) = f$ on $L - K$ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $f = \mathrm{id}$ on $L \cap \partial N$, then $\psi_t(f) = \mathrm{id}$ on $L \cap \partial N$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[{\rm (2)}] $\psi_t(i_L) = i_L$ $(t \in [0,1])$.
\end{itemize}
We may assume that ${\mathcal U} = {\mathcal E}_\#(i_L, \gamma; L, U; E)$ (the closed $\gamma$-neighborhood of $i_L$ in $\mathcal E_\#(L, U; E)$) for some $\gamma \in (0, \delta)$.
For each $i \in \Lambda$
we obtain the isometry with respect to the sup-metrics
$$\theta_i : \mathcal E_\#(Q_i, V_i) \cong \mathcal E_\#(L, U) : \hspace{3mm} \theta_i(f) = \pi_i f \pi_i^{-1},$$
which restricts to the isometries
$$\theta_i' : \mathcal E_\#(Q_i, V_i; Y) \cong \mathcal E_\#(L, U; E) \hspace{5mm} \text{and} \hspace{5mm}
\theta_i'' : \mathcal E_\#(Q_i, V_i; Z) \cong \mathcal E_\#(L, U; D).$$
Then, ${\mathcal W}_i \equiv (\theta_i')^{-1}({\mathcal U}) = {\mathcal E}_\#(i_{Q_i}, \gamma; Q_i, V_i; Y)$
and the homotopy $\psi$ induces the corresponding homotopy
$$\phi^i : {\mathcal W}_i \times [0,1] \to \mathcal E_\#(Q_i, V_i; Z),$$
which satisfies the following conditions:
\begin{itemize}
\item[{\rm (3)}] for each $f \in {\mathcal W}_i$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi^i_0(f) = f$, \hspace{3mm} {\rm (ii)}
$\phi^i_1(f) = \mathrm{id}$ \ on \ $X_i$, \hspace{3mm} {\rm (iii)}
$\phi^i_t(f) = f$ \ on \ $Q_i - P_i$ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $f = \mathrm{id}$ on $Q_i \cap \partial M$, then $\phi^i_t(f) = \mathrm{id}$ on $Q_i \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[{\rm (4)}] $\phi^i_t(i_{Q_i}) = i_{Q_i}$ \ $(t \in [0,1])$.
\end{itemize}
Let ${\mathcal W} = \mathcal E^u_\#(i_W, \gamma; W, M; Y)$ and define a homotopy $\phi : {\mathcal W} \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z)$ as follows.
Take any $h \in {\mathcal W}$. Since $\gamma < \delta$,
for any $i \in \Lambda$ we have $h(Q_i) \subset C_{\delta}(Q_i) \subset V_i$ and
$h|_{Q_i} \in {\mathcal W}_i$.
Therefore we can define $\phi_t(h)$ $(t \in [0,1])$ by
$$\phi_t(h)|_{Q_i} = \phi^i_t(h|_{Q_i}) \hspace{3mm} (i \in \Lambda) \hspace{5mm}
\text{and} \hspace{5mm} \phi_t(h) = h \ \ \text{on} \ \ W - P.$$
Since $\phi^i_t(h|_{Q_i}) = h$ on $Q_i - P_i$, the map $\phi_t(h)$ is a well-defined embedding
and the required conditions (1), (2) for $\phi$ follow from the corresponding conditions (3), (4) for $\phi^i$ $(i \in \Lambda)$.
For (1)(iv) note that $\phi_t(h)(Q_i) = \phi^i_t(h|_{Q_i})(Q_i) = h(Q_i)$ $(i \in \Lambda)$.
It remains to show that
\begin{itemize}
\item[] $(\ast)_1$ $\phi_t(h)$ is a uniform embedding for any $h \in {\mathcal W}$ and $t \in [0,1]$
\hspace{1mm} and
\hspace{1mm} $(\ast)_2$ $\phi$ is continuous.
\end{itemize}
Take any $h \in {\mathcal W}$. For each $i \in \Lambda$ let $h_i = \theta_i'(h|_{Q_i})$.
Since $h$ is a uniform embedding,
the family $h|_{Q_i} \in {\mathcal W}_i$ ($i \in \Lambda$) is an equi-uniform family of embeddings.
Therefore, the families ${\mathcal C}(h) = \{ h_i \}_{i \in \Lambda}$ and ${\mathcal C}'(h) = \{ h_i^{-1} \}_{i \in \Lambda}$ are also equi-continuous. Since ${\rm Im}\,h_i \subset C_{\delta}(L) \subset U$ $(i \in \Lambda)$ and $C_{\delta}(L)$ is compact,
by the Arzelà-Ascoli theorem (\cite[Theorem 6.4]{Du}) the closure $cl\,{\mathcal C}(h)$ of ${\mathcal C}(h)$ in ${\mathcal C}(L, U)$ is compact.
It also follows that $cl\,{\mathcal C}(h) \subset {\mathcal U} \subset \mathcal E_\#(L, U; E)$
by Lemma~\ref{lemma_equi-conti} and the equi-continuity of ${\mathcal C}'(h)$.
Now we show that
$\psi_t(h_i) \in \mathcal E_\#(L, U; D)$ \ $(i \in \Lambda, t \in [0,1])$ \ is an equi-uniform family of embeddings.
Since $\psi(cl\,{\mathcal C}(h) \times [0,1]) \subset \mathcal E_\#(L, U; D)$ is compact,
the family $\psi_t(h_i)$ $(i \in \Lambda, t \in [0,1])$ is equi-continuous.
The equi-continuity of the family $(\psi_t(h_i))^{-1}$ $(i \in \Lambda, t \in [0,1])$ is shown as follows. Since ${\rm Im}\,\psi_t(f) = {\rm Im}\,f$ for each $(f,t) \in {\mathcal U} \times [0,1]$,
we have the map
$$\mbox{$\chi : {\mathcal U} \times [0,1] \longrightarrow {\mathcal H}(L)$ : $\chi_t(f) = (\psi_t(f))^{-1}f$.}$$
Since $\chi(cl\,{\mathcal C}(h) \times [0,1])$ is compact,
the family $\chi_t(h_i)$ $(i \in \Lambda, t \in [0,1])$ is equi-continuous.
Since ${\mathcal C}'(h) = \{ h_i^{-1} \}_{i \in \Lambda}$ is equi-continuous,
the family
$$(\psi_t(h_i))^{-1} = \chi_t(h_i) h_i^{-1} \ \ (i \in \Lambda, t \in [0,1])$$
is also equi-continuous as desired.
$(\ast)_1$
Pulling back each map $\psi_t(h_i)$ by the isometry $\theta_i''$,
we see that $\phi^i_t(h|_{Q_i})$ $(i \in \Lambda, t \in [0,1])$ is an equi-uniform family of embeddings.
The map $\phi_t(h)$ is uniformly continuous,
since $\phi_t(h)|_{W-P} = h|_{W-P}$ is uniformly continuous,
the family $\phi_t(h)|_{Q_i} = \phi^i_t(h|_{Q_i})$ $(i \in \Lambda)$ is equi-continuous
and $C_\delta(P_i) \subset Q_i$ $(i \in \Lambda)$.
Similarly the map $\phi_t(h)^{-1}h$ is uniformly continuous,
since $\phi_t(h)^{-1}h = \mathrm{id}$ on $W-P$,
the family $\phi_t(h)^{-1}h|_{Q_i} = \phi^i_t(h|_{Q_i})^{-1}h|_{Q_i}$ $(i \in \Lambda)$ is equi-continuous
and $C_\delta(P_i) \subset Q_i$ $(i \in \Lambda)$.
Therefore, $\phi_t(h)^{-1} = (\phi_t(h)^{-1}h)h^{-1}$ is also uniformly continuous.
$(\ast)_2$ To see the continuity of the homotopy $\phi$,
take any $(h,t) \in {\mathcal W} \times [0,1]$ and $\varepsilon > 0$.
Since $cl\,{\mathcal C}(h)$ is compact,
the homotopy
$$\psi : {\mathcal U} \times [0,1] \longrightarrow
\mathcal E_\#(L, U; D)$$
is uniformly continuous on $cl\,{\mathcal C}(h) \times [0,1]$.
Hence there exists $\eta \in (0, \varepsilon)$ such that
\begin{itemize}
\item[]
if $(f, u), (g, v) \in cl\,{\mathcal C}(h) \times [0,1]$ and $\rho(f,g) \leq \eta$, $|u-v| \leq \eta$, then $\rho(\psi_u(f),\psi_v(g)) < \varepsilon$.
\end{itemize}
By Lemma~\ref{lemma_unif_nbd} we can find a neighborhood ${\mathcal O}$ of $cl\,{\mathcal C}(h)$ in
${\mathcal U}$ such that
\begin{itemize}
\item[] if $(f, u), (g, v) \in {\mathcal O} \times [0,1]$ and $\rho(f,g) \leq \eta$, $|u-v| \leq \eta$, then $\rho(\psi_u(f),\psi_v(g)) < \varepsilon$.
\end{itemize}
Choose $\zeta \in (0, \eta)$ such that
${\mathcal O}$ contains the open $\zeta$-neighborhood ${\mathcal O}(\zeta)$ of $cl\,{\mathcal C}(h)$ in ${\mathcal U}$.
Then it follows that
\begin{itemize}
\item[] if $(k,s) \in {\mathcal W} \times [0,1]$ and $d(h,k) < \zeta$, $|t-s| < \zeta$, then
$d(\phi_t(h),\phi_s(k)) \leq \varepsilon$.
\end{itemize}
In fact, take any $(k,s) \in {\mathcal W} \times [0,1]$ with $d(h,k) < \zeta$ and $|t-s| < \zeta$.
Then, for any $i \in \Lambda$,
we have $\rho(h_i, k_i) = d(h|_{Q_i}, k|_{Q_i}) < \zeta$ and $h_i \in {\mathcal C}(h)$,
so that $h_i, k_i \in {\mathcal O}(\zeta) \subset {\mathcal O}$.
Hence, by the choice of ${\mathcal O}$ and $\eta$
we have $\rho(\psi_t(h_i), \psi_s(k_i)) < \varepsilon$ and so
$d(\phi^i_t(h|_{Q_i}), \phi^i_s(k|_{Q_i})) < \varepsilon$.
This implies $d(\phi_t(h),\phi_s(k)) \leq \varepsilon$.
This completes the proof.
\end{proof}
The next lemma deals with the pattern of intersection of each sheet of $V$ with $Y$
(which is represented by the set $I_{V_0}$ defined in the proof).
Here, $V$ represents the subset of $W$ on which we shall deform the uniform embeddings,
while $Y$ represents the subset on which the uniform embeddings have already been deformed to the identity.
In Lemma~\ref{lem_1} the pattern is the same for all sheets of $V$ (corresponding to the inverse image of a subset of $N$),
while in Lemma~\ref{lem_2} finitely many intersection patterns appear
(corresponding to the inverse images of finitely many subsets of $N$), and
Lemma~\ref{lem_1} is applied to each pattern separately.
When $O_0$ is a connected open subset of $N$ isometrically evenly covered by $\pi$,
let ${\mathcal S}(O_0)$ denote the collection of connected components of $\pi^{-1}(O_0)$.
For each $O \in {\mathcal S}(O_0)$ the restriction $\pi|_O : O \to O_0$ is an isometry.
For subsets $A_0, B_0 \subset O_0$, let
$${\mathcal S}(O_0, A_0, B_0) = \{ (O, A, B) \mid O \in {\mathcal S}(O_0), A = (\pi|_O)^{-1}(A_0), B = (\pi|_O)^{-1}(B_0)\}.$$
We keep the notations given in Notation~\ref{notation-1}.
\begin{lemma}\label{lem_2}
Suppose $\{ (O_j, E_j, D_j) \}_{j\in J}$ is a finite family of subsets of $N$ such that
(a) for each $j \in J$, $O_j$ is a connected open subset of $N$ and $E_j, D_j$ are closed subsets of $N$ with
$D_j \subset {\rm Int}_N E_j$ and $E_j \subset O_j$ and
(b) $O_j$ $(j \in J)$ and ${\rm St}(U, \{ O_j \}_{j \in J})$ are isometrically evenly covered by $\pi$.
Suppose ${\mathcal F}$ is a subcollection of $\cup_{j \in J} \{ j \} \times {\mathcal S}(O_j, E_j, D_j)$ and
let
$$Y = \cup \{ E \mid (j, (O, E, D)) \in {\mathcal F} \} \hspace{5mm} \text{and} \hspace{5mm}
Z = \cup \{ D \mid (j, (O, E, D)) \in {\mathcal F} \}.$$
Then there exists a neighborhood $\mathcal W$ of the inclusion map $i_W$ in $\mathcal E^u_\#(W, M; Y)$ and
a homotopy $\phi : \mathcal W \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi_0(h) = h$, \hspace{3mm}
{\rm (ii)} $\phi_1(h) = \mathrm{id}$ on $X$, \\[2mm]
{\rm (iii)} & $\phi_t(h) = h$ \ on \ $W - P$ \ \ and \ \ $\phi_t(h)(W) = h(W)$ \ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $h = \mathrm{id}$ on $W \cap \partial M$, then $\phi_t(h) = \mathrm{id}$ on $W \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[(2)] $\phi_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
\end{lemma}
\begin{proof}
For each $V_0 \in {\mathcal V}$
consider the subset $I_{V_0}$ of $J$ defined by
$$I_{V_0} = \{ j \in J \mid \exists \, (j, (O, E, D)) \in {\mathcal F} \ \text{such that} \ E \cap V_0 \neq \emptyset \}.$$
For each $I \subset J$ let
$$\begin{array}[t]{l}
{\mathcal V}_I = \{ V_0 \in {\mathcal V} \mid I_{V_0} = I \}, \ \
V_I \equiv \cup \,{\mathcal V}_I, \ \ X_I = \pi^{-1}(C) \cap V_I, \ \ P_I = \pi^{-1}(K) \cap V_I \\[2mm]
E_I = \cup_{i \in I} E_i \ \ \text{and} \ \ D_I = \cup_{i \in I} D_i.
\end{array}$$
\vskip 1mm
\noindent It follows that $V_I \subset V \subset W$ and $D_I \subset {\rm Int}_N E_I$.
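The inclusion $D_I \subset {\rm Int}_N E_I$ follows from hypothesis (a) and the fact that a union of open sets is open:

```latex
$$D_I = \bigcup_{i \in I} D_i
\;\subset\; \bigcup_{i \in I} {\rm Int}_N\,E_i
\;\subset\; {\rm Int}_N \Big( \bigcup_{i \in I} E_i \Big)
= {\rm Int}_N\,E_I,$$
% where the middle inclusion holds because \bigcup_{i \in I} {\rm Int}_N E_i
% is an open subset of E_I, hence is contained in {\rm Int}_N E_I.
```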
We show that
$${\rm (i)} \ \ Y \cap V_I = \pi^{-1}(E_I) \cap V_I \hspace{5mm} \text{and} \hspace{5mm} {\rm (ii)} \ \ Z \cap V_I = \pi^{-1}(D_I) \cap V_I.$$
(i) Given $x \in Y \cap V_I$.
Since $x \in Y$, it follows that
$x \in E$ for some $(i, (O,E,D)) \in {\mathcal F}$, thus $(O,E,D) \in {\mathcal S}(O_i,E_i,D_i)$ and $E = (\pi|_O)^{-1}(E_i)$, so
$\pi(x) \in E_i$.
Since $x \in V_I$, it is seen that $x \in V_0$ for some $V_0 \in {\mathcal V}_I$ and $I_{V_0} = I$.
Since $x \in E \cap V_0 \neq \emptyset$, we have $i \in I_{V_0} = I$ and $E_i \subset E_I$.
Hence $\pi(x) \in E_I$ and $x \in \pi^{-1}(E_I) \cap V_I$.
Conversely suppose $x \in \pi^{-1}(E_I) \cap V_I$.
Then $\pi(x) \in E_I$, thus $\pi(x) \in E_i$ for some $i \in I$.
Since $x \in V_I$, it follows that $x \in V_0$ for some $V_0 \in {\mathcal V}_I$ and $I_{V_0} = I \ni i$ so that
$E \cap V_0 \neq \emptyset$ for some $(i, (O,E,D)) \in {\mathcal F}$, so $(O,E,D) \in {\mathcal S}(O_i,E_i,D_i)$ and $E \subset Y$.
By assumption (b), $\widetilde{U} \equiv {\rm St}(U, \{ O_j \}_{j \in J})$ is isometrically evenly covered by $\pi$; it is also a connected open subset of $N$.
Since $\pi(x) \in U \cap E_i \subset U \cap O_i$, we have $O_i \subset \widetilde{U}$, so $V_0, O \subset \pi^{-1}(\widetilde{U})$.
Since $V_0$ and $O$ are connected and $V_0 \cap O \supset V_0 \cap E \neq \emptyset$,
there exists $\widetilde{U}_0 \in {\mathcal S}(\widetilde{U})$ with $V_0, O \subset \widetilde{U}_0$.
Since $\pi : \widetilde{U}_0 \to \widetilde{U}$ is an isometry,
$E \subset O \subset \widetilde{U}_0$, $x \in V_0 \subset \widetilde{U}_0$ and
$\pi(x) \in E_i \subset O_i \subset \widetilde{U}$,
it follows that $x \in E \subset Y$, so $x \in Y \cap V_I$ as desired.
The assertion (ii) follows from the same argument as (i).
By Lemma~\ref{lem_1}, for each $I \subset J$ there exists
a neighborhood $\mathcal W_I$ of $i_W$ in $\mathcal E^u_\#(W, M; Y)$ and
a homotopy $\phi^I : \mathcal W_I \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W_I$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi^I_0(h) = h$, \hspace{3mm}
{\rm (ii)} $\phi^I_1(h) = \mathrm{id}$ on $X_I$, \hspace{3mm}
{\rm (iii)} $\phi^I_t(h) = h$ on $W - P_I$ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & $\phi^I_t(h)(P_I) \subset V_I$ \ and \ $\phi^I_t(h)(V_I) = h(V_I)$ \ $(t \in [0,1])$, \\[2mm]
{\rm (v)} & if $h = \mathrm{id}$ on $W \cap \partial M$, then $\phi^I_t(h) = \mathrm{id}$ on $W \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[(2)] $\phi^I_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
Since ${\mathcal V}$ is the disjoint union of the subcollections ${\mathcal V}_I$ $(I \subset J)$,
it follows that $V$ is the disjoint union of $V_I$ $(I \subset J)$.
Then ${\mathcal W} = \cap_{I \subset J} {\mathcal W}_I$ is a neighborhood of $i_W$ in $\mathcal E^u_\#(W, M; Y)$ and we can define
a homotopy $\phi : \mathcal W \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z)$ by
$$\mbox{$\phi_t(h) = \phi^I_t(h)$ \ \ on \ \ $V_I$ \hspace{3mm} and \hspace{3mm} $\phi_t(h) = h$ \ \ on \ \ $W - P$.}$$
Since there exists $\gamma > 0$ such that $C_\gamma(P_I) \subset V_I$ $(I \subset J)$,
the uniform continuity of $\phi_t(h)$ follows from those of the maps $h$ and $\phi^I_t(h)$ $(I \subset J)$.
Similarly, the map $\phi_t(h)^{-1}h$ is uniformly continuous since $\phi_t(h)^{-1}h = \mathrm{id}$ on $W - P$ and
the maps $\phi^I_t(h)^{-1}h$ $(I \subset J)$ are uniformly continuous.
Thus $\phi_t(h)^{-1} = (\phi_t(h)^{-1}h)h^{-1}$ is also uniformly continuous.
\end{proof}
\subsection{General case} \mbox{}
Theorem~\ref{thm_local_deformation} is easily deduced from Theorem~\ref{thm_local_deformation-2},
whose proof is based upon a recursive application of Lemma~\ref{lem_2} to a finite family of local trivializations of the metric covering projection $\pi$.
Here, the key is to set up the correct data to which Lemma~\ref{lem_2} is applied.
\begin{theorem}\label{thm_local_deformation-2}
Suppose $\pi : (M, d) \to (N, \rho)$ is a metric covering projection, $N$ is a compact topological $n$-manifold possibly with boundary,
$X$ is a closed subset of $M$, $W' \subset W$ are uniform neighborhoods of $X$ in $(M, d)$ and
$Z$, $Y$ are closed subsets of $M$ such that $Y$ is a uniform neighborhood of $Z$.
Then there exists a neighborhood $\mathcal W$ of the inclusion map $i_W : W \subset M$ in $\mathcal E^u_\#(W, M; Y)$ and
a homotopy $\phi : \mathcal W \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi_0(h) = h$, \hspace{3mm}
{\rm (ii)} $\phi_1(h) = \mathrm{id}$ \ on \ $X$, \\[2mm]
{\rm (iii)} & $\phi_t(h) = h$ \ on \ $W - W'$ \ \ and \ \ $\phi_t(h)(W) = h(W)$ \ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $h = \mathrm{id}$ on $W \cap \partial M$, then $\phi_t(h) = \mathrm{id}$ on $W \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[(2)] $\phi_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
\end{theorem}
\begin{proof} For $m \in \mathbb N$ let $[m] = \{ 1, 2, \ldots, m \}$.
Choose $\gamma > 0$ such that $C_\gamma(X) \subset W'$ and $C_\gamma(Z) \subset Y$.
Since $N$ is compact, there exists a finite open cover ${\mathcal U} = \{ U_i \}_{i\in[m]}$ of $N$ such that for each $i \in [m]$
\begin{itemize}
\item[] $\operatorname{diam} U_i < \gamma$, \
$U_i$ is connected \ and \ ${\rm St}\,(U_i, \mathcal U)$ is isometrically evenly covered by $\pi$.
\end{itemize}
There exists a finite closed cover $\mathcal F = \{ F_i \}_{i\in[m]}$ of $N$ such that $F_i \subset U_i$ for each $i \in [m]$.
By Lemma~\ref{lemma_covering_proj} there exists $\lambda > 0$ such that each fiber of $\pi$ is $\lambda$-discrete.
Choose $\delta \in (0, \lambda/2)$ such that $C_{\delta}(F_i) \subset U_i$ for each $i \in [m]$.
Take real numbers
$$\delta > \delta_0 > \delta_1 > \cdots > \delta_m > 0.$$
For each $i \in [m]$ we apply Lemma~\ref{lem_2} to the following data:
\vskip 2mm
\begin{tabular}[t]{l}
$U = U_i \subset N$, \hspace{5mm} $(K_i, C_i) = (C_{\delta_{i-1}}(F_i), C_{\delta_{i}}(F_i))$, \hspace{5mm} $W \subset M$, \\[3mm]
${\mathcal V}_i = \{ V' \in {\mathcal S}(U_i) \mid V' \cap X \neq \emptyset \}$,
\hspace{5mm} $V_i = \cup \, {\mathcal V}_i$,
\hspace{5mm} $X_i = \pi^{-1}(C_i) \cap V_i$,
\hspace{5mm} $P_i = \pi^{-1}(K_i) \cap V_i$, \\[3mm]
$(O_j, E_j^i, D_j^i) = (U_j, C_{\delta_{i-1}}(F_j), C_{\delta_{i}}(F_j))$ \ $(j \in [m])$, \\[3mm]
${\mathcal F}_i = \{(k, (O, E, D)) \in \bigcup_{j \in [m]} \{ j \} \times {\mathcal S}(O_j, E_j^i, D_j^i) \mid
\mbox{(a) $E \cap X \neq \emptyset$ and $k \leq i-1$, \ or \ (b) $E \cap Z \neq \emptyset$} \}.$ \\[3mm]
$Y_i = \cup \{ E \mid (k, (O, E, D)) \in {\mathcal F}_i\}$ \hspace{5mm} and \hspace{5mm}
$Z_i = \cup \{ D \mid (k, (O, E, D)) \in {\mathcal F}_i\}$.
\end{tabular} \\[3mm]
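With these choices the triples $(O_j, E_j^i, D_j^i)$ satisfy hypothesis (a) of Lemma~\ref{lem_2}: writing $C_\varepsilon(A)$ for the closed $\varepsilon$-neighborhood of $A$ in $(N, \rho)$, since $\delta_i < \delta_{i-1} < \delta$ we have

```latex
$$D_j^i = C_{\delta_i}(F_j)
\;\subset\; \{\, x \in N \mid \rho(x, F_j) < \delta_{i-1} \,\}
\;\subset\; {\rm Int}_N\, C_{\delta_{i-1}}(F_j) = {\rm Int}_N\, E_j^i,$$
% and E_j^i = C_{\delta_{i-1}}(F_j) \subset C_{\delta}(F_j) \subset U_j = O_j
% by the choice of \delta.
```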
By the choice of $\gamma$ it is seen that $V_i \subset C_\gamma(X) \subset W' \subset W$.
Thus we obtain a neighborhood $\mathcal W_i$ of $i_W$ in $\mathcal E^u_\#(W, M; Y_i)$ and
a homotopy $\phi^i : \mathcal W_i \times [0,1] \longrightarrow \mathcal E^u_\#(W, M; Z_i)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W_i$ \\
\begin{tabular}[t]{c@{\ \,}l}
{\rm (i)} & $\phi^i_0(h) = h$, \hspace{3mm}
{\rm (ii)} $\phi^i_1(h) = \mathrm{id}$ \ on \ $X_i$, \\[2mm]
{\rm (iii)} & $\phi^i_t(h) = h$ \ on \ $W - P_i$ \ \ and \ \ $\phi^i_t(h)(W) = h(W)$ \ \ $(t \in [0,1])$, \\[2mm]
{\rm (iv)} & if $h = \mathrm{id}$ on $W \cap \partial M$, then $\phi_t^i(h) = \mathrm{id}$ on $W \cap \partial M$ $(t \in [0,1])$,
\end{tabular}
\vskip 1.5mm
\item[(2)] $\phi^i_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
To compose these homotopies we use the following implications:
\begin{itemize}
\item[(3)] $Y_{i+1} \subset Z_i \cup X_i$ \ \ $(i \in [m-1])$, \hspace{5mm}
\item[(4)] (i) $Z \subset Z_i$ \ \ $(i \in [m])$, \hspace{5mm}
(ii) $X \subset X_m \cup Z_m$, \hspace{5mm}
(iii) \,$Y_1 \subset Y$.
\end{itemize}
We will verify these statements later; for now we continue the construction of the required homotopy $\phi$.
By (3) and (1)(ii) we have the maps $\phi^i_1 : \mathcal W_i \to \mathcal E^u_\#(W, M; Y_{i+1})$ $(i \in [m-1])$.
Since $\phi^i_1(i_W) = i_W \in \mathcal W_{i+1}$,
by backward induction the neighborhoods $\mathcal W_i$ $(i \in [m-1])$ can be replaced by smaller ones
so as to achieve the condition $\phi^i_1(\mathcal W_i) \subset \mathcal W_{i+1}$.
Since $\mathcal E^u_\#(W, M; Y) \subset \mathcal E^u_\#(W, M; Y_1)$ by (4)(iii),
there exists a neighborhood $\mathcal W$ of $i_W$ in $\mathcal E^u_\#(W, M; Y)$ such that $\mathcal W \subset \mathcal W_1$.
Then we have the composition maps $\phi^{i-1}_1 \cdots \phi^1_1 : \mathcal W \to \mathcal W_i$ $(i \in [m])$,
where $\phi^{i-1}_1 \cdots \phi^1_1 = i_{\mathcal W}$ for $i=1$.
Finally, since $\mathcal E^u_\#(W, M; Z_i) \subset \mathcal E^u_\#(W, M; Z)$ by (4)(i),
we can define the required homotopy
$$\mbox{$\phi : \mathcal W \times [0,m] \longrightarrow \mathcal E^u_\#(W, M; Z)$ \hspace{2mm} by
\hspace{2mm}
$\phi_t = \phi^i_{t-i+1}\phi^{i-1}_1 \cdots \phi^1_1$ \hspace{2mm} $(t \in [i-1,i]$, $i \in [m])$.}$$
By (1)(i) the homotopy $\phi$ is well-defined and the required conditions (1), (2) for $\phi$ follow from the corresponding properties (1), (2) of the homotopies $\phi^i$ $(i \in [m])$.
For (1) (ii) note that $\phi_m(h) = \phi_1^m(\phi^{m-1}_1 \cdots \phi^1_1(h)) = \mathrm{id}$ on $X_m \cup (W \cap Z_m)$ and
that $X \subset W \cap (X_m \cup Z_m) = X_m \cup (W \cap Z_m)$ by (4)(ii).
It remains to verify the assertions (3) and (4).
(3) Take any $y \in Y_{i+1}$.
We have $y \in E$ for some $(k, (O,E,D)) \in {\mathcal F}_{i+1}$, so
$(O,E,D) \in {\mathcal S}(O_k, E_k^{i+1}, D_k^{i+1})$ and
(a) $E \cap X \neq \emptyset$ and $k \leq i$ or (b) $E \cap Z \neq \emptyset$.
It follows that $\pi|_O : O \to O_k$ is an isometry and $E = (\pi|_O)^{-1}(D_k^i)$ since $E_k^{i+1} = D_k^i$.
In the case (a) with $k = i$;
Since $O_k = U_i$ and $D_k^i = D_i^i = C_i$, it follows that
$y \in E = (\pi|_O)^{-1}(C_i) \subset \pi^{-1}(C_i)$.
Since $O \cap X \supset E \cap X \neq \emptyset$, it follows that
$O \in {\mathcal V}_i$ and $V_i \supset O \supset E \ni y$.
Hence we have $y \in \pi^{-1}(C_i) \cap V_i = X_i$.
In the case (a) with $k \leq i-1$ or (b);
Let $E' = (\pi|_O)^{-1}(E_k^i)$. Then $(O, E', E) \in {\mathcal S}(O_k, E_k^i, D_k^i)$ and
$(k, (O, E', E)) \in {\mathcal F}_i$ since $E' \cap X \supset E \cap X \neq \emptyset$ in the case (a) with $k \leq i-1$ and
$E' \cap Z \supset E \cap Z \neq \emptyset$ in the case (b).
Hence we have $y \in E \subset Z_i$.
(4)(ii) Take any $x \in X$. Then $\pi(x) \in F_k$ for some $k \in [m]$.
In the case where $k \leq m - 1$;
Since $\pi(x) \in F_k \subset U_k = O_k$, it follows that
$x \in O$ for some $O \in {\mathcal S}(O_k)$ and $\pi|_O : O \to O_k$ is an isometry.
Put $E = (\pi|_O)^{-1}(E_k^m)$ and $D = (\pi|_O)^{-1}(D_k^m)$.
Then, it follows that $x \in D$ since $\pi(x) \in F_k \subset D_k^m$, and that
$(k, (O, E, D)) \in {\mathcal F}_m$ since $(O, E, D) \in {\mathcal S}(O_k, E_k^m, D_k^m)$, $x \in E \cap X \neq \emptyset$ and $k \leq m-1$.
This implies that $x \in D \subset Z_m$.
In the case where $k = m$;
Since $\pi(x) \in F_m \subset C_m \subset U_m$,
it follows that $x \in \pi^{-1}(C_m)$ and there exists $V' \in {\mathcal S}(U_m)$ with $x \in V'$.
Since $x \in V' \cap X$, we have $V' \in {\mathcal V}_m$ and $x \in V' \subset V_m$.
This implies that $x \in \pi^{-1}(C_m) \cap V_m = X_m$.
The statements (4)(i) and (4)(iii) are verified similarly. This completes the proof.
\end{proof}
\section{Groups of uniform homeomorphisms of metric spaces with bi-Lipschitz Euclidean ends}
In this section we study some global deformation properties of groups of uniform homeomorphisms of manifolds with bi-Lipschitz Euclidean ends.
The Euclidean space $\mathbb R^n$ admits the canonical Riemannian covering projection $\pi : \mathbb R^n \to \mathbb R^n/\mathbb Z^n$ onto the flat torus.
Therefore we can apply the Local Deformation Theorem (Theorem~\ref{thm_local_deformation}) to uniform embeddings in $\mathbb R^n$.
\begin{proposition}\label{prop_deform_Euclid}
For any closed subset $X$ of $\mathbb R^n$ and any uniform neighborhoods $W' \subset W$ of $X$ in $\mathbb R^n$
there exists a neighborhood $\mathcal W$ of the inclusion map $i_W : W \subset \mathbb R^n$ in $\mathcal E^u_\ast(W, \mathbb R^n)$ and
a homotopy $\phi : \mathcal W \times [0,1] \longrightarrow \mathcal E^u_\ast(W, \mathbb R^n)$ such that
\begin{itemize}
\item[(1)] for each $h \in \mathcal W$ \ \
\begin{tabular}[t]{c@{\ }l}
{\rm (i)} & $\phi_0(h) = h$, \ \
{\rm (ii)} $\phi_1(h) = \mathrm{id}$ on $X$, \\[2mm]
{\rm (iii)} & $\phi_t(h) = h$ on $W - W'$ \ \ and \ \ $\phi_t(h)(W) = h(W)$ \ \ $(t \in [0,1])$,
\end{tabular}
\vskip 1mm
\item[(2)] $\phi_t(i_W) = i_W$ \ $(t \in [0,1])$.
\end{itemize}
\end{proposition}
The relevant feature of Euclidean space $\mathbb R^n$ in this context is the existence of similarity transformations
$$\mbox{$k_\gamma : \mathbb R^n \approx \mathbb R^n$ : \ $k_\gamma(x) = \gamma x$ \hspace{5mm} \ $(\gamma > 0)$.}$$
This enables us to deduce, from the local one, a global deformation in groups of uniform homeomorphisms on $\mathbb R^n$
and more generally, manifolds with bi-Lipschitz Euclidean ends.
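As a sanity check, added here for the reader's convenience, conjugation by $k_\gamma$ rescales the sup-metric by exactly the factor $\gamma$; this elementary computation is the fact used below (e.g., in the proof of Lemma~\ref{lemma_deformation}):

```latex
% For maps f, g with the sup-metric d(f, g) = \sup_x \| f(x) - g(x) \|:
\begin{align*}
d\bigl(k_\gamma f k_{1/\gamma},\, k_\gamma g k_{1/\gamma}\bigr)
  = \sup_{x} \bigl\| \gamma f(x/\gamma) - \gamma g(x/\gamma) \bigr\|
  = \gamma \sup_{y} \bigl\| f(y) - g(y) \bigr\|
  = \gamma\, d(f, g).
\end{align*}
```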
\subsection{Euclidean ends case} \mbox{}
Recall our conventions: For $r \in \mathbb R$ we set $\mathbb R^n_r = \mathbb R^n - O(r)$, where $O(r) = \{ x \in \mathbb R^n \mid \| x \| < r \}$.
For $s > r > 0$ and $\varepsilon > 0$, let $\mathcal E^u(\iota_{s,r}, \varepsilon; \mathbb R^n_s, \mathbb R^n_r)$ denote the open $\varepsilon$-neighborhood of the inclusion map $\iota_{s,r} : \mathbb R^n_s \subset \mathbb R^n_r \subset \mathbb R^n$ in the space $\mathcal E^u(\mathbb R^n_s, \mathbb R^n_r)_u.$
We can apply Proposition~\ref{prop_deform_Euclid} to $(X, W', W) = (\mathbb R^n_v, \mathbb R^n_u, \mathbb R^n_s)$ and replace $\mathcal W$ by a smaller one to obtain the following conclusion.
\begin{lemma}\label{lemma_local_deform_E-end}
For any $0 \leq r < s < u < v$ and $\varepsilon > 0$ there exist $\delta > 0$ and a homotopy
$$\phi : \mathcal E^u(\iota_{s,r}, \delta; \mathbb R^n_s, \mathbb R^n_r) \times [0,1] \longrightarrow \mathcal E^u(\iota_{s,r}, \varepsilon; \mathbb R^n_s, \mathbb R^n_r)$$
such that {\rm (1)} for each $h \in \mathcal E^u(\iota_{s,r}, \delta; \mathbb R^n_s, \mathbb R^n_r)$
\begin{itemize}
\item[] \hspace*{8mm} {\rm (i)} $\phi_0(h) = h$, \ \ {\rm (ii)} $\phi_1(h) = \mathrm{id}$ on $\mathbb R^n_v$, \ \
{\rm (iii)} $\phi_t(h) = h$ on $\mathbb R^n_s - \mathbb R^n_u$ \ $(t \in [0,1])$,
\item[(2)] $\phi_t(\iota_{s,r}) = \iota_{s,r}$ \ $(t \in [0,1])$.
\end{itemize}
\end{lemma}
Now we apply a similarity transformation $k_\gamma$ for a sufficiently large $\gamma > 0$ to Lemma~\ref{lemma_local_deform_E-end}.
\begin{lemma}\label{lemma_deformation}
For any $c, s_0 > 0$ and $\beta > \alpha > 1$ there exist $s > s_0$ and a homotopy
$$\psi : \mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n) \times [0,1] \longrightarrow \mathcal E^u(\iota_s, s; \mathbb R^n_s, \mathbb R^n)$$
such that {\rm (1)} for each $h \in \mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n)$
\begin{itemize}
\item[] \hspace{8mm} {\rm (i)} $\psi_0(h) = h$, \ \ {\rm (ii)} $\psi_1(h) = \mathrm{id}$ on $\mathbb R^n_{\beta s}$, \ \
{\rm (iii)} $\psi_t(h) = h$ on $\mathbb R^n_s - \mathbb R^n_{\alpha s}$ \ $(t \in [0,1])$,
\item[(2)] $\psi_t(\iota_s) = \iota_s$ \ $(t \in [0,1])$
\item[(3)] $\psi(\mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n_r) \times [0,1]) \subset \mathcal E^u(\iota_s, s; \mathbb R^n_s, \mathbb R^n_r)$ for any $r < s$.
\end{itemize}
\end{lemma}
\begin{proof}
We apply Lemma~\ref{lemma_local_deform_E-end} to $0 < 1 < 2 \alpha < 2\beta$ and $\varepsilon = 1$. This yields $\delta \in (0, c/s_0)$ and a homotopy
$$\phi : \mathcal E^u(\iota_1, \delta; \mathbb R^n_1, \mathbb R^n) \times [0,1] \longrightarrow \mathcal E^u(\iota_1, 1; \mathbb R^n_1, \mathbb R^n).$$
as in Lemma~\ref{lemma_local_deform_E-end}.
Let $s := c/\delta$. Then $s > s_0$ and we have the homeomorphism
$$\eta : \mathcal E^u(\mathbb R^n_1, \mathbb R^n) \approx \mathcal E^u(\mathbb R^n_s, \mathbb R^n) : \ \eta(f) = k_s f \, k_{1/s}.$$
Since $\eta(\iota_1) = \iota_s$ and $\displaystyle d(\eta(f), \eta(g)) = s \,d(f, g)$, for each $a > 0$ we have the restriction
$$\eta_{sa} : \mathcal E^u(\iota_1, a; \mathbb R^n_1, \mathbb R^n) \approx \mathcal E^u(\iota_s, sa; \mathbb R^n_s, \mathbb R^n).$$
Then the homotopy $\psi$ is defined by
$$\psi_t = \eta_s \phi_t\eta_c^{-1}.$$
The conditions (1), (2) on $\psi$ follow from the corresponding properties of $\phi$.
By (1)(iii) ${\rm Im}\, \psi_t(h) = {\rm Im}\, h$ for each $h \in \mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n)$, which implies (3).
\end{proof}
\subsection{Bi-Lipschitz Euclidean ends case} \mbox{}
Suppose $(X, d)$ is a metric space and $L$ is a bi-Lipschitz $n$-dimensional Euclidean end of $X$.
This means that $L$ is a closed subset of $X$ which admits a bi-Lipschitz homeomorphism
$\theta : (\mathbb R^n_1, \partial \mathbb R^n_1) \cong ((L, {\rm Fr}_X L), d|_L)$ and
$d(X - L, \theta(\mathbb R^n_r)) \to \infty$ as $r \to \infty$.
Let $\kappa \geq 1$ be the bi-Lipschitz constant of $\theta$ and
for $a \geq 1$ let $L_a = \theta(\mathbb R^n_a)$ and $\theta_a = \theta|_{\mathbb R^n_a} : \mathbb R^n_a \approx L_a$.
\begin{lemma}\label{lemma_deform_homeo_1}
For any $\lambda > 0$ and $s_0 \geq 1$ there exist $s > s_0$, $\mu > 0$ and
a homotopy $\phi : {\mathcal H}^u(X; \lambda) \times [0,1] \longrightarrow {\mathcal H}^u(X; \mu)$
such that for each $h \in {\mathcal H}^u(X; \lambda)$
\begin{itemize}
\item[(i)\,] $\phi_0(h) = h$, \ \ {\rm (ii)} $\phi_1(h) = \mathrm{id}$ on $L_{3s}$, \ \ {\rm (iii)} $\phi_t(h) = h$ on $X - L_{2s}$ \ $(t \in [0,1])$,
\item[(iv)] if $h = \mathrm{id}$ on $L_s$, then $\phi_t(h) = h$ $(t \in [0,1])$.
\end{itemize}
\end{lemma}
\begin{proof} Take any $\lambda > 0$.
Since $d(X - L, L_r) \to \infty$ ($r \to \infty$), there exists
\begin{itemize}
\item[(1)] $r > s_0$ such that $h(L_r) \subset L_1$ for any $h \in {\mathcal H}^u(X; \lambda)$.
\end{itemize}
Let $c \equiv \lambda \kappa > 0$.
Applying Lemma~\ref{lemma_deformation} to $c, r$ and $\alpha=2$, $\beta=3$,
we obtain $s > r$ and a homotopy
$$\psi : \mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n_1) \times [0,1] \longrightarrow \mathcal E^u(\iota_s, s; \mathbb R^n_s, \mathbb R^n_1)$$
such that (2) for each $f \in \mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n_1)$
\begin{itemize}
\item[] \hspace{8mm} (i) $\psi_0(f) = f$, \ \ (ii) $\psi_1(f) = \mathrm{id}$ on $\mathbb R^n_{3s}$, \ \
(iii) $\psi_t(f) = f$ on $\mathbb R^n_s - \mathbb R^n_{2s}$ \ $(t \in [0,1])$,
\item[(3)] $\psi_t(\iota_s) = \iota_s$ \ $(t \in [0,1])$.
\end{itemize}
Consider the homeomorphism
$$\Theta_s : \mathcal E^u(L_s, L_1) \approx \mathcal E^u(\mathbb R^n_s, \mathbb R^n_1) : \ \ \Theta_s(f) = \theta_1^{-1} f \hspace{0.5mm}\theta_s.$$
Since $\theta$ is $\kappa$-bi-Lipschitz, it is seen that $\Theta_s$ is also $\kappa$-bi-Lipschitz with respect to the sup-metrics.
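Explicitly (a one-line verification, added for convenience): for $f, g \in \mathcal E^u(L_s, L_1)$, since $\theta_1^{-1}$ is $\kappa$-bi-Lipschitz and $\theta_s$ maps onto $L_s$,

```latex
\begin{align*}
d\bigl(\Theta_s(f), \Theta_s(g)\bigr)
  &= \sup_{x \in \mathbb R^n_s}
     \bigl\| \theta_1^{-1}\bigl(f(\theta_s(x))\bigr) - \theta_1^{-1}\bigl(g(\theta_s(x))\bigr) \bigr\|
   \leq \kappa \sup_{y \in L_s} d\bigl(f(y), g(y)\bigr) = \kappa\, d(f, g),
\end{align*}
```

and the lower bound $d(\Theta_s(f), \Theta_s(g)) \geq \kappa^{-1}\, d(f, g)$ follows in the same way.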
Since $\Theta_s(\iota_s^L) = \iota_s$,
the maps $\Theta_s$ and $\Theta_s^{-1}$ restrict to
$$\Theta_s : \mathcal E^u(\iota_s^L, \lambda; L_s, L_1) \longrightarrow \mathcal E^u(\iota_s, c; \mathbb R^n_s, \mathbb R^n_1) \hspace{4mm} \text{and} \hspace{4mm}
\Theta_s^{-1} : \mathcal E^u(\iota_s, s; \mathbb R^n_s, \mathbb R^n_1) \longrightarrow \mathcal E^u(\iota_s^L, \kappa s; L_s, L_1).$$
Hence we obtain the homotopy
$$\chi : \mathcal E^u(\iota_s^L, \lambda; L_s, L_1) \times [0,1] \longrightarrow \mathcal E^u(\iota_s^L, \kappa s; L_s, L_1) : \ \ \chi_t = (\Theta_s)^{-1} \psi_t \Theta_s.$$
From (2), (3) it follows that
\begin{itemize}
\item[(4)] for each $f \in \mathcal E^u(\iota_s^L, \lambda; L_s, L_1)$ \\
\hspace*{5mm} (i) $\chi_0(f) = f$, \ \ (ii) $\chi_1(f) = \mathrm{id}$ on $L_{3s}$, \ \
(iii) $\chi_t(f) = f$ on $L_s - L_{2s}$ \ $(t \in [0,1])$,
\item[(5)] $\chi_t(\iota_s^L) = \iota_s^L$ \ $(t \in [0,1])$.
\end{itemize}
Since $s > r$, by (1) we have the restriction map
$$R_s : {\mathcal H}^u(X; \lambda) \longrightarrow \mathcal E^u(\iota_s^L, \lambda; L_s, L_1) : R_s(h) = h|_{L_s}.$$
Let $\mu = \kappa s$.
Due to (4)(iii), the required homotopy is defined by
$$\phi : {\mathcal H}^u(X; \lambda) \times [0,1] \longrightarrow {\mathcal H}^u(X; \mu) \ \ \ \text{by} \ \ \
\phi_t(h) =
\left\{
\begin{array}[c]{@{\,}cl}
\chi_tR_s(h) & \text{on \ $L_s$} \\[2mm]
h & \text{on \ $X - L_{2s}.$}
\end{array} \right.$$
\vskip -5mm
\end{proof}
\begin{lemma}\label{lemma_deform_homeo_2}
For any $\lambda > 0$ and $r > r_0 \geq 1$ there exist $\lambda' > 0$ and
a homotopy $\chi : {\mathcal H}^u(X; \lambda) \times [0,1] \longrightarrow {\mathcal H}^u(X; \lambda')$
such that for each $h \in {\mathcal H}^u(X; \lambda)$
\begin{itemize}
\item[(i)\,] $\chi_0(h) = h$, \ \ {\rm (ii)} $\chi_1(h) = \mathrm{id}$ on $L_r$, \ \ {\rm (iii)} $\chi_t(h) = h$ on $h^{-1}(X - L_{r_0}) - L_{r_0}$ $(t \in [0,1])$,
\item[(iv)] if $h = \mathrm{id}$ on $L_{r_0}$, then $\chi_t(h) = h$ $(t \in [0,1])$.
\end{itemize}
\end{lemma}
\begin{proof} Let $s, \mu > 0$ and $\phi$ be as in Lemma~\ref{lemma_deform_homeo_1} with respect to $\lambda$ and $s_0 = r$.
Using the product structure of $L$, we can find an isotopy
$\xi : X \times [0,1] \to X$ such that
\begin{itemize}
\item[] (a) $\xi_0 = \mathrm{id}_X$, \ \ (b) $\xi_1(L_r) = L_{3s}$, \ \ (c) $\xi_t = \mathrm{id}$ on $(X - L_{r_0}) \cup L_{4s}$ \ $(t \in [0,1])$.
\end{itemize}
By (c) the map $[0,1] \ni t \longmapsto \xi_t \in {\mathcal H}^u(X)$ is continuous and $\nu \equiv \max \{ d(\xi_t, \mathrm{id}_X) \mid t \in [0,1] \} < \infty$.
Thus, we obtain the homotopy
$$\chi : {\mathcal H}^u(X; \lambda) \times [0,1] \longrightarrow {\mathcal H}^u(X) : \ \ \ \chi_t(h) = \xi_t^{-1} \phi_t(h) \xi_t.$$
Since $d(\xi_t^{-1}, \mathrm{id}_X) = d(\xi_t, \mathrm{id}_X) \leq \nu$, it follows that
$d(\chi_t(h), \mathrm{id}_X) \leq \lambda' \equiv \mu + 2 \nu$ $(h \in {\mathcal H}^u(X; \lambda))$ and that ${\rm Im}\, \chi \subset {\mathcal H}^u(X; \lambda')$.
The required conditions on $\chi$ follow from the properties of $\phi$ and $\xi$.
\end{proof}
\begin{lemma}\label{lemma_deform_homeo_3} For any $r \in (1,2)$ there exists a homotopy $\psi : {\mathcal H}^u(X)_b \times [0,1] \longrightarrow {\mathcal H}^u(X)_b$
such that for each $h \in {\mathcal H}^u(X)_b$
\begin{itemize}
\item[(i)\,] $\psi_0(h) = h$, \ \ {\rm (ii)} $\psi_1(h) = \mathrm{id}$ on $L_2$, \ \ {\rm (iii)} $\psi_t(h) = h$ on $h^{-1}(X - L_r) - L_r$ $(t \in [0,1])$,
\item[(iv)] if $h = \mathrm{id}$ on $L_r$, then $\psi_t(h) = h$ $(t \in [0,1])$,
\item[(v)\,] for any $\lambda > 0$ there exists $\mu > 0$ such that $\psi_t({\mathcal H}^u(X; \lambda)) \subset {\mathcal H}^u(X; \mu)$ $(t \in [0,1])$.
\end{itemize}
\end{lemma}
\begin{proof} For $\lambda \geq 0$ let
${\mathcal H}^u(X; \geq \hspace{-0.5mm}\lambda) = \{ h \in {\mathcal H}^u(X)_b \mid d(h, \mathrm{id}_X) \geq \lambda \}$.
Take any sequence $r = r_1 < r_2 < \cdots < 2$.
By repeated applications of Lemma~\ref{lemma_deform_homeo_2} we can find $\lambda_i > 0$ $(i \in \mathbb N)$ and homotopies
$$\chi^i : {\mathcal H}^u(X; \lambda_i+1) \times [0,1] \longrightarrow {\mathcal H}^u(X; \lambda_{i+1}) \ \ (i \in \mathbb N)$$
such that for each $i \in \mathbb N$
\begin{itemize}
\item[(1)] $\lambda_{i+1} > \lambda_i + 1$,
\item[(2)] for each $h \in {\mathcal H}^u(X; \lambda_i+1)$ \\
\begin{tabular}[t]{c@{\ }l}
(i) & $(\chi^i)_0(h) = h$, \hspace{3mm} (ii) $(\chi^i)_1(h) = \mathrm{id}$ on $L_{r_{i+1}}$, \\[2mm]
(iii) & $(\chi^i)_t(h) = h$ on $h^{-1}(X - L_{r_i}) - L_{r_i}$ $(t \in [0,1])$, \\[2mm]
(iv) & if $h = \mathrm{id}$ on $L_{r_i}$, then $(\chi^i)_t(h) = h$ $(t \in [0,1])$.
\end{tabular}
\end{itemize}
\vskip 1mm
For each $i \in \mathbb N$ take a map
\begin{itemize}
\item[(3)] $\alpha_i : {\mathcal H}^u(X; \lambda_i+1) \to [0,1]$ such that \ $\alpha_i(h) = 1$ if $d(h, \mathrm{id}_X) \leq \lambda_i$
\ and \ $\alpha_i(h) = 0$ if $d(h, \mathrm{id}_X) = \lambda_i+1$.
\end{itemize}
We modify $\chi^i$ to obtain the homotopy
$$\eta^i : {\mathcal H}^u(X)_b \times [0,1] \longrightarrow {\mathcal H}^u(X)_b, \ \
(\eta^i)_t(h) = \left\{ \begin{array}[c]{@{\ }ll}
(\chi^i)_{\alpha_i(h) t}(h) & (h \in {\mathcal H}^u(X; \lambda_i+1)), \\[2mm]
\ h & (h \in {\mathcal H}^u(X; \geq \hspace{-0.5mm}\lambda_i+1)).
\end{array} \right.$$
Then, $\eta^i$ has the following properties: \
\begin{itemize}
\item[(4)] for each $h \in {\mathcal H}^u(X)_b$ \ \
(i) $(\eta^i)_0(h) = h$, \ \ (ii) $(\eta^i)_t(h) = h$ on $h^{-1}(X - L_{r_i}) - L_{r_i}$ $(t \in [0,1])$.
\item[(5)]
\begin{tabular}[t]{c@{\ }l}
(i) & $(\eta^i)_t(h) = h$ $(t \in [0,1])$ for any $h \in {\mathcal H}^u_{L_{r_i}}(X)_b \cup {\mathcal H}^u(X; \geq \hspace{-0.5mm}\lambda_i+1)$. \\[1.5mm]
(ii) & $(\eta^i)_t({\mathcal H}^u(X; \lambda_i+1)) \subset {\mathcal H}^u(X; \lambda_{i+1})$ $(t \in [0,1])$.
\end{tabular}
\vskip 1.5mm
\item[(6)] $(\eta^i)_1({\mathcal H}^u(X; \lambda_i)) \subset {\mathcal H}^u_{L_{r_{i+1}}}(X)_b$.
\end{itemize}
From (5) it follows that
\begin{itemize}
\item[(7)]
\begin{tabular}[t]{c@{\ }l}
(i) & $(\eta^j)_t({\mathcal H}^u(X;\lambda_i)) \subset {\mathcal H}^u(X;\lambda_i)$ \ \ $(j \leq i-1$, $t \in [0,1]$), \\[1.5mm]
(ii) & $(\eta^j)_t(h) = h$ $(h \in {\mathcal H}^u_{L_{r_{i+1}}}(X)_b)$ \ \ $(j \geq i+1$, $t \in [0,1]$).
\end{tabular}
\end{itemize}
\vskip 1.5mm
Hence we have
\begin{itemize}
\item[(8)]
\begin{tabular}[t]{c@{\ }l}
(i) & $(\eta^i)_1(\eta^{i-1})_1 \dots (\eta^1)_1({\mathcal H}^u(X;\lambda_i)) \subset (\eta^i)_1({\mathcal H}^u(X; \lambda_i)) \subset {\mathcal H}^u_{L_{r_{i+1}}}(X)_b$, \\[1.5mm]
(ii) & $(\eta^j)_t (\eta^{j-1})_1 \cdots (\eta^i)_1 \dots (\eta^1)_1(h) = (\eta^i)_1 \dots (\eta^1)_1(h)$ \ \ ($h \in {\mathcal H}^u(X;\lambda_i)$, $j \geq i+1$, $t \in [0,1]$).
\end{tabular}
\end{itemize}
\vskip 1mm
Replacing $[0,1]$ by $[0, \infty]$, the homotopy $\psi : {\mathcal H}^u(X)_b \times [0,\infty] \longrightarrow {\mathcal H}^u(X)_b$ is defined by
\vspace{1mm}
$$\psi_t(h) = \left\{ \begin{array}[c]{@{\ }ll}
(\eta^j)_{t -j+1} (\eta^{j-1})_1 \cdots (\eta^1)_1(h) & (t \in [j-1, j], j \in \mathbb N) \\[2mm]
\displaystyle \lim_{j \to \infty}(\eta^j)_1 \cdots (\eta^1)_1(h) & (t = \infty).
\end{array} \right.$$
\vskip 1mm
\noindent By (8)(ii) we have
\begin{itemize}
\item[(9)] $\psi_t(h) = (\eta^i)_1 \dots (\eta^1)_1(h)$ \ \ ($h \in {\mathcal H}^u(X;\lambda_i)$, $t \in [i, \infty]$).
\end{itemize}
This means that $\psi$ is well-defined and continuous. The required conditions on $\psi$ follow from (4) $\sim$ (8).
For (v) note that $\psi_t({\mathcal H}^u(X; \lambda_i)) \subset {\mathcal H}^u(X; \lambda_{i+1})$ $(i \in \mathbb N,\ t \in [0,\infty])$.
\end{proof}
\begin{proposition}\label{prop_deform_homeo}
For any $1 < s < r < 2$ there exists a strong deformation retraction $\phi$ of ${\mathcal H}^u(X)_b$ onto ${\mathcal H}^u_{L_r}(X)_b$ such that
$$\mbox{$\phi_t(h) = h$ \ on \ $h^{-1}(X - L_s) - L_s$ \ \ for any \ $(h,t) \in {\mathcal H}^u(X)_b \times [0,1]$.}$$
\end{proposition}
\begin{proof}
Let $\psi : {\mathcal H}^u(X)_b \times [0,1] \longrightarrow {\mathcal H}^u(X)_b$ be the homotopy given by Lemma~\ref{lemma_deform_homeo_3}.
Then $\psi$ is a deformation of ${\mathcal H}^u(X)_b$ into ${\mathcal H}^u_{L_2}(X)_b$ which fixes ${\mathcal H}^u_{L_r}(X)_b$ pointwise and
satisfies
\begin{itemize}
\item[(1)] $\psi_t(h) = h$ on $h^{-1}(X - L_r) - L_r$ \ \ $(h \in {\mathcal H}^u(X)_b, t \in [0,1])$.
\end{itemize}
Let $Y = X - {\rm Int}\,L_3$, $S = L_3 - {\rm Int}\,L_3$ and $N = {\rm Int}\,L - {\rm Int}\,L_3$.
Then $N$ is an open collar neighborhood of $S$ in $Y$ and
for any $s \in (1, r)$ it admits a parametrization
\begin{itemize}
\item[(2)] $\vartheta : (S \times [0,4), S \times \{ 0 \}) \approx (N, S)$ \ \ such that \ \
$N_1 = L_2 - {\rm Int}\,L_3$, \
$N_2 = L_r - {\rm Int}\,L_3$, \
$N_3 = L_s - {\rm Int}\,L_3$.
\end{itemize}
Here, $N_s = \vartheta(S \times [0,s])$ $(s \in [0,4))$.
Under the canonical identification $({\mathcal H}^u_{L_2}(X)_b, {\mathcal H}^u_{L_r}(X)_b) \approx ({\mathcal H}^u_{N_1}(Y)_b, {\mathcal H}^u_{N_2}(Y)_b)$,
Lemma~\ref{lemma_collar} yields a strong deformation retraction
$\chi_t$ $(t \in [0,1])$ of ${\mathcal H}^u_{L_2}(X)_b$ onto ${\mathcal H}^u_{L_r}(X)_b$ such that
\begin{itemize}
\item[(3)] $\chi_t(h) = h$ \ on \ $h^{-1}(X - L_s) - L_s$ \ \ for any \ $(h,t ) \in {\mathcal H}^u_{L_2}(X)_b \times [0,1]$.
\end{itemize}
Finally, the homotopy $$\phi : {\mathcal H}^u(X)_b \times [0,1] \longrightarrow {\mathcal H}^u(X)_b : \hspace{3mm}
\phi_t =
\left\{\begin{array}[c]{ll}
\psi_{2t} & (t \in [0,1/2]), \\[2mm]
\chi_{2t-1} \psi_1 & (t \in [1/2,1])
\end{array}\right.$$
\vskip 2mm
\noindent is a strong deformation retraction of ${\mathcal H}^u(X)_b$ onto ${\mathcal H}^u_{L_r}(X)_b$ satisfying the required condition.
\end{proof}
\begin{proof}[\bf Proof of Theorem~\ref{thm_Euclid-end}]
For each $i \in [m]_+$ we can replace the bi-Lipschitz homeomorphism $\theta_i$ for $L_i$ by another $\theta_i'$ such that $L_i' = \theta_i'(\mathbb R^{n_i}_{4/3})$ and $L_i'' = \theta_i'(\mathbb R^{n_i}_{3/2})$.
Then, by Proposition~\ref{prop_deform_homeo}
there exists a strong deformation retraction $\phi^i$ of ${\mathcal H}^u(X)_b$ onto ${\mathcal H}^u_{L_i''}(X)_b$ such that
$$\mbox{$(\phi^i)_t(h) = h$ \ on \ $h^{-1}(X - L_i') - L_i'$ \ \ for any \ $(h,t) \in {\mathcal H}^u(X)_b \times [0,1]$.}$$
Define the homotopy $\phi$ by $\phi_t = (\phi^m)_t \cdots (\phi^1)_t$ $(t \in [0,1])$.
\end{proof}
\section{Introduction and Related Work}
\label{sec:intro}
\vspace{-0.3cm}
Semantic segmentation is a fundamental problem in computer vision and a pivotal step towards content-based image analysis and scene understanding. It has received an upsurge of attention recently owing to its wide variety of applications in medical imaging \cite{ronneberger2015u, rezaei2017conditional}, autonomous driving \cite{menze2015object, cordts2016cityscapes}, satellite image processing \cite{volpi2015semantic, henry2018road}, and robotics \cite{geiger2013vision, shvets2018automatic}, to name a few. Early segmentation methodologies are mostly developed with clustering algorithms at their core \cite{kass1988snakes, nock2004statistical, plath2009multi, minaee2019admm}. Recent advances in deep learning have revolutionized this field resulting in state-of-the-art (SoTA) image segmentation algorithms such as FCN \cite{long2015fully}, U-Net \cite{ronneberger2015u}, PSPNet \cite{zhao2017pyramid}, EncNet \cite{zhang2018context}, Exfuse \cite{zhang2018exfuse}, DeepLabv3+ \cite{chen2018encoder}, PS and Panoptic DeepLab \cite{kirillov2019panoptic, cheng2019panoptic}, HRNet \cite{wang2020deep} and many other elegant architectures that considerably outperformed the traditional signal processing-based methods.
The choice of loss function is essential for the applicability of semantic segmentation in different contexts. Therefore, several studies have investigated the impact of modified loss functions on the performance of semantic segmentation models in generic and use-case related datasets. A comprehensive list of some of these recent loss functions is provided in four different categories in \cite{jadon2020survey}: i) distribution-based losses such as binary cross-entropy (BCE), ii) region-based losses such as Dice loss \cite{sudre2017generalised} and its variants, iii) boundary based losses such as Hausdorff distance \cite{ribera2018weighted, karimi2019reducing}, and iv) compound losses such as the Exponential Logarithmic loss \cite{wong20183d}. Generic empirical risk minimization (ERM) loss functions such as BCE or Dice can disproportionately advantage or disadvantage some classes in favor of an improved average performance. This results in models that treat certain classes (e.g. with less presence in the training dataset) in an \emph{unfair} fashion. Somewhat related to this concern, significant research has been conducted on proposing alternatives to the cross-entropy (CE) loss for semantic segmentation to handle imbalanced data using weighted cross entropy \cite{pihur2007weighted}, balanced cross entropy \cite{xie2015holistically}, and the Focal loss \cite{lin2017focal}. However, none of these approaches directly address the \emph{fairness} problem. Being the most relevant approach among the mentioned solutions, Focal loss \cite{lin2017focal} down-weights the contribution of easy examples and enables the model to focus more on learning hard ones. In a broader scope of optimization for machine learning, \cite{hashimoto2018fairness} proposes a solution to ensure different subgroups within a population are treated fairly and \cite{duchi2016variance} develops a solution with favorable out-of-sample performance.
Most recently, a unified framework called tilted ERM (TERM) has been proposed in \cite{li2020tilted} to flexibly address the deficiencies of traditional ERM with respect to handling outliers and treating subgroups fairly. It is demonstrated in \cite{li2020tilted} that TERM not only efficiently promotes fairness in a multitude of applications, but also outperforms the likes of Focal Loss \cite{lin2017focal} and RobustRegRisk \cite{duchi2016variance} in different settings. In addition, \cite{li2020tilted} provides efficient batch and stochastic first-order optimization methods for solving TERM. Inspired by the flexibility and superior performance of TERM in promoting fairness, we have studied the impact of adopting a similar approach in a new context, i.e., semantic segmentation.
Our contributions are as follows: i) We propose to employ tilted cross-entropy (TCE) as a novel loss to promote fairness in semantic segmentation. ii) We adapt the derivations of \cite{li2020tilted} to fit the semantic segmentation setting and reformulate the commonly used CE loss as TCE. We then propose a stochastic non-hierarchical optimization algorithm for solving TCE. iii) We empirically demonstrate the effectiveness of Stochastic TCE in promoting fairness in semantic segmentation for Cityscapes (and ADE20k in the appendix).\footnote{The code will be publicly available soon.}
\section{Tilted CE (TCE) for Semantic Segmentation}
\label{sec:term_semseg}
\vspace{-0.3cm}
In this section, we introduce TCE: an adapted and reformulated version of TERM \cite{li2020tilted} for semantic segmentation. We then propose a stochastic non-hierarchical batch solution of TCE. In the following, we denote the cardinality of $\mathcal{X}$ as $|\mathcal{X}|$, and the set $\{1, \cdots, n\}$ as $[n]$. Let $\mathcal{D}^t =\{({\bf X},{\bf Y})_{1},...,({\bf X},{\bf Y})_{M}\}$ be the training dataset containing $M$ samples with ${\bf X}_m$ and ${\bf Y}_m$ respectively denoting the $m$th image and its corresponding label map (also called mask). Here, ${\bf X}$ is of size $H \times W \times 3$ for RGB images with a total of $H \times W = N$ pixels. The corresponding label map ${\bf Y}$ is of size $H \times W$ with elements in $[K]$. ${\bf X}_m$ contains a maximum of $K$ classes, each occupying $n_{m, c}$ pixels, where $\sum_{c \in [K]} n_{m, c} = H \times W = N, \forall m$. The most commonly used ERM loss for semantic segmentation is the pixel-wise loss $\mathcal{L} = \sum_{m=1}^M\textup{CE}({\bf Y}_m, \hat{{\bf Y}}_m)$ \cite{chen2018encoder, xiao2018unified, wang2020deep}, which is computed using a multi-class CE between the $1$-hot encoded versions of the original label map ${\bf Y}$ and the inferred one $\hat{{\bf Y}}$:
\begin{equation}
\label{eq:erm}
\mathcal{L} = - \frac{1}{MN}\sum_{m=1}^M \sum_{c = 1}^K \sum_{i = 1}^{n_{m,c}} y_{m,c,i}\, \log(\hat{y}_{m,c,i}),
\end{equation}
where $y_{m,c,i}$ denotes the $i$th pixel in the $c$th class of ${\bf Y}_m$. \cite{li2020tilted} proposes to \emph{tilt} ERM $\mathcal{R}(\theta) = 1/M \sum_{i \in [M]} f(x_i, \theta)$ as $\tilde{\mathcal{R}}(\theta, t) = 1/t \log(1/M \sum_{i \in [M]} e^{tf(x_i, \theta)})$, with loss function $f(x_i, \theta)$ and model parameters $\theta$. The tilt parameter $t$ can be tuned to flexibly promote robustness or fairness. In theory, letting $t \rightarrow 0$ recovers ERM (i.e., $\tilde{\mathcal{R}}(\theta, t) \rightarrow \mathcal{R}(\theta)$), and as $t \rightarrow \infty$, TERM minimizes the worst loss, thus ensuring the model is a reasonable fit for all samples \cite{li2020tilted}. With this in mind, there are (at least) two levels at which a sensible tilting of multi-class CE (MCCE) \eqref{eq:erm} can be implemented. More specifically, we can tilt \eqref{eq:erm} at i) image (or sample) level and ii) class level. Let us start with the image level. To tilt at \emph{image level}, we need to reformulate \eqref{eq:erm} as
\begin{align}
\label{eq:term_m}
\begin{split}
\tilde{\mathcal{L}}^{img} &= \frac{1}{t} \log\Big( \frac{1}{M}\sum_{m=1}^M e^{t\, \mathcal{L}_m} \Big),\\
\textup{where:} \quad \mathcal{L}_m &= - \frac{1}{N} \sum_{c = 1}^K \sum_{i = 1}^{n_{m,c}} y_{m,c,i}\, \log(\hat{y}_{m,c,i}).
\end{split}
\end{align}
Following the same strategy, to tilt at \emph{class level} per image we need to reformulate \eqref{eq:erm} as
\begin{align}
\label{eq:term_c}
\begin{split}
\tilde{\mathcal{L}}^{cls} &= \frac{1}{M}\sum_{m=1}^M \frac{1}{t} \log\Big( \frac{1}{K}\sum_{c = 1}^K e^{t\, \mathcal{L}_{m,c}} \Big),\\
\textup{where:} \quad \mathcal{L}_{m,c} &= - \frac{1}{n_{m,c}}\sum_{i = 1}^{n_{m,c}} y_{m,c,i}\, \log(\hat{y}_{m,c,i}).
\end{split}
\end{align}
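As an illustration (not part of the original derivation), the two tilted objectives \eqref{eq:term_m} and \eqref{eq:term_c} can be sketched in NumPy. The log-sum-exp trick keeps the tilted mean $\frac{1}{t}\log(\frac{1}{n}\sum_i e^{t\ell_i})$ numerically stable; skipping classes absent from an image (whose mean CE would be an empty average) is our own implementation choice, not specified above.

```python
import numpy as np

def logsumexp(a):
    # stable log(sum(exp(a)))
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def tilted_mean(losses, t):
    # (1/t) * log(mean(exp(t * losses))); recovers the plain mean as t -> 0
    a = t * np.asarray(losses, dtype=np.float64)
    return (logsumexp(a) - np.log(len(a))) / t

def pixel_ce(y_onehot, y_prob, eps=1e-12):
    # per-pixel multi-class cross-entropy; inputs have shape (K, H, W)
    return -(y_onehot * np.log(y_prob + eps)).sum(axis=0)  # (H, W)

def tce_image_level(batch, t):
    # eq. (2): tilt over the per-image mean CE losses
    return tilted_mean([pixel_ce(y, p).mean() for y, p in batch], t)

def tce_class_level(batch, t):
    # eq. (3): tilt over the per-class mean CE within each image, then average
    vals = []
    for y, p in batch:
        ce = pixel_ce(y, p)
        per_class = [ce[y[c] == 1].mean()
                     for c in range(y.shape[0]) if (y[c] == 1).any()]
        vals.append(tilted_mean(per_class, t))
    return float(np.mean(vals))
```

When all per-unit losses are equal the tilted and plain means coincide, and as $t$ grows the image-level variant approaches the worst per-image loss, which is the fairness mechanism described above.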
\subsection{Solving TCE for Semantic Segmentation}
\label{ssec:stoachstic}
\vspace{-0.3cm}
Depending on the tilt level, one has to replace the pixel-wise MCCE part of the semantic segmentation loss with one of the proposed losses in \eqref{eq:term_m} and \eqref{eq:term_c}. In our experience, directly plugging these loss functions into the semantic segmentation optimization problem could lead to convergence issues, so caution has to be exercised. An alternative approach to solve TCE (applicable to both sample and hierarchical levels) is to follow the \emph{stochastic} approach proposed in \cite{li2020tilted}. It is proven in \cite{li2020tilted} that the gradient of the tilted loss $\tilde{\mathcal{L}}$ is a weighted average of the gradients of the original individual losses, where each data point is weighted exponentially proportional to the value of its loss. This is the key idea behind the proposed dynamic weight updating and sampling strategy of Stochastic TCE laid out in Algorithm~\ref{alg:sotchastc_term}. Let us dive deeper and walk through the algorithm. For the sake of simplicity, here we drop the superscript denoting the tilting level of $\tilde{\mathcal{L}}$ in \eqref{eq:term_m} and \eqref{eq:term_c}, and use a subscript to refer to the class $\tilde{\mathcal{L}}_c$ and batch of data within the class $\tilde{\mathcal{L}}_B$. The class weights $w_c$ are stored/updated in $W$.
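The exponential weighting claim of \cite{li2020tilted} can be written out explicitly; differentiating the tilted objective gives (a standard computation, reproduced here for convenience):

```latex
\begin{align*}
\nabla_\theta \tilde{\mathcal{R}}(\theta, t)
  = \nabla_\theta\, \frac{1}{t}\log\Bigl(\frac{1}{M}\sum_{i=1}^M e^{t f(x_i, \theta)}\Bigr)
  = \sum_{i=1}^M w_i\, \nabla_\theta f(x_i, \theta),
\qquad
w_i = \frac{e^{t f(x_i, \theta)}}{\sum_{j=1}^M e^{t f(x_j, \theta)}}.
\end{align*}
```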
\LinesNumbered
\begin{algorithm}[t!]
\SetKwInput{Require}{Require}
\SetKwInput{Initialize}{Initialize}
\SetAlgoLined
\DontPrintSemicolon
\SetNoFillComment
\Initialize{$w_c = 1/C$, $\tilde{\mathcal{L}}_c = 0, \forall c \in [C]$, $\theta$}
\Require{$\gamma$, $\eta$, $t$}
Divide $\mathcal{D}^t$ into $\mathcal{D}^t_c, \forall c \in [C]$\;
\While{\textup{stopping criteria not met}}{
sample class $c \in [C]$ from a categorical distribution with probabilities $w_c \in W$\;
sample minibatch $B$ within $\mathcal{D}^t_c$\;
$\mathcal{L}_B \gets$ compute the loss \eqref{eq:erm} on $B$\;
tilt the batch loss: $\tilde{\mathcal{L}}_B \gets e^{t \mathcal{L}_B}$\;
$\tilde{\mathcal{L}}_c \gets (1 - \gamma)\,\tilde{\mathcal{L}}_c + \gamma \tilde{\mathcal{L}}_B$\;
$w_c \gets \tilde{\mathcal{L}}_c / (\sum_{l=1}^{C} \tilde{\mathcal{L}}_l), \forall c \in [C]$\;
Update model parameters: $\theta \gets \theta - \eta \nabla \mathcal{L}_B$
}
\caption{Stochastic TCE for Segmentation}\label{alg:sotchastc_term}
\end{algorithm}
The algorithm starts by dividing the training dataset $\mathcal{D}^t$ into $C$ subsets $\mathcal{D}^t_c$, each containing the images corresponding to individual classes. Note that an image can contain multiple classes, and thus, these sets can overlap. One can also consider forming non-overlapping sets based on $\mathcal{D}^t$. Per propagation round, one class (let us say $c$) will be selected from a categorical distribution over $[C]$ with probabilities (weights) $W$ (line $3$). These weights are dynamically updated (line $8$). Next, a minibatch $B$ is sampled from the training data of the selected class $\mathcal{D}^t_c$ and the tilted batch loss $\tilde{\mathcal{L}}_B$ is calculated on $B$ (lines $5$ and $6$). Line $7$ proposes a linear dynamic with rate $\gamma$ to update the tilted loss of the selected class $\tilde{\mathcal{L}}_c$ based on its previous value and the current batch estimate $\tilde{\mathcal{L}}_B$. The weight of class $c$, $w_c$, will then be updated using a normalization applied to all the tilted losses (line $8$). These dynamically updated weights, $w_c \in W$, will be used in the next iteration to decide from which class to sample. Finally, model parameters in $\theta$ are updated.
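A toy sketch of the sampling and weighting dynamics of Algorithm~\ref{alg:sotchastc_term} follows (the model update of line $9$ is stubbed out; the function name, the stand-in `class_batch_loss`, and initializing the tilted losses at $1 = e^{t \cdot 0}$ so that all classes retain positive sampling probability are our own choices, not taken from the paper):

```python
import numpy as np

def stochastic_tce_weights(class_batch_loss, C, t=1.0, gamma=0.1,
                           steps=500, seed=0):
    """Sampling/weighting loop of the algorithm; class_batch_loss(c, rng)
    stands in for the minibatch CE estimate of class c (lines 4-5)."""
    rng = np.random.default_rng(seed)
    tilted = np.ones(C)           # tilde{L}_c, started at e^{t*0} = 1 (our choice)
    w = np.full(C, 1.0 / C)       # uniform class weights to start
    for _ in range(steps):
        c = rng.choice(C, p=w)                     # line 3: pick a class
        L_B = class_batch_loss(c, rng)             # lines 4-5: batch CE
        tilted[c] = (1 - gamma) * tilted[c] + gamma * np.exp(t * L_B)  # 6-7
        w = tilted / tilted.sum()                  # line 8: renormalize
        # line 9 (omitted here): theta <- theta - eta * grad(L_B)
    return w
```

With a persistently harder class, its tilted loss and hence its weight grow, so it is sampled more often, which is the intended fairness mechanism.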
\section{Experimental Setup}
\label{sec:exp}
\vspace{-0.3cm}
Here, we assess the impact of TCE on one of the most commonly adopted datasets for semantic segmentation, Cityscapes \cite{cordts2016cityscapes}. The evaluation results for ADE20k \cite{zhou2019semantic} can be found in the appendix. Cityscapes contains $2,975$ train and $500$ validation images from $19$ main target classes.
\begin{table*}[t!]
\footnotesize
\caption{Performance comparison on Cityscapes \emph{validation} set, sorted based on DLv3+ with MCCE.}
\vspace{-0.35cm}
\label{tb:cityscapes_val}
\centering
{\tabcolsep=0pt\def1.0{1.0}
\begin{tabularx}{400pt}{@{}l Y Y Y Y Y Y Y Y Y Y Y Y@{}}
\toprule
Method & wall & train & rider & fence & terrain & truck & m.cycle & pole & bus & t. light \\
\midrule
MCCE \cite{chen2018encoder} & 48.46 & 53.66 & 61.12 & {\bf62.23} & 62.98 & 66.21 & 67.72 & 69.19 & 71.53 & 74.33 \\
Focal loss \cite{lin2017focal} & 49.11 & {\bf79.55} & {\bf67.56} & 60.75 & 62.31 & {\bf73.16} & 67.71 & 64.53 & {\bf85.02} & 68.92 \\
\rowcolor{LightCyan}
TCE \tiny{$t=.1$} &49.36 & 76.75 & 66.64 & 60.38 & {\bf65.69} & 72.03 & 69.09 & 69.34 & 75.61 & 73.64 \\
\rowcolor{LightYellow}
TCE \tiny{$t=1$} & {\bf53.47} & {\bf79.32} & 65.67 & 59.25 & 63.74 & 64.32 & {\bf69.45} & {\bf69.54} & 66.05 & {\bf74.51} \\
\toprule
\toprule
continued & bicycle & t. sign & person & sidewalk & vegetation & building & sky & car & road &\multicolumn{1}{c}{\cellcolor{gray!35}mIoU}\\
\midrule
MCCE \cite{chen2018encoder} & 79.23 & {\bf81.22} & 83.19 & 84.95 & {\bf92.52} & {93.04} & 95.00 & {\bf95.45} & 98.06 & \multicolumn{1}{c}{\cellcolor{gray!35}75.79} \\
Focal loss \cite{lin2017focal} & 77.71 & 77.62 & 81.23 & 81.56 & 91.54 & 91.89 & 93.71 & 94.67 & 97.14 & \multicolumn{1}{c}{\cellcolor{gray!35}77.14} \\
\rowcolor{LightCyan}
TCE \tiny{$t=.1$} &{\bf79.89} & 81.77 & {\bf84.00} & {\bf86.24} & 92.43 & {\bf93.16} & {\bf95.34} & {\bf95.47} & {\bf98.27} & \multicolumn{1}{c}{\cellcolor{gray!35}\bf78.16} \\
\rowcolor{LightYellow}
TCE \tiny{$t=1$} & {\bf79.92} & 81.04 &{\bf83.96} & {\bf86.22} & 92.44 & 92.92 & 95.11 & 94.53 & {\bf98.28} &\multicolumn{1}{c}{\cellcolor{gray!35}77.35} \\
\bottomrule
\end{tabularx}}
\vspace{-0.25cm}
\end{table*}
\begin{table*}[t]
\footnotesize
\caption{Performance comparison on Cityscapes \emph{validation} set, sorted based on SoTA DLv3+.}
\vspace{-0.35cm}
\label{tb:cityscapes_val_sota}
\centering
{\tabcolsep=0pt\def1.0{1.0}
\begin{tabularx}{400pt}{@{}l Y Y Y Y Y Y Y Y Y Y Y Y@{}}
\toprule
Method & wall &fence &rider &terrain &m.cycle &pole &t. light &bicycle &t. sign &train \\
\midrule
SoTA MCCE \cite{chen2018encoder} & {\bf57}.26 &{\bf62.18} &62.76 &63.38 & 64.50 &65.11 &68.41 &77.26 &78.78 &{\bf80.90}\\
\rowcolor{LightCyan}
TCE \tiny{$t=.1$} &49.36 & 60.38 & {\bf66.64} & {\bf65.69} & {\bf69.09} & {\bf69.34} & {\bf73.64} & {\bf79.89} & {\bf81.77 }& 76.75 \\
\toprule
\toprule
continued & person &sidewalk &truck &bus &vegetation &building &sky &car &road &\multicolumn{1}{c}{\cellcolor{gray!35}mIoU}\\
\midrule
SoTA MCCE \cite{chen2018encoder} & 82.14 &84.7 &{\bf85.31} &{\bf89.07} &{\bf92.65} &92.69 &95.29 &{95.31} &98.13 &\multicolumn{1}{c}{\cellcolor{gray!35}\bf78.73} \\
\rowcolor{LightCyan}
TCE \tiny{$t=.1$} & {\bf84.00} & {\bf86.24} & 72.03 & 75.61 & 92.43 & {\bf93.16} & {\bf95.34} &{\bf 95.47} & {\bf98.27} & \multicolumn{1}{c}{\cellcolor{gray!35}78.16}\\
\bottomrule
\end{tabularx}}
\vspace{-0.1cm}
\end{table*}
\textbf{Training strategy and baselines.} Our trainings are run separately on standard Microsoft Azure $4$-GPU P100 Tesla nodes, each with $16$GB of memory. For experiments on Cityscapes, we used DeepLabv3+ (also referred to as DLv3+) \cite{chen2018encoder} with ResNet-101 backbone as our reference implementation of multi-class CE (MCCE), and on top of that we implemented TCE. DLv3+ is among the top-performing model architectures for Cityscapes. Following \cite{chen2018encoder}, we used minibatch SGD with learning rate $l_r = 0.01$ and momentum $0.9$ for all models, adjusted for a total minibatch size of $8$ ($2$ per GPU). The reported results of \cite{chen2018encoder} are based on our own trainings, for the sake of a fair comparison. Image crop size and other pre/post-processing parameters are set per default as suggested in \cite{chen2018encoder}. We also compare our performance against the Focal loss for semantic segmentation \cite{lin2017focal} with the best parameters $\gamma = 2$ and (class weights) $\alpha$ set to the inverse (normalized) class pixel counts computed across the whole dataset.
\textbf{Fairness and its evaluation criteria.} The notion of fairness in this setting is promoting a more consistent (and less varied) performance across different classes. This is to ensure that there are fewer (or ideally no) classes that have been significantly disadvantaged, due to, for instance, lower presence or difficult characteristic features, for the sake of a higher average performance. Promoting fairness by minimizing performance disparity is also a core idea of TERM \cite{li2020tilted}, and resonates with other recent approaches to fairness \cite{hashimoto2018fairness, li2019fair, mohri2019agnostic}. More concretely, i) best worst-case performance \cite{hashimoto2018fairness, mohri2019agnostic}, and ii) least variance across clients/classes \cite{li2019fair} have recently been proposed to promote and evaluate fairness across a set of tasks or networked clients. We investigate both measures as our key criteria. More specifically, besides the overall mean-intersection-over-union (mIoU), we compare the models on: i) sorted (w.r.t.\ MCCE) bottom and top $25\%$ mIoUs; ii) bottom and top $25$th percentiles; and iii) standard deviation and worst-case performance (in IoU) across classes. Note that the overall mIoU does not have to improve when applying TCE; the goal is to minimize performance disparity, which can sometimes come at the cost of a lower overall mIoU.
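Given per-class IoUs as arrays, the criteria above can be computed as in the following sketch (illustrative only; \texttt{ious\_ref} stands for the per-class IoU breakdown of the reference MCCE model used for sorting):

```python
import numpy as np

def fairness_report(ious_ref, ious_model, frac=0.25):
    """Fairness measures used here: mIoU of the bottom/top 25% classes
    (sorted by the reference model), worst-class IoU, and the standard
    deviation across class IoUs."""
    order = np.argsort(ious_ref)                # sort classes by reference IoU
    k = max(1, int(round(frac * len(order))))   # number of classes in 25%
    sorted_model = np.asarray(ious_model)[order]
    return {
        "bottom": float(sorted_model[:k].mean()),  # sorted bottom 25% mIoU
        "top": float(sorted_model[-k:].mean()),    # sorted top 25% mIoU
        "worst": float(np.min(ious_model)),        # worst-case class IoU
        "std": float(np.std(ious_model)),          # disparity across classes
        "miou": float(np.mean(ious_model)),
    }
```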
\section{Evaluation Results}
\label{sec:eval_res}
\vspace{-0.3cm}
\begin{figure*}[t!]
\centering
\includegraphics[trim={0cm 3.15cm 0cm 0cm},clip,width=0.99\textwidth]{Figures/title.png}
\begin{tikzonimage}[width=0.99\textwidth]{Figures/bus_2_wf.png}
\draw[yellow, very thick, rounded corners, dashed] (0.9,0.44) rectangle (0.999,0.86);
\draw[yellow, very thick, rounded corners, dashed] (0.945,0.395) rectangle (0.981,0.79);
\end{tikzonimage}
\begin{tikzonimage}[width=0.99\textwidth]{Figures/truck_2_wf.png}
\draw[yellow, very thick, rounded corners, dashed] (0.945,0.37) rectangle (0.999,0.995);
\end{tikzonimage}
\begin{tikzonimage}[width=0.99\textwidth]{Figures/train_1_wf.png}
\draw[yellow, very thick, rounded corners, dashed] (0.805,0.36) rectangle (0.865,0.972);
\end{tikzonimage}
\begin{tikzonimage}[width=0.99\textwidth]{Figures/train_2_wf.png}
\draw[yellow, very thick, rounded corners, dashed] (0.805,0.465) rectangle (0.879,0.75);
\draw[yellow, very thick, rounded corners, dashed] (0.928,0.34) rectangle (0.999,0.485);
\end{tikzonimage}
\caption{Impact of TCE on improving low-performing classes of MCCE. Best viewed in color with $300\%$ zoom.}
\vspace{-0.4cm}
\label{fig:comparison_1}
\end{figure*}
Table~\ref{tb:cityscapes_val} compares the sorted mIoU breakdown of DLv3+ with ResNet-101 backbone \cite{chen2018encoder} trained with the standard multi-class cross-entropy (MCCE) against the same model retrained with the Focal loss and the proposed TCE for $t = 0.1$ and $1$. Even though we are not \emph{necessarily} expecting an improvement in the overall mIoU, here for all $t$'s, the overall performance with TCE has improved beyond the same model trained with MCCE ($+2\%$ for $t = 0.1$) and Focal loss (about $1\%$ for $t = 0.1$). TCE with $t = 1$ is expected to push more towards minimizing performance disparity and thus promoting fairness (as is also shown in Table~\ref{tb:fairness_focal_Cityscapes}), and it shows the best improvement in the least performing classes such as ``wall'' and ``train''. The same model architecture, DeepLabv3+ (abbreviated to DLv3+ here), reports better performance results with a more complex backbone, Xception-65 \cite{chen2018encoder}. We could not reproduce those results because we did not have access to large enough nodes on Microsoft Azure to accommodate this backbone with batch sizes larger than $8$. Nonetheless, we were curious to know how TCE implemented on top of a model with a weaker backbone would compare with the state-of-the-art (SoTA) results reported in \cite{chen2018encoder}. This comparison is summarized in Table~\ref{tb:cityscapes_val_sota}. Interestingly, even compared to the SoTA model with improved backbones, TCE still improves on several low-performing classes (such as ``rider'', ``terrain'', ``motorcycle'', etc.) in favor of promoting fairness across target classes.
\begin{table}[t!]
\footnotesize
\caption{Fairness criteria for Cityscapes \cite{cordts2016cityscapes}}
\vspace{-0.35cm}
\label{tb:fairness_focal_Cityscapes}
\centering
{\tabcolsep=0pt\def1.0{1.0}
\begin{tabularx}{230pt}{l s s | m m | s s }
\toprule
& \multicolumn{2}{c}{sorted $25\%$ } & \multicolumn{2}{c}{$(25^{th} \textup{perc., mIoU})$} & \multicolumn{2}{c}{overall}
\tabularnewline
\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}
Method & bottom & top & bottom &top & worst & std. \tabularnewline
\midrule
MCCE \cite{chen2018encoder} & 57.69 & {94.81} & (64.60, 57.69) & (88.74, {94.81}) & 48.46 & 14.96 \\
Focal loss \cite{lin2017focal} & 63.86 & 93.79 & (67.64, 60.85) &(88.28, 93.79) & 49.11 & {\bf13.35} \\
\rowcolor{LightCyan}
TCE \tiny{$t=.1$}& 63.76 & 94.93 & (69.22, \bf{62.23}) &(89.34, 94.93) & 49.36 & {\bf13.34} \\
\rowcolor{LightYellow}
TCE \tiny{$t=1$} & {\bf64.29} & 94.66 & (65.86, {61.29}) &(89.33, 94.66)& {\bf53.47} &13.57 \\
\bottomrule
\end{tabularx}}
\end{table}
These two tables illustrate promising improvement in low-performing classes, the extent of which is further investigated and summarized in Table~\ref{tb:fairness_focal_Cityscapes}. Here, the following three measures are presented. First, the sorted bottom and top $25\%$ mIoU. To compute this, the target classes are sorted based on the IoU performance breakdown of the model trained with MCCE. Then for each model the mIoU of the bottom and top $25\%$ ($5$ classes out of $19$) are taken into account. This is to demonstrate the impact of TCE compared to the reference MCCE. Here, improved mIoU for bottom classes (even at the cost of lower mIoU for the top ones) indicates more fairness. Second, in a tuple, the IoU threshold corresponding to the (bottom and top) $25$th percentile of each model and the mIoU of the classes falling within the percentile are presented. Note that in this case each model will be sorted according to its own target class IoUs. The idea is that improvement in the bottom percentile threshold and corresponding mIoU could be indicative of improved fairness. Again, we do not expect improvement but a potential drop in the top percentiles; they are reported to provide a more complete picture. Third, the overall fairness measures \cite{hashimoto2018fairness, li2019fair, mohri2019agnostic}, i.e., the worst performance among target classes and the standard deviation across class IoUs (denoted respectively as worst and std.\ in the table), are presented.
The results show that on all three metrics TCE offers improved fairness when compared to MCCE and Focal loss. Let us focus on $t = 1$. In the sorted bottom $25\%$, we gain $+6\%$ and $+0.4\%$ beyond MCCE and Focal loss, respectively. For the $25$th percentiles, $+5\%$ and $+0.4\%$ beyond MCCE and Focal loss, respectively. TCE with $t=0.1$ seems to do better ($+1\%$ beyond Focal), which could be due to different sorting per model, and thus, different classes falling in those percentiles. In the overall metrics \cite{hashimoto2018fairness, li2019fair, mohri2019agnostic}, the worst-case IoU (among target classes) is $+5\%$ and $+4\%$ better (higher) than MCCE and Focal loss, respectively. The standard deviation across classes is improved (decreased) by $+1\%$ compared to MCCE and remains in the same regime as Focal loss. Finally, qualitative results in Fig.~\ref{fig:comparison_1} further corroborate the impact of TCE in comparison with MCCE and Focal loss. The top two rows highlight improvement in ``rider'', ``bus'', and ``truck'', and the next two rows show improvement in ``tram'' and ``sidewalk'', most of which associate to the low-performing classes of the model trained with MCCE. There is plenty of room for extending this work. An avenue to explore is different implementations of TCE, especially (in a non-stochastic fashion) by directly plugging in \eqref{eq:term_m} and \eqref{eq:term_c} as the optimization loss function. In doing so, remedies have to be put in place to circumvent convergence issues. Further quantitative and qualitative results on the ADE20k dataset are provided in the appendix. More experimentation on other datasets is left as future work. \vspace{-0.3cm}
\section{Acknowledgment}
\label{sec:ack}
\vspace{-0.3cm}
The authors would like to thank Shell Global Solutions International B.V. and Delft University of Technology (TU Delft) for their support and for the permission to publish this work. The authors extend their appreciation to Ahmad Beirami from Facebook AI for helpful discussions on tilted empirical risk minimization (TERM).
{\small
\bibliographystyle{ieeetr}
\section{Introduction}
\label{sec:intro}
Recently, Farhi {\it et al.}~\cite{farhi_quantum_2014, farhi_quantum_2014-1} proposed a
new class of quantum heuristic algorithms, the quantum approximate
optimization algorithm (QAOA).
We present an algorithm for Grover's unstructured search problem
\cite{grover_fast_1996} inspired by QAOA.
This algorithm shows a quantum advantage for a
number of iterations $p$ in the intermediate range between $p=1$ and $p\to\infty$.
We also introduce a tool, a representation based on spin-coherent states,
for the design and analysis of the QAOA-type circuits. Using this tool,
we prove a $\Theta(\sqrt N)$ query complexity for our algorithm.
The algorithm has the advantage of requiring fewer two-qubit gates
than Grover's original algorithm because we use the transverse field in
place of Grover's original diffusion operator.
With an increasing number of iterations $p$, an exhaustive search of the QAOA
parameters often becomes inefficient due to the curse of dimensionality. Our
method avoids this difficulty by restricting the parameters to be periodic.
The approach suggests a potential route for parameter optimization
for QAOA-based quantum heuristic algorithms more generally.
In our algorithm, mixing and problem (oracle) Hamiltonians are applied
to the system in a sequence that is periodic in time. The long-time
dynamics of a periodically driven quantum system can be
profoundly different from a time-homogeneous
one~\cite{goldman_periodically_2014}. To analyze the outcome after
$\Theta(\sqrt N)$ periods, we solve the relevant eigenvalues and eigenvectors
of the composite (effective) unitary in a single period to exponential
precision ${O}(1/\sqrt N)$. This analysis gives further evidence that,
while the initial motivation for Farhi {\it et al.}'s design of
QAOA circuits may have come from Trotterization
of adiabatic quantum optimization (AQO), the analysis required to
understand QAOA circuits involves a very different process from
estimating an exponentially small energy gap of a Hamiltonian.
Instead, the intuition for this algorithm comes from a
phase-space representation based on spin-coherent states in
which both the unitaries generated by the mixing and the oracle Hamiltonians
take simple forms. We find that the composite unitary generates a closed
transition between two states that have high degrees of overlap with the
initial state
and the target state, respectively.
The transition rate in our algorithm is of order $\Theta(1/\sqrt N)$,
and the overlaps are of order $\Theta(1)$, yielding a nearly optimal query
complexity of $T\simeq \sqrt N\hspace{0.4pt} (\pi/2\sqrt 2\,)$.
We begin, in Sec.~\ref{sec:QAOA}, by briefly reviewing QAOA circuits, providing
context and inspiration for our construction.
In Sec.~\ref{sec:grover}, we briefly review prior approaches to Grover's
problem.
In Sec.~\ref{sec:main}, we introduce our algorithm.
Section~\ref{sec:scs} gives an intuitive picture,
using a representation based on spin-coherent states,
for why the algorithm works.
The most straightforward application of this picture results in a query
complexity that is close to optimal, up to a polylog factor. We then
refine the algorithm, removing the polylog factor, to obtain a query
complexity within a small constant of the optimal value. This improvement
makes use of the phase-space representation we describe
in Sec.~\ref{sec:phase_space}.
Section~\ref{sec:eigen_V} shows how we use
this phase space representation to derive
analytical results, including the success probability and the query complexity
of our algorithm. In Sec.~\ref{sec:check}, we briefly comment on how to check
whether the correct solution has been found. We conclude in
Sec.~\ref{sec:conclusion} with thoughts on future directions.
\section{Review of QAOA circuits}
\label{sec:QAOA}
QAOA circuits iteratively alternate between a classical
Hamiltonian (usually the problem Hamiltonian derived from a cost function)
and a mixing term (often the transverse field)~\cite{farhi_quantum_2014, farhi_quantum_2014-1}. Farhi {\it et al.}\
proposed these circuits to tackle
approximate optimization of challenging combinatorial problems, with
the approximation ratio improving (or at least not decreasing)
as the number of iterations $p$ increases. We will refer to circuits with the above structure as QAOA circuits
whether or not they are used for approximate optimization or for some other
purpose. Since Farhi {\it et al.}'s original work,
QAOA circuits have also been applied for exact optimization
\cite{wecker2016training} and sampling \cite{farhi_quantum_2016}.
Further, Farhi and Harrow~\cite{farhi_quantum_2016} argued, under
reasonable complexity theoretic assumptions, that
it is not possible for any classical algorithm to produce samples according
to the output distribution of QAOA circuits with even a single iteration ($p = 1$).
Their results suggest that QAOA circuits applied to sampling are
among the most promising candidates for early
demonstrations of ``quantum supremacy''~\cite{preskill_quantum_2012,
boixo_characterizing_2016}.
It remains an open question whether QAOA circuits provide a quantum
advantage for approximate optimization.
Trotterization of adiabatic quantum optimization (AQO) implies that
QAOA can always achieve the optimum in the limit of infinite iterations ($p\to\infty$). At the other end of the
spectrum, Farhi {\it et al.}~\cite{farhi_quantum_2014-1} proved that a QAOA circuit
with $p = 1$ beat the best classical approximation ratio for MaxE3Lin2 (each constraint is a linear equation mod 2 on 3 variables) at
the time; this quantum circuit then inspired a new classical approach that currently holds the
record~\cite{barak_beating_2015_published}.
The parameters for these circuits are the times $\beta_i$ and $\gamma_i$,
$1 \leq i \leq p$,
for which the mixing and classical Hamiltonian, respectively, are applied.
Farhi {\it et al.}\ show that, for a fixed $p$, the optimal parameters can be computed
in polynomial time in the number of qubits $n$. If we discretize so that each parameter can take on $m$ values, however, an exhaustive search for the optimum requires $m^{2p}$ evaluations, a number exponential in $p$.
For this reason, prior to this work,
there were no results for QAOA circuits with an intermediate number of
iterations $1 \ll p < \infty$. Here, we give such an algorithm.
Our approach suggests that restricting to periodic parameters may be a
profitable route to parameter setting for QAOA circuits with $1 \ll p < \infty$.
\section{Review of prior quantum algorithms for Grover's problem}
\label{sec:grover}
Grover's algorithm~\cite{grover_fast_1996}
has attracted much attention because it provably outperforms any
classical algorithm.
It searches for a needle in a haystack, achieving
a query complexity of $\Theta(\sqrt N)$,
where $N = 2^n$ is the size of the search space.
Grover's algorithm is optimal among quantum algorithms for such a
task~\cite{bennett_strengths_1997, farhi_analog_1998, zalka_grovers_1999}.
It offers a modest quadratic speedup over any classical
counterpart, although even quadratic speedup is considerable when $N$ is large.
Grover's algorithm selectively alters the phase of the target state given by
the oracle, at each iteration. While this operation on its own would not change
the probability of reading out the target state, it sets the stage for the
next operation which takes advantage of the phase difference
to increase the probability of the system being in that state. This effect
would be impossible were quantum amplitudes not able to store
phase information as well as the probability.
This step is carried out by Grover's diffusion operator, which
applies a phase of $\pi$ to the even superposition state
and does nothing to any state orthogonal to it.
It requires $\Theta(n)$ two-qubit gates to
implement Grover's diffusion operator~\cite{diao_quantum_2002}.
Grover's unstructured search problem can also be solved by adiabatic quantum
computation, where a mixing Hamiltonian (typically a transverse field) is
gradually replaced by the problem Hamiltonian that encodes the answer in
its ground state. The minimum gap of the total Hamiltonian is crucial to the
time complexity of the algorithm and was first given by Farhi
{\it et al.}~\cite{farhi_quantum_2000}.
Recently, the exponential scaling of this minimum gap was rederived using an
instanton approach, without solving the eigenvalue equation (see Supplemental
Material in~\cite{isakov_understanding_2016}).
By adjusting the evolution rate of the
Hamiltonian, Roland and Cerf~\cite{roland_quantum_2002} recover the quadratic
advantage of Grover's original algorithm over classical search.
Roland and Cerf do not use the standard mixing operator, the transverse
field, but rather a Hamiltonian related to Grover's diffusion operator.
A natural question
is whether it is possible to implement unstructured quantum search in
the circuit model using the transverse field instead of Grover's diffusion
operator. Here, we give an affirmative answer to this question.
\section{Our Algorithm}
\label{sec:main}
Here, we give a high-level view of the algorithm.
Sec.~\ref{sec:scs} describes the intuition behind our algorithm, based
on a picture using spin-coherent states.
{\it Grover's problem.} Suppose we are given a problem Hamiltonian (oracle)
\begin{align}
C_{{\bm u}} = - \proj{{\bm u}}\,,
\end{align}
that encodes an unknown bit string ${\bm u}$ of length $n$ ($n$ is even, for
simplicity). The aim is to find ${\bm u}$ using as few calls to this
oracle as possible.
Our algorithm uses the transverse field operator $B$ as the driver
(mixing term),
\begin{align}
B =\sum_{j=1}^n X_j\,,
\end{align}
where $X_j$ is the Pauli $X$ operator of the $j$th qubit. An advantage of
using $B$ over Grover's diffusion operator is that $B$ acts only on
individual spins, so it is easier and more efficient to implement.
The input state of
our algorithm is the usual one, the tensor product $\ket{+}^{\otimes n}$, the
joint $+1$ eigenstate of all the $X_j$ operators, and the even superposition
of all bit strings,
\begin{align}\label{eq:initial}
\ket{\psi_\mathrm{in}} = \ket{+}^{\otimes n} = \frac{1}{\sqrt N} \sum_{{\bm s}\in \{0,1\}^n} \ket{{\bm s}} \,.
\end{align}
We can simplify the analysis, following Farhi {\it et al.}~\cite{farhi_quantum_2000},
by working in a basis in which the target state is $\ket{\bm 0} = \ket{0\cdots 00}$.
Since the driver $B$ and the initial state $\ket{\psi_\mathrm{in}}$ remain the
same when any subset of the $n$ qubits is flipped, the problem can be
converted to finding the bit string $\bm 0$ using the oracle $C_{\bm 0}$ with
the same driver $B$. Doing so drastically simplifies our analysis:
the state $\ket{\bm 0}$ and the initial state $\ket{+}^{\otimes n}$
are in the $(n + 1)$-dimensional symmetric subspace (under permutations of
qubits), and the evolution under both $B$ and $C_{\bm 0}$ preserves this
subspace, so we need to consider only that $(n + 1)$-dimensional subspace
instead of the whole Hilbert space of dimension $2^n$.
To simplify notation, we will omit the
subscript in $C_{\bm 0}$ hereafter, i.e., $C\equiv C_{\bm 0}$.
The building block of our algorithm is a simple product of
unitaries generated by $B$ and $C$,
\begin{align}\label{eq:W}
W(\gamma) = e^{-i\pi B/n}e^{i\gamma C}e^{-i\pi B/n}e^{-i\gamma C}\,,
\end{align}
where $\gamma \in (0, \pi]$ is a free parameter. The intuition for why we choose the angle of the rotation $e^{-i\pi B/n}$ can be found in Sec.~\ref{sec:scs}.
The algorithm applies the unitary $W(\gamma)$ repeatedly,
$\Theta(\sqrt N)$ times in total (see Fig.~\ref{fig:grover_circuit}).
The relevant eigenvalues of the unitary $W(\gamma)$ determine the query
complexity of our algorithm, while the corresponding eigenvectors determine
the probability of success.
We will show that the relevant eigenvalues are the ones closest to $1$, but
not equal to $1$.
\begin{figure}
\includegraphics[width=0.48\textwidth]{grover_circuit.pdf}
\caption{To map the input state to a state having a large overlap with the
target, the unitary $W(\gamma)$ is repeated ${O}(N^{1/2})$ times.}
\label{fig:grover_circuit}
\end{figure}
The unitary $W(\gamma)$ has a time-reversal-like symmetry
\begin{align}\label{eq:time_reversal}
\Lambda\hspace{0.4pt} W(\gamma) \Lambda^\dagger = W^\dagger(\gamma)\,,
\end{align}
where $\Lambda = e^{-i\pi B/n} Z_1 Z_2\cdots Z_n$ with $Z_j$ being the
Pauli-$Z$ operator of the $j$th qubit. Equation~(\ref{eq:time_reversal})
holds generally for Hamiltonians based on classical cost functions,
Hamiltonians diagonal in the computational basis.
This symmetry implies that if
$\alpha$ is an eigenvalue of $W(\gamma)$, then its complex conjugate
$\alpha^*$ is also an eigenvalue
of $W(\gamma)$; the corresponding eigenstates are denoted by $\ket{{w}_\alpha}$
and $\ket{{w}_{\alpha^*}}$, respectively.
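In the symmetric subspace, $Z_1 Z_2\cdots Z_n$ acts diagonally as $(-1)^k$ on the Dicke state of Hamming weight $k$, so the symmetry~(\ref{eq:time_reversal}) can be verified directly (an illustrative numerical check, not part of the algorithm):

```python
import numpy as np
from math import pi

n, gamma = 10, pi
k = np.arange(n + 1)
off = np.sqrt((k[:-1] + 1.0) * (n - k[:-1]))
B = np.diag(off, 1) + np.diag(off, -1)            # transverse field in H_S
evals, V = np.linalg.eigh(B)
Ub = V @ np.diag(np.exp(-1j * pi * evals / n)) @ V.T
Uc = np.eye(n + 1, dtype=complex)                 # e^{-i gamma C}
Uc[0, 0] = np.exp(1j * gamma)
W = Ub @ Uc.conj() @ Ub @ Uc
Lam = Ub @ np.diag((-1.0) ** k)                   # Lambda = e^{-i pi B/n} Z...Z
assert np.allclose(Lam @ W @ Lam.conj().T, W.conj().T)  # Lambda W Lambda^† = W^†
```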
When restricted to the two-dimensional subspace $\mathcal S_\alpha$ spanned
by $\{\ket{{w}_\alpha}, \ket{{w}_{\alpha^*}}\}$ and written in the
basis $\{\ket{{w}_+}, \ket{{w}_-}\}$, where
\begin{align}
\ket{{w}_\pm} = \frac{1}{\sqrt 2}\Big(\ket{{w}_\alpha} \pm \ket{{w}_{\alpha^*}}\Big)\,,
\end{align}
$W(\gamma)$ has the matrix representation
\begin{align}
W\big|_{\mathcal S_\alpha}(\gamma) =
\exp\left[\mathord{-}i\left(\hspace{-0.4pt}
\begin{matrix}
0 & \arg(\alpha)\\
\arg(\alpha) & 0
\end{matrix}\right)\right]\,.
\end{align}
The unitary $W(\gamma)$ thus
drives a closed transition between $\ket{{w}_\pm}$ with the transition rate
$\arg(\alpha)$. To drive a full transition, one needs to apply $W(\gamma)$
roughly $\pi/[2\arg(\alpha)]$ times.
Let $\ket{{b}_\pm} = \frac{1}{\sqrt 2}\big(\hspace{0.4pt}\ket{+}^{\otimes n} \pm \ket{-}^{\otimes n}\big)$.
We show in Sec.~\ref{sec:eigen_V} that for eigenvalues $\alpha$ and $\alpha^*$ exponentially close to $1$ but not equal to
$1$, $\ket{{w}_\alpha}$ and $\ket{{w}_{\alpha^*}}$ have large overlaps with
$\frac{1}{\sqrt 2}\big(\hspace{0.4pt}\ket{\bm 0}\pm i \ket{{b}_+}\big)$, respectively. In
other words, $\ket{{w}_+}$ and $\ket{{w}_-}$ have large overlaps with $\ket{\bm
0}$ and $i \ket{{b}_+}$, respectively, so the algorithm drives $\ket{{b}_+}$
close to the target state $\ket{\bm 0}$.
The value of $\arg(\alpha)$ has to be exponentially small in $n$; otherwise,
our algorithm would have beaten the optimal query complexity of Grover's
algorithm. Hereafter, $\alpha$ will refer to this specific eigenvalue.
The initial state~(\ref{eq:initial}) can be written as
\begin{align}
\ket{\psi_\mathrm{in}} = \ket{+}^{\otimes n} = \frac{1}{\sqrt 2}\Big(\ket{{b}_+} + \ket{{b}_{-}}\Big)\,;
\end{align}
note that $\ket{{b}_{-}}$ is a dark state, i.e.,
$W(\gamma) \ket{{b}_-}=\ket{{b}_{-}}$. For
$\norm{\braket{\bm 0}{{w}_+}}\simeq \norm{\braket{{b}_+}{{w}_-}}\simeq 1$,
the output state is approximately
\begin{align}
\ket{\psi_\mathrm{out}} \simeq \frac{1}{\sqrt 2}\Big(\ket{\bm 0}+\ket{{b}_-}\Big)\,,
\end{align}
and the probability of finding the target state $\ket{\bm 0}$ is approximately $1/2$.
In Sec.~\ref{sec:eigen_V}, we derive approximate results for our algorithm in
the large-$n$ limit. For $\gamma=\pi$, we find that
$\norm{\braket{\bm 0}{{w}_+}}\simeq (1-\pi^2/2n)^{1/4}$
in Eq.~(\ref{eq:fidelity_psi_+}) (the
fidelity is smaller for $\gamma\neq \pi$). See Fig.~\subref*{fig:w_plus} for a
comparison of analytical and numerical results. We also find that
$\norm{\braket{{b}_+}{{w}_-}} \simeq 1- N^{-1}$ in
Eq.~(\ref{eq:fidelity_psi_-_c}). See Fig.~\subref*{fig:psi_minus} for a comparison
of analytical and numerical results.
Furthermore, we calculate that $\arg(\alpha) \simeq 4\sqrt{2}\, N^{-1/2}
(1-\pi^2/2n)^{1/4}$ in Eq.~(\ref{eq:arg_modified}).
Figure~\subref*{fig:arg_alpha} shows a comparison of analytic and numerical results.
Considering that the success probability of our
algorithm is about $1/2$ and each iteration of $W(\gamma)$ calls the oracle twice,
the average query complexity of our algorithm is
\begin{align}
T(n) \simeq \frac{2\pi}{\arg(\alpha)}\simeq \frac{\pi}{2\sqrt 2}\, 2^{n/2}\,,
\end{align}
which differs from the optimal value presented in Ref.~\cite{zalka_grovers_1999} by a factor of $\sqrt 2$.
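The asymptotic expression for $\arg(\alpha)$ can be compared against exact diagonalization of $W(\pi)$ in the symmetric subspace. The sketch below (illustrative) extracts the smallest nonzero eigenphase and checks it against $4\sqrt 2\, N^{-1/2}(1-\pi^2/2n)^{1/4}$:

```python
import numpy as np
from math import pi, sqrt

def arg_alpha(n, gamma=pi):
    """Smallest nonzero eigenphase of W(gamma) in the symmetric subspace."""
    k = np.arange(n + 1)
    off = np.sqrt((k[:-1] + 1.0) * (n - k[:-1]))
    B = np.diag(off, 1) + np.diag(off, -1)
    evals, V = np.linalg.eigh(B)
    Ub = V @ np.diag(np.exp(-1j * pi * evals / n)) @ V.T
    Uc = np.eye(n + 1, dtype=complex)
    Uc[0, 0] = np.exp(1j * gamma)
    W = Ub @ Uc.conj() @ Ub @ Uc
    phases = np.abs(np.angle(np.linalg.eigvals(W)))
    return float(phases[phases > 1e-8].min())     # exclude the dark state

n = 12
numeric = arg_alpha(n)
analytic = 4 * sqrt(2) * 2 ** (-n / 2) * (1 - pi ** 2 / (2 * n)) ** 0.25
```

As expected, $\arg(\alpha)$ roughly halves for every two additional qubits, reflecting the $N^{-1/2}$ scaling.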
\begin{figure}
\subfloat[]{\hspace{-1.5mm}
\includegraphics[width=.23\textwidth, height= 3.3cm]{w_plus.pdf}
\label{fig:w_plus}
}
\subfloat[]{\hspace{.8mm}
\includegraphics[width=.23\textwidth, height= 3.17cm]{w_minus.pdf}
\label{fig:psi_minus}
}
\caption{(a) Numerical and analytical (large-$n$ limit) results for the
infidelity $1-\lvert\braket{\bm 0}{{w}_+}\rvert$ as a function of the number of
qubits for $\gamma = \pi$, which vanishes polynomially as $n$ increases. The
numerical results are calculated by direct diagonalization of the matrix
$W(\pi)$ in the symmetric subspace; the analytical results use
Eq.~(\ref{eq:fidelity_psi_+}),
$\norm{\braket{\bm 0}{{w}_+}}\simeq (1-\pi^2/2n)^{1/4}$.
(b) Numerical and analytical (large-$n$
limit) results for the infidelity $1-\lvert\braket{{b}_+}{{w}_-}\rvert$ as a
function of the number of qubits $n$ for $\gamma = \pi$, which decreases
exponentially as $n$ increases. The numerical results come from direct
diagonalization, and the analytical results come from
Eq.~(\ref{eq:fidelity_psi_-_b}),
$\braket{{b}_+}{{w}_-} \simeq i\, \sqrt{2d/n}\, \big(1-\pi^2/2n\big)^{1/4}$.
}
\label{fig:psi}
\end{figure}
\begin{figure}
\subfloat[]{\hspace{-2mm}
\includegraphics[width=.23\textwidth]{arg_alpha.pdf}
\label{fig:arg_alpha}
}
\subfloat[]{\hspace{.4mm}
\includegraphics[width=.22\textwidth]{arg_gamma.pdf}
\label{fig:arg_gamma}
}
\caption{(a) Numerical and analytical (large-$n$ limit) results for $\sqrt N
\arg(\alpha)$ as a function of $n$ for $\gamma = \pi$. (b) Numerical results
for $\sqrt N \arg(\alpha)$ as a function of $\gamma$. The numerical results
are computed using exact diagonalization. The analytical result comes
from Eq.~(\ref{eq:arg_modified}).
}
\label{fig:arg}
\end{figure}
\section{An Intuitive Picture Using Spin-coherent states}
\label{sec:scs}
This section gives the intuition behind our algorithm.
A representation using spin-coherent states~\cite{radcliffe_properties_1971,
perelomov_generalized_1986} is useful for understanding why the
algorithm works. Consider spin-coherent states of the form
\begin{align}\label{eq:spin_coherent}
\ket{\psi(\theta)} = e^{-i\theta B/2}\ketb{\bm 0}\,,
\end{align}
where $\theta\in [\, 0, 2\pi\hspace{0.4pt})$; these states form an overcomplete basis for the symmetric subspace $\mathcal H_S$. We pay particular attention to a set of discrete angles $\theta_k = k\Delta \theta$, where $k=0,1,\ldots,n-1$ and $\Delta\theta = 2\pi/n$. Along with the dark state $\ket{{b}_-}$, this set of discrete spin-coherent states forms a complete basis of $\mathcal H_S$. The state $\ket{{b}_+}$ can be expanded as [see Fig.~\ref{fig:b_plus}],
\begin{align}\label{eq:b_+}
\ket{{b}_+}
&= \frac{1}{n\hspace{0.4pt} \braket{{b}_+}{\bm 0}}\, \sum_{k=0}^{n-1} (-1)^k e^{-ik\pi B/n} \ket{\bm 0}\,,
\end{align}
where $\braket{{b}_+}{\bm 0} = \sqrt{2/N}$ is exponentially small in $n$. The normalization factor can be derived by noticing
\begin{align}
\sum_{k=0}^{n-1} (-1)^k \bra{{b}_+} e^{-ik\pi B/n} \ket{\bm 0} = n \braket{{b}_+}{\bm 0}\,,
\end{align}
where we used the identity
\begin{align}\label{eq:rotation_a}
e^{-i\pi B/n} \ket{{b}_\pm} = - \ket{{b}_\pm} \,.
\end{align}
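Both the expansion of $\ket{{b}_+}$ over the discrete spin-coherent states and the sign-flip identity above can be checked numerically for small $n$. The sketch below works in the Dicke basis of the symmetric subspace and takes $B = \sum_i X_i$ (consistent with $\ket{\pm}^{\otimes n}$ having $B$-eigenvalues $\pm n$, as used later in the paper); the explicit matrix construction is ours, not from the text.

```python
import numpy as np
from scipy.linalg import expm
from math import comb

n = 6                                    # number of qubits (even)
k = np.arange(n + 1)                     # Dicke-basis label: number of 1s
b_n = np.sqrt([comb(n, int(i)) for i in k]) / 2**(n / 2)   # |+>^n
b_mn = (-1.0)**k * b_n                                     # |->^n
b_plus = (b_n + b_mn) / np.sqrt(2)
e0 = np.zeros(n + 1); e0[0] = 1.0        # target state |00...0>

j = n / 2
m = j - k                                # J_z eigenvalues
jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), 1)    # J_+ matrix
B = jp + jp.T                            # B = sum_i X_i = 2 J_x

# e^{-i pi B/n} flips the sign of |b_+> (both components have e^{∓i pi} = -1)
assert np.allclose(expm(-1j*np.pi*B/n) @ b_plus, -b_plus)

# |b_+> as an alternating sum of discrete spin-coherent states
rhs = sum((-1)**q * expm(-1j*q*np.pi*B/n) @ e0 for q in range(n))
rhs = rhs / (n * (b_plus @ e0))          # <b_+|0> = sqrt(2/N)
assert np.allclose(rhs, b_plus, atol=1e-10)
```

The normalization factor $n\braket{{b}_+}{\bm 0}$ falls out automatically: all Fourier components of the alternating sum vanish except those on the $B$-eigenvalues $\pm n$.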
The expansion coefficient in Eq.~(\ref{eq:b_+}) can be derived by
noticing that $\ket{{b}_+}$ is orthogonal to any eigenstate of $B$ with an
eigenvalue other than $\pm n$. We will also need the eigenstate of $B$ with
eigenvalue $0$, \begin{align}
\ket{{b}_0} \propto P_S \Big(\ket{+}^{\otimes \frac{n}{2}}\otimes \ket{-}^{\otimes \frac{n}{2}}\Big)\,,
\end{align}
where $P_S$ is the projector onto $\mathcal H_S$.
In other words,
$\ket{{b}_0}$ is proportional to the sum of the $\binom{n}{n/2}$ terms that are
tensor products of the single-qubit states $\ket{+}$ and $\ket{-}$ with the
same number of occurrences, i.e., Hamming weight $n/2$ strings in the Hadamard
basis.
The overlap of this state with the target state is
\begin{align}\label{eqn:b0approx}
\norm{\braket{{b}_0}{\bm 0}}^2 &= \frac{n!}{(n/2)!\hspace{0.4pt} (n/2)!} \,\frac{1}{2^n}
\simeq \sqrt{\frac{ 2 }{\pi n}}\,,
\end{align}
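The left-hand side here is exactly the central binomial weight, so the Stirling-type approximation can be checked directly; the relative error is $O(1/n)$:

```python
import math

for n in (8, 32, 128, 512):
    exact = math.comb(n, n // 2) / 2**n          # |<b_0|0>|^2, exact
    approx = math.sqrt(2 / (math.pi * n))        # large-n limit sqrt(2/(pi n))
    assert abs(exact / approx - 1) < 1 / n       # relative error ~ 1/(4n)
```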
which is only polynomially small. This state has the following expansion using the discrete spin-coherent states [see Fig.~\subref*{fig:b_zero}]:
\begin{align}\label{eq:b_0}
\ket{{b}_0} = \frac{1}{n\hspace{0.4pt} \braket{{b}_0}{\bm 0}}\, \sum_{k=0}^{n-1} e^{-ik\pi B/n} \ket{\bm 0}\,,
\end{align}
where $\braket{{b}_0}{\bm 0}$ is of order $n^{-1/4}$. It remains the same under the discrete rotation,
\begin{align}\label{eq:rotation_b}
e^{-i\pi B/n} \ket{{b}_0} = \ket{{b}_0} \,.
\end{align}
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=.228\textwidth]{b_plus.pdf}
\label{fig:b_plus}
}
\subfloat[]{
\includegraphics[width=.228\textwidth]{b_zero.pdf}
\label{fig:b_zero}
}
\caption{Spin-coherent-state representation (a) for $\ket{{b}_+}$, where $\sqrt N/n$ is the order of the expansion coefficients in Eq.~(\ref{eq:b_+}), and (b) for $\ket{{b}_0}$, where $n^{-3/4}$ is the order of the expansion coefficients in Eq.~(\ref{eq:b_0}).}
\label{fig:b}
\end{figure}
The unitary generated by the oracle takes the following form for $\gamma\ll 1$,
\begin{align}\label{eq:oracle_small}
e^{-i\gamma C} \ket{\psi} &= \ket{\psi} + i\gamma\, \braket{\bm 0}{\psi} \ket{\bm 0} + {O}(\gamma^2)\,,
\end{align}
where $\ket{\psi}$ is an arbitrary state. Putting Eqs.~(\ref{eq:W}), (\ref{eq:rotation_a}), (\ref{eq:rotation_b}), and (\ref{eq:oracle_small}) together, we have
\begin{gather}
W(\gamma)^{\frac{n}{2}} \ket{{b}_+} \simeq
\ket{{b}_+} + i \gamma\eta\hspace{0.4pt} \ket{{b}_0}\,,\\[3pt]
W(\gamma)^{\frac{n}{2}} \ket{{b}_0} \simeq
\ket{{b}_0} + i \gamma\eta\hspace{0.4pt} \ket{{b}_+}\,,
\end{gather}
where
\begin{align}
\eta = n \,\braket{{b}_+}{\bm 0}\braket{{b}_0}{\bm 0} \simeq \sqrt[4]{2/\pi}\, n^{3/4} N^{-1/2}\,.
\end{align}
Thus, the unitary $W(\gamma)^{n/2}$ approximately drives a transition between
$\ket{{b}_+}$ and $\ket{{b}_0}$ with the rate $\gamma\eta$.
Applying the
unitary $W(\gamma)$ on the order of $n/(\gamma\eta)$ times, one can drive the state
$\ket{{b}_+}$ to a state close to $\ket{{b}_0}$. The probability of finding the
target state with $\ket{{b}_0}$ is only polynomially small in $n$, as opposed to
the exponentially small value with $\ket{{b}_+}$, achieving the quadratic
speedup of Grover's algorithm up to a logarithmic factor.
Although the case $\gamma\ll 1$ is illustrative, it requires logarithmically
many more calls to the oracle than Grover's original algorithm, and the
probability of finding the target state is small.
Since $\eta$ is exponentially small in $n$, both
$\ket{{b}_+}$ and $\ket{{b}_0}$ are
close to eigenvectors of $W(\gamma)$, with eigenvalues exponentially close to $1$.
This analysis suggests concentrating on the subspace spanned by
$\{\ket{{w}_\alpha}, \ket{{w}_{\alpha^*}}\}$, where $\alpha$ and $\alpha^*$
are the eigenvalues closest to $1$. Indeed, we show in Sec.~\ref{sec:eigen_V}
that one can increase the success probability and reduce the number of
calls to the oracle by setting $\gamma=\pi$. In Fig.~\subref*{fig:arg_gamma},
$\arg(\alpha)$ is plotted as a function of $\gamma$. Why
$\gamma=\pi$ performs best (or why it works at all) is unclear without a
detailed calculation. We give this
calculation in Sec.~\ref{sec:eigen_V}, after introducing a ``phase-space''
representation that will be useful in that analysis.
\section{Phase-space representations}
\label{sec:phase_space}
In this section we introduce a phase-space representation that will be
essential in the next section for solving analytically for the success
probability and the query complexity of our algorithm. The phase-space
representation is based on the inner products of a quantum state with the spin-coherent states introduced in Sec.~\ref{sec:scs}.
Any state $\ket{\psi}\in \mathcal H_S$ can be uniquely determined by the inner
products $\brab{\bm 0}e^{i\theta B/2}\ketb{\psi}$. The $\chi$ function,
\begin{align}
\chi\big(\ket{\psi}, \theta\big) = \brab{\bm 0}e^{i\theta B/2}\ketb{\psi}\,,
\end{align}
fully determines the state $\ket{\psi}$ since the spin-coherent states $e^{-i\theta B/2}\,\ketb{\bm 0}$ for $\theta \in
[\,0,2\pi\hspace{0.4pt})$ are overcomplete for the symmetric subspace; the advantage of
this representation is that both $B$ and $C$ can be expressed concisely.
For even $n$, the $\chi$ function satisfies the periodic boundary condition
\begin{align}
\begin{split}
\chi\big(\ket{\psi}, 2\pi\big) &= \brab{\bm 0}e^{i \pi B}\ketb{\psi}\\
&= (-1)^n \braket{\bm 0}{\psi} = \chi\big(\ket{\psi},0\big)\,.
\end{split}
\end{align}
For the initial state in Eq.~(\ref{eq:initial}), we have
\begin{align}
\chi\big(\ket{\psi_\mathrm{in}}, \theta\big) = \brab{\bm 0}e^{i\theta B/2}\ketb{\psi_\mathrm{in}} = \frac{e^{-i n\theta/2}}{\sqrt N}\,.
\end{align}
For the target state $\ket{\bm 0}$, we have
\begin{align}
\chi\big(\ket{\bm 0}, \theta\big) = \brab{\bm 0}e^{i\theta B/2}\ketb{\bm 0}= \cos(\theta/2)^n\,.
\end{align}
The unitaries $e^{-i\phi B/2}$ and $e^{-i\gamma C}$ take simple forms,
\begin{align}\label{eq:rotaion_chi}
&\chi\big(e^{-i\phi B/2}\ket{\psi}, \theta\big) = \chi\big(\ket{\psi}, \theta -\phi\big)\,,\\[3pt]
\begin{split}
&\chi\big(e^{-i\gamma C}\ket{\psi}, \theta\big) \\
&\qquad = \chi\big(\ket{\psi}, \theta\big)+ (e^{i\gamma}-1) \, \chi\big(\ket{\psi}, 0\big)\cos(\theta/2)^n \,.
\end{split}\label{eq:oracle_chi}
\end{align}
For the discrete angles $\theta_k = 2 k \pi/n$, we introduce the notation
\begin{align}
\chi_k\big(\ket{\psi}\big) = \brab{\bm 0}e^{ik \pi B/n}\ketb{\psi}\,.
\end{align}
The $\chi$ function of $\ket{\bm 0}$ will be used frequently, and we denote it as
\begin{align}\label{eq:xi}
\xi_k \equiv \chi_k\big(\ket{\bm 0}\big) = \cos(k \pi/n)^n\,.
\end{align}
We will use the following identity repeatedly:
\begin{align}\label{eq:odd_sum}
\sum_{k=0}^{n-1}(-1)^k\xi_k = n \braket{\bm 0}{{b}_+}^2= \frac{2 n}{N}\,.
\end{align}
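This identity is exact for even $n$ (the alternating sum projects onto the $B$-eigenvalues $\pm n$), as a direct numerical check confirms:

```python
import numpy as np

for n in (4, 8, 16, 32):                         # even n
    xi = np.cos(np.arange(n) * np.pi / n)**n     # xi_k = cos(k pi/n)^n
    s = np.sum((-1.0)**np.arange(n) * xi)
    assert abs(s - 2 * n / 2**n) < 1e-12         # equals 2n/N exactly
```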
For discrete angles, Eqs.~(\ref{eq:rotaion_chi}) and (\ref{eq:oracle_chi}) become
\begin{align}
&\chi_k\big(e^{-i\pi B/n} \ket{\psi}\big) = \chi_{k-1}\big(\ket{\psi}\big)\,,\label{eq:rotation_chi_dis}\\[3pt]
&\chi_k\big(e^{-i\gamma C}\ket{\psi}\big)
= \chi_k\big(\ket{\psi}\big)+ (e^{i\gamma}-1) \chi_0\big(\ket{\psi}\big)\xi_k \,.\label{eq:oracle_chi_dis}
\end{align}
For the eigenstates of $B$ with eigenvalues $\pm n$, we have
\begin{align}\label{eq:b_n_chi}
\chi_k\big(\ket{{b}_{n}}\big)= \chi_k\big(\ket{{b}_{-n}}\big)= (-1)^k N^{-1/2}\,,
\end{align}
where $\ket{{b}_{n}} = \ket{\psi_\mathrm{in}} = \ket{+}^{\otimes n}$ and $\ket{{b}_{-n}} = \ket{-}^{\otimes n}$. Since the discrete $\chi$ functions of $\ket{{b}_n}$ and $\ket{{b}_{-n}}$ are the same, the discrete $\chi$ function does not uniquely determine a state in the symmetric subspace of dimension $n+1$. It is, however, unique on the subspace orthogonal to $\ket{{b}_-} = \frac{1}{\sqrt 2} \big(\ket{{b}_{n}}-\ket{{b}_{-n}}\big)$. We will restrict our discussion to that subspace; $\ket{{b}_-}$ is a dark state anyway. For $\ket{{b}_+} = \frac{1}{\sqrt 2} \big(\ket{{b}_{n}}+\ket{{b}_{-n}}\big)$, we have
\begin{align}\label{eq:b_+_chi}
\chi_k\big(\ket{{b}_+}\big)= \sqrt 2\,(-1)^k N^{-1/2}\,.
\end{align}
The state $\ket{{b}_0}$ remains the same under $e^{-i\theta B/2}$, and its
$\chi$ function is a constant \begin{align}\label{eq:chi_b_0}
\chi_k\big(\ket{{b}_0}\big) = \chi_0\big(\ket{{b}_0}\big) \simeq \sqrt[4]{2 /\pi n}\,,
\end{align}
using the approximation in Eq.~(\ref{eqn:b0approx}).
To calculate the normalization factor of the $\chi$ representation, we need the
Fourier component
\begin{align}
\tilde\chi_j\big(\ket{\psi}\big) &= \frac{1}{n}\sum_{k=0}^{n-1} \chi_k\big(\ket{\psi}\big) \, e^{ijk\pi/n}\,,
\end{align}
where $j\in J \equiv \{-n,\ldots,-2,0, 2,\ldots, n\}$. The normalization condition is
\begin{align}\label{eq:normalization_chi}
\braket{\psi}{\psi} &= \frac{N}{2}\,\normb{\tilde\chi_n\big(\ket{\psi}\big)}^2+ \sum_{j\in J'} \frac{\normb{\tilde\chi_j\big(\ket{\psi}\big)}^2}{\norm{\braket{\bm 0}{{b}_j}}^2}\,,
\end{align}
where $J'$ denotes the set $J\backslash\{\pm n\}$, and $\ket{{b}_j}$ is the eigenstate of $B$ whose eigenvalue is $j$, i.e. $B\hspace{0.4pt}\ket{{b}_j} = j\ket{{b}_j}$.
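The normalization condition can be verified numerically for a random state with the dark component $\ket{{b}_-}$ projected out (the $\chi$ function cannot see $\ket{{b}_-}$, so the condition only holds on its orthogonal complement). As before, the sketch assumes $B=\sum_i X_i$ in the Dicke basis; the construction is ours:

```python
import numpy as np
from scipy.linalg import expm
from math import comb

n = 6; N = 2**n
k = np.arange(n + 1)
j = n / 2; m = j - k
jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), 1)
B = jp + jp.T                                   # B = sum_i X_i (Dicke basis)

# random state with the dark component |b_-> removed
b_n = np.sqrt([comb(n, int(i)) for i in k]) / 2**(n / 2)
b_minus = (b_n - (-1.0)**k * b_n) / np.sqrt(2)
rng = np.random.default_rng(0)
psi = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
psi -= (b_minus @ psi) * b_minus                # b_minus is real

chi = np.array([(expm(1j*q*np.pi*B/n) @ psi)[0] for q in range(n)])
evals, evecs = np.linalg.eigh(B)                # eigenvalues -n,...,n in steps of 2
w = {int(round(evals[i])): abs(evecs[0, i])**2 for i in range(n + 1)}  # |<0|b_j>|^2

J = list(range(-n, n + 1, 2))                   # {-n,...,-2,0,2,...,n}
chit = {jj: np.mean(chi * np.exp(1j*jj*np.arange(n)*np.pi/n)) for jj in J}
rhs = N/2 * abs(chit[n])**2 + sum(abs(chit[jj])**2 / w[jj]
                                  for jj in J if abs(jj) != n)
assert abs(rhs - np.vdot(psi, psi).real) < 1e-9
```

Note that $\tilde\chi_n = \tilde\chi_{-n}$ (the two Fourier phases coincide), which is why the $\pm n$ term is counted once with the weight $N/2$.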
\section{Analytical solutions}
\label{sec:eigen_V}
In this section, we solve the case $\gamma=\pi$ analytically using the phase-space
representation introduced in Sec.~\ref{sec:phase_space}. Because $
e^{-i\pi C} = e^{i\pi C}$, it suffices to consider \begin{align}
V &\equiv e^{-i\pi B/n}\,e^{i\pi C}= \sqrt{W(\pi)}\,.
\end{align}
The state $\ket{{w}_\alpha}$, an eigenstate of $W$, is also an eigenstate of
$V$. The new eigenvalue is $\beta = \alpha^{1/2}$ ($\beta$ is close to $-1$).
The remainder of this section is devoted to finding the relevant eigenvalues
and eigenstates of $V$. The eigenvalues determine the query complexity of our
algorithm, while the corresponding eigenvectors determine the probability of
success.
We introduce the unnormalized $\chi$ functions of the eigenstate
$\ket{{w}_\alpha}$, \begin{align}\label{eq:unnormalized_chi}
{\varphi}_k \equiv \chi_k(\ket{{w}_\alpha})\big/\chi_0(\ket{{w}_\alpha})\,,
\end{align}
which satisfies ${\varphi}_0=1$. Using Eqs.~(\ref{eq:rotation_chi_dis}) and (\ref{eq:oracle_chi_dis}), we have
\begin{align}\label{eq:unnormalized_eigenstate}
{\varphi}_k = \beta^k + 2\sum_{\ell=1}^k \beta^{k-\ell} \xi_\ell\,,
\end{align}
where $\xi_\ell$ is defined in Eq.~(\ref{eq:xi}). The periodic boundary condition ${\varphi}_n = {\varphi}_0$ gives the eigenvalue equation for $\beta$,
\begin{align}\label{eq:eign_val_eq_a}
( 1+ \beta^{n})/2+\beta^{n-1}\xi_1+ \cdots +\beta^2\xi_{n-2}+\beta\xi_{n-1} = 0\,.
\end{align}
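This eigenvalue equation can be tested against direct diagonalization. The sketch below assumes $B=\sum_i X_i$ and takes $e^{i\pi C}$ to act as the reflection $1-2\ket{\bm 0}\bra{\bm 0}$, consistent with Eq.~(\ref{eq:oracle_chi_dis}) at $\gamma=\pi$; apart from the dark state, whose eigenvalue is exactly $-1$, every eigenvalue of $V$ in the symmetric subspace should satisfy the equation:

```python
import numpy as np
from scipy.linalg import expm

n = 6
k = np.arange(n + 1)
j = n / 2; m = j - k
jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), 1)
B = jp + jp.T                                    # B = sum_i X_i (Dicke basis)
P0 = np.zeros((n + 1, n + 1)); P0[0, 0] = 1.0    # |0...0><0...0|
V = expm(-1j*np.pi*B/n) @ (np.eye(n + 1) - 2*P0) # e^{-i pi B/n} e^{i pi C}

xi = np.cos(np.arange(n) * np.pi / n)**n
def p(beta):                                     # LHS of the eigenvalue equation
    return (1 + beta**n)/2 + sum(beta**(n - q)*xi[q] for q in range(1, n))

evals = np.linalg.eigvals(V)
dark = [b for b in evals if abs(b + 1) < 1e-9]   # the dark state |b_->
rest = [b for b in evals if abs(b + 1) >= 1e-9]
assert len(dark) == 1
assert all(abs(p(b)) < 1e-8 for b in rest)
```

Note also that $p(-1) = \sum_k (-1)^k\xi_k = 2n/N \neq 0$, so the dark eigenvalue $-1$ is correctly excluded from the equation.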
Because Eq.~(\ref{eq:eign_val_eq_a}) has only real coefficients, $\beta^*$ is also a solution [this also follows from the symmetry~(\ref{eq:time_reversal})]. For $\beta\simeq -1$, we have
\begin{align}\label{eq:eign_val}
\beta = -\sqrt{1-\delta^2} - i\delta\simeq -1 - i\delta + \delta^2/2,
\end{align}
where $\delta > 0$ is a small real parameter of order $1/\sqrt N$. Putting
Eq.~(\ref{eq:eign_val}) into Eq.~(\ref{eq:eign_val_eq_a}) and keeping only terms up to order $\delta^2$, we have \begin{widetext}
\begin{align}\label{eq:eign_val_eq_b}
0 &= 1 + in\delta/2-n^2\delta^2/4+\sum_{k=1}^{n-1}(-1)^k\Big(1+(n-k) \big[i\delta-(n-k)\delta^2/2\big]\Big)\xi_k + {O}(\delta^3)\nonumber\\
&=\sum_{k=0}^{n-1}(-1)^k \xi_k +i\delta \bigg( \frac{n}{2} + \sum_{k=1}^{n-1}(-1)^k (n-k)\, \xi_k \bigg)-\frac{\delta^2}{2}\bigg( \frac{n^2}{2}+ \sum_{k=1}^{n-1}(-1)^k (n-k)^2\, \xi_k\bigg)+{O}(\delta^3)\,.
\end{align}
\end{widetext}
The coefficient of the term with $i\delta$ in Eq.~(\ref{eq:eign_val_eq_b}) is
\begin{align}\label{eq:imaginary_sum}
\frac{n}{2} + \sum_{k=1}^{n-1}(-1)^k (n-k)\, \xi_k
&= \frac{n}{2} \sum_{k=0}^{n-1}(-1)^k \xi_k = \frac{n^2}{N}\,,
\end{align}
where we have used Eq.~(\ref{eq:odd_sum}); therefore, the purely imaginary term in Eq.~(\ref{eq:eign_val_eq_b}) is of order $\delta^3$ and can be neglected. Comparing the real
parts on both sides of Eq.~(\ref{eq:eign_val_eq_b}), we have
\begin{align}\label{eq:delta2}
\delta^2 \simeq \frac{2\sum_{k=0}^{n-1}(-1)^k \xi_k}{n^2/2+\sum_{k=1}^{n-1}(-1)^k (n-k)^2\, \xi_k}\,.
\end{align}
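The exact cancellation used in Eq.~(\ref{eq:imaginary_sum}), $\frac n2+\sum_{k=1}^{n-1}(-1)^k(n-k)\xi_k = n^2/N$, can be confirmed numerically (it follows from the symmetry $\xi_{n-k}=\xi_k$ for even $n$ together with Eq.~(\ref{eq:odd_sum})):

```python
import numpy as np

for n in (4, 8, 16, 32):                                 # even n
    xi = np.cos(np.arange(n) * np.pi / n)**n
    lhs = n/2 + sum((-1)**q * (n - q) * xi[q] for q in range(1, n))
    assert abs(lhs - n**2 / 2**n) < 1e-12                # equals n^2/N exactly
```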
While the numerator in Eq.~(\ref{eq:delta2}) has already been solved in
Eq.~(\ref{eq:odd_sum}), the denominator is harder to calculate. We write the
denominator as \begin{align}\label{eq:d}
d = n^2/2+\sum_{k=1}^{n-1}(-1)^k (n-k)^2\, \xi_k\,,
\end{align}
and we will solve it later (but remember $d\sim n$). Putting Eqs.~(\ref{eq:odd_sum}) and (\ref{eq:d}) into Eq.~(\ref{eq:delta2}), we have the formal solution
\begin{align}\label{eq:delta_d}
\delta = 2\sqrt{n/d}\, N^{-1/2}\,,
\end{align}
where $d$ is to be determined.
Let ${\varphi}_k^+ $ and ${\varphi}_k^-$ be the real and imaginary parts of the
function ${\varphi}_k$ defined in Eq.~(\ref{eq:unnormalized_chi}); we have
\begin{align}
&{\varphi}_k^+ = \chi_k(\ket{{w}_+})\big/\chi_0(\ket{{w}_+})\,,\\
&{\varphi}_k^- = \chi_k(\ket{{w}_-})\big/\chi_0(\ket{{w}_+})\,,
\end{align}
where we use the identity $\chi_0(\ket{{w}_+}) = \sqrt 2 \, \chi_0(\ket{{w}_\alpha})$.
The normalization factor
$\chi_0(\ket{{w}_+})=\norm{\braket{\bm 0}{{w}_+}}$ determines the overlap and can be calculated by
using Eq.~(\ref{eq:normalization_chi}). Separating the real and imaginary parts
in the expansion~(\ref{eq:unnormalized_eigenstate}), we have \begin{align}
&{\varphi}_k^+ \simeq (-1)^k + 2\sum_{\ell=1}^k (-1)^{k-\ell} \xi_\ell\,,\label{eq:xx_real}\\[-2pt]
&{\varphi}_k^- \simeq i\delta \Big((-1)^k k + 2\sum_{\ell=1}^k (-1)^{k-\ell} (k-\ell) \xi_\ell\Big)\,,\label{eq:xx_imag}
\end{align}
where higher-order terms in $\delta$ are neglected. The $j$th Fourier component of ${\varphi}^+$ is
\begin{align}
\tilde{\varphi}_j^+ &= \frac{2}{n(1+e^{ij\pi/n})} \sum_{k=0}^{n-1} \xi_k \Big(e^{ijk\pi/n} - (-1)^k\Big)\nonumber\\
&\simeq \frac{2}{1+e^{ij\pi/n}}\, \norm{\braket{\bm 0}{{b}_j}}^2 \,,
\end{align}
where $j \in J\equiv \{-n,\ldots,-2,0, 2,\ldots, n\}$. Using the normalization condition~(\ref{eq:normalization_chi}), we have
\begin{align}\label{eq:fidelity_psi_+_a}
\frac{1}{\norm{\braket{\bm 0}{{w}_+}}^2}&\simeq \sum_{j\in J'} \frac{\norm{\tilde{\varphi}_j^+}^2}{ \norm{\braket{\bm 0}{{b}_j}}^2}\simeq \sum_{j\in J'} \frac{2\, \norm{\braket{\bm 0}{{b}_j}}^2}{1+\cos(j\pi/n)}\,,
\end{align}
where $J'=J\backslash\{\pm n\}$ and the exponentially small term proportional to $\norm{\tilde{\varphi}_n^+}^2$ is neglected. For $\norm{j}\ll n$, we have
\begin{align}\label{eq:modify}
\frac{2}{1+\cos(j\pi/n)} \simeq 1 + \pi^2\tau^2\simeq e^{\pi^2\tau^2}\,,
\end{align}
where $\tau \equiv j/2n$. The squared fidelity $\norm{\braket{\bm 0}{{b}_j}}^2$ can also be approximated by a Gaussian for $\tau\ll 1$,
\begin{align}\label{eq:Gaussian}
\norm{\braket{\bm 0}{{b}_j}}^2 &= \frac{n!}{n_+!\,n_-!} \,\frac{1}{2^n}\simeq \frac{2\hspace{0.4pt} e^{-2 n\tau^2}}{\sqrt{2\pi n}} \,,
\end{align}
where $n_\pm = (n\pm j)/2 = n(1/2\pm\tau)$. The term in Eq.~(\ref{eq:modify}) modifies the variance of the Gaussian~(\ref{eq:Gaussian}) by a factor of $2n/(2n-\pi^2)$, and thus we have
\begin{align}\label{eq:ratio_variance}
\sum_j \frac{2\, \norm{\braket{\bm 0}{{b}_j}}^2}{1+\cos(j\pi/n)}\simeq \sqrt{\frac{2n}{2n-\pi^2}}\,,
\end{align}
where we used the condition $\sum_{j\in J'} \,\norm{\braket{\bm 0}{{b}_j}}^2 \simeq 1$.
Putting Eq.~(\ref{eq:ratio_variance}) into Eq.~(\ref{eq:fidelity_psi_+_a}), we have
\begin{align}\label{eq:fidelity_psi_+}
\norm{\braket{\bm 0}{{w}_+}}&\simeq \big(1-\pi^2/2n\big)^{1/4}\,,
\end{align}
which becomes arbitrarily close to $1$ for large $n$; see Fig.~\subref*{fig:w_plus} for a comparison to numerics.
To derive the fidelity $\lvert\braket{{b}_+}{{w}_-}\rvert$, we notice
\begin{align}\label{eq:sum_relation}
{\varphi}_k^- + {\varphi}_{k+1}^- = - i\delta {\varphi}_k^+\,,
\end{align}
which is proportional to the $\chi$ function of $\ket{{w}_+}$. Because $\ket{{w}_+}\simeq e^{-i\pi B/n}\ket{{w}_+}$, Eq.~(\ref{eq:sum_relation}) implies that
\begin{align}\label{eq:w_-_decompo}
\ket{{w}_-} \simeq \braket{{b}_+}{{w}_-}\ket{{b}_+} -\frac{i\delta}{2}\, \ket{{w}_+}\,.
\end{align}
Thus, we can estimate the fidelity
\begin{align}\label{eq:fidelity_psi_-_a}
\norm{\braket{{b}_+}{{w}_-}} \simeq 1- \delta^2/8\,,
\end{align}
which is exponentially close to $1$ ($\delta^2\sim N^{-1}$).
The value of $\delta$, however, is only formally solved in Eq.~(\ref{eq:delta_d}). We still need to determine the value of $d$ defined in Eq.~(\ref{eq:d}). The Fourier component of ${\varphi}^-$ corresponding to $\ket{{b}_+}$ is
\begin{widetext}
\begin{align}
\frac{1}{n}\sum_{k=0}^{n-1} (-1)^k {\varphi}_k^- &= \frac{i\delta}{n} \sum_{k=0}^{n-1} (-1)^k\Big((-1)^k k + 2\sum_{\ell=1}^k (-1)^{k-\ell} (k-\ell) \xi_\ell\Big)\nonumber\\
&= \frac{i\delta}{n} \Big(\frac{1}{2} n (n-1) + 2\sum_{\ell=1}^{n-1} \sum_{k=\ell}^{n-1} (-1)^\ell (k-\ell) \xi_\ell\Big)\nonumber\\
&= \frac{i\delta}{n} \Big(\frac{n^2}{2} +\sum_{\ell=1}^{n-1} (-1)^\ell (n-\ell)^2 \xi_\ell - \frac{n}{2} - \sum_{\ell=1}^{n-1} (-1)^\ell (n-\ell) \xi_\ell\Big) = i\delta \big(d/n - n/N\big)\,,\label{eq:xx_-_B_+}
\end{align}
\end{widetext}
where we used Eqs.~(\ref{eq:imaginary_sum}) and (\ref{eq:d}) in the last step.
By neglecting the higher order term in Eq.~(\ref{eq:xx_-_B_+}), we have
\begin{align}\label{eq:fidelity_psi_-_b}
\begin{split}
\braket{{b}_+}{{w}_-} &\simeq i\delta (d/n)\sqrt{N/2}\: \norm{\braket{\bm 0}{{w}_+}}\\[3pt]
&\simeq i\, \sqrt{2d/n}\, \big(1-\pi^2/2n\big)^{1/4}\,,
\end{split}
\end{align}
where we used Eqs.~(\ref{eq:delta_d}) and (\ref{eq:fidelity_psi_+}). Comparing Eq.~(\ref{eq:fidelity_psi_-_b}) with Eq.~(\ref{eq:fidelity_psi_-_a}), we have
\begin{align}
d\simeq \frac{n}{2}\big(1-\pi^2/2n\big)^{-1/2}\,.
\end{align}
Putting this result into Eq.~(\ref{eq:delta_d}), we have
\begin{align}\label{eq:delta}
\delta \simeq 2\sqrt{2}\, N^{-1/2} \big(1-\pi^2/2n\big)^{1/4}\,.
\end{align}
The argument of $\alpha$ thus takes the form
\begin{align}\label{eq:arg_modified}
\arg(\alpha) \simeq 2\delta \simeq 4\sqrt{2}\, N^{-1/2} \big(1-\pi^2/2n\big)^{1/4}\,,
\end{align}
which conforms with the numerical result in Fig.~\subref*{fig:arg_alpha}. Putting
Eq.~(\ref{eq:delta}) into Eq.~(\ref{eq:fidelity_psi_-_a}), we have the fidelity
\begin{align}\label{eq:fidelity_psi_-_c}
\norm{\braket{{b}_+}{{w}_-}} \simeq 1 - N^{-1}\,,
\end{align}
where we drop the factor $(1-\pi^2/2n)^{1/2}$, because it is of the same order as the approximation made in Eq.~(\ref{eq:w_-_decompo}).
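The closed forms for $\delta$ and the fidelity $\norm{\braket{\bm 0}{{w}_+}}$ can be compared with exact diagonalization at moderate $n$. The sketch uses the same assumptions as above ($B=\sum_i X_i$, $e^{i\pi C}=1-2\ket{\bm 0}\bra{\bm 0}$); the large-$n$ formulas carry $O(1/n)$ corrections, so only loose tolerances are asserted:

```python
import numpy as np
from scipy.linalg import expm

n = 40; N = 2.0**n
k = np.arange(n + 1)
j = n / 2; m = j - k
jp = np.diag(np.sqrt(j*(j + 1) - m[1:]*(m[1:] + 1)), 1)
B = jp + jp.T
P0 = np.zeros((n + 1, n + 1)); P0[0, 0] = 1.0
V = expm(-1j*np.pi*B/n) @ (np.eye(n + 1) - 2*P0)

evals, evecs = np.linalg.eig(V)
idx = [i for i in range(n + 1) if 1e-8 < abs(evals[i] + 1) < 1e-3]
assert len(idx) == 2                      # the pair beta, beta* near -1
delta = abs(evals[idx[0]].imag)
# |<0|w_+>|^2 + |<0|w_->|^2 equals the same sum over the eigenvector pair
fid2 = sum(abs(evecs[0, i])**2 for i in idx)

delta_pred = 2*np.sqrt(2)/np.sqrt(N) * (1 - np.pi**2/(2*n))**0.25
fid_pred = (1 - np.pi**2/(2*n))**0.25     # predicted |<0|w_+>|
assert abs(delta/delta_pred - 1) < 0.1
assert abs(np.sqrt(fid2) - fid_pred) < 0.05
```

The contribution of $\norm{\braket{\bm 0}{{w}_-}}^2 \simeq 2/N$ to `fid2` is exponentially small and does not affect the comparison.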
We calculate the fidelities $\norm{\braket{\bm 0}{{w}_+}}$ and $\lvert\braket{{b}_+}{{w}_-}\rvert$ numerically for $\gamma\neq \pi$ and find that they are always less than the corresponding values at $\gamma=\pi$. The alternating signs in $e^{i\gamma C}$ and $e^{-i\gamma C}$ are important for $\gamma\neq \pi$; the probability of finding the target state almost vanishes when the same sign is used (localized eigenstates).
\section{Check the solution}
\label{sec:check}
Because the success probability of our algorithm is about $1/2$, a simple
majority-vote approach may not be an efficient way to find the marked bit
string with high probability. Here, we describe a method to check
systematically whether the marked bit string has been found.
Suppose that we have found the bit string $\ket{{\bm s}}$ at the output of the
circuit. Apply a $\pi/2$ pulse on an arbitrary qubit, creating an even
superposition of the bit string $\ket{{\bm s}}$ and a flipped bit string
$\ket{{\bm s}'}$. Then apply the unitary $e^{i\pi C}$ to the system; this
step flips the sign of the target bit string. Finally, apply a $-\pi/2$
pulse to the selected qubit and measure in the computational basis. One of the
two bit strings $\ket{{\bm s}}$ and $\ket{{\bm s}'}$ must be the target if the
measurement outcome is the bit string $\ket{{\bm s}'}$; otherwise, neither of the
two bit strings is the target. To distinguish whether $\ket{{\bm s}}$
or $\ket{{\bm s}'}$ is the target, repeat the whole procedure on a different
qubit.
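This check can be simulated directly on statevectors. In the sketch below the qubit chosen for the pulse and the example bit strings are arbitrary, and $e^{i\pi C}$ is taken to flip the sign of the target string; the measurement outcome is deterministic in all three cases, as the text asserts:

```python
import numpy as np

n = 4
def check(s, target, q=0):
    """One round of the verification; returns the measured bit string."""
    psi = np.zeros(2**n, complex); psi[s] = 1.0          # prepare |s>
    s_flip = s ^ (1 << q)                                # s with qubit q flipped
    c = np.cos(np.pi/4); d = -1j*np.sin(np.pi/4)         # pi/2 pulse e^{-i(pi/4)X_q}
    new = np.zeros_like(psi)
    for x in range(2**n):
        new[x] = c*psi[x] + d*psi[x ^ (1 << q)]
    psi = new
    psi[target] *= -1                                    # e^{i pi C}: flip target sign
    new = np.zeros_like(psi)
    for x in range(2**n):                                # -pi/2 pulse e^{+i(pi/4)X_q}
        new[x] = c*psi[x] + np.conj(d)*psi[x ^ (1 << q)]
    return int(np.argmax(np.abs(new)**2))                # deterministic outcome

# Neither s=5 nor s'=4 is the target: the outcome is s itself
assert check(s=5, target=9) == 5
# s is the target: the outcome is the flipped string s'
assert check(s=9, target=9) == 8
# s' is the target: again the outcome is s'
assert check(s=8, target=9) == 9
```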
\section{Conclusion}
\label{sec:conclusion}
Inspired by the QAOA proposed by Farhi {\it et al.}~\cite{farhi_quantum_2014,
farhi_quantum_2014-1}, we presented a circuit-based quantum algorithm to search for a needle in a
haystack. We showed that Grover's diffusion operator can be replaced by the
transverse field, which requires only single-qubit gates, without
sacrificing the
quadratic quantum speedup. As single-qubit gates can usually be carried out
much more efficiently than multi-qubit gates in practice, our algorithm offers
a mild implementation advantage for Grover's unstructured search and its
variants.
This circuit-model approach
can take advantage of fault-tolerant
error-correcting schemes; it is not known how to achieve fault tolerance
in a purely adiabatic model, and it may be
impossible~\cite{young_error_2013}.
We construct a simple periodic sequence of gates that
induces a closed transition between two states which have large overlaps with
the initial and target states, respectively. The query complexity of our
algorithm is $T(n) \simeq (\pi/2\sqrt 2\,)\, 2^{n/2}$, differing from the
optimal value proved in~\cite{zalka_grovers_1999} by only a constant factor of
$\sqrt
2$.
Our algorithm provides a QAOA circuit that exhibits
a quantum advantage at an intermediate number of iterations $p$,
$p \gg 1$, and the algorithm is not derived from Trotterization of an
AQO algorithm, demonstrating the breadth of the QAOA framework.
It remains an open question whether QAOA circuits provide a quantum
advantage for approximate optimization.
It is generally hard to find the optimal parameters in the QAOA when the number
of iterations of the algorithm is large. Our work demonstrates that even simple
periodic dynamics generated by the transverse field and the problem Hamiltonian
can induce interesting transitions between a problem-independent state and an
approximate target state. It offers a strategy to drastically simplify the
optimization of the parameters in QAOA by restricting them to be periodic. For
Grover's unstructured search, such simplification yields a near-optimal
solution
to the problem. It will be interesting to see how well this strategy works
for more general cases.
Our algorithm can be understood intuitively using a spin-coherent-state
representation, where the weights of the basis states evolve in a simple way
under the unitaries generated by the driver and the oracle. We also use a
phase-space representation based on spin-coherent states to analyze the
composite unitary in our algorithm.
The eigenstates (up to normalization
factors) of the composite unitary take explicit forms in this representation,
and the eigenvalue equation can be readily derived using the periodic boundary
condition. This enables us to solve for the eigenstates and eigenvalues to
exponential precision in $n$. It is worth exploring the extent to which such a
representation is effective for more general quantum heuristic algorithms.
\begin{acknowledgments}
The authors thank Salvatore Mandr\`{a} and Davide Venturelli for enlightening and helpful
discussions. The authors would like to acknowledge support from the NASA
Advanced Exploration Systems program and NASA Ames Research Center. This work
was also supported in part by the AFRL Information Directorate under Grant No.
F4HBKC4162G001 and the Office of the Director of National Intelligence (ODNI). The
views and conclusions contained herein are those of the authors and should not
be interpreted as necessarily representing the official policies or
endorsements, either expressed or implied, of ODNI, AFRL, or the U.S.
Government. The U.S. Government is authorized to reproduce and distribute
reprints for Governmental purposes notwithstanding any copyright annotation
thereon.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Remote sensing of exoplanet atmospheres is a rapidly expanding field, having progressed from the first detections of a molecular species in an atmosphere around another star \citep{barman07,tinetti07} to beginning to characterize complex temperature structures, clouds, and spatial heterogeneity in just over ten years. Retrieval methods -- iteratively comparing synthetic to observed spectra in order to infer the most likely atmospheric state -- were historically applied to Solar System atmospheres, and with some adaptation are now being used to analyze exoplanet spectra. Typically, retrieval codes couple a simple, parameterised, 1D radiative transfer model to a retrieval algorithm. The model parameters form the atmospheric state vector, and the output from the retrieval algorithm is a posterior probability distribution for each element in the state vector, including correlations between the model parameters.
This review paper, rather than simply presenting an overview of the current state of the art, instead discusses what we see as the major challenges facing exoplanet retrievals over the next few years, and thus the directions in which we expect development to be most rapid. In general, all of these challenges can be summarized as resolving the tension between model realism (with risks of overfitting or allowing informative priors to drive solutions) and model simplicity (with the risk that the model may be inadequate to accurately reproduce the data, or may reproduce them for the wrong reasons, and may be very far from the truth). For each challenge, we present the current status, and then provide our recommendations for future routes of exploration and improvement. The key areas which we have identified are listed below:
\begin{enumerate}
\item Inferring chemistry from measured molecular abundances
\item Representation of temperature structure
\item Representation of clouds and aerosols
\item Including 3D effects in 1D models
\end{enumerate}
These areas will be dealt with in turn from Section~\ref{chemistry}.
\subsection{Retrieval algorithms}
\label{algorithms}
A range of algorithms and retrieval codes have been applied to exoplanet retrievals, with each approach having different benefits. Figure~\ref{retrieval_schematic} shows the basic structure of a retrieval code. The earliest exoplanet retrievals used either a simple grid search (e.g. \citealt{madhu09}) or Optimal Estimation \citep{rodg00,irwin08}. Grid searches are simple to set up, but can be inefficient (since they may involve a detailed exploration of parameter space far from the solution) and results will be highly restricted by the parameter values included within the grid. Optimal Estimation is a matrix inversion method that assumes Gaussian statistics. A Levenberg-Marquardt scheme is used to iteratively solve the inverse problem and works to minimize a cost function, which assesses both the difference between the model output and the measured spectrum and also the distance of the atmospheric state vector from a Gaussian prior state vector.
Whilst Optimal Estimation is fast and efficient, its imposition of Gaussianity means that it is unable to a) effectively explore multimodal parameter spaces and b) explore a very broad parameter space, as the parameter ranges are restricted by the necessity of including a Gaussian prior constraint.
Markov-chain Monte Carlo (MCMC; see e.g. \citealt{line13a}) and nested sampling algorithms have more recently become the preferred tools within the community. These Bayesian approaches both allow a more comprehensive exploration of the parameter space, as they do not restrict priors or posteriors to obeying Gaussian statistics. Of these approaches, the \texttt{MultiNest} \citep{feroz08,feroz09,feroz13} implementation of the nested sampling method \citep{skilling06} has proved especially popular, as this provides a relatively efficient exploration of potentially multi-modal posteriors that effectively captures multiple modes and complex degeneracies.
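The sampling loop shared by these approaches can be sketched with a toy example: a parameterised forward model, a likelihood comparing it to the observed spectrum, and a sampler exploring the posterior. Here a simple Metropolis-Hastings random walk stands in for the more sophisticated MCMC or nested-sampling machinery, and the one-parameter Gaussian ``spectral line'' is purely illustrative rather than a real radiative-transfer model:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength = np.linspace(0, 1, 50)

def forward_model(theta):
    # Toy stand-in for radiative transfer: one parameter scales a Gaussian line
    return theta[0] * np.exp(-((wavelength - 0.5) / 0.1)**2)

noise = 0.02
y_obs = forward_model([0.7]) + rng.normal(0, noise, wavelength.size)

def log_likelihood(theta):
    return -0.5 * np.sum(((y_obs - forward_model(theta)) / noise)**2)

theta, logl = np.array([0.3]), -np.inf
chain = []
for _ in range(5000):                              # Metropolis-Hastings walk
    prop = theta + rng.normal(0, 0.05, theta.size)
    logl_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < logl_prop - logl:   # accept/reject step
        theta, logl = prop, logl_prop
    chain.append(theta[0])
posterior = np.array(chain[1000:])                 # discard burn-in
```

The retained samples approximate the posterior distribution of the model parameter; in a real retrieval the forward model would be the parameterised 1D radiative-transfer code and the state vector would contain many parameters.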
With standard retrieval methods, there is always tension between achieving physical and chemical realism versus completing the calculation within a reasonable period of time. In practice, this means that the forward models converting chemical abundances and opacities into transit radii and fluxes need to be simplified to enable rapid computation. Recently, machine learning approaches have been adapted to performing atmospheric retrieval, which allows the burden of computing synthetic spectra to be shifted offline. A grid of models may be computed beforehand and used as a training set for the supervised machine learning method of choice. This approach has been demonstrated using regression trees and random forests \citep{marquez18}. It allows model grids from different research groups to be used for atmospheric retrieval, even if the computer codes used to generate these models are proprietary and non-public. Nevertheless, paying attention to model assumptions remains a key part of the process. The unsupervised machine learning method of deep convolutional generative adversarial networks has also been implemented for atmospheric retrieval \citep{zingales18}.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{retrieval_schematic.png}
\caption{Schematic showing the basic structure of a retrieval algorithm.}
\label{retrieval_schematic}
\end{figure}
\subsection{Parameterised 1D models}
\label{1Dmodels}
A general requirement for the majority of retrieval schemes is for the forward model computation to be fast. This is especially necessary for Monte Carlo and nested sampling methods, as these typically require millions of individual forward models to be computed to adequately explore the parameter space. Therefore, forward models must be relatively simple; instead of containing detailed physics, models are usually parameterised, and are generally also one-dimensional. Parameterisation must be approached with care, as simple models uncoupled from physical assumptions may be prone to converging on unrealistic solutions (e.g. atmospheres with implausible chemistry, or temperature-pressure profiles that would be unstable). However, this potential disadvantage can also be a strength in situations where our understanding of the underlying physics and chemistry is still relatively immature, as it can prevent incorrect prior assumptions from driving the solution.
The parameterised approach is especially useful in the context of exoplanet retrievals because the information content of data is continuously changing, and the complexity of parameterised models can very easily be tuned. For example, the early exoplanet retrieval model of \cite{madhu09} contained a six-parameter temperature-pressure profile, which effectively divided the atmosphere into three layers and described the temperature gradient within each layer. They also retrieved altitude-independent abundances of H$_2$O, CO$_2$, CO, CH$_4$ and NH$_3$, which were the five species they considered to be most likely to be active in the infrared. They found that they were unable to simultaneously fit the data from different instruments with the same model. A subsequent analysis by \cite{lee12} allowed the temperature to vary freely and smoothly as a function of pressure, which allowed a reasonable fit to be achieved to all datasets, but clearly included greater potential for model degeneracy due to the increased number of parameters. \cite{lee12} present correlations between the temperature-pressure profile and the abundances of the molecular species, demonstrating the extent of this degeneracy.
Parameterisation has also evolved in modelling of primary transit spectra. The different geometries of primary and secondary transit observations mean that each is sensitive to different aspects of the atmospheric state, and so different parameters are included depending on the type of observation.
The transit depth in primary transit is given by
\begin{equation}
\mathrm{\Delta}_{\lambda} = \Big(\frac{R_{\mathrm{p,\lambda}}}{R_{\mathrm{s}}}\Big)^2
\end{equation}
where $R_{\mathrm{p,\lambda}}$ is the radius of the planet and $R_{\mathrm{s}}$ the radius of the star. A transit spectrum is the variation in transit depth as a function of wavelength, which results from the change in atmospheric opacity due to the presence of absorbing gases and aerosols (Figure~\ref{transit_fig}).
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{review_paper_transit_figure.png}
\caption{Sketches showing a transit lightcurve and the corresponding transit spectrum.}
\label{transit_fig}
\end{figure}
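As a minimal numerical sketch of this relation (with hypothetical, roughly hot-Jupiter-like radii, not values from any particular study), the change in effective planet radius across an absorption band translates into a feature amplitude in the transit depth:

```python
# Illustrative sketch (hypothetical values): wavelength-dependent transit
# depth from the planet-to-star radius ratio.

def transit_depth(r_planet, r_star):
    """Transit depth (R_p / R_s)^2 as a dimensionless fraction."""
    return (r_planet / r_star) ** 2

R_SUN = 6.957e8  # m
R_JUP = 7.149e7  # m

# a hot Jupiter whose effective radius grows slightly inside a gas band
depth_continuum = transit_depth(1.20 * R_JUP, R_SUN)
depth_in_band = transit_depth(1.22 * R_JUP, R_SUN)

print(f"continuum depth:   {depth_continuum:.5f}")
print(f"in-band depth:     {depth_in_band:.5f}")
print(f"feature amplitude: {(depth_in_band - depth_continuum) * 1e6:.0f} ppm")
```

A change of a few per cent in effective radius yields a feature of a few hundred parts per million, which sets the precision requirement for the photometry.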
By contrast, the signal in secondary eclipse is obtained by measuring the difference between the flux immediately before and after the eclipse, when the dayside of the planet is visible, and the flux of the star alone during the eclipse.
Primary transit observations do not probe the deeper regions of the atmosphere, as the atmosphere becomes opaque to radiation passing tangentially through it at lower pressures than to radiation emerging close to nadir. Transit spectra also solely measure light from the star that has passed through the atmosphere rather than thermal emission from the planet itself; therefore, primary transit spectra are less sensitive than secondary eclipse spectra to the temperature-pressure profile. Retrievals covering only a small spectral range in primary transit have therefore generally assumed an isothermal temperature-pressure profile. By contrast, primary transit spectra are extremely sensitive to the atmospheric scale height $H$, as it is the physical thickness of the atmosphere that determines the amplitude of the spectral features in a primary transit observation.
The pressure $p(z)$ of an atmosphere in hydrostatic equilibrium, assuming a constant scale height, may be written as:
\begin{equation}
p(z) = p(0)\mathrm{e}^{-z/H}
\end{equation}
where $p(0)$ is the surface pressure and $H$ is the atmospheric scale height, given by
\begin{equation}
H = \frac{kT}{{\mu}g}
\end{equation}
where $k$ is the Boltzmann constant, $T$ is temperature, $\mu$ is the mean molecular mass, and the local gravitational acceleration is
\begin{equation}
g = \frac{GM_P}{(R_P+z)^2}.
\end{equation}
Here, $G$ is the universal gravitational constant, $M_P$ is the mass of the planet, $R_P$ is the radius of the planet and $z$ is the altitude above the surface. The dependence of the scale height on $g$ means that there is significant sensitivity to the absolute radius of the planet (as opposed to the radius relative to that of the star). This emphasises the requirement for precise and accurate radii for planet host stars.
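The relations above can be evaluated with a short sketch; the constants are standard, and the planet parameters are hypothetical, roughly hot-Jupiter-like values chosen purely for illustration:

```python
# Sketch of the scale-height relations above, with standard physical
# constants and hypothetical hot-Jupiter-like planet parameters.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AMU = 1.660539e-27   # atomic mass unit, kg
M_JUP, R_JUP = 1.898e27, 7.149e7  # kg, m

def gravity(m_planet, r_planet, z=0.0):
    """Local gravitational acceleration g = G M_P / (R_P + z)^2."""
    return G * m_planet / (r_planet + z) ** 2

def scale_height(temp, mu_amu, g):
    """Scale height H = kT / (mu g), with mu in atomic mass units."""
    return K_B * temp / (mu_amu * AMU * g)

g0 = gravity(1.13 * M_JUP, 1.14 * R_JUP)
H = scale_height(1200.0, 2.3, g0)  # H2/He-dominated atmosphere: mu ~ 2.3
print(f"g = {g0:.1f} m s^-2, H = {H / 1e3:.0f} km")

# for an isothermal layer, pressure falls by 1/e over one scale height
p0 = 1.0e5  # Pa
print(f"p(H) = {p0 * math.exp(-1):.3e} Pa")
```

The dependence of $H$ on both $T$ and $\mu$ is the root of the degeneracies discussed below: halving the mean molecular mass has the same effect on $H$ as doubling the temperature.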
This in turn impacts what is variously referred to as the normalisation degeneracy or the baseline issue \citep{benneke12,griffith14,heng17}; because the planetary radius quoted in the literature is derived from the white light transit, the pressure that this represents is dependent on the atmospheric properties. Either the pressure at some given radius, or the radius at some given pressure, must therefore also be a free parameter in the retrieval. Because the scale height is then proportional to both temperature and the square of the radius, these two quantities are degenerate and are inversely correlated in retrievals. \cite{fisher18} demonstrated that the normalisation degeneracy may be partially broken using low-resolution transmission spectra measured by \textit{Hubble}-WFC3 alone, because information on temperature and chemical abundances is encoded in the shape of the transmission spectrum. However, this degeneracy and others are more easily broken by including broad wavelength coverage data, as discussed in Section~\ref{sota_chem}.
Primary transit spectra are also affected by the presence of clouds. The effects can be dramatic, to the point of cloud obscuring all molecular and atomic features in the spectrum (e.g. \citealt{kreidberg14}). In less extreme cases, the amplitudes of gas absorption features are reduced in the presence of cloud because the atmosphere becomes opaque below the cloud top, so only the centres of molecular bands are observed. This effect can be difficult to distinguish from either a) low abundances of the molecular species in question or b) a high mean molecular mass (and therefore low scale height) atmosphere.
Simple 1D forward models for retrieval codes need to include parameterisations of these effects. Cloud is often treated as a completely grey, opaque layer with a variable top pressure (e.g. \citealt{kreidberg14}). This approach has the advantage of introducing only a single parameter, but is also not very representative of a real cloud, which is likely to have a wavelength-dependent optical depth and to be partially transparent at some wavelengths. We discuss cloud parameterisation in more detail in Section~\ref{sota_clouds}. The mean molecular mass may be specified as a separate free parameter, or may be calculated after the fact based on the retrieved abundances of the modelled gases; this approach is computationally simpler, but risks misinterpretation should large abundances of a spectrally inactive, heavy gas such as N$_2$ be present. It also relies on a complete range of molecular species being included in the model.
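The second approach to the mean molecular mass, and the N$_2$ pitfall just mentioned, can be sketched as follows (all abundances here are hypothetical):

```python
# Sketch (hypothetical abundances): mean molecular mass computed from
# retrieved volume mixing ratios, and the bias introduced by omitting a
# spectrally inactive heavy gas such as N2.

MASSES = {"H2": 2.016, "He": 4.003, "H2O": 18.015, "CH4": 16.043,
          "N2": 28.014}  # amu

def mean_molecular_mass(vmr):
    """Abundance-weighted mean mass in amu (vmr need not be normalised)."""
    total = sum(vmr.values())
    return sum(MASSES[gas] * x for gas, x in vmr.items()) / total

# (a) mu from retrieved trace gases, with an assumed H2/He fill
retrieved = {"H2O": 5e-4, "CH4": 1e-5}
fill = 1.0 - sum(retrieved.values())
vmr_a = {**retrieved, "H2": 0.85 * fill, "He": 0.15 * fill}

# (b) the same atmosphere but with 10% N2 actually present: omitting it
# from the model would bias the inferred mean molecular mass low
vmr_b = dict(vmr_a, N2=0.10)
print(f"mu without N2: {mean_molecular_mass(vmr_a):.2f} amu")
print(f"mu with N2:    {mean_molecular_mass(vmr_b):.2f} amu")
```

Even a modest amount of an unmodelled heavy gas changes $\mu$, and therefore the scale height, by a factor that would significantly alter the interpretation of feature amplitudes.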
\section{Chemistry}
\label{chemistry}
In this section, we discuss the challenges of inferring information about chemistry, and thence planetary formation and origin scenarios, from the retrieved abundances of individual gases. We begin by summarising the current state of the art. Whilst there is a wealth of literature available dealing with detailed studies of individual planets, here we find it is more instructive to focus on works that analyse multiple planets, as this provides a more general indication of the degree to which atmospheric properties can be constrained with currently available data.
\subsection{State of the art: chemistry}
\label{sota_chem}
Hot Jupiters observed in primary transit are ideal targets for molecular species detection and constraint, as these planets have large scale heights and therefore large feature amplitudes in primary transit (in the absence of clouds). Several comparative retrieval studies of hot Jupiters with \textit{Hubble Space Telescope} and \textit{Spitzer} observations have been recently performed, following on from the presentation by \cite{sing16} of near-infrared spectra of ten hot Jupiters with consistent data reduction.
\textit{Hubble} Wide Field Camera 3 (WFC3) data are now available for several tens of exoplanets. Many of these also have photometry from the \textit{Spitzer} InfraRed Array Camera (IRAC) and spectra from the \textit{Hubble} Space Telescope Imaging Spectrograph (STIS). As WFC3 spectra are the most widely available, studies such as \cite{tsiaras18} and \cite{fisher18} focus on this dataset only. Because WFC3 has a relatively narrow wavelength range, between 0.8 and 1.6 $\upmu$m, only a subset of interesting molecular species can be constrained. The 1.4 $\upmu$m H$_2$O band dominates the spectral shape in this region, although features from TiO, VO and FeH may be discernible at the shorter wavelength end if present, along with CH$_4$, HCN and NH$_3$ longwards of 1 $\upmu$m.
\cite{tsiaras18} use a 10-parameter model to study 30 hot and warm gaseous planets, including volume mixing ratios of H$_2$O, CO$_2$, CO, CH$_4$ and NH$_3$; isothermal temperature; planet radius; and three cloud parameters (discussed further in Section~\ref{sota_clouds}). For planets hotter than 1400 K they also include TiO and VO abundances. They define an atmospheric detectability index (ADI) which is the Bayes factor between the nominal atmospheric model and a straight line (featureless) spectrum, and they class any planet with ADI $>$3 as having a detectable atmosphere. They find that 16 of the 30 planets studied fulfil this criterion; H$_2$O is found to be present on all of these planets, with abundances typically constrained to $\pm$ an order of magnitude. No constraints are obtained for CO$_2$, CO, CH$_4$ or NH$_3$ on any planet, but for two (WASP-76b and WASP-121b) there is evidence that TiO and VO are present; a subsequent analysis including STIS data for WASP-121b by \cite{evans18} corroborates the presence of VO but not of TiO.
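The decision rule behind such a detectability index can be sketched as follows, assuming the Bayes factor is expressed logarithmically, as is conventional for nested-sampling evidences; the evidence values below are purely hypothetical:

```python
# Hedged sketch: a positively-defined (log-)Bayes factor between an
# atmospheric model and a featureless flat-line model, in the spirit of
# the ADI described above. The log-evidence values are hypothetical.

def detectability_index(log_evidence_atm, log_evidence_flat):
    """Positively-defined log Bayes factor; larger favours an atmosphere."""
    return max(log_evidence_atm - log_evidence_flat, 0.0)

ln_z_atm, ln_z_flat = -120.4, -129.1  # hypothetical nested-sampling ln Z
adi = detectability_index(ln_z_atm, ln_z_flat)
verdict = "detectable" if adi > 3 else "not detectable"
print(f"ADI = {adi:.1f} -> atmosphere {verdict}")
```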
\cite{fisher18} examine a similar dataset of 38 WFC3 transmission spectra, although their analysis extends to smaller and temperate planets, such as the warm mini-Neptune GJ 1214b and the likely rocky Earth-sized planets TRAPPIST 1d--g. Unlike \cite{tsiaras18}, they only consider volume mixing ratios of H$_2$O, NH$_3$ and HCN in their model. They include a slightly more complex cloud parameterisation (see Section~\ref{sota_clouds}) and allow for a non-isothermal temperature profile. They retrieve a reference pressure rather than a reference radius for the planet. \cite{fisher18} find no evidence that the region of the atmosphere probed during transit deviates from an isothermal profile, and they conclude that most of these spectra may be explained by an isothermal transit chord containing only water and grey clouds.
Two further studies, \cite{barstow17} and \cite{pinhas19}, consider a smaller number of planets but take into account data from \textit{Hubble}/STIS and \textit{Spitzer}/IRAC. A broader wavelength range allows degeneracies between cloud properties and gas abundances to be broken, but this coverage is not available for as many planets, and the inclusion of spectral segments obtained at different times introduces the issue of stitching together non-contemporaneous spectra that may have been affected by instrumental and astrophysical systematics in different ways. For this reason, the datasets used are those provided by \cite{sing16}, in which spectra were consistently reduced in an attempt to minimise this issue.
\cite{barstow17} uses a hybrid approach, combining the fast but prior-restricted optimal estimation retrieval method with a grid search to ensure exploration of a wide parameter space. Gases included in the retrieval are H$_2$O, CO$_2$, CO and CH$_4$, but there is no evidence for the presence of any gas except H$_2$O. Constraints on H$_2$O abundance are obtained for all planets except WASP-12b, which has poor quality WFC3 data in the \cite{sing16} paper, and WASP-6b and WASP-39b, for which no WFC3 data were available at the time. H$_2$O abundances are constrained only to within an order of magnitude, but show a clear trend towards subsolar abundances. This trend was also found by \cite{pinhas19}, who performed a nested sampling retrieval of the same dataset; \cite{pinhas19} used new WFC3 data for WASP-12b and WASP-39b, which allowed constraints on H$_2$O abundance for these planets also.
Although \cite{barstow17} and \cite{pinhas19} use different cloud parameterisations, the H$_2$O abundance results are consistent with each other where the same data are used. The differing results for the cloud properties are discussed further in Section~\ref{sota_clouds}.
In Figure~\ref{h2o_comp}, we present a comparison of the retrieved H$_2$O abundances for each of the studies described above. The values shown for \cite{barstow17} are taken from the range of values from the best-fitting models for each planet; the central value shown is just the average of the minimum and maximum. All other values are obtained directly from the marginalised retrieval solution in each case. In general, retrievals accounting for \textit{Hubble}/STIS and \textit{Spitzer}/IRAC data converge on lower H$_2$O abundances, whereas solutions from just \textit{Hubble}/WFC3 have higher H$_2$O abundances. \cite{pinhas19} conclude that H$_2$O abundances are generally subsolar. Error-weighted averages are shown, calculated over all available planets except for WASP-6b, for which no WFC3 data are available. The very low H$_2$O volume mixing ratio retrieved for WASP-6b from \cite{pinhas19} is likely to be a result of a substantial drop in transit depth between the STIS spectrum and the IRAC points in the infrared, which forces a scenario in which the spectrum is characterised by opaque haze and low gas abundances. Averages between \cite{barstow17} and \cite{fisher18} differ by more than two orders of magnitude, indicating that the inclusion of STIS and IRAC data is influential on the solution. The difference between the retrieval results with and without STIS and IRAC is most apparent for HD 189733b and HD 209458b.
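Error-weighted averages of this kind follow from standard inverse-variance weighting; the abundance values in the sketch below are hypothetical stand-ins, not the retrieved values from the studies discussed:

```python
# Sketch of inverse-variance (error-weighted) averaging of retrieved
# log10 H2O volume mixing ratios. Values are hypothetical.

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# hypothetical retrieved log10 H2O volume mixing ratios for three planets
log_vmr = [-3.3, -4.1, -5.0]
sigma = [0.6, 0.9, 1.2]
mean, err = weighted_mean(log_vmr, sigma)
print(f"weighted mean log10(VMR) = {mean:.2f} +/- {err:.2f}")
```

The most precisely constrained planets dominate such an average, which is worth bearing in mind when comparing population-level numbers between studies with different target lists.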
The likely reason for these differences when broader wavelength coverage data are added is that these data provide more constraints on cloud characteristics than WFC3 does by itself; muted H$_2$O features can either be the result of a low abundance of H$_2$O, or the presence of cloud. The detection of absorption features due to multiple gases can also break degeneracies between temperature and gas abundance, and low-amplitude features can also be a result of low temperatures. Following this logic, we expect to see substantial improvements with the launch of \textit{JWST}, which will provide extremely broad wavelength coverage (although it cannot cover the full spectral range simultaneously).
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{H2O_comparison.png}
\caption{A comparison of H$_2$O volume mixing ratios retrieved using four different retrieval algorithms. \cite{pinhas19} and \cite{barstow17} use spectra combining \textit{Hubble}/STIS and WFC3, and \textit{Spitzer}/IRAC, whereas \cite{fisher18} and \cite{tsiaras18} use only \textit{Hubble}/WFC3. The dashed lines represent the error-weighted average abundances, excluding WASP-6b for which no WFC3 data are available.}
\label{h2o_comp}
\end{figure}
Primary transit observations are generally preferred for obtaining constraints on molecular species abundance, but \cite{line14} completed a comparative study of 9 hot Jupiters in emission and present retrieval results for molecular species. The spectral coverage and resolving power are extremely variable across the 9 objects, with HD 189733b having data from \textit{Hubble}/Near Infrared Camera and Multi-Object Spectrograph (NICMOS), \textit{Spitzer}/IRAC and \textit{Spitzer}/InfraRed Spectrograph (IRS), whereas the majority are restricted to photometric observations only. Good constraints on molecular abundances (beyond upper and lower limits) are generally only obtained for cases with spectroscopic data. In this case, volume mixing ratios for H$_2$O, CO$_2$ and CH$_4$ are constrained to within an order of magnitude for HD 189733b, and H$_2$O is similarly constrained for TRES-3b, but no further strong constraints are obtained for any of the other planets in the sample.
Subsequent publications looking at single planets in emission have obtained some constraints on molecular abundances. \cite{stevenson14} analyse the dayside spectrum of ultra-hot Jupiter WASP-12b. They test oxygen-rich (C:O $\sim$ 0.5) and carbon-rich (C:O $\gtrsim$ 1.0) atmospheric models, and find that the carbon-rich model is preferred, although their best-fit solution has what the authors consider to be implausibly high abundances of CH$_4$ and CO$_2$, and very low abundances of H$_2$O. An analysis of the same dataset by \cite{oreshenko17} shows that the solution is highly dependent on prior assumptions made about the chemistry. \cite{heng16} point out that it is nearly impossible to have CO$_2$ be more abundant than CO in H$_2$-dominated atmospheres unless the metallicity exceeds solar by about 3 orders of magnitude, and this constraint should be used to rule out chemically implausible retrieval solutions. Some evidence for the presence of TiO and VO has also been reported from secondary eclipse observations, of WASP-33b (by \citealt{haynes15}) and WASP-121b (VO only, by \citealt{evans17}).
Molecular abundance information from secondary eclipse spectra lags behind that available from transits, as secondary eclipse contrast improves at wavelengths beyond the reach of \textit{Hubble}, and the lack of cryogenic cooling for \textit{Spitzer} means that currently precise secondary eclipse spectra are hard to come by. This situation is expected to improve enormously once \textit{JWST} has launched. Despite significant advances in spectral quality for both primary transit and secondary eclipse over the last decade, precise abundance constraints are only reliably available for H$_2$O, and even this is not universally possible. The main barrier to molecular species constraint is the typically narrow wavelength range accessible for most planets; wavelengths beyond the red end of the \textit{Hubble}/WFC3 G141 grism ($\gtrsim$ 1.6 $\upmu$m) are required to constrain most molecular species apart from H$_2$O and metal oxides/hydrides, and spectral data in this range are currently unavailable. This situation will be vastly improved once \textit{JWST} has launched, as it will improve signal-to-noise and resolving power by at least a factor of 10, and push spectral coverage further into the infrared. Several predictive studies exist that indicate \textit{JWST} spectra will provide excellent opportunities for retrieval constraints on molecular abundances from both primary transit and secondary eclipse spectra of hot Jupiters (e.g. \citealt{barstow15,greene16}) and also allow the characterization of smaller, terrestrial worlds (e.g. \citealt{barstow16}, \citealt{krissansen-totton18}).
\subsection{Recovery of underlying chemical trends}
\label{chemtrends}
A key part of the planetary formation/evolution puzzle is the bulk C:O ratio of a planet. It has been postulated that this is an indicator of where in the disc a planet has formed \citep{oberg11} as the location of the planet relative to the snowlines could affect the composition of the accreted material. \cite{oreshenko17} attempted this exercise for WASP-12b using an emission spectrum constructed from \textit{Hubble}-WFC3 and \textit{Spitzer}-IRAC, and suggested that WASP-12b experienced disc-free migration during its formation history. Determining the bulk C:O ratio from spectroscopy has already been attempted in exoplanet retrievals (e.g. \citealt{line14,kreidberg15}), although so far this is hampered by a lack of access to regions of the spectrum containing features of carbon species. Observations by \textit{JWST} will alleviate this aspect of the problem, but the question remains to what degree of precision underlying chemical trends such as the C:O ratio can be recovered. This is particularly important in the context of future missions such as \textit{ARIEL}, which aims to provide the first exoplanet atmosphere population study.
\cite{kreidberg15} compare retrievals with free chemistry (where each gas is retrieved individually) and retrievals of metallicity and C:O ratio under the assumption of equilibrium chemistry for the \textit{Hubble}/WFC3 spectrum of WASP-12b. The results for each case are in agreement in terms of the retrieved temperature and H$_2$O abundance, where H$_2$O is the only gas that can be constrained. Based on the assumptions within the chemical equilibrium model, \cite{kreidberg15} reject a carbon-rich atmosphere scenario at $>3\sigma$ confidence, as the retrieved H$_2$O abundance is higher than predicted for a carbon-rich model. However, this result is dependent on the assumptions within the chemical model used, so is somewhat less agnostic than a free-chemistry retrieval would be; there is a trade off between obtaining a tighter constraint and relying on a potentially flawed chemical model.
The only way to reliably demonstrate recoverability of underlying chemical trends is to conduct blind tests of retrieval algorithms on synthetic observations with known chemistry. There are two distinct facets to this challenge: 1) can the correct atmospheric C:O ratio be recovered for the constituents present within the observable atmosphere of the planet? and 2) can the correct planet bulk C:O ratio be recovered from the atmospheric C:O? The first issue simply relies on the ability of a retrieval algorithm to accurately determine the abundances of molecular and atomic species within a planet's atmosphere, whereas the second encompasses scenarios in which the bulk planet chemistry is not reflected in the molecular make-up of the atmosphere, for example because substantial amounts of some elements are present in the form of clouds deep in the atmosphere. An illustration of this difficulty is the challenge of determining the H$_2$O volume mixing ratio in Jupiter's atmosphere; see e.g. \cite{li20}. Simple tests can be performed to answer question 1) with 1D forward models containing some parameterised chemistry, but for question 2) more complex models following through from planet formation to the eventual atmospheric composition will be required.
In the short term, studies testing the ability to accurately recover chemical trends in atmospheric composition should be undertaken. Efforts in this direction are already underway in preparation for the \textit{ARIEL} mission, but similar studies are required for other datasets as the information content of spectra is highly dependent on the precise details of resolving power and wavelength coverage.
\textbf{Recommended action: conduct retrievals of simulated datasets with known atmospheric chemistry, for a range of planetary temperatures and metallicities, as observed by a variety of instruments. This will allow us to determine observational requirements for precise constraints on C:O ratio, and other trends of interest e.g. N:O ratio.}
\section{Temperature structure}
\label{temp}
Whilst detailed information about temperature structure is difficult to obtain from primary transit observations due to the relatively narrow pressure range that is probed, temperature-pressure profiles have been retrieved from secondary eclipse and phase curve spectra. Whilst very broad spectral coverage, such as that available for HD 189733b (e.g. \citealt{lee12,line14}), probes a sufficient range of atmospheric pressures to allow a smoothed, free retrieval of temperature as a function of pressure, the majority of secondary eclipse spectra cover a smaller range and parameterisation is necessary to extrapolate the atmospheric structure beyond the region that is directly constrained.
\subsection{State of the art: temperature-pressure profiles}
\label{sota_temp}
The simplest approach to retrieving temperature is to make the crude assumption that the temperature profile is isothermal. This has often been the approach taken when analysing primary transit spectra; however, \cite{rocchetto16} show in their synthetic retrieval study for the \textit{James Webb Space Telescope} that this assumption can result in errors of more than an order of magnitude in the retrieved gas abundances for some cases. The isothermal approximation is therefore clearly inadequate, and approaches that capture the broad shape of the temperature structure must be explored.
There are two parameterisation approaches favoured by retrieval groups, the simpler of the two being the Guillot profile \citep{guillot10} which has 5 free parameters and was first implemented by \cite{line12}, and the other being the approach advocated by \citet{madhu09}, which we will call the Madhusudhan profile, and has 6 free parameters. The original Guillot profile assumes that no scattering occurs in the atmosphere; \cite{heng12} and \cite{heng14} respectively generalised the Guillot profile to include isotropic scattering (by either aerosols or atoms and molecules), and non-isotropic scattering (large particles).
The Guillot profile is based on a three-channel approximation for an atmosphere in thermal equilibrium and is described by the following equation,
\begin{equation}
T^4(\tau) = \frac{3T_{\mathrm{int}}^4}{4}\left(\frac{2}{3}+\tau\right) + \frac{3T_{\mathrm{irr}}^4}{4}(1-\alpha)\xi_{\gamma_{1}}(\tau) + \frac{3T_{\mathrm{irr}}^4}{4}\alpha\,\xi_{\gamma_{2}}(\tau)
\end{equation}
where
\begin{equation}
\xi_{\gamma_i} = \frac{2}{3} + \frac{2}{3\gamma_i} \left[ 1 + \left( \frac{\gamma_i \tau}{2} - 1 \right) e^{-\gamma_i \tau} \right] + \frac{2\gamma_i}{3} \left( 1- \frac{\tau^2}{2}\right) \mathrm{E}_2(\gamma_i\tau)
\end{equation}
and the irradiation temperature is
\begin{equation}
\label{eqn4}
T_{\mathrm{irr}} = \beta \left( \frac{R_{\star}}{2a} \right)^{1/2} T_{\star}
\end{equation}
The 5 free parameters are $\kappa_{\mathrm{IR}}$, the infrared opacity; $\gamma_1$ = $\kappa_{\mathrm{v1}}/\kappa_{\mathrm{IR}}$ and $\gamma_2$ = $\kappa_{\mathrm{v2}}/\kappa_{\mathrm{IR}}$, the ratio of two-band visible opacities to the IR opacity; $\alpha$, the ratio of the flux between the two visible streams; and $\beta$, a measure of the recirculation efficiency of the atmosphere. $T_{\mathrm{int}}$ is the planet's internal temperature, and $T_{\mathrm{irr}}$ is the temperature calculated from irradiation by the parent star. $R_{\star}$ and $T_{\star}$ are the radius and temperature of the parent star, and $a$ is the orbital semi-major axis. $\tau$ = $\kappa_{\mathrm{IR}}p/g$ is the infrared optical depth of the atmosphere, where $p$ is atmospheric pressure and $g$ is gravitational acceleration. E$_2$ is the second order exponential integral function.
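The Guillot profile is straightforward to implement directly from the equations above; the sketch below evaluates $\mathrm{E}_2$ by simple quadrature of $\mathrm{E}_2(x) = \int_0^1 \mathrm{e}^{-x/u}\,\mathrm{d}u$ so that no special-function library is needed. The parameter values are illustrative, not fits to any planet.

```python
# Sketch implementation of the Guillot temperature profile equations above.
import math

def expint2(x, n=4000):
    """Second-order exponential integral E2(x) via midpoint quadrature
    of E2(x) = int_0^1 exp(-x/u) du."""
    if x == 0.0:
        return 1.0
    h = 1.0 / n
    return h * sum(math.exp(-x / ((i + 0.5) * h)) for i in range(n))

def xi(gamma, tau):
    """The xi_gamma(tau) function in the Guillot solution."""
    return (2.0 / 3.0
            + (2.0 / (3.0 * gamma))
            * (1.0 + (gamma * tau / 2.0 - 1.0) * math.exp(-gamma * tau))
            + (2.0 * gamma / 3.0) * (1.0 - tau ** 2 / 2.0)
            * expint2(gamma * tau))

def guillot_temperature(tau, t_int, t_irr, gamma1, gamma2, alpha):
    """T(tau) from the three-channel radiative-equilibrium solution."""
    t4 = (0.75 * t_int ** 4 * (2.0 / 3.0 + tau)
          + 0.75 * t_irr ** 4 * (1.0 - alpha) * xi(gamma1, tau)
          + 0.75 * t_irr ** 4 * alpha * xi(gamma2, tau))
    return t4 ** 0.25

# illustrative hot-Jupiter-like parameters
for tau in (1e-3, 0.1, 1.0, 10.0):
    t = guillot_temperature(tau, t_int=200.0, t_irr=1800.0,
                            gamma1=0.5, gamma2=0.1, alpha=0.3)
    print(f"tau = {tau:7.3f}  T = {t:7.1f} K")
```

The printed profile shows the characteristic behaviour discussed below: the temperature tends to a constant at low optical depth and increases monotonically into the deep atmosphere.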
The Madhusudhan profile divides the atmosphere into three layers. Layer 1 is the uppermost and is bounded at the base by pressure $P_1$. Layer 2 extends from pressure $P_1$ to $P_3$, and Layer 3 extends downwards from $P_3$. The temperature in each layer is defined as follows:
\[
\begin{array}{ll}
P_0 < P < P_1: & P=P_0\mathrm{e}^{\alpha_1(T-T_0)^{\beta_1}} \\
P_1 < P < P_3: & P=P_2\mathrm{e}^{\alpha_2(T-T_2)^{\beta_2}} \\
P > P_3: & T = T_3
\end{array}
\]
In all cases, $P_0 < P_1 < P_3$. If the temperature profile is inverted, $P_1 < P_2 < P_3$; if not, $P_1 \ge P_2$. The number of free parameters can be reduced by setting $P_0$ equal to the pressure at the top of the atmosphere and by empirically setting $\beta_1$ = $\beta_2$ = 0.5. Finally, the temperature profile is forced to be continuous at the boundaries between the layers, where $P$ = $P_1$ and $P$ = $P_3$, which leaves 6 free parameters: $P_1$, $P_2$, $P_3$, $\alpha_1$, $\alpha_2$ and $T_3$.
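For concreteness, a sketch of the simplified six-parameter, non-inverted case ($P_2 \le P_1$, $\beta_1 = \beta_2 = 0.5$) is given below, working downwards from the deep isothermal layer to enforce continuity at $P_3$ and $P_1$. All parameter values are illustrative.

```python
# Sketch of the Madhusudhan profile above, non-inverted case only,
# with beta1 = beta2 = 0.5 and continuity enforced at P1 and P3.
# Free parameters: p1, p2, p3, alpha1, alpha2, t3. Pressures in bar.
import math

def madhusudhan_temperature(p, p0, p1, p2, p3, alpha1, alpha2, t3):
    """T(p) for the non-inverted case (p2 <= p1)."""
    # invert P = P_i exp(alpha (T - T_i)^0.5)  ->  T = T_i + (ln(P/P_i)/alpha)^2
    t2 = t3 - (math.log(p3 / p2) / alpha2) ** 2       # T at P2
    t1_top = t2 + (math.log(p1 / p2) / alpha2) ** 2   # T at P1 (continuity)
    t0 = t1_top - (math.log(p1 / p0) / alpha1) ** 2   # T at P0 (continuity)
    if p <= p1:
        return t0 + (math.log(p / p0) / alpha1) ** 2
    if p <= p3:
        return t2 + (math.log(p / p2) / alpha2) ** 2
    return t3  # deep isothermal layer

# illustrative parameters: top of atmosphere at 1e-5 bar
params = dict(p0=1e-5, p1=1e-3, p2=5e-4, p3=1.0,
              alpha1=0.6, alpha2=0.5, t3=1900.0)
for p in (1e-5, 1e-4, 1e-2, 1.0, 10.0):
    print(f"p = {p:8.0e} bar  T = {madhusudhan_temperature(p, **params):6.1f} K")
```

The inverted case requires additional care with the sign of the layer-2 gradient and is omitted here for brevity.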
The advantage of the Guillot profile is that the shape is physically motivated by the assumption of radiative equilibrium, whilst still being a fairly simple parameterisation. It does however contain a bias in that it produces isothermal profiles at low pressures, which may not be an accurate reflection of a real atmosphere. This isothermal behaviour is a subtle artefact of using mean opacities, where ``mean'' in this case is ill-defined. Specifically, in order for the solution to be analytically tractable, the derivation assumes that the absorption, flux and Planck mean opacities are equal. The Madhusudhan profile allows more flexibility of shape, particularly with regards to resolving temperature inversions, at the expense of an additional free parameter. \cite{blecic17} investigate the ability of such 1D temperature parameterisations to recover the temperature structure from synthetic eclipse spectra generated from 3D atmospheric circulation models. They find that the Madhusudhan profile provides a better match to the temperature structure in the middle atmosphere as it is more capable of producing an inversion; however, it does not match the deep temperature structure. We discuss the reliability of fitting a 1D temperature model to a dataset generated from a 3D circulation model in Section~\ref{sota_3d}.
A key science question relating to T-p profile retrievals is the presence or absence of a temperature inversion in hot Jupiter atmospheres. Inversions were predicted to occur in planets with incident flux of greater than 10$^9$ erg s$^{-1}$ cm$^{-2}$, due to the presence of optical absorbers TiO and VO in their atmospheres \citep{fortney08}. This category includes several well-studied hot Jupiters such as HD 209458b, but so far only a handful of planets show evidence for thermal inversions in their dayside spectra. These include WASP-33b (\citealt{haynes15}; fit using Madhusudhan profile); WASP-121b (\citealt{evans17}; fit using Guillot profile); and WASP-18b (\citealt{sheppard17}; fit using Madhusudhan profile). All of these are ultra-hot Jupiters with equilibrium temperatures of over 2000 K, suggesting that the cut-off irradiation for thermal inversions is somewhat higher than originally predicted.
Reliably retrieving the dayside temperature structure is further complicated by the presence of solution degeneracy with gas abundance retrievals. \cite{stevenson14} retrieve the dayside atmospheric state for WASP-12b and test two models which force either carbon-rich or oxygen-rich chemistry; the retrieved temperature profiles differ by several hundred K at low pressures. Similarly, \cite{barstow14} test the effect of varying gas abundance priors on a continuous Optimal Estimation retrieval of temperature from HD 189733b emission spectra, and find that the precise shape of the profile is dependent on the gas abundance prior chosen.
\subsection{Future challenges for temperature parameterisation}
\label{future_temp}
Investigations are underway into the most appropriate temperature parameterisations in the \textit{JWST} era and beyond. \cite{rocchetto16} simulate several \textit{JWST} hot Jupiter transmission spectra for model atmospheres with varying C:O ratios, and they demonstrate that oversimplified parameterisations in temperature structure retrieval can introduce significant bias in other retrieved properties. Assuming that the temperature profile is isothermal can result in, for example, retrieved CO abundances over an order of magnitude too high. A Guillot temperature-pressure profile, whilst it increases the uncertainty on the retrieved properties, results in a more accurate retrieval of the gas abundances. However, it is important to note that the input temperature-pressure profile is close to the typical shape predicted by the Guillot parameterisation, so the ability of the Guillot profile to achieve a good fit may be serendipitous. It is clear, therefore, that accurate chemistry retrievals are dependent on the suitability of the temperature parameterisation.
This issue is likely to only become more complex as the information content of the spectrum increases. The key difficulty in transmission will still be the relatively small pressure range (when compared with eclipse spectra) probed by the observation, and the degeneracy between the effects of temperature, mean molecular weight and gravity on the scale height. Further investigations of the kind presented by \cite{rocchetto16} are likely to be a critical aspect of model development. Ultimately, the ideal for eclipse spectra would be to explicitly retrieve temperature at each level in the model atmosphere, subject to some correlation length to ensure smoothness, but this is likely to only be possible for the very highest signal-to-noise observations.
\textbf{Recommendation: conduct retrievals of simulated datasets with a variety of temperature structures and chemistry, to investigate regions of parameter space where the temperature profile parameterisation introduces most bias. Investigate alternative approaches to those currently in the literature for intractable cases.}
\section{Clouds}
Initial attempts to characterise the atmospheres of hot exoplanets via retrieval were conducted without reference to clouds, due to the erroneous belief that the extreme temperatures would make it impossible for clouds to exist. The inclusion of clouds also inevitably complicates the retrieval process, as it introduces further parameters into what is already an underconstrained retrieval problem. Clouds are complex, potentially spatially variable structures that provide broadband absorption and scattering, and as such affect spectra in ways that can be difficult to identify. They can also have the effect of muting molecular absorption features.
\subsection{State of the art: clouds}
\label{sota_clouds}
So far, retrieval efforts have used simple parameterisations to try to capture the cloud properties that produce the most significant effects on spectra. The different geometries of exoplanet observations require different treatment; in primary transit, due to the long path length through the atmosphere, what is often referred to as the cloud top pressure is especially important, because the atmospheric opacity rapidly increases below the cloud top. Conversely, the cloud top pressure is less critical if the planet is being directly imaged in the infrared, as the measured radiation is emerging from the planet beneath the cloud top.
Cloud top pressure is not in reality a well-defined pressure above which cloud ceases to exist, although it can be treated as such in simple parameterisations. It represents the pressure level at which the cloud optical depth is unity, which is highly dependent on the observation geometry - the cloud optical depth reaches unity at a higher altitude in limb geometry compared with nadir. The effective cloud top pressure can be altered in a simple model by setting a physical cloud top, or by varying the opacity of a cloud that is not confined to any particular pressure range. These two approaches are not exactly equivalent, so two different models with different predicted spectra could have the same effective cloud top pressure (Figure~\ref{cloud_schematic}).
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{cloud_schematic_retrieval_review.png}
\caption{Effective cloud top pressure for different models in both reflection (nadir; panels A and B) and transit (limb; panels C and D) geometry. In A and C, the cloud has a uniform specific density below a cloud top pressure of 10$^{-4}$ bar; in B and D, the cloud has a specific density that decreases with decreasing pressure, with the cloud extended throughout the atmosphere. The level in the cloud at which the optical depth is unity (dashed line) is the same for both cloud models in each geometry, even though the vertical distribution of aerosol is very different.}
\label{cloud_schematic}
\end{figure}
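The point made in Figure~\ref{cloud_schematic} can also be illustrated numerically. The sketch below computes the effective cloud top, i.e. the level at which the cumulative optical depth from the top of the atmosphere reaches unity, for a sharp-topped and an extended aerosol profile, in nadir and limb geometry; the opacity profiles and the slant-path enhancement factor are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch: two very different aerosol distributions can share the
# same effective cloud top (the tau = 1 level). All numbers are assumptions
# chosen for demonstration only.

p = np.logspace(-8, 0, 400)          # pressure grid, top of atmosphere downward [bar]
dlnp = np.diff(np.log(p))

def tau_one_pressure(dtau_per_layer, geometry_factor=1.0):
    """Pressure at which the cumulative optical depth from the top reaches unity."""
    tau = np.cumsum(geometry_factor * dtau_per_layer)
    idx = np.searchsorted(tau, 1.0)
    return p[idx + 1] if idx < len(tau) else np.inf

# Model 1: uniform opacity below a sharp cloud top at 1e-4 bar
dtau_uniform = np.where(p[1:] > 1e-4, 50.0, 0.0) * p[1:] * dlnp
# Model 2: opacity decreasing with altitude, aerosol extended through the column
dtau_extended = 5.0 * p[1:] ** 0.5 * dlnp

for dtau in (dtau_uniform, dtau_extended):
    nadir = tau_one_pressure(dtau)                       # vertical path
    limb = tau_one_pressure(dtau, geometry_factor=35.0)  # long slant path
    # the limb tau = 1 level sits at lower pressure (higher altitude)
    assert limb <= nadir
```

Varying the cloud-top pressure of model 1 or the opacity normalisation of model 2 can bring the two $\tau = 1$ levels into coincidence, which is the degeneracy sketched in the figure.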
The simplest primary transit studies have assumed that the atmosphere is completely opaque at all wavelengths, for pressures higher than the cloud top pressure. This is suitable over relatively small wavelength ranges, and for planets with spectra that are flat over a wide wavelength range (e.g. GJ 1214b, \citealt{kreidberg14}; \citealt{fisher18}). However, in general this would only be representative of a cloud made of large particles with a broad size distribution, and fails to account for scenarios where aerosols may more closely resemble small-particle haze. Slightly more complex parameterisations allow for the possibility of optically thin clouds, and a simple power law for extinction as a function of wavelength (e.g. \citealt{barstow17,pinhas19}).
Whilst the pressure at the cloud top and the extinction slope are the most important parameters for primary transit, the vertical distribution of the cloud below the cloud top may also be important, depending on the cloud optical thickness. It is also possible that the cloud consists of multiple components - for example, an optically thin, small particle haze layer overlying an optically thick cloud \citep{macdonald17,pinhas19}. This has led to a range of different parameterisation options even just within primary transit retrievals, which can produce different and apparently contradictory results when applied to the same dataset. For example, retrievals of the same HD 189733b dataset by \cite{barstow17} and \cite{pinhas19} give consistent values for the H$_2$O abundance of $\sim$10$^{-5}$, but the retrieved cloud properties appear dramatically different at first glance. \cite{barstow17} characterise the HD 189733b cloud layer as a vertically thin, high Rayleigh scattering haze layer, whilst \cite{pinhas19} retrieve a cloud top deep in the atmosphere. However, this retrieved cloud top is the top of an opaque, grey cloud, which is coupled to a scattering haze layer for $P<P_{\mathrm{top}}$. Therefore, results from both parameterisations are in agreement that there is no visible grey cloud layer, and are consistent with the presence of scattering, small particle haze higher in the atmosphere.
More complex parameterisations that include some information about composition have also been tested. \cite{kitzmann18} develop a parameterisation based on analytical fits to expected extinction cross-section curves of potential cloud species, such as MgSiO$_3$. The extinction efficiency $\kappa_{\mathrm{cloud}}$ as a function of wavelength is parameterised as follows:
\begin{equation}
\kappa_{\mathrm{cloud}} = \frac{\kappa_0}{Q_0 x^{-a} + x^{0.2}}
\end{equation}
where $\kappa_0$ is a scaling factor, $Q_0$ determines the wavelength at which the extinction efficiency peaks and is related to the cloud composition, $a$ is a scattering slope index and $x$ is the particle size parameter, given by
\begin{equation}
x = \frac{2{\pi}r}{\lambda}
\end{equation}
where $r$ is the particle radius and $\lambda$ the wavelength. This parameterisation is more easily related to real physical characteristics of cloud, such as particle size and composition. So far, it has been applied to \textit{Hubble}/WFC3 data by \cite{fisher18}, which provides relatively little constraint on cloud properties; it has not yet been applied to data spanning a broader wavelength range.
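A minimal numerical sketch of this parameterisation, with assumed values for $\kappa_0$, $Q_0$, $a$ and the particle radius:

```python
import numpy as np

def cloud_extinction(wavelength_um, radius_um=0.5, kappa0=1.0, q0=50.0, a=4.0):
    """Parameterised cloud extinction in the form given above.

    x is the size parameter 2*pi*r/lambda; q0 encodes composition and a the
    small-particle scattering slope. All parameter values are illustrative.
    """
    x = 2.0 * np.pi * radius_um / wavelength_um
    return kappa0 / (q0 * x ** (-a) + x ** 0.2)

wl = np.linspace(0.3, 15.0, 500)        # wavelength grid [micron]
kappa = cloud_extinction(wl)
# small-particle (x << 1) limit: extinction falls steeply towards long wavelengths
assert kappa[-1] < kappa[0]
```

The transition from the steep small-particle slope at long wavelengths to the flatter large-particle regime at short wavelengths is controlled by where $x$ crosses unity.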
The limited information available from current spectra, and the range of possible ways cloud can be represented, makes interpretation of these retrievals very difficult. Without a good understanding of the precise effects of different parameterisations on the spectrum, erroneous conclusions can be drawn.
Attempts have also been made to consider cloud for secondary eclipse and directly imaged spectra. \cite{barstow14} consider the effect of clouds on the HD 189733b reflection spectrum observed by \cite{evans13}, but due to the requirement to include multiple scattering for reflection spectra only a simple grid search was performed. The cloud properties showed substantial degeneracy with the sodium abundance in the visible part of the spectrum.
\subsection{Future challenges for clouds}
\label{future_clouds}
\label{cloudtemp}
Current exoplanet retrieval efforts are already demonstrating that the details of parameterisation for cloud properties have the potential to bias results. In the case of cloud properties, gas abundance retrievals seem to be somewhat immune to the differences in cloud treatment, but the conclusions drawn about the clouds themselves can vary widely, as discussed in Section~\ref{sota_clouds}. The main challenge we face here is to tune the complexity of the parameterisation to the information content of the data, whilst avoiding, where possible, introducing bias into the retrieval. Again, the only way to guard against this is to conduct rigorous simulation tests of retrieval parameterisations.
Recent work has been undertaken to combine cloud microphysics models with 3D circulation models, and to use this to predict emergent spectra \citep{lines18}. Whilst we do not expect these simulations to perfectly predict real cloud and haze in exoplanet atmospheres, the ability of the retrieval scheme to recover key parameters from these synthetic spectra is an important test of the cloud parameterisation used. It provides an opportunity to check whether the parameterisation is sufficient to represent the spectral effect of complex cloud structure, and ensure that it does not introduce bias into the retrieval. Several different approaches to modelling cloud microphysics (e.g. \citealt{helling08,ackerman01}) and including cloud in GCMs (e.g. \citealt{lee17,parmentier16,mendonca18}) are available; the ideal would be a parameterised model that can recover key cloud properties from this range of available cloud models, whilst also accurately retrieving other atmospheric properties.
\textbf{Recommended action: conduct retrievals of simulated datasets based on more detailed, physically motivated, 3D cloudy atmosphere models. Test a variety of simple cloud parameterisations, for a range of observational geometries, and compare results.}
\section{Phase curves and 3D effects}
For a handful of the most favourable targets, spectroscopic phase curves have been obtained which have allowed phase-resolved retrievals to be undertaken. The first example of this is the \cite{stevenson14b} phase curve retrieval for WASP-43b, obtained using \textit{Hubble}/WFC3. The limited wavelength coverage means there is only sensitivity to temperature structure over a small pressure range, and some information about the H$_2$O abundance. Difficulties of interpretation are compounded because the pressure of the weighting function peak varies with phase, so comparison between phases is not straightforward. Phase curve observations with broader spectral coverage would resolve these difficulties and are planned for \textit{JWST}. \cite{mendonca18} re-analysed the \textit{Spitzer} data of WASP-43b and ran cloudy GCMs to jointly analyse the \textit{Hubble} and \textit{Spitzer} phase-resolved emission spectra. They find that the dayside is consistent with being cloud-free, with clouds confined to the nightside, and tentative evidence for elevated levels of carbon dioxide.
Whilst phase curves provide some direct information about spatial variations in the thermal emission from the planet (and in some cases the reflected light), spatial variation in the atmospheric properties can also affect transmission spectra, albeit in a more subtle way. Evidence from observed phase curves and GCMs suggests that one terminator is likely to be hotter than the other for hot Jupiters, which will in turn impact the terminator chemistry and cloud coverage. For example, \cite{mendonca18} ran GCMs with disequilibrium chemistry (using a method known as ``chemical relaxation'') and demonstrated that the coupling between atmospheric dynamics and chemistry produces spatial inhomogeneities across latitude, longitude and pressure for molecules such as water, and cannot be neglected if one wishes to accurately model phase-resolved spectra or wavelength-dependent phase curves. The challenge in interpretation is that transmission spectra are averaged over the whole terminator region, so observations are implicitly 1D. Similarly, for planets not sufficiently favourable for us to have phase curve observations, secondary eclipse spectra are also 1D integrations over a non-uniform (and asymmetric) disc; the structure and chemistry retrieved using a 1D model will represent some sort of disc average, but it is unclear exactly what this corresponds to (Figure~\ref{3D_schematic}).
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Slide1.jpeg}
\caption{Strong superrotation on hot Jupiters, coupled with extreme irradiation, results in significant variation in temperature around the terminator region as observed in transit, and an asymmetric pattern of variation on the dayside as observed in eclipse.}
\label{3D_schematic}
\end{figure}
Likewise, the transit spectroscopy technique relies on the stellar disc being uniform once stellar limb darkening is corrected for, since it makes the implicit assumption that the planet transits a region of the stellar disc that is representative of the whole. This is of course not the case; stellar surfaces are highly non-uniform, with time-variable coverage of features such as spots and faculae. Spots and faculae have different spectral characteristics compared with the rest of the stellar disc, so unknown spot/faculae coverage fractions could lead to misinterpretation of transit spectra (e.g. \citealt{rackham18}).
Directly imaged planets are relatively free of these issues, since they are not highly irradiated and their observations do not depend on the uniformity of the star's behaviour; we expect them to more closely resemble the Solar System giant planets in terms of their dynamics. However, we cannot rule out spatial asymmetry on these objects; whilst the dynamical regimes of hot Jupiters result in strong longitudinal gradients in temperature, the Solar System giants display latitudinal variation in chemistry and cloud properties (see e.g. PH$_3$ abundance on Jupiter and Saturn, \citealt{fletcher09}; in addition to the equator-pole differences observed on Jupiter, Saturn also has strong north-south seasonal asymmetry due to its axial tilt of 26.7$^{\circ}$).
\subsection{State of the art: 3D effects}
\label{sota_3d}
\subsubsection{Phase curve retrievals}
For WASP-43b, a moderately hot Jupiter, it has been possible to obtain a spectroscopic phase curve using the \textit{Hubble}/WFC3 instrument. This allows retrievals to be performed as a function of phase, so that longitudinal variations in chemistry and temperature structure can be mapped. \cite{stevenson14b} use the CHIMERA retrieval algorithm to analyse temperature structure at 16 different phases. The model includes 6 molecular absorbers, but only H$_2$O has a significant influence on the spectral characteristics. The temperature structure is modelled using 5 free parameters, after the method presented by \cite{parmentier14}. The retrieved upper atmosphere temperatures vary by 1000 K between the dayside and nightside, implying inefficient recirculation.
So far, this is the only planet for which a full spectroscopic phase curve exists, so further exploration of phase curve retrievals is hindered by a lack of available data. Retrieval algorithms have not been applied to single- and multi-channel photometric phase curves that exist for other planets, presumably because the problem would be highly degenerate. However, spectroscopic phase curve observations are likely to be a priority for \textit{JWST}. WASP-43b is particularly well-suited to such observations as it has a very short period of only 19.52 hours; this planet will be re-observed at longer wavelengths with the Mid-InfraRed Instrument (MIRI) during the \textit{JWST} Early Release Science programme \citep{batalha17}, which will provide stronger constraints on the variation in atmospheric properties with phase. A phase curve for WASP-43b will also be obtained with the shorter wavelength NIRSpec instrument as part of the Guaranteed Time Observation for the instrument team \citep{birkmann17}.
\subsubsection{3D cloud effects in transmission}
Work is already in progress to account for terminator asymmetry in retrieval models (e.g. \citealt{line16,macdonald17}), although so far it is restricted to cloud coverage, which ignores the fact that temperature structure, and likely the chemistry too, will also vary. \cite{line16} demonstrate that, over narrow wavelength ranges such as those probed by \textit{Hubble}/WFC3 only, partial terminator cloud cover is degenerate with cloud-free, high mean molecular weight atmosphere scenarios. Over a wider wavelength range, this degeneracy can be broken. \cite{pinhas19} include terminator cloud fraction in their retrieval of \textit{Hubble}/STIS + WFC3 + \textit{Spitzer}/IRAC spectra, and they recover a range of values between $\sim$0.2 and $\sim$0.8 for the 10 planets in their sample. They find no correlation between cloud fraction and any other key parameters in the study. \cite{line16} analyse WFC3 data only for HD 189733b, and find a cloud fraction that is comparable with the result from \cite{pinhas19}.
\subsubsection{3D temperature structure from eclipse spectra}
\cite{blecic17} investigate the ability of a 1D parameterised model to recover an average temperature structure from a simulated dayside spectrum generated from a 3D model atmosphere. They test both the Guillot and Madhusudhan temperature parameterisations discussed previously in Section~\ref{sota_temp}. Both parameterisations produce a retrieved temperature profile close to the arithmetic mean of the circulation model temperature profiles across the dayside, which is somewhat odd; the amount of radiation detected from different regions of the dayside is weighted by the cosines of the latitude and longitude, so it should follow that the hemisphere-integrated temperature-pressure profile should be a weighted average rather than a straightforward arithmetic mean (Figure~\ref{3D_weighted_mean_schematic}). This is an indication that further development of 1D retrieval models, and an investigation into surprising results such as this one, are required to reliably interpret these hemispherically averaged spectra.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{weighted_mean.png}
\caption{Panel A illustrates the contributions from each part of the dayside disc where the emission angle is not taken into account, whereas panel B shows which parts of the planet would dominate the signal when emission angle is accounted for.}
\label{3D_weighted_mean_schematic}
\end{figure}
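The expected weighting can be illustrated with a toy dayside temperature field: because the hottest region (around the substellar point) also carries the largest projected-area weight, a cosine-weighted disc average differs systematically from the arithmetic mean of the profiles. The temperature field below is a synthetic assumption for illustration only.

```python
import numpy as np

# Sketch of the disc-averaging issue: a hemisphere-integrated eclipse spectrum
# should weight each column by the emission angle, i.e. by
# cos(latitude) * cos(longitude) across the dayside.

lat = np.deg2rad(np.linspace(-89, 89, 90))
lon = np.deg2rad(np.linspace(-89, 89, 90))   # 0 deg = substellar point
LAT, LON = np.meshgrid(lat, lon, indexing="ij")

# toy dayside temperature field: hottest at the substellar point
T = 1000.0 + 800.0 * np.cos(LAT) * np.cos(LON)

w = np.cos(LAT) * np.cos(LON)                # projected-area weighting
arithmetic_mean = T.mean()
weighted_mean = (T * w).sum() / w.sum()

# the weighted mean is pulled towards the hot substellar region
assert weighted_mean > arithmetic_mean
```

In a real atmosphere the same comparison would be made level by level, giving a weighted mean temperature-pressure profile rather than a single number.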
\cite{feng16} test the impact of using two temperature-pressure profiles to represent the hotter/colder regions of a planetary disc. They apply this to the first-quarter observation of WASP-43b, for which the visible portion of the planet is half in daylight and half in shadow, maximising the expected contrast. They also test simulated spectra for both current state-of-the-art observational scenarios (\textit{Hubble}+\textit{Spitzer}) and future observations with \textit{JWST}. They find that there is insufficient evidence with current data to favour a more complex model, but that for \textit{JWST} simulations significant biases in gas abundances are introduced when only a single temperature-pressure profile is used to represent the temperature structure. This approach is shown to work well where the temperature variation is adequately represented by two temperature-pressure profiles of equal weight, but it remains to be seen whether this is appropriate in the context of secondary eclipse, where the hotspot is likely to dominate.
\subsubsection{Stellar heterogeneity in retrievals}
Parameterisation of the effects of starspots and faculae is now starting to be included within exoplanet retrieval frameworks. Initial results for the super-Earth GJ 1214b are presented by \cite{rackham17}. \textit{Magellan} telescope observations are fit using the CPAT absorber model coupled with a Markov-Chain Monte Carlo algorithm. The CPAT model for describing stellar heterogeneity divides the stellar disc into occulted and unocculted fractions. The wavelength-dependent transit depth, instead of being simply given by
\begin{equation}
\mathrm{\Delta}_{\lambda} = 1 - \Big(\frac{R_{\mathrm{p,\lambda}}}{R_{\mathrm{s}}}\Big)^2
\end{equation}
where $R_{\mathrm{p,\lambda}}$ is the radius of the planet and $R_{\mathrm{s}}$ the radius of the star, is given by
\begin{equation}
\mathrm{\Delta}_{\lambda} = 1- \frac{(R_{\mathrm{p,\lambda}}/R_{\mathrm{s}})^2S_{\mathrm{o}}}{(1-F)S_{\mathrm{o}} + FS_{\mathrm{u}}}
\end{equation}
where $S_{\mathrm{o}}$ is the spectrum of the star in the occulted region, $S_{\mathrm{u}}$ is the spectrum of the star in the unocculted region, and $F$ is the fraction of the disc that is unocculted. \cite{rackham17} test PHOENIX \citep{husser13} model spectra with different metallicities, and different temperatures as a proxy for varying levels of absorption across the stellar disc. They retrieve metallicity/temperature contrast between the occulted and unocculted regions and a constant offset in $R_{\mathrm{p}}$/$R_{\mathrm{s}}$, finding that the observed optical spectrum can be described by a case where 3.2 \% of the unocculted disc is 350 K hotter than the rest of the disc. This may be explained by starspot or facula contrast.
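The expression above is straightforward to implement. In the sketch below the planet spectrum is flat and the unocculted component is a brighter (hotter) spectrum; both spectra and the coverage fraction are illustrative assumptions, not fits to data.

```python
import numpy as np

# Sketch of the stellar-heterogeneity expression above (all spectra are
# illustrative). Delta is the in-transit flux level; the apparent transit
# depth is 1 - Delta.

def flux_level(rp_rs, s_occ, s_unocc, f_unocc):
    """Wavelength-dependent in-transit flux level with unocculted heterogeneity."""
    return 1.0 - rp_rs ** 2 * s_occ / ((1.0 - f_unocc) * s_occ + f_unocc * s_unocc)

rp_rs = np.full(100, 0.1)                 # flat planet spectrum
s_occ = np.ones(100)                      # quiet photosphere (occulted chord)
s_unocc = np.linspace(1.5, 1.1, 100)      # hotter, brighter unocculted component
depth = 1.0 - flux_level(rp_rs, s_occ, s_unocc, f_unocc=0.032)

# unocculted bright regions dilute the signal: apparent depth < (Rp/Rs)^2
assert np.all(depth < 0.1 ** 2)
```

Because the dilution is wavelength dependent, it imprints a spurious slope on the transmission spectrum even for a flat planet spectrum.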
\cite{pinhas18} perform a retrieval analysis of nine hot Jupiters (the sample from \cite{sing16} excluding HD 189733b) using the same model as that presented in \cite{pinhas19} but also including stellar heterogeneity. This is parameterised by the temperature of the heterogeneous regions (with the star's measured average photospheric temperature fixed) and the fractional coverage of any heterogeneities. \cite{pinhas18} do not discuss these values in detail, but instead present the model evidence for inclusion of stellar effects. They find substantial evidence of stellar heterogeneity for WASP-6 and WASP-39; whilst WASP-6 is one of the two most active stars in the sample based on the log$R_{\mathrm{HK}}$ index, WASP-39 is less active, and for the most active star (WASP-19) the evidence is substantially against there being any stellar heterogeneity. This would indicate that the log$R_{\mathrm{HK}}$ index is an unreliable estimator of the importance of stellar heterogeneity effects on transit spectra.
\subsection{Future challenges for recovering planetary and stellar spatial information}
Two key resources for exploration of our ability to recover 3D information about planets are Global Circulation Models (GCMs; e.g. \citealt{selsis11,rauscher12,charnay15,amundsen16,lee16,parmentier16,mendonca18}), and the Solar System planets. GCMs are based on our current best understanding of the physical processes on hot Jupiters and young directly-imaged planets, and should be able to predict the broad characteristics of spatial variability on these planets. However, there are limits to the predictive power of GCMs due to the inability to accurately specify and represent all sources of dissipation in the atmosphere (e.g. \citealt{goodman09,heng11,fromang16}); on Earth, these uncertainties can be mitigated by empirically calibrating the sources of dissipation in the GCM using in-situ data, an approach that is impossible for exoplanets.
The Solar System giant planets on the other hand, whilst they exist in a very different temperature/dynamics regime to the majority of well-studied exoplanets, have the advantage that we can directly compare spatially resolved datasets with the information that we would be able to recover if the planet was treated as a point source. This allows us to investigate, for real objects, how much information about large-scale atmospheric variability and asymmetry persists in disc-integrated observations.
Models will also be key for understanding the impact of stellar heterogeneities on transmission spectra. As shown by \cite{rackham17}, whilst monitoring of target stars can provide an indication of the amplitude of variation in spot coverage, this does not provide information about the baseline level or the relative contributions of spots and faculae, both of which are important for transmission spectra. Understanding typical distributions and sizes of spots/faculae on different types of star will be extremely important for future observations.
\textbf{Recommended actions: use simulated datasets from GCMs/stellar atmosphere models to test the ability of parameterised retrieval models to recover 3D information about the planet and the star. Investigate how the information content of spatially resolved observations of Solar System giants compares with that of the same observation degraded to a point source.}
\section{Conclusions}
We have presented a summary of the current and imminent future challenges surrounding atmospheric retrievals of exoplanets. In general, the obstacles faced result from the lack of available ground truth for exoplanet observations, and, especially in the near future, a rapid increase in the information content of observations which requires modelling strategies to constantly evolve.
A common theme for solutions to these challenges is the use of physically based climate and circulation models to provide simulated datasets. Whilst we cannot yet be sure that the outputs from these models are accurate representations of real exoplanet atmospheres, they do allow us to perform important tests of how well simple parameterised models capture more complex atmospheric characteristics. In the case of stellar heterogeneity, it is likely that we will have to rely to some extent on ab initio stellar atmosphere models if we want to correct for spectral contamination of starspots and faculae.
Another key attribute required for retrieval models is flexibility; since the data quality is, and is likely to remain, variable across different planets, it is important that models can be easily tuned to maximally exploit the information content of a given observation. Oversimplification has been demonstrated to introduce bias - for example, assuming an isothermal temperature structure for broad wavelength coverage observations can significantly bias the retrieved chemistry - but equally overfitting can also produce problems. Explicit calculation of information content, such as that featured by \cite{howe17}, may prove useful both for observation planning and also for tailoring retrieval models.
There are of course several aspects of exoplanet spectral inversion that we have not touched on. Perhaps one of the most significant is the completeness and accuracy of the gas absorption information that is included in retrieval schemes. \cite{tennyson18} provide a summary of the ExoMol project, which is one of the current community efforts to ensure that gas absorption data are as accurate as possible. In addition, the processing of these data for inclusion in retrieval models is also an important step that can be a potential source of error.
Finally, there are other methods for extracting spectral information of exoplanet atmospheres which we have not discussed here, as they are beyond the scope of this paper. These include high-spectral-resolution observations, which can also be used to recover information about exoplanet chemistry and atmospheres (e.g. \citealt{demooij09,schwarz15,hoeijmakers18}); their use in retrieval scenarios is currently being explored \citep{brogi19}. We have also focused on transiting exoplanets in this work; with the launch of \textit{JWST}, and first-light for next generation ground-based telescopes such as the \textit{Extremely Large Telescope} fast approaching, significant advances in direct spectral imaging of exoplanets may also be expected over the current state-of-the-art (represented by e.g. \cite{macintosh15,bonnefoy16,gravity19}), opening up further opportunities to characterise non-transiting worlds.
\begin{acknowledgements}
JKB was supported by a Royal Astronomical Society Research Fellowship while this work was taking place. KH thanks the Swiss National Science Foundation, the PlanetS National Center of Competence in Research, the European Research Council via Consolidator Grant number 71620 and the MERAC Foundation for partial financial support. We thank the two anonymous reviewers whose comments improved the clarity of this manuscript.
\end{acknowledgements}
\bibliographystyle{aps-nameyear}
\section{Application of the Classification Scheme}
We applied the scheme described above to answer the research questions posed in the Introduction. As Figure \ref{fig:process} shows, the main process had four phases: pre-classification, applying the classification scheme, disagreement resolution, and analysis. In this section we discuss the first three phases; the analysis follows in the results section.
\subsection{Pre-classification}
\subsubsection{Pilot Study and calibration}
The pilot study was conducted with 49 selected papers from the ICSE 2018 conference. Three raters were assigned randomly to each abstract, and all raters worked individually. At the end of the pilot study, all raters met for a calibration meeting, where we took several decisions on how to move forward.
Based on this scheme, raters were asked to read an abstract and first judge whether the publication is directly related to human values or not. If it is related, the rater would then judge the value category for the paper and, where possible, an individual value within that category. If both raters of a paper agreed that the paper is relevant to human values, we consider this a relevance-level agreement. Following the same rule, we define category-level and individual-value-level agreements.
However, in the field of software engineering, we do not find practical definitions for either the value categories or the individual values \cite{Mougouei2018}. Therefore, we recognised the chance of picking an individual value for an abstract that does not necessarily align with the original Schwartz model hierarchy. The most common example was the security-privacy pair: raters found several papers that could be categorised under the security category with privacy as the individual value, although in Schwartz's model privacy belongs not to security but to self-direction. Therefore, we relaxed the restrictions and allowed raters to pick any individual value according to their judgment. This resulted in 33 abstract classifications that go against the original Schwartz model. The comparison of the results is discussed in Section \ref{subsec:values-consideration}.
\begin{enumerate}[(i)]
\item Reduce the number of raters from three to two, due to resource availability.
\item As every paper relates to some value in a broad sense, look strictly at the explicit link between the research and human values in the classification scheme.
\item Pay more attention to the contribution a paper provides, rather than the high-level problem it is trying to solve.
\item Finalise the years and venues to be covered.
\item Add a new value category, \emph{holistic view}, as raters found some papers that consider values as a single unit, which the existing Schwartz model does not support; this became the eleventh category.
\end{enumerate}
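The three agreement levels defined above can be sketched as follows. The ratings are hypothetical examples, and here any overlap between the two raters' sets of individual values (up to three each) is counted as value-level agreement; this counting rule is our assumption for illustration, not prescribed by the scheme.

```python
# Sketch of the three agreement levels described above. Each rating is a
# (relevant, category, individual-values) triple; all names and judgments
# here are hypothetical.

ratings_a = [
    (True, "security", {"privacy"}),
    (True, "benevolence", {"helpfulness"}),
    (False, None, set()),
]
ratings_b = [
    (True, "security", {"social order"}),
    (True, "universalism", {"equality"}),
    (False, None, set()),
]

def agreement_levels(a, b):
    n = len(a)
    relevance = sum(ra[0] == rb[0] for ra, rb in zip(a, b)) / n
    category = sum(ra[0] and rb[0] and ra[1] == rb[1] for ra, rb in zip(a, b)) / n
    value = sum(ra[0] and rb[0] and bool(ra[2] & rb[2]) for ra, rb in zip(a, b)) / n
    return relevance, category, value

rel, cat, val = agreement_levels(ratings_a, ratings_b)
# full relevance agreement, category agreement on one paper, no value overlap
assert (rel, cat, val) == (1.0, 1 / 3, 0.0)
```

A chance-corrected statistic such as Cohen's kappa could be computed from the same data, per level.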
\subsubsection{Paper Collection for Classification}
As mentioned above, among the many conferences and journals in the software engineering field, we applied our scheme to two conferences (ICSE and FSE) and two journals (TSE and TOSEM) over a span of four years. In addition to the main tracks, we considered two further ICSE tracks, namely Software Engineering in Practice (SEIP) and Software Engineering in Society (SEIS). As Table \ref{papercount} shows, this yielded a total of 1350 publications.
\subsection{Applying the Classification Process}
\label{subsec:classification-process}
\label{sec:result-classification-process}
In the classification process, we adapted the steps provided by Bertolino et al. \cite{Bertolino2018} as follows:
\begin{enumerate}[(i)]
\item We had seven raters, all experts or experienced academics in the domains of software engineering and human values. All 1350 papers were allocated among them randomly such that each paper had two raters. Pairs were formed so that there was always a mix of research experience.
\item All supporting information, such as a diagram of Schwartz's model and the category definitions, was provided. We used an online shared spreadsheet for the classification.
\item After reading the title and the abstract of each paper, the rater decided whether the paper is (a) directly relevant or not relevant to human values and, if relevant, (b) to which value categories and (c) to which specific human values it relates. As previously mentioned, the value categories and individual values are derived from \cite{Schwartz2012}, as shown in Figure \ref{fig_values}.
\item For (b), the rater chooses one value category. For (c), the rater may choose no specific value, or up to three specific values.
\end{enumerate}
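The rating protocol above can be expressed as a small validation routine. This is an illustrative sketch only: the category and value names shown are small subsets of the full Schwartz scheme, not the complete lists, and the record layout is an assumption rather than the actual spreadsheet format.

```python
# Sketch of the three-level record produced by each rater:
# (a) relevance, (b) one value category, (c) zero to three specific values.
# The name sets below are illustrative subsets, not the full SVS lists.
VALUE_CATEGORIES = {"Security", "Universalism", "Benevolence", "Holistic view"}
SPECIFIC_VALUES = {"Privacy", "Social justice", "Helpful",
                   "Protecting the environment"}

def validate_rating(relevant, category=None, values=()):
    """Return True if a single rater's entry follows the protocol."""
    if not relevant:
        # A paper rated "not relevant" carries no category or specific values.
        return category is None and not values
    if category not in VALUE_CATEGORIES:
        return False
    # Raters may pick no specific value, or up to three from the scheme.
    return len(values) <= 3 and set(values) <= SPECIFIC_VALUES

print(validate_rating(True, "Security", ("Privacy",)))  # True
print(validate_rating(True, "Security",
                      ("Privacy", "Helpful", "Social justice", "Wisdom")))  # False
```

The routine mirrors the decision order of the protocol: relevance is checked first, and the deeper levels only apply to relevant papers.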
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{Figures/process2.png}
\caption{Process of classification}
\label{fig:process}
\end{figure}
\subsection{Disagreement Resolution}
In case of disagreement at any classification level, the two raters met and discussed it. At the end of the discussion, each rater was free to maintain or change their original classification.
One of the remaining authors acted as an arbiter, discussing with the raters to identify the reason for the disagreement; some raters changed their classification after this discussion. The outcome of this step constitutes the final result of our classification.
In total, 176 disagreements were discussed across the three classification levels, and 145 classifications were changed.
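Tallying disagreements per classification level can be sketched as follows; the record layout here is an assumption for illustration, not the actual spreadsheet format used in the study.

```python
from collections import Counter

def count_disagreements(ratings_a, ratings_b):
    """Tally, per classification level, the papers where two raters differ.

    Each ratings dict maps a paper id to a tuple
    (relevant: bool, category: str or None, values: frozenset).
    """
    tally = Counter()
    for paper in ratings_a:
        rel_a, cat_a, vals_a = ratings_a[paper]
        rel_b, cat_b, vals_b = ratings_b[paper]
        if rel_a != rel_b:
            tally["relevance"] += 1
        elif cat_a != cat_b:  # compare deeper levels only when levels above agree
            tally["category"] += 1
        elif vals_a != vals_b:
            tally["values"] += 1
    return tally

a = {1: (True, "Security", frozenset({"Privacy"})), 2: (False, None, frozenset())}
b = {1: (True, "Security", frozenset({"Privacy"})), 2: (True, "Power", frozenset())}
print(count_disagreements(a, b))  # Counter({'relevance': 1})
```

Counting a mismatch only at the highest level where it occurs avoids double-counting a single disagreement across levels.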
\section{Background}
\label{sec:background}
Cheng and Fleischmann summarize seven different definitions of human values as ``guiding principles of what people consider important in life''~\cite{cheng2010developing}. Human values with an ethical and moral import such as \textit{Equality}, \textit{Privacy} and \textit{Fairness} have been studied in technology design and human-computer interaction for more than two decades \cite{friedman1996value,flanagan2005values,friedman2007human}. Meanwhile, the rapid popularization of artificial intelligence (AI) and its potential negative impact on society have raised the awareness of human values in AI research~\cite{riedl2016using, etzioni2017incorporating,cath2018artificial}. Consequently, human values are getting renewed research focus.
There has been some recent (but isolated) research in software engineering such as values-based requirements engineering \cite{thew2018value}, values-first SE \cite{ferrario2016values} and values-sensitive software development \cite{aldewereld2015design}.
However, there has been no previous work that measures to what extent human values have been considered in SE research. Motivated by this research gap, we follow a classification approach, similar to that used in previous SE research to map topic trends \cite{shaw2003writing,systa2012inbreeding,montesi2008software}, but with a different purpose, to measure values relevance. There are no current classification schemes for human values in SE. Therefore, we take inspiration from the social sciences.
Social scientists have been searching for the most useful way to conceptualize basic human values since the 1950s \cite{schwartz2007basic}. In 1973, Rokeach captured 36 human values and organized them into 2 categories \cite{rokeach1973nature}. In 1992, Schwartz introduced his theory of basic human values (henceforth referred to as Schwartz's Values Structure (SVS)), which recognized 58 human values categorized into 10 value categories \cite{schwartz1992universals, schwartz2005basic}. While these two value structures remain the most well recognized ways of representing values, there are at least ten other value classifications \cite{cheng2010developing}. In this paper, we use SVS, which is the most cited and most widely applied classification not only in the social sciences but also in other disciplines \cite{thew2018value, Ferrario2014}.
In SVS, Schwartz introduced 10 motivationally-distinct value categories recognized across more than 30 cultures \cite{schwartz1992universals}. Each value category has underlying distinct motivational goals (see Table \ref{tab:valuecategories_defOnly}) which relate to three fundamental needs of human existence \cite{schwartz1992universals}.
Schwartz subdivided each value category into a set of closely related values \cite{schwartz1994there, schwartz1992universals}.
These 10 value categories and 58 values are arranged in a circular motivational structure as shown in Figure \ref{fig:values}. Value categories located close to each other are complementary whereas values further apart tend to be in tension with each other. Section \ref{sec:methodology} discusses how we applied SVS in our classification study.
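The circular arrangement lends itself to a simple illustration: the distance between two categories on the circle indicates how compatible their motivational goals are. The sketch below is illustrative only; the ordering follows the usual depiction of SVS, and placing \textit{Conformity} and \textit{Tradition} on adjacent positions simplifies their shared wedge in the figure.

```python
# Minimal sketch of the circular motivational structure: value categories
# close on the circle are complementary, those across it are in tension.
# The ordering follows the usual depiction of SVS; treating Conformity and
# Tradition as adjacent positions simplifies their shared wedge.
CIRCLE = ["Self-direction", "Stimulation", "Hedonism", "Achievement",
          "Power", "Security", "Conformity", "Tradition",
          "Benevolence", "Universalism"]

def tension(cat_a, cat_b):
    """Circular distance between two categories (0 = same, 5 = opposed)."""
    i, j = CIRCLE.index(cat_a), CIRCLE.index(cat_b)
    d = abs(i - j)
    return min(d, len(CIRCLE) - d)

print(tension("Benevolence", "Universalism"))  # 1: complementary neighbours
print(tension("Self-direction", "Security"))   # 5: maximally in tension
```

A small distance signals complementary goals, while the maximum distance corresponds to the bipolar oppositions Schwartz describes.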
\begin{table}
\input{Table_ValueCat_JustDescription.tex}
\end{table}
\begin{figure*}[!htbp]
\centering
\centerline{\includegraphics[scale=0.75,angle=0]{Figures/valu.png}}
\captionsetup{margin=3cm}
\caption{Schwartz Values Structure~\cite{schwartz2006valeurs,schwartz2004evaluating} (adapted from \cite{holmes_blackmore_hawkins_wakeford_2011}). Words in black boxes are value categories, each subdivided into values.}
\label{fig:values}
\end{figure*}
\section{Methodology}
\subsection{Classification Scheme}
\label{sec:Categorization_Scheme}
In order to understand the prevalence of support for human values in software engineering publications, we categorized 1350 research publications. In this section we introduce the classification scheme selected for this study. As explained above, human values have been studied in the social sciences for decades. Although several representations of human values are available, we use Schwartz's universal model of human values as our classification scheme \cite{Schwartz2012,schwartz1994there}, the most widely accepted model for representing human values.
\begin{figure*}[!htbp]
\centering
\centerline{\includegraphics[scale=0.725,angle=0]{Figures/values}}
\captionsetup{margin=0.5cm}
\caption{Value Model Across 68 Countries Based on Schwartz~\cite{schwartz2006valeurs,schwartz2004evaluating} (adapted from \cite{ferrario2016values,holmes2011common}).}
\label{fig_values}
\end{figure*}
As Figure \ref{fig_values} depicts, the Schwartz model consists of 10 main value categories and 58 individual values. Each of the 10 categories is defined by a set of motivational goals and contains a set of individual values (Table \ref{valuecategories}). We added one further value category after the calibration in the pilot phase of the classification process, as discussed in Section \ref{subsec:Pilot-Phase}.
\section{Conclusions and Future Work}
\label{sec:conclusion}
Repeated incidents of software security and privacy violations continue to attract researchers' attention. In this paper, however, we investigated the prevalence of a broader range of human values including \textit{Trust}, \textit{Equality} and \textit{Social justice} in software engineering research. Using Schwartz Values Structure as our classification scheme, we classified 1350 recently published (2015--2018) papers in top-tier SE conferences and journals.
We conclude that only a small proportion of SE research considers human values. While \textit{Security}, as a value category, and \textit{Privacy}, as a specific value, stand out as the main focus in SE research, only a few other human values, such as \textit{Helpful}, \textit{Protecting the environment} and \textit{Social justice}, are considered. A broad range of human values remains inadequately addressed in SE research. Finally, we found that SE conferences publish more values-relevant research than SE journals.
In future work, we would like to extend this study using a machine learning approach. Manually labelled data from this study could be used for training machine learning algorithms to classify larger sets of publications with the aim to better visualize how SE research addresses human values. We also plan to utilise our manually labelled data captured from various SE contexts to develop definitions of human values that are relatively easy for practitioners to understand and implement. Finally, we plan to carry out case studies in software organizations to investigate whether SE research related to human values has actually made an impact on SE practice.
\section{Discussion}
We carried out this research to verify our hypothesis that SE research does not sufficiently consider human values. Our findings confirm this intuition. The extent to which SE research ignores human values is significant: 1105 out of the selected 1350 papers (82\%) were found not to be relevant to human values. Furthermore, out of 195 papers that do address values, 80 papers relate to \textit{Security}. This is unsurprising, but also illustrates that the lack of consideration of human values in SE is even more stark. Indeed, a majority of other human values (approximately 79\%) are not adequately addressed in SE research.
The value of \textit{Helpful}, which relates to ``preserving and enhancing the welfare of those with whom one is in frequent personal contact'' (Table~\ref{tab:valuecategories_defOnly}), was the most frequently assigned among all 58 values. This suggests that SE research is often aimed at being helpful to the SE community -- for example, by improving processes to reduce development effort or remove development obstacles, or by developing new tools and techniques to facilitate or improve certain practices or tasks. Our results indicate that only a small proportion of publications relate to individualistic value categories (\textit{Hedonism, Achievement, Stimulation and Power}) compared to group-value categories (\textit{Universalism, Conformity and Tradition}). This reflects the tensions discussed in the Schwartz Values Structure about the competing and contradicting nature of these bipolar value categories~\cite{schwartz1994there}.
It is important to note that SVS served as an appropriate yet not ideal scheme for classifying human values in SE. We discovered that SVS does not include some values commonly discussed in SE. For example, sustainability is a value that has received significant recent attention in SE, yet is not listed among the 58 SVS values. Since SVS originates in the social sciences, raters sometimes found it difficult to map certain SE values to SVS-prescribed value categories. This is likely due to the difference in meaning of values in different contexts (i.e., social sciences versus software engineering). Future work will look at how to adapt SVS to an SE context.
Without attempting to generalize, certain findings are worth mentioning here. For example, among the selected venues, ICSE has the most diverse range of values covered compared to others. In addition, there are certain values such as \textit{Wealth}, \textit{Unity with nature}, \textit{Social recognition}, \textit{Honoring of parents and elders}, \textit{Enjoying life}, and \textit{A world at peace} found in ICSE publications but not addressed in any other venue. It is difficult to attribute this to a trend in ICSE submissions or to the broad nature of ICSE. Similarly, for other venues, a broader and more comprehensive study is needed to discuss any trends.
\section{Introduction}
\label{sec:introduction}
Ignoring human values while engineering software may result in violating those values~\cite{Mougouei2018,ferrario2016values} and in subsequent user dissatisfaction. This may lead to negative socio-economic impacts such as financial loss and reputational damage. A recent example, which made news headlines, is the price gouging on airline tickets during Hurricane Irma~\cite{sablich_2017}. After a mandatory evacuation order, the cost of airline tickets rose six-fold, due to supply-and-demand pricing systems, thus disadvantaging evacuees. Arguably, this occurred because of insufficient consideration of the value of compassion for those suffering in a natural disaster. A second example is software used by Amazon to determine free shipping by zip code, which turned out to discriminate against minority neighbourhoods~\cite{gralla_2016}. Racial bias in the automatic prediction of re-offenders at parole boards in the US justice system~\cite{angwin_larson_kirchner_mattu_2016} is another example where software violates human values. Indeed, the negative impacts of ignoring values can go as far as risking human life: the tragic suicide of the British teenager Molly Russell~\cite{molly.2019} has been partially attributed to Instagram's personalisation algorithms, which flooded Molly's feed with self-harm images; following public outrage, Instagram has now banned such images.
As awareness of the human aspects of software grows, the public is increasingly demanding software that accounts for their values. Consider, for example, the accusations that Facebook allowed users' data to be exploited to influence the US elections~\cite{smith_2018}. Public demand has also motivated software vendors to take preemptive measures to avoid violating human values. Google, for instance, has pledged not to use its AI tools for surveillance that conflicts with human rights~\cite{dave_2018}.
Though such initiatives are promising, we claim that software engineering research and practice currently pays insufficient attention to the majority of human values. This may be due to the lack of adequate methodological and technical support for engineering values in software~\cite{Mougouei2018}. To provide evidence for this claim, as part of our broader approach to studying human values, we have investigated software engineering (SE) research papers to measure how much attention the SE field has given to values. In particular, we have classified software engineering publications in some of the top-tier SE venues (ICSE, FSE, TSE, and TOSEM), from 2015 to 2018, based on their relevance to different values. A paper was classified as \textit{directly relevant} to a particular value if its main research contribution addressed how to define, refine, measure, deliver or validate this value in software. A widely adopted value structure (Figure~\ref{fig:values}), based on Schwartz's theory of human values~\cite{Schwartz2012,schwartz1992universals}, was used as our classification scheme. Using this classification approach, we investigated the prevalence of human values in SE research, with three key research questions:
\begin{itemize}
\item [\textbf{(RQ1)}] To what extent are SE publications relevant to values?
\item [\textbf{(RQ2)}] Which values are commonly considered in SE publications?
\item [\textbf{(RQ3)}] How are the relevant publications distributed across venues?
\end{itemize}
The results of our study showed that: (a) only $16\%$ of publications were directly relevant to human values, referred to, henceforth, as \textit{relevant publications}; (b) for $60\%$ of human values, there were no relevant publications; (c) on average, $2$ relevant papers were found per value, while for $79\%$ of values, the number of relevant publications was $\leq 2$; and (d) $88\%$ of relevant papers were published in SE conferences rather than journals.
\section{Methodology}
\label{sec:methodology}
To investigate the prevalence of human values in SE research, we manually classified publications from top-tier SE conferences and journals based on their relevance to different values. We followed a methodology similar to that of prior classification work in SE \cite{shaw2003writing,systa2012inbreeding,montesi2008software}, which mapped trends of SE research over time in terms of topic and type of study. As with prior studies, ours was based on manual classification of paper abstracts by multiple raters. Classification based on abstracts, rather than reading the full paper, is sub-optimal but strikes a balance between accuracy and time needed for the study. All papers had multiple raters and inter-rater agreement was measured using Fleiss' Kappa \cite{landis1977measurement}. We chose to classify papers from the last four years of conferences and journals generally considered to be the top SE venues, namely, the International Conference on Software Engineering (ICSE), the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), the IEEE Transactions on Software Engineering (TSE), and the ACM Transactions on Software Engineering and Methodology (TOSEM).
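Fleiss' kappa can be computed directly from a subjects-by-categories count table. The following is a minimal sketch of the standard formula, not the script actually used in this study.

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a subjects-by-categories count table.

    table[i][j] is the number of raters assigning subject i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(table)
    n = sum(table[0])                      # raters per subject
    k = len(table[0])
    # Overall proportion of assignments falling in each category.
    p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    # Observed agreement for each subject.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in table]
    P_bar = sum(P) / N                     # mean observed agreement
    P_e = sum(pj * pj for pj in p)         # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Two raters, two categories (relevant / not relevant), perfect agreement:
print(fleiss_kappa([[2, 0], [0, 2], [2, 0]]))  # 1.0
```

With two raters per subject, as in the main study, the same formula applies; agreement above chance pushes kappa towards 1, agreement below chance makes it negative.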
When conducting such a study, there are a number of key experimental design decisions that need to be taken, including: (i) how to define relevance to human values, given the imperfect and high-level nature of values definitions in the literature; (ii) how many raters to assign to each paper, and (iii) how to resolve disagreements between raters. To make choices about these design decisions, we first carried out a pilot study before carrying out the main study. Both the pilot and main study assumed SVS as the classification scheme. In total, we employed 7 raters (5 Male, 2 Female) with varying levels of experience in SE research, ranging from PhD students to Professors, and including one rater from outside the software engineering field. Note that this is a relatively high number of raters compared to similar studies~\cite{Bertolino2018, vessey2002research}.
\subsection{Pilot Study}
\label{subsec:Pilot-Phase}
The pilot study had three steps: (i) Paper selection and allocation of papers to raters, (ii) Paper classification, and (iii) Calibration of classification decisions made by different raters.
The aim of the pilot study was not to measure relevance of papers to values; rather, we had the following objectives:
\begin{itemize}
\item To test the appropriateness of SVS as the classification scheme for SE publications
\item To develop a common understanding regarding the meaning of human values in SE contexts
\item To collect insights from raters to feed into the experimental design of the main study
\end{itemize}
\textit{(i) Paper selection and allocation of papers to raters.} We randomly selected 49 papers from ICSE 2018 as our pilot study dataset. These were equally allocated among the seven raters, with three raters per paper. Common practice is to assign two raters per paper \cite{Bertolino2018, vessey2002research}; three were assigned in the pilot to get a better understanding of how to map papers to values. ICSE was chosen as it has the broadest coverage of SE research \cite{Bertolino2018}. We chose the most recent ICSE proceedings -- 2018 at time of writing.
\textit{(ii) Paper classification.} Raters classified papers, independently, based on their title, abstract and keywords which is an approach used in similar classification studies in SE \cite{shaw2003writing, glass2002research, Bertolino2018}. Raters were instructed to decide if a paper was ``relevant'' or ``not relevant'' to human values: relevance was deliberately left ill-defined as one of the objectives of the pilot was to influence the definition of this term in the main study. For relevant papers, raters were asked to classify the papers into one value category, and then into one value within the category. Raters were not mandated to follow the hierarchical structure of SVS: that is, they could classify a paper into value X and value category Y even if X did not belong to category Y. This was to give us a way to assess, from a software engineering perspective, the appropriateness of the hierarchy in SVS.
\textit{(iii) Calibration.} After classification, all seven raters met to discuss the classification decisions. The main objective was to calibrate decisions and use this to refine the definition of values relevance. The intention was \textit{not} to decide which rater picked the correct classification.
Following the pilot study, we made a number of observations which were fed into experimental design of the main study.
\begin{itemize}[leftmargin=0.3cm]
\item \textit{Observation 1:} Raters found that almost every paper could be classified into a small number of values such as \textit{Helpfulness}, \textit{Wisdom} or \textit{Influence} because, in general, every piece of research tries to advance knowledge. Thus, an indirect argument could almost always be made why a paper is relevant to helpfulness (e.g., a paper on testing is helpful to testers), wisdom (any paper advances knowledge, thus leading to greater wisdom), or influence (e.g., a paper on an improved software process influences how software is developed). This observation illustrated the difficulty of working with vaguely defined concepts such as values, but also the importance of a better definition of relevance.
\item \textit{Decision 1:} It is beyond the scope of this paper to fully and formally define all the values; hence, it was decided in the main study to use inter-rater agreement as evidence that a value was sufficiently understood in the context of a particular paper to provide confidence in the results. The definition of relevance was, however, refined for the main study. Raters were instructed not to make indirect arguments why a paper might be relevant to a value. Instead, in the main study, classification was based on ``direct relevance'' -- a paper is defined as directly relevant to a value if its main research contribution is to define, refine, measure, deliver or validate a particular value in software development. All other papers are classified as not relevant. Thus, a paper should only be classified as directly relevant to helpfulness if the research provides software tools or techniques to encourage people to be helpful towards each other.
\item \textit{Observation 2:} Raters observed that some papers addressed values as a general concept rather than considering any specific value. An example would be a paper that presents a methodology for refining values into a software architecture. These papers should not be classified into any particular value category or value.
\item\textit{Decision 2:} To facilitate classification of such papers, we introduced a new value category in the main study, named \textit{Holistic view}. A paper classified under Holistic View relates to values generally without focusing on any specific value (Table \ref{tab:example}).
\item \textit{Observation 3:} Raters found that some papers should be classified under more than one value.
\item \textit{Decision 3:} To accommodate such papers in the main study, raters were allowed to select up to three values. This decision is different from similar studies in SE where raters were obliged to pick just one category \cite{Bertolino2018}.
\item \textit{Observation 4:} Not surprisingly, as SVS was not developed specifically for SE, there were cases where SVS was not a perfect fit. We will return to this point in Section \ref{sec:conclusion} but a key point for the main study is that some raters chose a value X and value category Y even if X does not belong to Y according to SVS. A common example was the value \textit{Privacy}, which from a SE perspective is clearly aligned with the category \textit{Security}, and yet appears in \textit{Self-direction} according to SVS (see Figure \ref{fig:values}).
\item \textit{Decision 4:} The main study maintained the decision to allow selection of values and value categories independent of the Schwartz hierarchical structure. In Section \ref{sec:result}, we present data to show the effect this had on the results.
\item \textit{Observation 5:} The pilot study gave us an opportunity to measure how long it took raters to rate papers. We found that, on average, each rater spent four minutes per abstract. Given the number of papers in the main study (1350 -- see Table \ref{tab:papercount}), assigning three raters per paper would be infeasible.
\item \textit{Decision 5:} Out of necessity, we reduced the number of raters in the main study to two. This is consistent with the number of raters in similar studies \cite{Bertolino2018, vessey2002research, glass2002research}.
\end{itemize}
\subsection{Main Study}
\label{subsec:classification-process}
Similar to the pilot study, the main study also had three phases: (i) Paper selection and allocation of papers to raters, (ii) Paper classification and (iii) Disagreement resolution. The final stage differed from the pilot study: rather than calibrating ratings to inform the experimental design, raters met to try to reach a consensus.
\textit{(i) Paper selection and allocation of papers to raters.} For the main study, we selected papers from ICSE, FSE, TSE and TOSEM over the last four years. These are the same venues used in similar paper classification studies \cite{Bertolino2018, glass2002research}. We selected all papers in TSE and TOSEM. For FSE, we used all papers from the main track, and for ICSE, we used all papers from the main track, from the Software Engineering in Practice (SEIP) track, and from the Software Engineering in Society (SEIS) track. These tracks were selected because they publish full research papers rather than shorter papers. In total, there were 1350 papers published in the chosen venues over the years 2015--2018, at time of writing. This is a high sample size compared to similar studies (e.g., 976 in Bertolino et al. \cite{Bertolino2018} and 369 in Glass \cite{glass2002research}). Table \ref{tab:papercount} shows the distribution of selected papers by venue, track and year.
\begin{table}
\input{Table_Papercount.tex}
\end{table}
The papers were randomly allocated among the seven raters, two raters per paper. Each rater received around 400 papers to classify. We manually extracted links for each of the 1350 papers from digital databases, and provided a spreadsheet containing these links, together with the values and value categories for raters to select from.
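The allocation step described above can be sketched as follows. The paper does not specify the exact balancing procedure, so this is only a minimal illustration (the rater names R1--R7 are hypothetical) of assigning two distinct raters per paper while keeping each rater's load near the reported ~400 papers:

```python
import random
from collections import Counter

def allocate(paper_ids, raters, per_paper=2, seed=0):
    """Assign `per_paper` distinct raters to each paper, choosing the
    currently least-loaded raters and breaking ties at random."""
    rng = random.Random(seed)
    load = Counter({r: 0 for r in raters})
    assignment = {}
    for pid in paper_ids:
        # sort raters by current load, with random tie-breaking
        pool = sorted(raters, key=lambda r: (load[r], rng.random()))
        assignment[pid] = pool[:per_paper]
        for r in assignment[pid]:
            load[r] += 1
    return assignment, load

# 1350 papers, seven raters, two raters per paper
assignment, load = allocate(range(1350), [f"R{i}" for i in range(1, 8)])
```

The greedy least-loaded choice guarantees that no rater ever has more than one paper more than any other, so loads end up at 385 or 386 papers per rater (2700 assignments over seven raters).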
\textit{(ii) Paper classification.}
Similar to the pilot study, raters were asked to classify papers on the basis of their title, abstract and keywords. However, the main study used a different definition of relevance, as suggested by the pilot study. Raters were asked to classify papers as directly relevant or not directly relevant, where the definition of direct relevance is as given in Section \ref{subsec:Pilot-Phase}. Papers found directly relevant to values were further classified into a value category and then into one or more specific values. Throughout the process, raters complied with the decisions made during the \textit{calibration} step of the pilot study.
\textit{(iii) Disagreement resolution.} Given the subjective nature of the classification, raters sometimes disagreed. This could arise at three levels:
(a) relevance level, where raters disagreed on whether a paper was directly relevant or not; (b) value category level, where raters disagreed on the choice of value category; and (c) value level, where raters disagreed on the choice of value.
To attempt to resolve these disagreements, raters met to discuss their views about why the paper in question was classified in a certain way. If the raters could not come to an agreement, a third rater was introduced as an arbiter. The arbiter facilitated a second round of discussion, sharing his or her own views, to facilitate a consensus. However, if the disagreement persisted, the arbiter did not force a decision.
Aligned with previous studies \cite{Bertolino2018}, we calculated inter-rater agreement using Fleiss' Kappa once attempts at resolving disagreements had taken place. The results of the Kappa measure are interpreted according to the agreement strengths introduced by Landis and Koch \cite{landis1977measurement}. We achieved \textit{almost perfect} agreement at the relevance level and the category level, with Kappa values of 0.92 and 0.87, respectively. Agreement at the value level was \textit{substantial}, with a Kappa value of 0.79. The results from the main study are further discussed in Section \ref{sec:result}.
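For reference, Fleiss' Kappa and the Landis--Koch agreement labels used above can be computed as in the following illustrative sketch (not the script used in the study):

```python
def fleiss_kappa(table):
    """table[i][j] is the number of raters assigning item i to category j;
    every row must sum to the same number of raters n (here, n = 2)."""
    N, k = len(table), len(table[0])
    n = sum(table[0])
    # overall proportion of assignments falling in each category
    p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    # mean observed per-item agreement
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in table) / N
    P_e = sum(pj * pj for pj in p)  # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

def landis_koch(kappa):
    """Strength-of-agreement labels of Landis and Koch (1977)."""
    if kappa < 0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= upper:
            return label
    return "almost perfect"
```

With the Kappa values reported above, `landis_koch(0.92)` and `landis_koch(0.87)` yield "almost perfect", and `landis_koch(0.79)` yields "substantial".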
\input{table-example}
\section{Related Work}
\label{sec:related-work}
Classification of papers has been widely adopted in the SE literature~\cite{shaw2003writing,systa2012inbreeding,montesi2008software,vessey2002research} as a way of providing insights on trends and directions in SE research. Such findings, though not conclusive, can indicate the general attitude of SE researchers as well as the priorities in SE research. Paper classification helps to highlight the gaps and the needs for further research in specific SE domains. Mary Shaw~\cite{shaw2003writing}, for instance, analyzed the abstracts of research papers submitted to and accepted at ICSE 2002 to identify different research types as well as the trends in research question types, contribution types and validation approaches. The author also studied the program committee discussions regarding the acceptance or rejection of the papers. Another example is the work by Vessey et al.~\cite{vessey2002research}: to report their findings, the authors categorized samples of SE papers published from 1995 to 1999 in six journals based on topic, method, and approach.
However, paper classification methods rely on classification schemes, which can be general or specific depending on the purpose of the classification. To classify different SE papers, Montesi and Lago~\cite{montesi2008software} presented a paper classification approach based on the call for papers of top-tier SE conferences and journals included in the Journal Citation Reports and on the instructions to authors of relevant journals and published works. Also, Ioannidis et al.~\cite{ioannidis2015meta} categorized the meta-research discipline into five main thematic fields corresponding to how to conduct, report, verify, correct and reward science. There have also been efforts to develop specific classification schemes. For instance, Wieringa et al.~\cite{wieringa2006requirements} developed a classification scheme to identify papers that belong to Requirements Engineering as a subdomain of SE. Sjoberg et al.~\cite{sjoberg2005survey} surveyed SE papers in nine journals and three conferences from 1993 to 2002 with the aim of characterizing controlled experiments in SE in terms of the topics of the experiments and their subjects, tasks, and environments.
Moreover, some paper classifications have identified gaps in SE practice. An example is the work by Stol and Fitzgerald~\cite{stol2015holistic}, where the authors observed the lack of a holistic view in SE research. The work contributed a framework for positioning a holistic set of research strategies and showed its strengths and weaknesses in relation to various research components. Also, Zelkowitz and Wallace~\cite{zelkowitz1997experimental} classified, according to a 12-model classification scheme, around 600 SE papers published over a period of three years to provide insights on the use of experimentation within SE. They identified a gap in SE research with respect to validation and experimentation. Another example is an empirical study of SE papers performed by Zannier et al.~\cite{zannier2006success} to investigate the improvement of the quantity and quality of empirical evaluations conducted within ICSE papers over time. The authors compared a random sample of papers in two periods, 1975 -- 1990 and 1991 -- 2005, and found that the quantity of empirical evaluation has grown, but the soundness of evaluation has not grown at the same pace.
Last but not least, some paper classifications have provided insights on SE venues in relation to the papers published in those venues. An example is the work by Systa et al.~\cite{systa2012inbreeding}, which investigated the turnover of program committee (PC) compositions and paper publication in six SE conferences. The work was later extended by Vasilescu et al.~\cite{vasilescu2014healthy}, who proposed a wider collection of metrics to assess the health of 11 SE conferences over a period of more than 10 years.
\subsection{The Prevalence of Values in SE Publications}
\label{subsec:values-prevalence}
To answer \textbf{(RQ1)} and \textbf{(RQ2)}, this section presents the results of the classification process described in Section~\ref{subsec:classification-process} and discusses our findings on the prevalence of human values in SE publications.
\subsubsection{Answering \textbf{(RQ1)}}
Figure~\ref{fig:relevant} shows the prevalence of human values in the classified publications. The majority of the publications (82\%, or 1105 out of 1350 papers) were classified as \textit{Not Relevant} to values; Table~\ref{tab:example} gives an example of a publication that did not directly relate to values. On the other hand, 16\% of the publications (216 papers) were found to be directly relevant to values. The remaining 2\% (29 papers) were classified as undecided because the two raters could not agree on a classification. To investigate whether there were any trends in the prevalence of values in SE venues over time, we compared the percentages of relevant publications from 2015 to 2018 (Figure~\ref{fig:supporting-ratio-year}): no significant trends were observed.
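These headline proportions can be reproduced directly from the raw counts (a simple consistency check in Python):

```python
total = 1350
counts = {"not relevant": 1105, "directly relevant": 216, "undecided": 29}

# the three outcomes partition the classified sample
assert sum(counts.values()) == total

# rounded percentages, as reported above
shares = {label: round(100 * n / total) for label, n in counts.items()}
# shares == {"not relevant": 82, "directly relevant": 16, "undecided": 2}
```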
It is worth mentioning that even though the raters agreed that 216 papers (16\% of the classified papers) were relevant to values, disagreements remained at the value category level and the value level (Section~\ref{subsec:classification-process}): out of 216 papers, agreement was reached for 195 papers at the value category level and for 115 papers at the value level.
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{Figures/relevant}
\caption{Relevance of SE publications to human values}
\label{fig:relevant}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figures/relevant-by-year}
\caption{Relevant publications per year.}
\label{fig:supporting-ratio-year}
\end{figure}
\subsubsection{Answering \textbf{(RQ2)}}
\textit{Which values are commonly considered?}
Our results showed that, on average, two relevant publications were found for each of the 58 values in Figure~\ref{fig:values}. As shown in Figure~\ref{fig:specific-values-occurrences}, however, the frequency of relevant publications varied significantly across values. Figure~\ref{fig:specific-values} shows the level of attention given to the 58 human values in SVS.
It can be seen that for the majority of the values (79\%), the number of relevant publications was $\leq 2$, while for 60\% (35 out of 58) of the values, no relevant publications were found (Figure~\ref{fig:specific-values}). Also, for some values, e.g., \textit{Enjoying life} and \textit{Honoring of parents and elders}, only one relevant publication was found across all of the studied venues from 2015 -- 2018 (Figure~\ref{fig:specific-values-occurrences}). It can also be seen in Figure~\ref{fig:specific-values} that only for 21\% (12 out of 58) of the values, e.g., \textit{Helpful} and \textit{Privacy}, was the number of relevant publications above average ($> 2$).
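The three percentages quoted for the per-value counts are mutually consistent, as the following check shows:

```python
n_values = 58
zero_count = 35          # values with no relevant publication
above_average = 12       # values with more than two relevant publications
at_most_two = n_values - above_average  # values with 0, 1 or 2 publications

assert round(100 * zero_count / n_values) == 60
assert round(100 * above_average / n_values) == 21
assert round(100 * at_most_two / n_values) == 79
```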
While we are cautious about generalizing, these findings are highly suggestive of negligible or limited attention paid by the SE research community to the majority of human values. Although finding the exact cause requires broader studies, the neglect of some values in SE publications may plausibly be attributed to the lack of practical definitions for those values~\cite{Mougouei2018}; this is particularly clear for values such as \emph{Forgiving} and \emph{Mature love}, which need to be further clarified in a SE context before they can be used by SE researchers and practitioners.
\begin{figure*}[htbp]
\centering
\hspace{2cm}\includegraphics[width=0.75\textwidth]{Figures/individual-values-alternative}
\captionsetup{margin=15ex}
\caption{The level of attention given to 58 values in the Schwartz Value Structure. Publications were classified as relevant if their main research contribution directly considered values.}
\label{fig:specific-values}
\end{figure*}
To understand which values are most commonly considered in SE research, we found (Figure~\ref{fig:specific-values-occurrences}) that the numbers of publications relevant to \emph{Helpful}, \emph{Privacy}, and \emph{Protecting the environment} were the highest among all 58 values in SVS (Figure~\ref{fig:values}). Examples of such publications are given in Table~\ref{tab:example}. With 38 relevant papers, \textit{Helpful} was the most frequently considered value. Publications that contributed software tools or techniques to encourage people to be helpful towards each other were classified by the raters as relevant to \textit{Helpful}.
\vspace{0.25cm}
Moreover, the second highest number of relevant publications was observed for \textit{Privacy} (Figure~\ref{fig:specific-values-occurrences}); these were papers that directly considered user privacy. Also, \textit{Protecting the environment}, the third most commonly found value, appeared in publications that directly considered \textit{Sustainability} and \textit{Energy efficiency} in software.
\vspace{0.5cm}
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{Figures/specific-values-occurrence}
\caption{The number of relevant publications per value}
\label{fig:specific-values-occurrences}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/value-categories-side}
\caption{Considering value categories in SE publications}
\label{fig:value-categories-side}
\end{figure}
\textit{Which value categories are commonly considered?} As explained in Section \ref{subsec:classification-process}, the raters were given the freedom to classify the publications under different value categories regardless of the SVS hierarchical structure. As a result, the raters were allowed to pick, for a publication, values and value categories that did not necessarily match in SVS. Figure~\ref{fig:value-categories-side} shows the prevalence of the publications under the different value categories specified by the raters.
Figure~\ref{fig:value-categories-side} also shows how those papers would have been classified had the raters strictly followed SVS (Figure~\ref{fig:values}): a significant difference was observed for the value categories \textit{Security} and \textit{Self-direction}. The raters classified 80 papers as relevant to \textit{Security}; had they followed SVS for classification, only 55 papers would have been classified under \textit{Security}. On the other hand, the raters classified only 6 papers as relevant to \textit{Self-direction}; based on SVS, 21 papers would have fallen under the category of \textit{Self-direction} (Figure~\ref{fig:values}).
Scrutinizing the publications classified under \textit{Security} and \textit{Self-direction} revealed an interesting finding: the raters chose \textit{Security} as the category of 12 papers classified as relevant to \textit{Privacy}, whereas based on the Schwartz Values Structure (SVS), \textit{Privacy} falls under \textit{Self-direction}. As such, those 12 papers (annotated on the graph of Figure~\ref{fig:value-categories-side}) would have been classified under \textit{Self-direction} if SVS had been strictly followed (Figure~\ref{fig:values}). Though relatively small, similar differences were also observed for other value categories such as \textit{Power}, \textit{Achievement}, \textit{Conformity}, and \textit{Hedonism}. Considering the SE background of most of the raters (Section~\ref{sec:methodology}), this raised a major question: ``do software engineers perceive values differently from social scientists?'' To reflect the view of the raters, in the rest of the paper we consistently use the categories specified by them.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figures/value-categories-proportion.png}
\caption{Relevant publications per value category}
\label{fig:value-categories-proportion}
\end{figure}
It can be observed from Figure~\ref{fig:value-categories-proportion} that 80 papers (41\% of the 195 relevant publications with an agreed value category) were classified as relevant to \textit{Security}, making \textit{Security} the most prevalent value category. This was not hard to predict, as \textit{Security} is a well-recognized quality aspect of software for which there is great demand from stakeholders. The second and third most prevalent value categories were \emph{Benevolence} and \emph{Universalism}, which constituted 20\% and 16\% of the relevant publications, respectively. On the other hand, no publications were found to be relevant to the categories \emph{Tradition}, \emph{Stimulation}, and \emph{Hedonism}. Moreover, 8\% of the relevant papers were classified under the category \textit{Holistic view}, which does not exist in SVS -- this category was introduced based on the raters' feedback from the pilot study (Section~\ref{subsec:Pilot-Phase}) to account for publications that considered values in general.
\subsection{Relevant Publications per Venue}
\label{subsec:values-distribution}
To answer \textbf{(RQ3)}, this section reports our findings on the distribution of values-relevant SE publications across SE venues. Figure~\ref{fig:venue-relevance} shows, for each venue/track, the proportion of relevant publications in 2015 -- 2018.
\textit{The proportion of relevant publications in each venue/track}. We observed (Figure~\ref{fig:venue-relevance}) that the proportion of relevant publications in the SE journals, namely TOSEM (about 5\%) and TSE (about 11\%), was lower than the proportion of relevant publications in the main tracks of ICSE (about 18\%) and FSE (about 13\%), and significantly lower than the proportion of relevant papers in the SEIP (21\%) and SEIS (about 81\%) tracks of ICSE. In particular, the proportion of values-relevant papers was significantly higher in SEIS, which is not surprising given the focus of the track.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figures/venue-values-support}
\caption{Proportion of values relevant publications in SE venues/tracks. The labels on the bars denote the number of papers in each category. }
\label{fig:venue-relevance}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Figures/venue-relevance-distribution-2}
\caption{Relevant publications per venue/track}
\label{fig:venue-relevance-distribution}
\end{figure}
\textit{The distribution of relevant publications by venue/track}. Figure \ref{fig:venue-relevance-distribution} shows the distribution of relevant publications across the studied venues/tracks. Of all 216 publications that directly considered values (relevant publications), 58\% were published in different tracks of ICSE: main track (33\%), SEIS (14\%), and SEIP (11\%). The highest prevalence of relevant publications was seen in the main tracks of ICSE (33\%) and FSE (30\%). Overall, about $88\%$ of the publications that directly considered values were published in SE conferences: ICSE (58\%) and FSE (30\%). On the other hand, SE journals, TSE (11\%) and TOSEM (1\%), constituted only 12\% of the relevant publications (Figure~\ref{fig:venue-relevance-distribution}).
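The venue-level shares quoted above add up as stated (each entry is a percentage of the 216 relevant publications):

```python
share = {"ICSE main": 33, "ICSE SEIS": 14, "ICSE SEIP": 11,
         "FSE": 30, "TSE": 11, "TOSEM": 1}

icse = sum(v for k, v in share.items() if k.startswith("ICSE"))  # all ICSE tracks
conferences = icse + share["FSE"]
journals = share["TSE"] + share["TOSEM"]
assert conferences + journals == 100
```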
\begin{figure*}[htbp]
\centering
\includegraphics[width=\linewidth]{Figures/venue-values-distribution}
\captionsetup{margin=16ex}
\caption{The distribution of publications relevant to different values by venue/track; relevant publications were found only for 23 out of 58 values in Schwartz Value Structure (Figure~\ref{fig:values}).}
\label{fig:individual-by-venue}
\end{figure*}
\textit{The distribution of relevant publications by values and venues}. Figure~\ref{fig:individual-by-venue} shows how the publications relevant to different values are distributed across different venues/tracks. We observed that only 23 out of 58 values in SVS (Figure~\ref{fig:values}) were considered. For some values, relevant publications were found across most venues/tracks; for example, publications relevant to \textit{Helpful} were found in 5 out of 6 venues/tracks. But for the majority of the considered values in Figure~\ref{fig:individual-by-venue} (15 out of 23), the number of venues/tracks that published papers relevant to those values did not exceed two. For instance, publications relevant to \emph{Social justice} and \emph{National security} were found only in the main tracks of FSE and ICSE, while publications relevant to \emph{Enjoying life}, \emph{Honoring of parents and elders}, and \emph{A world at peace} appeared only in the main track of ICSE. Publications relevant to certain values, e.g. \emph{Equality}, \emph{Social justice}, and \textit{Healthy}, were present only in conference papers but not in journals. We further observed that for the majority of values (19 of 23 values in Figure~\ref{fig:individual-by-venue}), relevant publications were found in the main track of ICSE, while publications in TOSEM considered only \textit{Privacy}.
\begin{figure}[htb]
\includegraphics[width=0.5\textwidth]{Figures/venue-value-categories}
\captionsetup{margin=4ex}
\caption{Publications relevant to different value categories across SE venues/tracks.}
\label{fig:venue-value-category}
\end{figure}
\textit{The distribution of relevant publications by value categories and venues}. Publications relevant to 7 out of 10 value categories in SVS (Figure~\ref{fig:values}) were found across different venues/tracks (Figure~\ref{fig:venue-value-category}). We also found publications relevant to the category \textit{Holistic view}, which was introduced based on the pilot study (Section~\ref{subsec:Pilot-Phase}). Publications relevant to all eight of these value categories were found in the main tracks of FSE and ICSE (Figure~\ref{fig:venue-value-category}). Publications relevant to \emph{Security} were found in all SE venues, and publications that directly considered \emph{Benevolence} and \emph{Universalism} were found across most venues/tracks. Publications relevant to \emph{Universalism} were most prevalent in the SEIS track of ICSE. However, publications in TOSEM considered only \emph{Security} and no other value categories. It was also interesting to see that, compared to other venues/tracks, the SEIS track of ICSE contained the highest proportion of publications relevant to \textit{Conformity}.
\section{Results}
\label{sec:result}
This section presents the results of the main study described in Section \ref{subsec:classification-process}. As a reminder, we investigate the following research questions:
\begin{itemize}
\item [\textbf{(RQ1)}] To what extent are SE publications relevant to values?
\item [\textbf{(RQ2)}] Which values are commonly considered in SE publications?
\item [\textbf{(RQ3)}] How are the relevant publications distributed across venues?
\end{itemize}
\input{result-01-prevalence}
\input{result-02-distribution}
\subsection{Data Availability}
The dataset that supports the findings of this study is available at \url{https://figshare.com/s/7a8c55799584d8783cd6}.
\section{Methodology}
\label{sec:methodology}
To investigate the prevalence of human values in SE research, we manually classified publications from top-tier SE conferences and journals based on their relevance to different values. We followed a methodology similar to that of prior classification work in SE \cite{shaw2003writing,systa2012inbreeding,montesi2008software}, which mapped trends of SE research over time in terms of topic and type of study. As with prior studies, ours was based on manual classification of paper abstracts by multiple raters. Classification based on abstracts, rather than reading the full paper, is sub-optimal but strikes a balance between accuracy and the time needed for the study. Each paper was classified by multiple raters, and inter-rater agreement was measured using Fleiss' Kappa \cite{landis1977measurement}. We chose to classify papers from the last four years of conferences and journals generally considered to be the top SE venues, namely, the International Conference on Software Engineering (ICSE), the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), the IEEE Transactions on Software Engineering (TSE), and the ACM Transactions on Software Engineering and Methodology (TOSEM).
When conducting such a study, there are a number of key experimental design decisions that need to be taken, including: (i) how to define relevance to human values, given the imperfect and high-level nature of values definitions in the literature; (ii) how many raters to assign to each paper, and (iii) how to resolve disagreements between raters. To inform these design decisions, we carried out a pilot study before the main study. Both the pilot and main study assumed SVS as the classification scheme. In total, we employed seven raters (5M, 2F) with varying levels of seniority and experience in SE research, ranging from PhD students to senior Professors, and including one rater from outside the software engineering field. Note that this is a relatively high number of raters compared to similar studies \cite{Bertolino2018, vessey2002research}.
\subsection{Pilot Phase}
\label{subsec:Pilot-Phase}
The pilot phase had three steps: (i) Paper selection and allocation of papers to raters, (ii) Paper classification, and (iii) Calibration of classification decisions made by different raters.
The aim of the pilot phase was not to measure relevance of papers to values; rather, we had the following objectives:
\begin{itemize}
\item To test the appropriateness of SVS as the classification scheme for SE publications
\item To develop a common understanding regarding the meaning of human values in SE contexts
\item To collect insights from raters to feed into the experimental design of the main study
\end{itemize}
\textit{(i) Paper selection and allocation of papers to raters.} We randomly selected 49 papers from ICSE 2018 as our pilot phase dataset. These were equally allocated among the seven raters, with three raters per paper. Common practice is to assign two raters per paper \cite{Bertolino2018, vessey2002research}; three were assigned in the pilot to get a better understanding of how to map papers to values. ICSE was chosen as it has the broadest coverage of SE research \cite{Bertolino2018}. We chose the most recent ICSE proceedings -- 2018 at time of writing.
\textit{(ii) Paper classification.} Raters classified papers based on their title, abstract and keywords, an approach used in similar classification studies in SE \cite{shaw2003writing, glass2002research, Bertolino2018}. Raters were instructed to decide if a paper was ``relevant'' or ``not relevant'' to human values: relevance was deliberately left ill-defined, as one of the objectives of the pilot was to influence the definition of this term in the main study. For relevant papers, raters were asked to classify the papers into one value category, and then into one value within the category. Raters were not mandated to follow the hierarchical structure of SVS: that is, they could classify a paper into value X and value category Y even if X did not belong to category Y. This was to give us a way to assess the appropriateness of the hierarchy in SVS.
Raters carried out their classification independently.
\textit{(iii) Calibration.} After classification, all seven raters met to discuss the classification decisions. The main objective was to calibrate decisions and use this to refine the definition of values relevance. The intention was \textit{not} to decide which rater picked the correct classification.
Following the pilot phase, we made a number of observations which were fed into experimental design of the main study.
\begin{enumerate}
\setlength{\itemsep}{0pt}
\item \textit{Observation 1:} Raters found that almost every paper could be classified into a small number of values such as \textit{helpfulness}, \textit{wisdom} or \textit{influence} because, in general, every piece of research tries to advance knowledge. Thus, an indirect argument could almost always be made why a paper is relevant to helpfulness (e.g., a paper on testing is helpful to testers), wisdom (any paper advances knowledge, thus leading to greater wisdom), or influence (e.g., a paper on an improved software process influences how software is developed). This observation illustrated the difficulty of working with vaguely defined concepts such as values, but also the importance of a better definition of relevance.
\textit{Decision 1:} It is beyond the scope of this paper to fully and formally define all the values; hence, it was decided in the main study to use inter-rater agreement as evidence that a value was sufficiently understood in the context of a particular paper to provide confidence in the results. The definition of relevance was, however, refined for the main study. Raters were instructed not to make indirect arguments why a paper might be relevant to a value. Instead, in the main study, classification was based on ``direct relevance'' -- a paper is defined as directly relevant to a value if its main research contribution is to define, refine, measure or validate a particular value in software development. All other papers are classified as not relevant. Thus, a paper should only be classified as directly relevant to helpfulness if the research provides software tools or techniques to encourage people to be helpful towards each other.
\end{enumerate}
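The feasibility concern behind Observation 5 and Decision 5 is simple arithmetic; the sketch below (using the pilot's four-minutes-per-abstract estimate) shows the person-hour totals implied by three versus two raters per paper.

```python
# Estimated rating effort for the main study, based on the pilot timing.
MINUTES_PER_ABSTRACT = 4      # average observed in the pilot phase
PAPERS = 1350                 # papers in the main study

def total_hours(raters_per_paper: int) -> float:
    """Total person-hours needed if each paper is read by this many raters."""
    return PAPERS * raters_per_paper * MINUTES_PER_ABSTRACT / 60

print(total_hours(3))  # three raters per paper: 270 person-hours
print(total_hours(2))  # two raters per paper: 180 person-hours
```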
\subsection{Main Study}
\label{subsec:classification-process}
Similar to the pilot study, the main study had three phases: (i) Paper selection and allocation of papers to raters, (ii) Paper classification and (iii) Disagreement resolution. The final phase differed from the pilot study: rather than calibrating ratings to inform experimental design, raters met to try to reach a consensus.
\textit{(i) Paper selection and allocation of papers to raters.} For the main study, we selected papers from ICSE, FSE, TSE and TOSEM over the last four years. These are the same venues used in similar paper classification studies \cite{Bertolino2018, glass2002research}. We selected all papers in TSE and TOSEM. For FSE, we used all papers from the main track, and for ICSE, we used all papers from the main track, the Software Engineering in Practice (SEIP) track, and the Software Engineering in Society (SEIS) track. These tracks were chosen because they publish full research papers rather than shorter papers. In total, 1350 papers were published in the chosen venues over the years 2015--2018. This is a large sample compared to similar studies (e.g., 976 papers in Bertolino et al. \cite{Bertolino2018} and 369 in Glass \cite{glass2002research}). Table \ref{tab:papercount} shows the distribution of selected papers by venue, track and year.
\begin{table}
\input{Table_Papercount.tex}
\end{table}
The papers were randomly allocated among the seven raters, with two raters per paper, so each rater received around 400 papers to classify. We manually extracted links for each of the 1350 papers from digital databases and provided raters with a spreadsheet containing these links together with the values and value categories to select from.
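A minimal sketch of such an allocation (the exact pairing procedure is not specified here, so the scheme below is illustrative): assigning a random pair of distinct raters to each paper balances the load at roughly $1350 \times 2 / 7 \approx 386$ papers per rater.

```python
import random
from collections import Counter

def allocate(papers, raters, per_paper=2, seed=0):
    """Assign `per_paper` distinct raters, chosen at random, to each paper."""
    rng = random.Random(seed)
    return {p: rng.sample(raters, per_paper) for p in papers}

papers = [f"P{i:04d}" for i in range(1350)]
raters = list("ABCDEFG")                      # seven raters
assignment = allocate(papers, raters)
load = Counter(r for pair in assignment.values() for r in pair)
print(sorted(load.values()))                  # each rater gets roughly 2700/7 ~ 386 papers
```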
\textit{(ii) Paper classification.}
Similar to the pilot phase, raters were asked to classify papers on the basis of their title, abstract and keywords. However, the main study used a different definition of relevance, as suggested by the pilot study. Raters were asked to classify papers as directly relevant or not relevant, where the definition of direct relevance is as given in Section \ref{subsec:Pilot-Phase}. Papers found directly relevant to values were further classified into a category and then to a specific value(s). Throughout the process, raters complied with the decisions made during the \textit{calibration} phase in the pilot study.
\textit{(iii) Disagreement resolution.} Given the subjective nature of the classification, raters sometimes disagreed. This could arise at three levels:
(a) relevance level, where raters disagreed on whether a paper was directly relevant or not; (b) value category level, where raters disagreed on the choice of value category; and (c) value level, where raters disagreed on the choice of value.
To attempt to resolve these disagreements, raters met to discuss why the paper in question was classified in a certain way. If the raters could not come to an agreement, a third rater was introduced as an arbiter. The arbiter led a second round of discussion, sharing his or her own views, to help reach a consensus. However, if the disagreement persisted, the arbiter did not force a decision.
Aligned with previous studies \cite{Bertolino2018}, we calculated inter-rater agreement using Fleiss' Kappa once attempts at resolving disagreements had taken place. The Kappa values are interpreted according to the agreement strengths introduced by Landis and Koch \cite{landis1977measurement}. We achieved \textit{almost perfect} agreement at the relevance and category levels, with Kappa values of 0.92 and 0.87, respectively. Agreement at the value level was \textit{substantial}, with a Kappa value of 0.79. The results from the main study are further discussed in Section \ref{sec:result}.
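For reference, Fleiss' Kappa for a fixed number of ratings per subject can be computed as below (an illustrative sketch on made-up two-rater, two-category data, not our actual ratings):

```python
def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning subject i to category j;
    every subject must receive the same total number of ratings n."""
    N = len(counts)
    n = sum(counts[0])                      # ratings per subject (here: 2)
    total = N * n
    # per-category proportions and chance agreement
    p = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    P_e = sum(pj * pj for pj in p)
    # mean per-subject observed agreement
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    return (P_bar - P_e) / (1 - P_e)

# two raters, two categories, six subjects (hypothetical data)
ratings = [[2, 0], [0, 2], [2, 0], [1, 1], [0, 2], [2, 0]]
print(round(fleiss_kappa(ratings), 3))
```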
\input{table-example}
\section{Threats to Validity}
In this section we discuss limitations of this research categorized as \textit{Internal}, \textit{External} and \textit{Construct} validity threats.
\textit{\textbf{Construct Validity}:} Choosing a classification scheme suited to the software engineering domain was one of the main challenges for this research. In the absence of an SE-specific scheme to classify human values, we selected the Schwartz Values Structure (SVS). SVS is a well-established theory for understanding human values in the social sciences. It has been successfully applied in Human-Computer Interaction (HCI) and Information and Communication Technologies (ICT) to study and explain human values~\cite{thew2018value}. Using SVS as an independent classification scheme, instead of developing our own, mitigated the risk of introducing researcher bias.
As in Glass et al. \cite{glass2002research}, lack of mutual exclusion was a challenge for our classification scheme: it was often possible to classify a paper as relating to more than one individual value. We believe this had more to do with the ill-defined nature of human values than with a limitation of the chosen classification scheme. Still, the potential threat was mitigated by using an iterative process and conducting rater training to understand and clarify relationships between values and their categories.
In some cases, the raters found that certain papers related to human values in general rather than to any particular value. Forcing such papers into a single value category would have influenced the results; to mitigate this, we added the new \emph{Holistic view} category. Some papers relating to \emph{Privacy} were categorized under \emph{Security} rather than \emph{Self-direction}: based on their understanding of \emph{Privacy} in an SE context, raters considered it more relevant to \emph{Security}. This too may have influenced the results, so we provide results for both the rater-preferred and the SVS-prescribed categories in Figure~\ref{fig:value-categories-side}. Finally, some common SE values, such as \emph{Sustainability}, do not appear in SVS at all.
SVS may therefore not be the ideal classification scheme for SE, and we expect that further research to adapt SVS to the SE context would be useful.
\textit{\textbf{Internal Validity}} threats for this study arise from the complexities of categorizing papers into the selected classification scheme. It is possible that the raters' own expertise in understanding the scheme categories and the definitions of values influenced paper classifications. This risk, however, was mitigated because each paper was randomly assigned to two raters and, in case of a disagreement, an independent arbiter was introduced to facilitate agreement.
Some disagreements (2\%, see Figure~\ref{fig:relevant}) remained even after the arbiter's intervention. In such cases we did not force consensus.
\textit{\textbf{External Validity}} threats may arise from potential limitations of our choice of publication venues and of the time period under study (2015--2018). The chosen venues are widely acknowledged as top-tier venues of SE research; however, we accept that the results may be different if other, more specialist conferences and journals had been considered.
Generalizability of results based on a subset of papers is often a concern for empirical studies. In our research, this risk was mitigated by using 1350 papers published over four years, which can be considered a good representation of trends in SE research, as suggested in \cite{Bertolino2018}. The findings of this study, however, may be biased towards ICSE and FSE, as they published more papers in the selected period than the journals (ICSE 559 and FSE 512 vs. TSE 215 and TOSEM 64).
While a detailed review of the full papers (rather than just the abstract, title and keywords) could have provided more accurate results, we adopted a procedure similar to those used in previous studies \cite{shaw2003writing,Bertolino2018}: the time required for reliable classification makes reading all 1350 papers in full infeasible.
We are interested in studying the law of the so-called exponential functional of L\'evy processes, which is defined as follows:
\[{\rm{I}}_{\xi}=\int_0^{\infty}e^{\xi_t} dt,\]
where $\xi=(\xi_t)_{t\geq0}$ is a L\'{e}vy process starting from $0$ and drifting to $-\infty$.
Recall that a L\'{e}vy process $\xi$ is a process with stationary and independent increments, and that its law is completely characterized by its L\'evy-Khintchine exponent $\Psi$, which takes the following form
\begin{equation}\label{Levy-K}
\log \mathbb{E}\left[e^{z \xi_{1}}\right]=\Psi(z)=bz +\frac{\sigma^{2}}{2}z^{2}+\int_{-\infty}^{\infty}\left(e^{zy} - 1 - zy\mathbb{I}_{\{|y|<1\}}\right)\Pi(dy), \text{ for any $z\in i\mathbb{R}$,}
\end{equation}
where $\sigma\geq0$, $b \in \mathbb{R}$ and $\Pi$ is a L\'{e}vy measure satisfying the integrability condition
$\int_{\mathbb{R}}(y^{2}\wedge 1)\Pi(dy)<\infty$. See \cite{Bertoin-96} for more information on L\'{e}vy processes.
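As a concrete illustration (our example, not taken from the text above), consider $\xi_t = bt + \sigma B_t$ plus a compound Poisson process of rate $\lambda$ with $\mathrm{Exp}(\rho)$-distributed \emph{negative} jumps. Since the jumps have finite intensity and finite mean, the compensator term can be absorbed into $b$, and the jump integral has the closed form $\int_{-\infty}^0 (e^{zy}-1)\lambda\rho e^{\rho y}\,dy = -\lambda z/(z+\rho)$. The sketch below evaluates $\Psi$ and checks $\Psi(0)=0$ and $\Psi'(0)=\mathbb{E}[\xi_1]=b-\lambda/\rho$:

```python
def Psi(z, b=0.5, sigma=1.0, lam=2.0, rho=3.0):
    """Levy-Khintchine exponent of a Brownian motion with drift b plus
    compound Poisson negative jumps (rate lam, Exp(rho) jump sizes).
    The jump integral int (e^{zy} - 1) Pi(dy) equals -lam*z/(z + rho)."""
    return b * z + 0.5 * sigma**2 * z * z - lam * z / (z + rho)

# sanity checks: Psi(0) = 0 and Psi'(0) = E[xi_1] = b - lam/rho
h = 1e-6
print(Psi(0.0))                          # 0.0
print((Psi(h) - Psi(-h)) / (2 * h))      # close to 0.5 - 2/3 = -1/6 < 0, so xi drifts to -infinity
```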
The exponential functional ${\rm{I}}_{\xi}$ has attracted the interest of many researchers over the last two decades. This is mostly due to the prominent role played by the law of ${\rm{I}}_{\xi}$ in the study of important processes, such as self-similar Markov processes, fragmentation and branching processes but also in various settings ranging from astrophysics, biology to financial and insurance mathematics, see the survey paper \cite{Bertoin-Yor-05}.
So far, two main approaches have been developed and used to derive information about the law of the exponential functional. The first one uses the fact that the Mellin transform of ${\rm{I}}_{\xi}$ is a solution to a functional equation, see \eqref{Maulik} below; it is due to Carmona et al.~\cite{Carmona-Petit-Yor-97} and has been extended by Maulik and Zwart \cite{Maulik-Zwart-06}. It is important to note that \eqref{Maulik} is useful only under the additional assumption that $\xi$ possesses some finite, positive exponential moments, since then it is defined on a strip in the complex plane. This equation can be solved for exponential functionals of negatives of subordinators and of spectrally positive L\'evy processes, yielding simple expressions for their positive and negative integer moments respectively, which, in both cases, determine the law. Recently, Kuznetsov and Pardo \cite{Kuznetsov-Pardo-11} have used some special instances of L\'evy processes, for which the solution of the functional equation can be directly guessed and verified from \eqref{Maulik}, to derive some information concerning the law of ${\rm{I}}_{\xi}$. It is worth pointing out that, in general, it is not an easy exercise to invert the Mellin (or moment) transform of $\rm{I}_{\xi}$, since a fine analysis of its asymptotic behavior is required. The Mellin transform approach relies on two difficult tasks: finding a solution of the functional equation and providing a general criterion ensuring the uniqueness of this solution. For instance, this approach does not seem to cope successfully with the whole class of spectrally negative L\'evy processes.
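As a quick numerical illustration of the moment machinery (our sketch, not part of the argument): for $\xi_t = B_t - \mu t$ with $\mu > 1/2$, the first moment identity $\mathbb{E}[{\rm{I}}_{\xi}] = 1/(\mu - 1/2)$, which follows from the functional equation of Carmona et al., can be checked by crude Monte Carlo, truncating the integral at a large $T$:

```python
import math, random

def mc_mean_I(mu, n_paths=3000, dt=0.02, T=12.0, seed=1):
    """Monte Carlo estimate of E[ int_0^infty e^{xi_t} dt ] for xi_t = B_t - mu*t.
    Brownian increments are exact on the grid; the integral is approximated
    by the trapezoid rule and truncated at T (harmless since xi_T << 0)."""
    rng = random.Random(seed)
    sd = math.sqrt(dt)
    steps = int(T / dt)
    acc = 0.0
    for _ in range(n_paths):
        xi, e_prev, total = 0.0, 1.0, 0.0
        for _ in range(steps):
            xi += rng.gauss(0.0, sd) - mu * dt
            e_next = math.exp(xi)
            total += 0.5 * (e_prev + e_next) * dt
            e_prev = e_next
        acc += total
    return acc / n_paths

mu = 2.0
print(mc_mean_I(mu), 1.0 / (mu - 0.5))   # both close to 2/3
```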
The second methodology, which has been developed recently by the second author in \cite{Patie-06c} and \cite{Patie-abs-08}, is based on the well-known relation between the law of ${\rm{I}}_{\xi}$ and the distribution of the absorption time of positive self-similar Markov processes which were introduced by Lamperti \cite{Lamperti-72} in the context of limit theorems for Markov processes. Indeed, in \cite{Patie-abs-08}, it is shown that the law of ${\rm{I}}_{\xi}$ can be expressed as an invariant function of a transient Ornstein-Uhlenbeck companion process to the self-similar Markov process. Using some potential theoretical devices, a power series and a contour integral representation of the density is provided when $\xi$ is a possibly killed spectrally negative L\'evy process.
In this paper, starting from a large class of L\'evy processes, we show that the law of ${\rm{I}}_{\xi}$ can be factorized into the product of independent exponential functionals associated with two companion L\'evy processes, namely the descending ladder height process of $\xi$ and a spectrally positive L\'evy process constructed from its ascending ladder height process. It is well-known that these two subordinators appear in the Wiener-Hopf factorization of L\'evy processes. The laws of these exponential functionals are uniquely determined either by their positive or negative integer moments. Moreover, whenever the law of any of these can be expanded in series we can in general develop the law of ${\rm{I}}_{\xi}$ in series. Thus, for example, the requirements put on the L\'{e}vy measure of $\xi$ in \cite{Kuznetsov-Pardo-11} can be relaxed to conditions only on the positive jumps (the L\'{e}vy measure on the positive half-line) of $\xi$ thus enlarging considerably the class of L\'{e}vy processes $\xi$, for which we can obtain a series expansion of the law of ${\rm{I}}_{\xi}$.
Although our main result may have a formal explanation through the Wiener-Hopf factorization combined with the functional equation \eqref{Maulik}, the proof is rather complicated and involves a careful study of some generalized Ornstein-Uhlenbeck (for short GOU) processes, different from the ones mentioned above. For this purpose, we deepen a technique used by Carmona et al.~\cite[Proposition 2.1]{Carmona-Petit-Yor-97}
and further developed in \cite{Kuznetsov-Pardo-Savov-11}, which relates the law of ${\rm{I}}_{\xi}$ to the stationary measure of a GOU process. More precisely, we show that the density function of ${\rm{I}}_{\xi}$, say $m_{\xi}$, is, under very mild conditions, the unique function satisfying the equation $\mathcal{L}m_{\xi}=0$, where $\mathcal{L}$ is an ``integrated infinitesimal'' operator, which is strictly of integral form. The latter allows for a smooth and effortless application of Mellin and Fourier transforms. We believe this method will itself attract some attention, as it removes generic difficulties related to the study of the invariant measure via the dual Markov process, such as the lack of smoothness properties of the density of the stationary measure, as well as difficulties in applying transforms, which usually requires an application of Fubini's theorem, whose hypotheses are difficult to verify when dealing with non-local operators.
Before stating our main result let us introduce some notation. First, since in our setting $\xi$ drifts to $-\infty$, it is well-known that the ascending (resp.~descending) ladder height process $H^+=(H^+(t))_{t\geq0}$ (resp.~$H^{-}=-H^{-,*}=(-H^{-,*}(t))_{t\geq0}$) is a killed (resp.~proper) subordinator. Then, we write, for any $z\in i\mathbb{R}$,
\begin{equation}\label{L-KLadder}
\phi_{+}(z)=\log \mathbb{E}\left[\exp(zH^+(1))\right] =
\delta_{+}z + \int_{(0,\infty)}({\rm e}^{zy}-1)\mu_+({\rm d} y)-k_{+}\,,
\end{equation}
where $\delta_+\geq0$ is the drift and $k_{+}>0$ is the killing rate. Similarly, with $\delta_-\geq0$, we have
\begin{equation}\label{L-KLadder1}
\phi_{-}(z)=\log \mathbb{E}\left[\exp(zH^-(1))\right]=
-\delta_{-}z -\int_{(0,\infty)}(1-{\rm e}^{-zy})\mu_-({\rm d} y)\,.
\end{equation}
We recall that the integrability condition $\int_0^{\infty} (1\wedge y)\mu_{\pm}(dy)<\infty$ holds. The Wiener-Hopf factorization then reads as follows
\begin{equation}\label{eq:wh}
\Psi(z)=-c\phi_{+}(z)\phi_{-}(z)=-\phi_{+}(z)\phi_{-}(z), \text{ for any $z\in i\mathbb{R}$,}
\end{equation}
where we have used the convention that the local times have been normalized in a way that $c=1$, see (5.3.1) in \cite{Doney}.
We avoid further discussion as we assume \eqref{eq:wh} holds with $c=1$.
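To fix the ideas, consider the simple case where $\xi_t=\sigma B_t+bt$, with $B$ a standard Brownian motion, $\sigma>0$ and $b<0$, so that $\Psi(z)=\frac{\sigma^{2}}{2}z^{2}+bz$. Both ladder height processes are then driftlike, that is $\mu_{\pm}\equiv0$, $\phi_{+}(z)=\delta_{+}z-k_{+}$ and $\phi_{-}(z)=-\delta_{-}z$, and \eqref{eq:wh} reduces to the polynomial identity
\[\frac{\sigma^{2}}{2}z^{2}+bz=-(\delta_{+}z-k_{+})(-\delta_{-}z)=\delta_{+}\delta_{-}z^{2}-\delta_{-}k_{+}z,\]
that is $\delta_{+}\delta_{-}=\sigma^{2}/2$ and $\delta_{-}k_{+}=-b$, the individual values of $\delta_{\pm}$ and $k_{+}$ being then fixed by the normalization $c=1$ of the local times.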
\begin{definition}\label{Definition}
We denote by $\mathcal{ P}$ the set of positive measures on $\mathbb{R}_{+}$ which admit a non-increasing density.
\end{definition}
Before we formulate the main result of our paper we introduce the two main hypotheses:
\begin{enumerate}
\item[($\mathcal{H}_1$)] Assume further that $\mathbb{E}\left[\xi_{1}\right]>-\infty$ and that one of the following conditions holds:
\begin{description}
\item[E${}_+$] $\mu_+ \in \mathcal{ P}$ and there exists $z_+>0$ such that, for all $z$ with $\Re(z) \in (0,z_+)$, we have $|\Psi(z)|<\infty$.
\item [P${}_+$] $\Pi_+ \in \mathcal{ P}$.
\end{description}
\item [($\mathcal{H}_2$)] Assume that
\begin{description}
\item[P${}_{\pm}$] $\mu_+ \in \mathcal{ P},\: k_+>0$ and $\mu_- \in \mathcal{ P}.$
\end{description}
\end{enumerate}
Then the following result holds.
\begin{theorem}\label{MainTheorem}
Assume that $\xi$ is a L\'evy process that drifts to $-\infty$ with characteristics of the ladder height processes as in \eqref{L-KLadder} and \eqref{L-KLadder1}. Let either ($\mathcal{H}_1$) or ($\mathcal{H}_2$) hold. Then,
in both cases, there exists a spectrally positive L\'evy process $Y$ with a negative mean whose Laplace exponent $\psi_+$ takes the form \begin{equation}
\label{Phi}\psi_+(-s)=-s\phi_{+}(-s)=\delta_{+}s^{2}+k_{+}s+s^{2}\int_{0}^{\infty}e^{-sy}\mu_+(y,\infty)dy,\: s\geq0,
\end{equation}
and the following factorization holds
\begin{equation}\label{MainAssertion}
{\rm{I}}_{\xi}\stackrel{d}={\rm{I}}_{H^{-}}\times {\rm{I}}_{Y}
\end{equation}
where $\stackrel{d}=$ stands for the identity in law and $\times$ for the product of independent random variables.
\end{theorem}
\begin{remark}
We mention that the case when the mean is $-\infty$, together with other problems, will be treated in a subsequent study, as it demands techniques different in spirit from those of this paper.
\end{remark}
The result in Theorem \ref{MainTheorem} can be looked at from another perspective. Consider two subordinators with L\'{e}vy measures $\mu_{\pm}$ such that $\mu_+ \in \mathcal{ P},\: k_+>0$ and $\mu_- \in \mathcal{ P}$. Then, according to Vigon's theory of philanthropy, see \cite{Vigon}, we can construct a process $\xi$ whose ladder height processes have exponents as in \eqref{L-KLadder} and \eqref{L-KLadder1}, and hence $\xi$ satisfies the conditions of Theorem \ref{MainTheorem}. Therefore we are able to synthesize examples starting from the building blocks, i.e.~the ladder height processes. We state this as a separate result.
\begin{corollary}\label{CorollaryMain}
Let $\mu_{\pm}$ be the L\'{e}vy measures of two subordinators with $\mu_+ \in \mathcal{ P},\: k_+>0$ and $\mu_- \in \mathcal{ P}$. Then there exists a L\'{e}vy process which drifts to $-\infty$ whose ascending and descending ladder height processes have the Laplace exponents \eqref{L-KLadder} and \eqref{L-KLadder1}, respectively. Moreover, all the claims of Theorem \ref{MainTheorem} hold and in particular we have the factorization \eqref{MainAssertion}.
\end{corollary}
We postpone the proof of the Theorem to Section \ref{proof:mt}. In the next section, we provide some interesting consequences whose proofs will be given in Section \ref{proof_cons}.
Finally, in Section \ref{O-U}, we state and prove several results concerning some generalized Ornstein-Uhlenbeck processes. They will be useful for our main proof and since they have an independent interest, we present them in a separate section.
\section{Some consequences of Theorem \ref{MainTheorem}}\label{SecMain}
Theorem \ref{MainTheorem} allows for a multitude of applications. In this section we discuss only a small part of them, but we wish to note that almost all results that have been obtained in the literature under restrictions on all jumps of $\xi$ can now be strengthened by imposing conditions only on the positive jumps.
This is due to \eqref{MainAssertion} and the fact that, on the right-hand side of the identity, the law of each exponential functional is determined by its integral moments, which admit some simple expressions, see Propositions \ref{prop:ms} and \ref{prop:msp} below.
The factorization allows us to derive some interesting distributional properties. For instance, we can show that the random variable ${\rm{I}}_{\xi}$ is unimodal for a large class of L\'evy processes. We recall that a positive random variable (or its distribution function) is said to be unimodal if there exists $a \in \mathbb{R}^+$, the mode, such that its distribution function $F(x)$ and the function $1-F(x)$ are convex on $(0, a)$ and $(a,+\infty)$, respectively. It can be easily shown, see e.g.~\cite{Rivero-05}, that the random variable ${\rm{I}}_{Y}$, as defined in Theorem \ref{MainTheorem}, is self-decomposable and thus, in particular, unimodal. It is natural to ask whether this property is preserved or not for ${\rm{I}}_{\xi}$. We emphasize that this is not necessarily true even if ${\rm{I}}_{H^-}$ is unimodal itself. Cuculescu and Theodorescu \cite{Cuculescu} provide a criterion for a positive random variable to be multiplicative strongly unimodal (for short MSU), that is, its product with any independent unimodal random variable remains unimodal. More precisely, they show that either the random variable has a unique mode at $0$ and its independent product with any random variable also has a unique mode at $0$, or the law of the positive random variable is absolutely continuous with a density $m$ having the property that the mapping $x\rightarrow \log m(e^x)$ is concave on $\mathbb{R}$. We also point out that it is easily seen that the MSU
property remains unchanged under rescaling and power transformations and we refer to the recent paper \cite{Simon} for more information about this class of random variables.
We proceed by recalling that as a general result on the exponential functional Bertoin et al.~\cite[Theorem 3.9]{Bertoin-Lindner-07} have shown that the law of ${\rm{I}}_{\xi}$ is absolutely continuous with a density which we denote throughout by $m_{\xi}$.
In what follows, we show that when $\xi$ is a spectrally negative L\'evy process (i.e.~$\Pi(dy)\mathbb{I}_{\{y>0\}}\equiv 0$ in \eqref{Levy-K} and $\xi$ is not the negative of a subordinator), we recover the power series representation obtained by the second author in \cite{Patie-abs-08} for the density of ${\rm{I}}_{\xi}$. We are now ready to state the first consequence of our main factorization.
\begin{corollary}\label{Corollary1}
Let $\xi$ be a spectrally negative L\'evy process with a negative mean.
\begin{enumerate}
\item Then, we have the following factorization
\begin{equation}\label{SpecNeg}
{\rm{I}}_{\xi}\stackrel{d}={\rm{I}}_{H^{-}} \times G^{-1}_{\gamma},
\end{equation}
where $G_{\gamma}$ is a Gamma random variable whose parameter $\gamma>0$ satisfies the relation $\Psi(\gamma)=0$. Consequently, if ${\rm{I}}_{H^-}$ is unimodal then ${\rm{I}}_{\xi}$ is unimodal.
\item The density function of ${\rm{I}}_{\xi}$ has the form
\begin{equation}\label{SpecNeg1}
m_{\xi}(x)=\frac{x^{-\gamma-1}}{\Gamma(\gamma)}\int_{0}^{\infty}e^{-y/x}y^{\gamma}m_{H^-}(y)dy, \: x>0,
\end{equation}
where $\Gamma$ stands for the Gamma function. In particular, we have
\[\lim_{x\rightarrow \infty }x^{\gamma+1}m_{\xi}(x) = \frac{\mathbb{E}[{\rm{I}}_{H^-}^{\gamma}]}{\Gamma(\gamma)}. \]
\item Moreover, for any $1/x<\lim_{s\rightarrow \infty}\frac{\Psi(s)}{s}$,
\begin{eqnarray}
m_{{\xi}}(x)
&=&\frac{\mathbb{E}[{\rm{I}}_{H^-}^{\gamma}]}{\Gamma(\gamma)\Gamma(\gamma+1)}x^{-\gamma-1}\sum_{n=0}^{\infty}(-1)^n \frac{\Gamma(n+\gamma+1)}{\prod_{k=1}^{n}\Psi(k+\gamma)}x^{-n}.
\end{eqnarray}
\item Finally, for any $\beta\geq\gamma+1$, the mapping $x\mapsto x^{-\beta}m_{\xi}(x^{-1})$ is completely monotone on $\mathbb{R}^+$, and, consequently, the law of the random variable ${\rm{I}}^{-1}_{\xi}$ is infinitely divisible with a decreasing density whenever $\gamma \leq 1$.
\end{enumerate}
\end{corollary}
\begin{remark}
\begin{enumerate}
\item From \cite[Corollary VII.5]{Bertoin-96} we get that
\begin{equation*}
\lim_{s\rightarrow \infty} \frac{\Psi(s)}{s}= \begin{cases}
b-\int_{-1}^0 y \Pi(dy) & \textrm{ if } \sigma=0 \textrm{ and } \int_{-\infty}^0 (1 \wedge y)\Pi(dy)<\infty,\\
+\infty & \textrm{ otherwise.}
\end{cases}
\end{equation*}
Since we excluded the degenerate cases, we easily check that $b-\int_{-1}^0 y\Pi(dy)>0$.
\item We point out that in \cite{Patie-abs-08}, it is proved that the density extends to a function of a complex variable which is analytical on the entire complex plane cut along the negative real axis and admits a power series representation for all $x>0$.
\end{enumerate}
\end{remark}
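Before proceeding, let us illustrate Corollary \ref{Corollary1} with the simplest example. Take $\xi_t=\sigma B_t+bt$, with $\sigma>0$ and $b<0$, a Brownian motion with negative drift, so that $\gamma=-2b/\sigma^{2}$. The descending ladder height process is then a pure drift subordinator and ${\rm{I}}_{H^-}$ degenerates to a positive constant. Hence \eqref{SpecNeg} states that ${\rm{I}}_{\xi}$ is, up to a positive constant, distributed as the reciprocal of a Gamma random variable of parameter $\gamma$, in agreement with the celebrated identity of Dufresne
\[\int_{0}^{\infty}e^{\sigma B_{s}+bs}ds\stackrel{d}=\frac{2}{\sigma^{2}}\,G^{-1}_{-2b/\sigma^{2}}.\]
In this case ${\rm{I}}_{H^-}$, being deterministic, is trivially unimodal, and so is ${\rm{I}}_{\xi}$.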
To illustrate the results above, we consider $\Psi(s)=-(s-\gamma)\phi_{-}(s)$, $s>0$, with $\gamma>0$, where, for any $\alpha \in (0,1)$,
\begin{eqnarray} \label{eq:dpa}
-\phi_{-}(s)&=& s\frac{\Gamma(\alpha(s-1)+1)}{\Gamma(\alpha s+1)}\\
&=&\int_0^{\infty}(1-e^{-sy})\frac{(1-\alpha)e^{y/\alpha}}{\alpha \Gamma(\alpha+1)(e^{y/\alpha}-1)^{2-\alpha}}dy=\int_0^{\infty}(1-e^{-sy})\pi_{\alpha}(y)dy \nonumber
\end{eqnarray}
is the Laplace exponent of a subordinator. Observing that the density $\pi_{\alpha}(y)$ of the L\'evy measure of $\phi_{-}$ is decreasing, we readily check that $\Psi$ is the Laplace exponent of a spectrally negative L\'evy process. Next, using the identity ${\rm{I}}_{H^-}\stackrel{(d)}{=}{G_1}^{\alpha}$, see e.g.~\cite{Patie-aff}, we get
\[{\rm{I}}_{\xi}\stackrel{(d)}{=}{G_1}^{\alpha} \times G^{-1}_{\gamma}\]
which, after some easy computations, yields, for any $x>0$,
\begin{eqnarray}
m_{{\xi}}(x)
&=&\frac{x^{-\gamma-1}}{\Gamma(\gamma)\Gamma(\gamma+1)}\sum_{n=0}^{\infty}\Gamma(\alpha( n+\gamma)+1)\frac{(-x)^{-n}}{n!}\\
&=& \frac{ \Gamma(\alpha \gamma +1) x^{-\gamma-1}}{\Gamma(\gamma)\Gamma(\gamma+1)} {}_1F_0((\alpha,\alpha \gamma+1); -x^{-1}),
\end{eqnarray}
where ${}_1F_0$ stands for the so-called Wright hypergeometric function, see e.g.~\cite[Section 12.1]{Braaksma-64}. Finally, since ${G_1}^{\alpha}$ is unimodal, we deduce that ${\rm{I}}_{\xi}$ is unimodal. Actually, we have a stronger result in this case since ${\rm{I}}_{\xi}$ is itself MSU being the product of two independent MSU random variables, showing in particular that the mapping $x\mapsto {}_1F_0((\alpha,\alpha \gamma+1);e^{x})$ is log-concave on $\mathbb{R}$ for any $\alpha \in (0,1)$ and $\gamma>0$.
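The identity ${\rm{I}}_{H^-}\stackrel{(d)}{=}{G_1}^{\alpha}$ used above can also be checked directly from \eqref{eq:dpa} by means of the classical formula for the entire moments of the exponential functional of a subordinator, see e.g.~\cite{Carmona-Petit-Yor-97}. Indeed, by telescoping,
\[\mathbb{E}\left[{\rm{I}}_{H^-}^{n}\right]=\frac{n!}{\prod_{k=1}^{n}(-\phi_{-}(k))}=n!\prod_{k=1}^{n}\frac{\Gamma(\alpha k+1)}{k\,\Gamma(\alpha(k-1)+1)}=\Gamma(\alpha n+1)=\mathbb{E}\left[G_1^{\alpha n}\right],\quad n=1,2,\ldots,\]
and, since $\alpha\in(0,1)$, the law of ${G_1}^{\alpha}$ is determined by its entire moments.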
We now turn to the second application as an illustration of the situation ${\bf{P}}_{+}$ of Theorem \ref{MainTheorem}. We would like to emphasize that in this case we do not, in general, require the existence of positive exponential moments. We are not aware of other general examples that work without such a restriction, as \eqref{Maulik} is always crucially used and it is of real help only once it is satisfied on a strip.
\begin{corollary}\label{Corollary2}
Let $\xi$ be a L\'{e}vy process with $-\infty<\mathbb{E}[\xi_{1}]<0$ and $\sigma^{2}>0$. Moreover, assume that
\[\Pi(dy)\mathbb{I}_{\{y>0\}}=c \lambda e^{-\lambda y}dy,\]
where $c,\lambda>0$. Then, we have, for any $s>-\lambda$,
\begin{eqnarray*}
\psi_+(-s)
&=&\delta_+ s^2+k_+s + c_- \frac{s^{2}}{\lambda+s},
\end{eqnarray*}
where $c_-=c/\phi_-(\lambda)$ and $\delta_+>0$. Consequently, the self-decomposable random variable ${\rm{I}}_{Y}$ admits the following factorization
\begin{equation}\label{eq:hy}
{\rm{I}}_{Y}\stackrel{d}=\delta_+ G^{-1}_{\theta_2} \times B^{-1}(\theta_1,\lambda-\theta_1),
\end{equation}
where $0<\theta_1<\lambda<\theta_2$ are the two positive roots of the equation $\psi_+(s)=0$ and $B$ stands for a Beta random variable. Then, assuming that $\theta_2-\theta_1$ is not an integer, we have, for any $1/x<\lim_{s\rightarrow \infty} |\phi_-(s)|$,
\begin{eqnarray*}
m_{\xi}(x)&=&\frac{k_+\Gamma(\lambda+1)x^{-1}}{ \Gamma(\theta_1+1)\Gamma(\theta_2+1)}\left(\sum_{i=1}^2\frac{\mathbb{E}\left[{\rm{I}}_{H^-}^{\theta_i}\right]}{\Gamma(\theta_i+1)}x^{-\theta_i}\mathcal{I}_{\phi_-,i}(\theta_i+1;-x^{-1})\right),
\end{eqnarray*}
where
\begin{eqnarray}
\mathcal{I}_{\phi_-,i}(\theta_i+1;x) &=&\sum_{n=0}^{\infty}a_{n}(\phi_-,\theta_i)\frac{x^{n}}{n!}
\end{eqnarray}
and $a_n(\phi_-,\theta_i)= \prod_{\stackrel{j=1}{j\neq i}}^2\frac{\Gamma(\theta_j-\theta_i-n)}{\Gamma(\lambda-\theta_i-n)}\frac{\Gamma(n+\theta_i+1)}{\prod_{k=1}^{n}\phi_-(k+\theta_i)},\: i=1,2$.
\end{corollary}
\begin{remark}
The assumption $\sigma^{2}>0$, as well as the restriction on $\theta_2-\theta_1$, have been made in order to avoid dealing with several different cases, but they can both be easily removed. Removing the latter will affect the series expansion of $m_{\xi}$ above. The computation is easy but lengthy and we leave it out.
\end{remark}
\begin{remark}
The methodology and results we present here can also be extended to the case when the L\'{e}vy measure $\Pi(dy)\mathbb{I}_{\{y>0\}}$ is a mixture of exponentials as in \cite{Gai-Kou-11} and \cite{Kuznetsov-Pardo-11} but we note that here we have no restrictions on the negative jumps whatsoever.
\end{remark}
We now provide an example of Theorem \ref{MainTheorem} in the situation ${\bf{P}_{\pm}}$.
\begin{corollary}\label{Corollary3}
For any $\alpha \in (0,1)$, let us set
\begin{eqnarray}
\Psi(z)= \frac{\alpha z \Gamma(\alpha (-z+1)+1)}{(1-z)\Gamma(-\alpha z+1)} \phi_+(z), \: z \in i\mathbb{R},
\end{eqnarray}
where $\phi_+$ is as in \eqref{L-KLadder} with $\mu_+ \in \mathcal{P}, k_+>0$. Then $\Psi$ is the Laplace exponent of a L\'evy process $\xi$ which drifts to $-\infty$. Moreover, the density of ${\rm{I}}_{\xi}$ admits the following representation
\begin{eqnarray} \label{eq:dis}
m_{{\xi}}(x)
&=&\frac{x^{-1/\alpha}}{\alpha}\int_0^{\infty}g_{\alpha}\left(\left(y/x\right)^{1/\alpha}\right)m_{Y}(y)y^{1/\alpha-1}dy,\: x>0,
\end{eqnarray}
where $g_{\alpha}$ is the density of a positive $\alpha$-stable random variable. Furthermore, if $\lim_{s\to \infty}s^{\alpha-1}\phi_+(-s)=0$, then for all $x>0$,
\begin{eqnarray}
m_{{\xi}}(x)
&=&\frac{k_+}{\alpha}\sum_{n=1}^{\infty} \frac{\prod_{k=1}^{n}\phi_+(-k)}{\Gamma(-\alpha n) n!}x^{n}.
\end{eqnarray}
Finally, the positive random variable ${\rm{I}}_{H^-}$ is MSU if and only if $\alpha\leq 1/2$. Hence ${\rm{I}}_{\xi}$ is unimodal for any $\alpha\leq 1/2$.
\end{corollary}
\begin{remark}
The fact that ${\rm{I}}_{H^-}$ is MSU if and only if $\alpha\leq 1/2$ is a consequence of the main result of \cite{Simon}.
\end{remark}
\begin{remark}
Note that this is a very special example of the approach of building the L\'{e}vy process from $\phi_{\pm}$ when $\mu_{\pm}\in \mathcal{P}$. One could construct many examples in this way, which allows for interesting applications in mathematical finance and insurance, see e.g.~\cite{Patie-09-cras}.
\end{remark}
As a specific instance of the previous result, we may consider the case when \[\phi_+(-s)=-\frac{ \Gamma(\alpha' s+1)}{\Gamma(\alpha'(s+1)+1)},\: s\geq0,\] with $\alpha' \in (0,1)$. We easily obtain from the identity \eqref{eq:msn} below that
\[\mathbb{E}\left[{\rm{I}}_{Y}^{-m}\right] = \frac{\Gamma(\alpha'm+1-\alpha')}{\Gamma(1-\alpha')}, \: m=1,2,\ldots, \]
that is ${{\rm{I}}_{Y}}\stackrel{d}= G^{-\alpha'}_{1-\alpha'}$. Hence, as the product of independent MSU random variables, ${\rm{I}}_{\xi}$ is MSU for any $\alpha' \in (0,1)$ and $\alpha\leq 1/2$. Moreover, using the asymptotic behavior of the ratio of gamma functions given in \eqref{eq:ag} below, we deduce that for any $\alpha' \in (0,1-\alpha)$ we have
\begin{eqnarray}
m_{{\xi}}(x)
&=&\frac{1}{\Gamma(1-\alpha')\alpha}\sum_{n=1}^{\infty} \frac{\Gamma(\alpha' n+1)}{\Gamma(-\alpha n) n!}(-1)^{n}x^{n},
\end{eqnarray}
which is valid for any $x>0$.
We end this section by describing another interesting factorization of exponential functionals. Indeed, assuming that $\mu_-\in \mathcal{P}$, it is shown in \cite[Theorem 1]{Patie-aff} that there exists a spectrally positive L\'evy process $\overline{Y}=(\overline{Y}_t)_{t\geq0}$ with a negative mean and Laplace exponent given by $\overline{\psi}_{+}(-s)=-s\phi_-(s+1),\: s>0,$ such that the following factorization of the exponential law
\begin{equation} \label{eq:dy}
{\rm{I}}_{H^-}\times {\rm{I}}^{-1}_{\overline{Y}}\stackrel{d}= G_1
\end{equation}
holds. Hence, combining \eqref{eq:dy} with \eqref{MainAssertion}, we obtain that
\[{\rm{I}}_{\xi}\times {\rm{I}}^{-1}_{\overline{Y}}\stackrel{d}= G_1 \times {\rm{I}}_{Y}.\]
Consequently, we deduce from \cite[Theorem 51.6]{Sato-99} the following.
\begin{corollary}\label{Corollary33}
If in one of the settings of Theorem \ref{MainTheorem}, we assume further that $\mu_-\in \mathcal{P}$, then the density of the random variable ${\rm{I}}_{\xi}\times {\rm{I}}^{-1}_{\overline{Y}}$, where ${\rm{I}}_{\overline{Y}}$ is taken as defined in \eqref{eq:dy}, is a mixture of exponential distributions and in particular it is infinitely divisible and non-increasing on $\mathbb{R}^+$.
\end{corollary}
Considering as above that ${\rm{I}}_{H^-}\stackrel{(d)}{=}{G^{\alpha}_1}$ in Corollaries \ref{Corollary1} and \ref{Corollary2}, we deduce from \cite[Section 3.2]{Patie-aff} that
the random variable $S^{-\alpha}_{\alpha}\times {\rm{I}}_{\xi}$ is a mixture of exponential distributions, where $S_{\alpha}$ is a positive stable law of index $\alpha$.
\section{Some results on generalized Ornstein-Uhlenbeck processes}\label{O-U}
The results we present here will be central in the development of the proof of our main theorem. However, they also have some interesting implications in the study of generalized Ornstein-Uhlenbeck processes (for short GOU), and for this reason we state and prove them in a separate section.
We recall that for a given L\'evy process $\xi$ the GOU process $U^{\xi}$ is defined, for any $t\geq0,\,x\geq0$, by
\begin{equation}\label{Ornstein-U}
U^{\xi}_t(x)=xe^{\xi_{t}}+e^{\xi_{t}}\int_{0}^{t}e^{-\xi_{s}}ds.
\end{equation}
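We mention, although we shall not use it, that $U^{\xi}(x)$ may equivalently be viewed as the solution, started at $x$, of a stochastic differential equation of the form
\[dU_{t}=dt+U_{t-}\,d\eta_{t},\]
where $\eta$ is the L\'evy process such that $e^{\xi}$ is its Dol\'eans-Dade exponential, which is the guise under which GOU processes often appear in the literature.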
This family of positive strong Markov processes has been intensively studied by Carmona et al.~\cite{Carmona-Petit-Yor-97} and we refer to \cite{Patie-08a} for some more recent studies and references. The connection with our current problem is explained as follows. From the identity in law $(\xi_{t}-\xi_{(t-s)-})_{0\leq s\leq t}\stackrel{d}=(\xi_{s})_{0\leq s\leq t}$, we easily deduce that, for any fixed $t\geq0$,
\begin{equation*}
U^{\xi}_t(x)\stackrel{d}=xe^{\xi_{t}}+\int_{0}^{t}e^{\xi_{s}}ds.
\end{equation*}
Thus, if $\lim_{t\to \infty}\xi_t=-\infty$ a.s., we have that
\begin{equation*}
U^{\xi}_{\infty}(x)\stackrel{d}={\rm{I}}_{\xi}
\end{equation*}
and hence the law of ${\rm{I}}_{\xi}$ is the unique stationary measure of $U^{\xi}$, see \cite[Proposition 2.1]{Carmona-Petit-Yor-97}.
In the sequel we use the standard notation $C_b(\mathbb{R})$ (resp.~$C_b(\mathbb{R}_+)$) to denote the set of bounded and continuous functions on $\mathbb{R}$ (resp.~on $\mathbb{R}_+$). Furthermore, we set $\mathcal{V}'=C^{2}_{b}(\overline{\mathbb{R}})$, where $C^{2}_{b}(\overline{\mathbb{R}})$ is the set of twice continuously differentiable bounded functions which, together with their first two derivatives, are continuous on $\overline{\mathbb{R}}=[-\infty,\infty]$. Then, we recall that, see e.g.~\cite{Carmona-Petit-Yor-97} for the special case when $\xi$ is the sum of a Brownian motion and an independent L\'{e}vy process with bounded variation and finite exponential moments and \cite{Kuznetsov-Pardo-Savov-11} for the general case, the infinitesimal generator $L^{U^{\xi}}$ of $U^{\xi}$ takes the form
\begin{align}\label{InfGen}
&L^{U^{\xi}}f(x)=L^{\xi}f_e(\ln{x})+f'(x), \: x>0,
\end{align}
whenever $\mathbb{E}[|\xi_{1}|]<\infty$ and $f_e(x)=f(e^{x})\in Dom(L^\xi)$, where $L^{\xi}$ stands for the infinitesimal generator of the L\'{e}vy process $\xi$, considered in the sense of It\^o and Neveu (see
\cite[p.~628-630]{Loeve}). Recall that in this sense $\mathcal{V}'\subset Dom(L^\xi)$ and hence $\mathcal{V}=\{f:\overline{\mathbb{R}}_+\mapsto\overline{\mathbb{R}} | f_e\in \mathcal{V}'\}\subset Dom(L^{U^{\xi}})$.
In what follows we often appeal to the quantities, defined for $x>0$, by
\begin{equation}\label{PiTail}
\overline{\Pi}(x):=\int_{|y|>x}\Pi(dy);\,\,\overline{\Pi}_{\pm}(x):=\int_{y>x}\Pi_{\pm}(dy),
\end{equation}
\begin{equation}\label{DoublePiTail}
\overline{\overline{\Pi}}(x):=\int_{y>x}\overline{\Pi}(y)dy;\,\,\overline{\overline{\Pi}}_{\pm}(x):=\int_{y>x}\overline{\Pi}_{\pm}(y)dy,
\end{equation}
where $\Pi_{+}(dy)=\Pi(dy)1_{\{y>0\}}$ and $\Pi_{-}(dy)=\Pi(-dy)1_{\{y>0\}}$.
Note that the quantities in \eqref{DoublePiTail} are finite when $\mathbb{E}\left[|\xi_{1}|\right]<\infty$.
Moreover, when $\mathbb{E}[|\xi_{1}|]<\infty$, \eqref{Levy-K} can be rewritten, for all $z\in\mathbb{C}$ for which it is well defined, as follows
\begin{equation}\label{psi}
\Psi(z)=\mathbb{E}\left[\xi_{1}\right]z+\frac{\sigma^{2}}{2}z^{2}+z^{2}\int_{0}^{\infty}e^{zy}\overline{\overline{\Pi}}_{+}(y)dy+z^{2}\int_{0}^{\infty}e^{-zy}\overline{\overline{\Pi}}_{-}(y)dy.
\end{equation}
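Indeed, since $\mathbb{E}[|\xi_{1}|]<\infty$, the compensation in \eqref{Levy-K} may be carried out on the whole real line, that is
\[\Psi(z)=\mathbb{E}\left[\xi_{1}\right]z+\frac{\sigma^{2}}{2}z^{2}+\int_{\mathbb{R}}(e^{zy}-1-zy)\Pi(dy),\]
and, writing, for $y>0$, $e^{zy}-1-zy=z\int_{0}^{y}(e^{zv}-1)dv$ and applying the Fubini theorem twice, we obtain
\[\int_{0}^{\infty}(e^{zy}-1-zy)\Pi_{+}(dy)=z\int_{0}^{\infty}(e^{zv}-1)\overline{\Pi}_{+}(v)dv=z^{2}\int_{0}^{\infty}e^{zv}\overline{\overline{\Pi}}_{+}(v)dv,\]
the term corresponding to the negative jumps being treated in the same manner.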
For the proof of our main theorem we need to study the stationary measure of $U^{\xi}$ and in particular $L^{U^{\xi}}$ in detail. To this end, we introduce the following functional space
\begin{equation}\label{K}
\nonumber \mathcal{K}=\Big\{f:\overline{\mathbb{R}}_+\mapsto\overline{\mathbb{R}}|\,f_e\in\mathcal{V}';\,\lim_{x\to -\infty}\Big(|f'_e(x)|+|f''_{e}(x)|\Big)=0;\,\int_{\mathbb{R}} \Big(|f'_e(x)|+|f''_e(x)|\Big)dx<\infty\Big\}
\end{equation}
where $f_e(x)=f(e^{x})$.
\begin{proposition}\label{Prop1}
Let $U^{\xi}$ be a GOU process with $\mathbb{E}[|\xi_{1}|]<\infty$. Then $\mathcal{K}\subset Dom(L^{U^{\xi}})$. Moreover, for any $f\in \mathcal{K}$, we have, for all $x>0$,
\begin{equation}\label{InfGen1}
L^{U^{\xi}}f(x)=\frac{g(x)}{x}+\mathbb{E}[\xi_{1}]g(x)+\frac{\sigma^{2}}{2}xg'(x)+\int_{x}^{\infty}g'(y)\overline{\overline{\Pi}}_{+}\Big(\ln{\frac{y}{x}}\Big)dy+\int_{0}^{x}g'(y)\overline{\overline{\Pi}}_{-}\Big(\ln{\frac{x}{y}}\Big)dy,
\end{equation}
where $g(x)=xf'(x)$. Finally, for any function $h$ such that $\int_{0}^{\infty}(y^{-1}\wedge 1)|h(y)|dy<\infty$ and $f\in\mathcal{K}$ we have
\begin{equation}\label{InfGen2}
(L^{U^{\xi}}f,h)=(g',\mathcal{L}h),
\end{equation}
where $(f_1,f_2)=\int_0^{\infty}f_1(x)f_2(x)dx$ and
\begin{equation}\label{InfGen3}
\mathcal{L}h(x)=\frac{\sigma^{2}}{2}xh(x)+\int_{x}^{\infty}\left(\frac{1}{y}+\mathbb{E}[\xi_{1}]\right) h(y)dy
+\int_{x}^{\infty}\overline{\overline{\Pi}}_{-}\Big(\ln{\frac{y}{x}}\Big)h(y)dy+
\int_{0}^{x}\overline{\overline{\Pi}}_{+}\Big(\ln{\frac{x}{y}}\Big)h(y)dy.
\end{equation}
\end{proposition}
\begin{remark} There are certain advantages in using the linear operator $\mathcal{L}$ instead of the generator of the dual GOU process. Its integral form allows for minimal conditions on the integrability of $|h|$ and requires no smoothness assumptions on $h$. Moreover, if $h$ is positive, Laplace and Mellin transforms can easily be applied to $\mathcal{L}h(x)$ since the justification of the Fubini theorem is straightforward.
\end{remark}
\begin{proof}
Let $f\in \mathcal{K}$. Then, by the very definition of $\mathcal{K}$, we have that $f_e\in\mathcal{V}'$ and from \eqref{InfGen} we get that $\mathcal{K}\subset Dom(L^{U^{\xi}})$. Next, \eqref{InfGen1} can be found in \cite{Kuznetsov-Pardo-Savov-11} but can equivalently be recovered from \eqref{InfGen} by simple computations using the expression for $L^{\xi}$, which can be found in \cite[p.~24]{Bertoin-96}.
To get \eqref{InfGen2} and \eqref{InfGen3}, we recall that $g(x)=xf'(x)=f'_e(\ln x)$ and use \eqref{InfGen1} combined with a formal application of the Fubini theorem to write
\begin{eqnarray}\label{eqn:Referee}
\nonumber(L^{U^{\xi}} f, h)&=&\int_{0}^{\infty} \frac{g(y)}{y}h(y)dy+\frac{\sigma^{2}}{2}\int_{0}^{\infty} y g'(y)h(y)dy+\mathbb{E}[\xi_{1}]\int_{0}^{\infty} g(y)h(y)dy+\\
\nonumber & & \int_{0}^{\infty}
\int_{0}^{y}g'(v)\overline{\overline{\Pi}}_{-}\big(\ln{\frac{y}{v}}\big)dv h(y)dy+\int_{0}^{\infty}\int_{y}^{\infty}g'(v)\overline{\overline{\Pi}}_{+}\big(\ln{\frac{v}{y}}\big)dvh(y)dy \\
\nonumber &=&
\int_{0}^{\infty} g'(v)\int_{v}^{\infty}\frac{h(y)}{y}dydv+\mathbb{E}[\xi_{1}]\int_{0}^{\infty} g'(v)\int_{v}^{\infty}h(y)dydv+\frac{\sigma^{2}}{2}\int_{0}^{\infty} vg'(v)h(v)dv+\\
\nonumber & & \int_{0}^{\infty} g'(v)\int_{v}^{\infty}\overline{\overline{\Pi}}_{-}\big(\ln{\frac{y}{v}}\big)h(y)dydv+\int_{0}^{\infty} g'(v)\int_{0}^{v}\overline{\overline{\Pi}}_{+}\big(\ln{\frac{v}{y}}\big)h(y)dydv\\ &=&
(g',\mathcal{L}h).
\end{eqnarray}
To justify the use of the Fubini theorem, note that $f\in\mathcal{K}$ implies that $\lim_{x\to 0}g(x)=\lim_{x\to 0}f'_e(\ln x)=0$, $g(x)=\int_{0}^{x}g'(v)dv$ and
\begin{align}\label{PropertiesG}
\nonumber &\int_{0}^{\infty}|g'(v)|dv= \int_{\mathbb{R}}|f''_e(y)|dy\leq C(g)<\infty,\\
&|g(x)|+x|g'(x)|= |f'_e(\ln x)|+|f''_e(\ln x)| \leq C(g)<\infty,
\end{align}
where $C(g)>0$. Note that \eqref{PropertiesG} and the integrability of $(1\wedge y^{-1})|h(y)|$ imply that
\[\int_{0}^{\infty} \Big|\frac{g(y)}{y}\Big|h(y)dy\leq \int_{0}^{\infty} \int_{0}^{y}|g'(v)|dv y^{-1}|h(y)|dy\leq C(g) \int_{0}^{\infty} y^{-1}|h(y)|dy<\infty,\]
and so the Fubini theorem applies to the first term in \eqref{eqn:Referee}. The second term in \eqref{eqn:Referee} remains unchanged, whereas for the third one the same computation applies with the factor $y^{-1}$ absent. From \eqref{PropertiesG} and the fact that $\overline{\overline{\Pi}}_{+}(1)+\overline{\overline{\Pi}}_{-}(1)<\infty$, since $\mathbb{E}[|\xi_{1}|]<\infty$, we note that, for the other two terms, we have, with the constant $C(g)>0$ in \eqref{PropertiesG},
\begin{eqnarray*}\label{Bound1}
\int_{0}^{x}|g'(v)|\overline{\overline{\Pi}}_{-}\Big(\ln{\frac{x}{v}}\Big)dv&=&\int_{0}^{\infty} |xe^{-w}g'(xe^{-w})|\overline{\overline{\Pi}}_{-}(w)dw \\
&\leq&\overline{\overline{\Pi}}_{-}(1)\int_{0}^{\infty}|g'(v)|dv+C(g)\int_{0}^{1}\overline{\overline{\Pi}}_{-}(w)dw<\infty\\
\int_{x}^{\infty}|g'(v)|\overline{\overline{\Pi}}_{+}\Big(\ln{\frac{v}{x}}\Big)dv&=&\int_{0}^{\infty} |xe^{w}g'(xe^{w})|\overline{\overline{\Pi}}_{+}(w)dw \\
&\leq& \overline{\overline{\Pi}}_{+}(1)\int_{0}^{\infty}|g'(v)|dv+ C(g) \int_{0}^{1}\overline{\overline{\Pi}}_{+}(w)dw<\infty.
\end{eqnarray*}
Therefore the Fubini theorem applies, which completes the proof of Proposition \ref{Prop1}.
\end{proof}
The next result is known and can be found in \cite{Kuznetsov-Pardo-Savov-11}, but we include it and sketch its proof for the sake of completeness and for further discussion.
\begin{theorem}\label{O-UTheorem1}
Let $U^{\xi}$ be a GOU process with $-\infty<\mathbb{E}[\xi_{1}]<0$. Then $U^{\xi}$ has a unique stationary distribution which is absolutely continuous with density $m$ satisfying
\begin{equation}\label{IntegralEqn}
\mathcal{L}m(x)=0 \text{ for a.e. $x>0$}.
\end{equation}
\end{theorem}
\begin{remark}
Note that, due to the discussion at the beginning of this section, $m=m_{\xi}$, i.e.~it equals the density of the law of ${\rm{I}}_{\xi}$. Therefore all the information we gathered about $m_{\xi}$ in Section \ref{SecMain} is valid here for the density of the stationary measure of $U^{\xi}$, i.e.~$m$.
\end{remark}
\begin{remark}
Equation \eqref{IntegralEqn} can be very useful. In this instance it is far easier to study than an equation coming from the dual process, which is the standard object when stationary distributions are discussed. It does not presuppose any smoothness of $m$ but only its existence. Moreover, as noted above, \eqref{IntegralEqn} is amenable to various transforms, and difficult issues, such as interchanging integrals using the Fubini theorem, are effortlessly overcome.
\end{remark}
\begin{remark} It is also interesting to explore other cases when a similar equation to \eqref{IntegralEqn} can be obtained. It seems the approach is fairly general but requires special examples to reveal its full potential. For example, if $L$ is an infinitesimal generator, $\mathcal{N}$ is a differential operator, $\mathcal{L}$ is an integral operator and it is possible for all $f\in C_{0}^{\infty}(\mathbb{R}_{+})$, i.e. infinitely differentiable functions with compact support, and a stationary density $u$ to write
\[(Lf,u)=(\mathcal{N}f,\mathcal{L}u)=0\]
then we can solve the equation in the sense of Schwartz to obtain
\[\tilde{\mathcal{N}}\mathcal{L}u=0,\]
where $\tilde{\mathcal{N}}$ is the dual of $\mathcal{N}$. If we show that necessarily for probability densities $\mathcal{L}u=0$, then we can use $\mathcal{L}$ to study stationarity.
\end{remark}
\begin{proof}
From \eqref{InfGen2} and the fact that $m$ is the stationary density we get, for all $g(x)=xf'(x)$, with $f\in C_{0}^{\infty}(\mathbb{R}_{+})\subset \mathcal{K}$,
\[(g',\mathcal{L}m)=0.\]
Then from the Schwartz theory of distributions we get $\mathcal{L}m(x)=C\ln{x}+D$ a.e. Integrating \eqref{InfGen3} and the right-hand side of the latter identity from $1$ to $z$, multiplying the resulting identity by $z^{-1}$, subsequently letting $z\rightarrow\infty$ and using the fact that $m$ is a probability density, we can show that necessarily $C=D=0$. The latter requires some effort, but it is mainly technical.
\end{proof}
\begin{theorem}\label{O-UTheorem}
Let $\overline{m}$ be a probability density function such that $\int_{0}^{\infty}\overline{m}(y)y^{-1}dy<\infty$ and \eqref{IntegralEqn} holds for $\overline{m}$. Then
\begin{equation}\label{uniqueness}
m(x)=\overline{m}(x)\text{ a.e.,}
\end{equation}
where $m$ is the density of the stationary measure of $U^{\xi}$.
\end{theorem}
\begin{remark} This result is very important in our studies. The fact that we have uniqueness within a large class of probability measures allows us, by checking that \eqref{IntegralEqn} holds, to pin down the density of the stationary measure of $U^{\xi}$, which is of course the density of ${\rm{I}}_{\xi}$. The requirement that $\int_{0}^{\infty}\overline{m}(y)y^{-1}dy<\infty$ is in fact no restriction whatsoever, since the existence of a first negative moment of ${\rm{I}}_{\xi}$ is known from the literature, see \cite{Bertoin-Yor-02-b}.
\end{remark}
\begin{remark} It is also well known that if $L^{\hat{U}}$ is the generator of the dual Markov process then $L^{\hat{U}}\overline{m}=0$ does not necessarily have a unique solution when $L^{\hat{U}}$ is a non-local operator. Moreover, one needs assumptions on the smoothness of $\overline{m}$ so as to apply $L^{\hat{U}}$. Using $\mathcal{L}$ circumvents these problems.
\end{remark}
\begin{proof}
Let $(P_{t})_{t\geq 0}$ be the semigroup of the GOU $U^{\xi}$, that is, for any $f\in C_b(\overline{\mathbb{R}}_+)$,
\[P_tf(x)= \mathbb{E}\left[f\left(U^{\xi}_t(x)\right)\right],\: x\geq 0,\,t\geq0.\]
If \eqref{IntegralEqn} holds for some probability density $\overline{m}$ then \eqref{InfGen2} is valid, i.e. for all $f\in \mathcal{K}$,
\[(L^{U^{\xi}}f,\overline{m})=(g',\mathcal{L}\overline{m})=0.\]
Assume for a moment that
\begin{equation}\label{Invariant set}
P_s\mathcal{K}\subset \mathcal{K}, \text{ for all $s>0$},
\end{equation}
and that there exists a constant $C(f,\xi)>0$ such that, for all $s\leq t$,
\begin{equation}\label{Bound}
\left|L^{U^{\xi}}P_{s}f(x)\right|\leq C(f,\xi)(x^{-1}\wedge 1).
\end{equation}
Then, integrating against $\overline{m}(x)dx$ the standard equation
\[P_{t}f(x)=f(x)+\int_{0}^{t}L^{U^{\xi}}P_{s}f(x)ds,\]
we get, for all $f\in\mathcal{K}$,
\[\int_{0}^{\infty}P_{t}f(x)\overline{m}(x)dx=\int_{0}^{\infty}f(x)\overline{m}(x)dx.\]
Since $C_{0}^{\infty}(\mathbb{R}_{+})\subset \mathcal{K}$ and $C_{0}^{\infty}(\mathbb{R}_{+})$ is separating for $C_{0}(\mathbb{R}_{+})$, the last identity shows that $\overline{m}$ is a density of a stationary measure. Thus by uniqueness of the stationary measure we conclude \eqref{uniqueness}. Let us prove \eqref{Invariant set} and \eqref{Bound}. For $f\in \mathcal{K}$ write
\[g_s(x):=P_{s}f(x)=\mathbb{E}\left[ f\left(U^{\xi}_{s}(x)\right)\right]=\mathbb{E} \left[f\left(xe^{\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right].\]
Put $\tilde{g}_{s}(x)=g_{s}(e^x)=(g_{s})_e(x)$.
Note that since $f\in \mathcal{K}$ and $0<e^{x+\xi_{s}}\leq e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv$ we have the following bound
\begin{equation}\label{NewBound}\left|e^{x+\xi_{s}}f'\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right|+\left|e^{2(x+\xi_{s})}f''\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right|\leq C(f)\end{equation}
which holds uniformly in $x\in \mathbb{R}$ and $s\geq 0$.
In view of \eqref{NewBound} the dominated convergence theorem gives
\begin{align}\label{New1}
\nonumber &\tilde{g}'_{s}(x)=\mathbb{E}\left[ e^{x+\xi_{s}}f'\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right],\\
\nonumber &\tilde{g}''_{s}(x)=
\mathbb{E}\left[ e^{x+\xi_{s}}f'\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right]+\mathbb{E}\left[ e^{2(x+\xi_{s})}f''\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right],\\
&\max\{|\tilde{g}'_{s}(x)|,|\tilde{g}''_{s}(x)|\}\leq C(f).
\end{align}
Clearly then, from \eqref{NewBound}, \eqref{New1}, the dominated convergence theorem and the fact that $f\in \mathcal{K}$ (which implies the existence of $\lim_{x\to\infty}f''_{e}(x)=b$), we have
\begin{align*}
& \lim_{x\to\infty}\tilde{g}''_{s}(x)=\mathbb{E}\left[ \lim_{x\to\infty}\left(e^{x+\xi_{s}}f'\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)+ e^{2(x+\xi_{s})}f''\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right)\right]=
b.
\end{align*}
Similarly, we show that $\lim_{x\to\infty}\tilde{g}'_{s}(x)=\lim_{x\to\infty}f'_e(x)$ and trivially $\lim_{x\to\pm\infty}\tilde{g}_{s}(x)=\lim_{x\to\pm\infty} f_e(x)$.
Finally using \eqref{NewBound}, \eqref{New1}, $f\in\mathcal{K}$, the dominated convergence theorem and the fact that for all $s>0$ almost surely $\int_{0}^{s}e^{\xi_{v}}dv>0$, we conclude that
\[ \lim_{x\to -\infty}\left(|\tilde{g}'_{s}(x)|+|\tilde{g}''_{s}(x)|\right)\leq
2\mathbb{E}\left[\lim_{x\to-\infty}\left|e^{x+\xi_{s}}f'\left(e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right|+\left|e^{2(x+\xi_{s})}f''\left( e^{x+\xi_{s}}+\int_{0}^{s}e^{\xi_{v}}dv\right)\right|\right]=0,\]
where the expectation vanishes because, almost surely, the arguments of $f'$ and $f''$ converge to $\int_{0}^{s}e^{\xi_{v}}dv>0$ while the exponential prefactors tend to zero. Together with the limits above, this confirms that $\tilde{g}_s \in \mathcal{V}'$ and proves that
\[\lim_{x\to -\infty}|\tilde{g}'_{s}(x)|+|\tilde{g}''_{s}(x)|=0.\]
Finally since $f\in \mathcal{K}$ and \eqref{NewBound}, we check that
\begin{align*}
&\int_{0}^{\infty} |\tilde{g}'_{s}(y)|dy\leq \mathbb{E}\left[\int_{\int_{0}^{s}e^{\xi_{v}}dv}^{\infty}|f'(u)|du\right]\leq\int_{0}^{\infty}|f'(u)|du=\int_{\mathbb{R}}|f'_e(u)|du<C(f),
\end{align*}
and
\begin{eqnarray*}
\int_{0}^{\infty} |\tilde{g}''_{s}(y)|dy&\leq& \mathbb{E}\left[\int_{\int_{0}^{s}e^{\xi_{v}}dv}^{\infty}\left(u-\int_{0}^{s}e^{\xi_{v}}dv\right)|f''(u)|du\right]\leq
\int_{0}^{\infty}u|f''(u)|du \\&\leq& 2\int_{\mathbb{R}_+}\left(|f'_e(\ln x)|+|f''_e(\ln x)|\right)\frac{dx}{x}=2\int_{\mathbb{R}}(|f'_e(y)|+|f''_e(y)|)dy<C(f),
\end{eqnarray*}
where $C(f)$ is chosen to be the largest constant in all the inequalities above and we have used the trivial inequality $u^{2}|f''(u)|\leq |f'_e(\ln u)|+|f''_e(\ln u)|$.
Thus, using all the information above, we conclude that $g_s=P_s f\in \mathcal{K}$ and \eqref{Invariant set} holds. Next we consider \eqref{Bound}, keeping in mind that all the estimates on $\tilde{g}_s$ used to show that $g_{s}\in\mathcal{K}$ are uniform in $s$ and $x$. We use \eqref{InfGen1} with $g(x)=xg'_{s}(x)=\tilde{g}'_s(\ln x)$ and the bounds on $\tilde{g}_{s}$ and its derivatives to get
\begin{align*}
&\Big|\frac{g(x)}{x}+\mathbb{E}[\xi_{1}]g(x)+\frac{\sigma^{2}}{2}xg'(x)\Big|\leq C(f)x^{-1}+C(f)|\mathbb{E}[\xi_{1}]|+C(f)\frac{\sigma^{2}}{2}
\leq C(f,\sigma,\mathbb{E}[\xi_{1}])(1\wedge x^{-1}).
\end{align*}
Moreover, as in the proof of Proposition \ref{Prop1}, we can estimate
\begin{align*}
&\Big|\int_{0}^{x}g'(v)\overline{\overline{\Pi}}_{-}\big(\ln{\frac{x}{v}}\big)dv\Big|+\Big|\int_{x}^{\infty}g'(v)\overline{\overline{\Pi}}_{+}\big(\ln{\frac{v}{x}}\big)dv\Big|\leq\\
& \left(\overline{\overline{\Pi}}_{-}(1)+\overline{\overline{\Pi}}_{+}(1)\right)\int_{0}^{\infty}|g'(s)|ds+C(f)\left( \int_{0}^{1}\overline{\overline{\Pi}}_{-}(s)ds+\int_{0}^{1}\overline{\overline{\Pi}}_{+}(s)ds\right)= \\
&\left(\overline{\overline{\Pi}}_{-}(1)+\overline{\overline{\Pi}}_{+}(1)\right)\int_{-\infty}^{\infty}|\tilde{g}''_{s}(y)|dy+C(f)\left( \int_{0}^{1}\overline{\overline{\Pi}}_{-}(s)ds+\int_{0}^{1}\overline{\overline{\Pi}}_{+}(s)ds\right) <C
\end{align*}
and therefore \eqref{Bound} holds since
\[L^{U^{\xi}}g_{s}(x)=\frac{g(x)}{x}+\mathbb{E}[\xi_{1}]g(x)+\frac{\sigma^{2}}{2}xg'(x)+\int_{0}^{x}g'(v)\overline{\overline{\Pi}}_{-}\big(\ln{\frac{x}{v}}\big)dv+\int_{x}^{\infty}g'(v)\overline{\overline{\Pi}}_{+}\big(\ln{\frac{v}{x}}\big)dv.\]
This concludes the proof.
\end{proof}
\begin{theorem}\label{Lemma1}
Let $(\xi^{(n)})_{n\geq1}$ be a sequence of L\'{e}vy processes with negative means such that
\[ \lim_{n \to \infty }\xi^{(n)}\stackrel{d}= \xi,\]
where $\xi$ is a L\'{e}vy process with $\mathbb{E}[\xi_{1}]<0$. Moreover, if, for each $n\geq 1$, $m^{(n)}$ stands for the stationary measure of the GOU process $U^{^{\xi^{(n)}}}$ defined, for any $t\geq 0,\,x\geq0$, by
\[ U^{^{\xi^{(n)}}}_{t}=xe^{\xi^{(n)}_{t}}+e^{\xi^{(n)}_{t}}\int_{0}^{t}e^{-\xi^{(n)}_{s}}ds\]
and the sequence $(m^{(n)})_{n\geq 1}$ is tight then $(m^{(n)})_{n\geq 1}$ converges weakly to $m^{(0)}$, which is the unique stationary measure of the process $U^{\xi}$, i.e.
\begin{equation}\label{L1-1}
\lim_{n\to\infty}m^{{(n)}}\stackrel{w}=m^{(0)}.
\end{equation}
\end{theorem}
\begin{proof}
Without loss of generality we assume, using the Skorohod-Dudley theorem, see Theorem 3.30 in Chapter 3 of \cite{Kallenberg}, that the convergence $\xi^{(n)}\rightarrow \xi$ holds a.s. in the Skorohod space $\mathcal{D}((0,\infty))$. Due to the stationarity of $m^{(n)}$, for each $t>0$, we have, for any $f\in C_b(\overline{\mathbb{R}}_+)$,
\[\left(f,m^{(n)}\right)=\left(P^{\left(n\right)}_{t}f,m^{\left(n\right)}\right)=\left(P^{\left(n\right)}_{t}f-P_{t}f,m^{\left(n\right)}\right)+\left(P_{t}f,m^{\left(n\right)}\right),\]
where $P^{\left(n\right)}_{t}$ and $P_{t}$ are the semigroups of $U^{\xi^{\left(n\right)}}_{t}$ and $U^{\xi}_{t}$.
For any $x>0$,
\begin{eqnarray}\label{ToProve2}
\left|\left(P^{\left(n\right)}_{t}f-P_{t}f,m^{\left(n\right)}\right)\right|&\leq & 2||f||_{\infty}m^{\left(n\right)}\left(x,\infty\right)+\sup_{y \leq x}\big|P^{\left(n\right)}_{t}f\left(y\right)-P_{t}f\left(y\right)\big| \nonumber \\
&\leq&
2\big|\big|f\big|\big|_{\infty}m^{\left(n\right)}\left(x,\infty\right)+\mathbb{E}\left[\sup_{y \leq x}\left|f\left(U^{\xi^{(n)}}_{t}\left(y\right)\right)-f\left(U^{\xi}_{t}\left(y\right)\right)\right|\right].
\end{eqnarray}
Taking into account that $(m^{(n)})_{n\geq1}$ is tight we may fix $\delta>0$ and find $x>0$ big enough such that
\[\sup_{n\geq 1}m^{(n)}(x,\infty)<\delta.\]
Also, since $f\in C_b(\overline{\mathbb{R}}_+)$, $f$ is uniformly continuous on $\mathbb{R}_{+}$. Therefore, to show that
\[\lim_{n\to\infty}\mathbb{E}\left[\sup_{y \leq x}\left|f\left(U^{\xi^{(n)}}_{t}(y)\right)-f\left(U^{\xi}_{t}(y)\right)\right|\right]=0,\]
due to the dominated convergence theorem all we need to show is that
\begin{equation}\label{ToProve1}\lim_{n\to\infty}\sup_{y\leq x}\left|U^{\xi^{\left(n\right)}}_{t}(y)-U^{\xi}_{t}(y)\right|=0\quad\text{a.s.}\end{equation}
From the definition of $U^{\xi^{\left(n\right)}}$ and $U^{\xi}$, we obtain that, for $y\leq x$,
\begin{align*}
&\left|U^{\xi^{\left(n\right)}}_{t}(y)-U^{\xi}_{t}(y)\right|\leq x\left|e^{\xi^{(n)}_{t}}
-e^{\xi_{t}}\right|+\left|e^{\xi^{(n)}_{t}}-e^{\xi_{t}}\right|\int_{0}^{t}e^{-\xi^{(n)}_{s}}ds+e^{\xi_{t}}\left|\int_{0}^{t}\left(e^{-\xi^{(n)}_{s}}-e^{-\xi_{s}}\right)ds\right|.
\end{align*}
Since $\xi^{(n)}\stackrel{a.s.}\rightarrow \xi$ in the Skorohod topology and
\[\mathbb{P}\left(\left\{\exists n\geq1:\: \xi^{(n)}_t-\xi^{(n)}_{t-}>0\right\} \cap \left\{ \xi_{t}-\xi_{t-}>0\right\}\right)=0\]
the first term on the right-hand side of the last expression converges a.s. to zero as $n\rightarrow\infty$. The a.s. convergence in the Skorohod space implies the existence of changes of times $(\lambda_{n})_{n\geq 1}$ such that, for each $n\geq 1$, $\lambda_{n}(0)=0$, $\lambda_{n}(t)=t$, the mapping $s \mapsto \lambda_{n}(s)$ is increasing and continuous on $[0,t]$, and
\begin{equation}\label{Skorohod3}\lim_{n\to\infty}\sup_{s\leq t}|\lambda_{n}(s)-s|=\lim_{n\to\infty}\sup_{s\leq t}\left|\lambda^{-1}_{n}(s)-s\right|=0
\end{equation}
\begin{equation}\label{Skorohod4}\lim_{n\to\infty}\sup_{s\leq t}\left|\xi^{(n)}_{\lambda_{n}(s)}-\xi_{s}\right|=\lim_{n\to\infty}\sup_{s\leq t}\left|\xi^{(n)}_{s}-\xi_{\lambda^{-1}_{n}(s)}\right|=0.
\end{equation}
Hence,
\[\Big|\int_{0}^{t}\left(e^{-\xi^{(n)}_{s}}-e^{-\xi_{s}}\right)ds\Big|\leq \Big|\int_{0}^{t}\left(e^{-\xi^{(n)}_{s}}-e^{-\xi_{\lambda^{-1}_{n}(s)}}\right)ds\Big|+\Big|\int_{0}^{t}\left(e^{-\xi_{\lambda^{-1}_{n}(s)}}-e^{-\xi_{s}}\right)ds\Big|.\]
The first term on the right-hand side clearly goes to zero due to \eqref{Skorohod4} whereas \eqref{Skorohod3} implies that the second term goes to zero a.s. due to the dominated convergence theorem and the fact that pathwise, for $s\leq t$,
\[\limsup_{n\to\infty}\left|e^{-\xi_{\lambda^{-1}_{n}(s)}}-e^{-\xi_{s}}\right|>0\]
only on the set of jumps of $\xi$ and this set has a zero Lebesgue measure.
Thus we conclude that
\[\lim_{n\to\infty}e^{\xi_{t}}\left|\int_{0}^{t}\left(e^{-\xi^{(n)}_{s}}-e^{-\xi_{s}}\right)ds\right|=0.\]
Similarly we observe that
\begin{equation}\label{3}
\lim_{n\to\infty}\left|e^{\xi^{(n)}_{t}}-e^{\xi_{t}}\right|\int_{0}^{t}e^{-\xi^{(n)}_{s}}ds\leq \lim_{n\to\infty}t\left|e^{\xi^{(n)}_{t}}-e^{\xi_{t}}\right|e^{\sup_{s\leq t}(-\xi^{(n)}_{s})}=0,
\end{equation}
where the last identity follows from
\[\sup_{s\leq t}\left|\xi^{(n)}_{s}\right|\leq \sup_{s\leq t}\left|\xi_{\lambda^{-1}_{n}(s)}\right|+\sup_{s\leq t}\left|\xi^{(n)}_{s}-\xi_{\lambda^{-1}_{n}(s)}\right|=\sup_{s\leq t}\left|\xi_{s}\right|+\sup_{s\leq t}\left|\xi^{(n)}_{s}-\xi_{\lambda^{-1}_{n}(s)}\right|\]
and an application of \eqref{Skorohod4}.
Therefore, \eqref{ToProve1} holds and
\[\lim_{n\to\infty}\sup_{y \leq x}\left|f\left(U^{\xi^{(n)}}_{t}(y)\right)-f\left(U^{\xi}_{t}(y)\right)\right|=0.\]
The dominated convergence theorem then easily gives that the right-hand side of \eqref{ToProve2} goes to zero and hence
\[\limsup_{n\to\infty}\left|\left(P^{(n)}_{t}f-P_{t}f,m^{(n)}\right)\right|\leq 2||f||_{\infty}\sup_{n\geq 1}m^{(n)}(x,\infty)\leq 2||f||_{\infty}\delta.\]
As $\delta>0$ is arbitrary, we deduce that
\[\lim_{n\to\infty}\left|\left(P^{(n)}_{t}f-P_{t}f,m^{(n)}\right)\right|=0.\]
Since $(m^{(n)})_{n\geq 1}$ is tight we choose a subsequence $(m^{(n_{k})})_{k\geq 1}$ such that $\lim_{k\to\infty}m^{(n_{k})}\stackrel{d}=\nu$ with $\nu$ a probability measure. Then, for each $t\geq 0$,
\[(f,\nu)=\lim_{k\to\infty}\left(f,m^{(n_{k})}\right)=\lim_{k\to\infty}\left(P^{(n_{k})}_{t}f,m^{(n_{k})}\right)=\lim_{k\to\infty}\left(P_{t}f,m^{(n_{k})}\right)=\left(P_{t}f,\nu\right).\]
Therefore $\nu$ is a stationary measure for $U^{\xi}$. But since $m^{(0)}$ is the unique stationary measure of $U^{\xi}$, we get $\nu=m^{(0)}$. As this holds for every subsequential limit of the tight sequence $(m^{(n)})_{n\geq1}$, we conclude that
\[\lim_{n\to\infty}m^{(n)}{\stackrel{w}{=}} m^{(0)},\]
which proves \eqref{L1-1}.
\end{proof}
\section{Proof of Theorem \ref{MainTheorem}} \label{proof:mt}
We start the proof by collecting some useful properties in two trivial lemmas. The first one discusses the properties of $\Psi$.
\begin{lemma}{\cite[Theorem 25.17]{Sato-99}}\label{PrelimLemma1}
The function $\Psi$, defined in \eqref{psi}, is always well-defined on $i\mathbb{R}$. Moreover, $\Psi$ is analytic on the strip $\{z\in\mathbb{C};\,-a_-<\Re(z)<a_+\}$, where $a_-,a_+>0$ if and only if $\mathbb{E}\left[e^{(-a_-+\epsilon)\xi_{1}}\right]<\infty$ and $\mathbb{E}\left[e^{(a_+-\epsilon)\xi_{1}}\right]<\infty$ for all $0<\epsilon<a_-\wedge a_+$.
\end{lemma}
The second lemma concerns the properties of $\phi_{\pm}$ and is easily obtained from Lemma \ref{PrelimLemma1} and \eqref{eq:wh}, together with analytical extension and the fact that subordinators have all negative exponential moments.
\begin{lemma}\label{PrelimLemma2}
Let $\xi$ be a L\'{e}vy process with $\mathbb{E}[\xi_{1}]<\infty$. Then $\phi_{+}$ is always analytic on the strip $\{z\in\mathbb{C};\: \Re(z)<0\}$ and is well-defined on $i\mathbb{R}$. Moreover $\phi_{+}$ is analytic on $\{z\in\mathbb{C}; \:\Re(z)<a_+\}$, for $a_+\geq 0$, if and only if $\mathbb{E}\left[e^{(a_+-\epsilon)\xi_{1}}\right]<\infty$, for some $\epsilon>0$. Similarly $\phi_{-}$ is always analytic on the strip $\{z\in\mathbb{C}; \:\Re(z)>0\}$ and is well-defined on $i\mathbb{R}$, and $\phi_{-}$ is analytic on $\{z\in\mathbb{C};\: \Re(z)> -a_-\}$, for $a_-\geq 0$, if and only if $\mathbb{E}[e^{(-a_-+\epsilon)\xi_{1}}]<\infty$, for some $\epsilon>0$.
Finally, the Wiener-Hopf factorization \eqref{eq:wh} holds
on the intersection of the strips where $\phi_{+}$ and $\phi_{-}$ are well-defined.
\end{lemma}
\subsection{Proof in the case ${\bf{E}}_+$}
We recall that in this part we assume, in particular, that $\xi$ is a L\'evy process with a finite negative mean and that there exists $a_+>0$ such that $|\Psi(z)|<\infty$ for any $0<\Re(z)<a_+$.
Next, we write $\theta^*= \max(\theta,a_+)$, where $\theta=\inf \{s>0; \: \Psi(s)=0\}$ (with the convention that $\inf \emptyset =+\infty$). We also recall from \cite{Carmona-Petit-Yor-97}, see also \cite{Maulik-Zwart-06}, that the Mellin transform of ${\rm{I}}_{\xi}$ defined by
\[\mathcal{M}_{m_{\xi}}(z)=\int_{0}^{\infty}x^{z-1}m_{\xi}(x)dx\]
satisfies, for any $0<\Re(z)<\theta^*$, the following functional equation
\begin{equation}\label{Maulik}
\mathcal{M}_{m_{\xi}}(z+1)=-\frac{z}{\Psi(z)}\mathcal{M}_{m_{\xi}}(z).
\end{equation}
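\begin{remark}
As an illustration of \eqref{Maulik}, which is not needed in the sequel, consider the Gaussian case $\xi_{t}=\sigma B_{t}+bt$ with $b<0$, so that $\Psi(z)=\frac{\sigma^{2}}{2}z^{2}+bz$ and \eqref{Maulik} reduces to
\[\mathcal{M}_{m_{\xi}}(z+1)=\frac{2}{\sigma^{2}}\,\frac{1}{\nu-z}\,\mathcal{M}_{m_{\xi}}(z),\qquad \nu=-\frac{2b}{\sigma^{2}}>0.\]
One checks readily that $\mathcal{M}_{m_{\xi}}(z)=\left(\frac{2}{\sigma^{2}}\right)^{z-1}\frac{\Gamma(\nu+1-z)}{\Gamma(\nu)}$ satisfies this recurrence, in agreement with Dufresne's identity ${\rm{I}}_{\xi}\stackrel{d}{=}\frac{2}{\sigma^{2}\gamma_{\nu}}$, where $\gamma_{\nu}$ stands for a Gamma random variable of parameter $\nu$.
\end{remark}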
We proceed by proving the following easy result.
\begin{lemma}\label{lem:ey}
If $\mu_+ \in \mathcal{P}$ then there exists a spectrally positive L\'evy process $Y$ with Laplace exponent $\psi_+(-s)=-s\phi_{+}(-s),\: s\geq0,$ and a negative finite mean $-\phi_+(0)$. Moreover, if $\xi$ has a negative finite mean then $\mathbb{E}\left[\left({\rm{I}}_{H^-}\times{\rm{I}}_{Y}\right)^{-1}\right]=-\phi_+(0)\phi_-'(0^+)<+\infty$.
\end{lemma}
\begin{proof}
The first claim follows readily from \cite[Theorem VII.4(ii)]{Bertoin-96} and by observing that $\psi_+'(0^-)=\phi_+(0)$. From \eqref{eq:msn} we get that $\mathbb{E}\left[{\rm{I}}_{Y}^{-1}\right] =k_+$. Next, since $-\infty<\mathbb{E}[\xi_1]<0$, using the dual version of \cite[Corollary 4.4.4(iv)]{Doney}, we get that $-\infty<\mathbb{E}[H^-_1]<0$ and thus $|\phi'_-(0^+)|<\infty$. From the functional equation \eqref{Maulik}, we easily deduce that $\mathbb{E}\left[{\rm{I}}_{H^-}^{-1}\right]=-\phi^{\prime}_-(0^+)$, which completes the proof since the two random variables are independent.
\end{proof}
\begin{lemma}\label{AuxLemma1}
Assume that $\xi$ has a finite negative mean and condition ${\bf{E}}_+$ holds. Let $\eta$ be a positive random variable with density $\kappa(x)$, such that $\mathbb{E}[\eta^{-1}]<\infty$ and $\mathbb{E}[\eta^{\delta}]<\infty$, for some $\theta^{*}>\delta>0$. Then, for any $z$ such that $\Re(z)\in (0,\delta)$,
\begin{equation}\label{New111}
\mathcal{M}_{\mathcal{L}\kappa}(z)=\int_{0}^{\infty}x^{z-1}\mathcal{L}\kappa(x)dx=\frac{\Psi(z)}{z^{2}}\mathcal{M}_{\kappa}(z+1)+\frac{1}{z}\mathcal{M}_{\kappa}(z)
\end{equation}
and if $\mathcal{M}_{\mathcal{L}\kappa}(z)=0$, for $0<a<\Re(z)<b<\delta$, then $\mathcal{L}\kappa(x)=0$ a.e.
Furthermore the law of the positive random variable ${\rm{I}}_{Y}\times{\rm{I}}_{H^{-}}$, as defined in Theorem \ref{MainTheorem}, is absolutely continuous with a density, denoted by $\overline{m}$, which satisfies
\begin{equation}\label{k-tilde1}
\mathcal{L}\overline{m}(x)= 0 \text{ for a.e. $x>0$}.
\end{equation}
\end{lemma}
\begin{remark}
Note that the proof of this lemma shows that we have uniqueness for the probability measures with a finite first negative moment that satisfy \eqref{Maulik}. This is a rather indirect approach and seems to be more general than the verification approach of \cite[Proposition 2]{Kuznetsov-Pardo-11}, where precise knowledge of the rate of decay of the Mellin transform $\mathcal{M}_{m_{\xi}}(z)$ is needed. In general such an estimate on the decay seems impossible to obtain.
\end{remark}
\begin{proof} We start by proving \eqref{New111}. Note that since $\int_0^{\infty}y^{-1}\kappa(y)dy<\infty$, we can use Proposition \ref{Prop1} to get
\begin{equation}\label{g(x)}
\mathcal{L}\kappa(x)=\frac{\sigma^{2}}{2}x\kappa(x)+\int_{x}^{\infty}\frac{\kappa(y)}{y}dy+\mathbb{E}[\xi_{1}]\int_{x}^{\infty}\kappa(y)dy
+\int_{x}^{\infty}\overline{\overline{\Pi}}_{-}\left(\ln{\frac{y}{x}}\right)\kappa(y)dy+
\int_{0}^{x}\overline{\overline{\Pi}}_{+}\left(\ln{\frac{x}{y}}\right)\kappa(y)dy.
\end{equation}
As $\kappa$ is a density, one can use Fubini's theorem to get, after some easy computations, that
for any $\epsilon<\Re(z)<\delta<\theta^{*}$, with $0<\epsilon<\delta$,
\begin{eqnarray*}
\mathcal{M}_{\mathcal{L}\kappa}(z)&=&\int_{0}^{\infty}x^{z-1}\mathcal{L}\kappa(x)dx\\
&=& \mathcal{M}_{\kappa}(z+1)\left(\frac{\sigma^{2}}{2}+\frac{\mathbb{E}[\xi_{1}]}{z}+\int_{0}^{\infty}\overline{\overline{\Pi}}_{-}(y)e^{-zy}dy+\int_{0}^{\infty}\overline{\overline{\Pi}}_{+}(y)e^{zy}dy \right)+\frac{1}{z}\mathcal{M}_{\kappa}(z)\\
&=&\frac{\Psi(z)}{z^{2}}\mathcal{M}_{\kappa}(z+1)+\frac{1}{z}\mathcal{M}_{\kappa}(z).
\end{eqnarray*}
Assume now that $\mathcal{M}_{\mathcal{L}\kappa}(z)=0$ for $\epsilon<\Re(z)<\delta$. Since all terms in \eqref{g(x)} are positive except the one involving $\mathbb{E}[\xi_{1}]<0$, we get, with $u=\Re(z)$,
\begin{eqnarray*}
\int_{0}^{\infty}x^{u-1}\left|\mathcal{L}\kappa(x)\right|dx &\leq&
\mathcal{M}_{\kappa}(u+1)\left(\frac{\Psi(u)}{u^{2}}-2\frac{\mathbb{E}[\xi_{1}]}{u}\right)+\frac{1}{u}\mathcal{M}_{\kappa}(u)<\infty.
\end{eqnarray*}
Given the absolute integrability of $x^{z-1}\mathcal{L}\kappa(x)$ along imaginary lines determined by $\epsilon<\Re(z)<\delta$, we can apply the Mellin inversion theorem to the identity $\mathcal{M}_{\mathcal{L}\kappa}(z)=0$ to get $\mathcal{L}\kappa(x)=0$ a.e., see Theorem 6 in Section 6 of \cite{Butzer-Jansche-97}.
Next it is plain that the law of ${\rm{I}}_{Y}\times{\rm{I}}_{H^{-}}$ is absolutely continuous since, for any $x>0$,
\begin{equation}\label{k-tilde}
\overline{m}(x)=\int_{0}^{\infty}m_{Y}\Big(\frac{x}{y}\Big)y^{-1}m_{H^{-}}(y)dy.
\end{equation}
Furthermore from the Wiener-Hopf factorization \eqref{eq:wh} and the definition of $\psi_+$, we have that
\[\frac{-z}{\Psi(z)}=\frac{-z}{\phi_-(z)}\frac{-z}{\psi_+(z)}\]
which is valid, for any $0<\Re(z)<\theta^{*}$.
Thus, we deduce from the functional equation \eqref{Maulik} and the independence of $Y$ and $H^-$ that, for any $0< \Re(z)<\theta^{*}$,
\begin{equation}\label{Maulik2}
\mathcal{M}_{\overline m}(z+1)=-\frac{z}{\Psi(z)}\mathcal{M}_{\overline m}(z).
\end{equation}
Next, since from Lemma \ref{lem:ey}, we have that $\int_0^{\infty}y^{-1}\overline{m}(y)dy<\infty$, we can use Proposition \ref{Prop1} and thus \eqref{g(x)} and subsequently \eqref{New111} are valid for $\overline{m}$.
Moreover due to the representation \eqref{psi} of $\Psi$ and relation \eqref{Maulik2} we have that
for any $\epsilon<\Re(z)<\theta^{*}$ with $0<\epsilon<\theta^{*}/4$,
\begin{eqnarray*}
\mathcal{M}_{\mathcal{L}\overline{m}}(z)&=&\frac{\Psi(z)}{z^{2}}\mathcal{M}_{\overline{m}}(z+1)+\frac{1}{z}\mathcal{M}_{\overline{m}}(z)=0
\end{eqnarray*}
and we conclude that $\mathcal{L}\overline{m}(x)=0$ a.e., that is, \eqref{k-tilde1} holds.
\end{proof}
We are now ready to complete the proof of Theorem \ref{MainTheorem} in the case ${\bf{E}}_+$. Indeed, since $m_{\xi}$, the density of ${\rm{I}}_{\xi}$, is the density of the stationary measure of $U^{\xi}$, we have that $m_{\xi}$ is also solution to \eqref{IntegralEqn}. Combining Lemma \ref{AuxLemma1} with the uniqueness argument of Theorem \ref{O-UTheorem}, we conclude that the factorization \eqref{MainAssertion} holds.
\subsection{Proof of the two other cases: ${\bf P}_+$ and ${\bf P}_{\pm}$}
We start by providing some results which will be used several times throughout this part.
\begin{proposition}[Carmona et al.~\cite{Carmona-Petit-Yor-97}] \label{prop:ms}
Let $H$ be the negative of a (possibly killed) subordinator with Laplace exponent $\phi$, then the law of ${\rm{I}}_{H}$ is determined by its positive entire moments as follows
\begin{eqnarray}\label{eq:ms}
\mathbb{E}[{\rm{I}}_{H}^m] &=&\frac{\Gamma(m+1)}{\prod_{k=1}^{m}\left(-\phi(k)\right)
}, \: m=1,2,\ldots
\end{eqnarray}
\end{proposition}
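\begin{remark}
As a simple sanity check of \eqref{eq:ms}, assuming the convention $\phi(z)=\log\mathbb{E}\left[e^{zH_{1}}\right]$, take $H_{t}=-bt$ with $b>0$, the negative of a pure drift subordinator. Then $\phi(k)=-bk$ and \eqref{eq:ms} gives
\[\mathbb{E}[{\rm{I}}_{H}^m]=\frac{\Gamma(m+1)}{b^{m}m!}=b^{-m},\]
in agreement with the deterministic identity ${\rm{I}}_{H}=\int_{0}^{\infty}e^{-bt}dt=b^{-1}$.
\end{remark}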
\begin{proposition}[Bertoin and Yor \cite{Bertoin-Yor-02}]\label{prop:msp}
Let $Y$ be an unkilled spectrally positive L\'evy process with a negative mean and Laplace exponent $\psi_+$, then the law of $1/{\rm{I}}_{Y}$ is determined by its positive entire moments as follows
\begin{eqnarray} \label{eq:msn}
\mathbb{E}[{\rm{I}}_{Y}^{-m}] &=&\mathbb{E}[-Y_1]\frac{\prod_{k=1}^{m-1}\psi_+(-k)}{\Gamma(m)}, \: m=1,2,\ldots,
\end{eqnarray}
with the convention that the right-hand side is $\mathbb{E}[-Y_1]$ when $m=1$.
\end{proposition}
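\begin{remark}
For instance, taking $Y_{t}=\sqrt{2}B_{t}-\mu t$ with $\mu>0$, we have $\psi_{+}(z)=z^{2}-\mu z$, hence $\psi_{+}(-k)=k(k+\mu)$ and $\mathbb{E}[-Y_{1}]=\mu$, so that \eqref{eq:msn} yields
\[\mathbb{E}[{\rm{I}}_{Y}^{-m}]=\mu\frac{\prod_{k=1}^{m-1}k(k+\mu)}{\Gamma(m)}=\frac{\Gamma(m+\mu)}{\Gamma(\mu)},\quad m=1,2,\ldots,\]
which are the entire moments of a Gamma random variable of parameter $\mu$, in agreement with Dufresne's identity $1/{\rm{I}}_{Y}\stackrel{d}{=}\gamma_{\mu}$.
\end{remark}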
In order to get \eqref{MainAssertion} in the case when $\xi$ does not have finite positive exponential moments, we will develop some approximation techniques. However, the exponential functional is not continuous in the Skorohod topology and therefore we have to find criteria which secure the weak convergence of sequences of exponential functionals. This is the aim of the next result.
\begin{lemma}\label{Lemma111}
Let $(\xi^{(n)})_{n\geq1}$ be a sequence of L\'{e}vy processes with negative means such that
\[ \lim_{n\to\infty}\xi^{(n)}\stackrel{d}= \xi\]
where $\xi$ is a L\'{e}vy process with $\mathbb{E}[\xi_{1}]<0$. Let us assume further that at least one of the following conditions holds:
\begin{enumerate}[(a)]
\item for each $n\geq 1$, $\xi^{(n)}$ and $\xi$ are unkilled spectrally positive L\'evy processes such that $\lim_{n\to \infty}\mathbb{E}[\xi_1^{(n)}] =\mathbb{E}[\xi_1],$
\item for each $n\geq 1$, $\xi^{(n)}$ and $\xi$ are the negative of unkilled subordinators,
\item the sequence $(m_{\xi^{(n)}})_{n\geq 1}$ is tight, where $m_{\xi^{(n)}}$ is the law of ${\rm{I}}_{\xi^{(n)}}$.
\end{enumerate}
Then, in all cases, we have
\begin{equation}\label{L1-111}
\lim_{n\to\infty}{\rm{I}}_{\xi^{(n)}}\stackrel{d}={\rm{I}}_{\xi}.
\end{equation}
\end{lemma}
\begin{proof}
To prove \eqref{L1-111} in the case (a), we simply observe that, writing $\psi^{(n)}_+$ for the Laplace exponent of $\xi^{(n)}$, we have, by the L\'evy continuity theorem, see e.g. \cite[Theorem XIII.1.2]{Feller-71}, that, for all $s\geq0$, $\psi^{(n)}_+(-s)\rightarrow \psi_+(-s)$ as $n \rightarrow \infty$. Next, writing $M^{(n)}_m$ for the negative entire moments of ${\rm{I}}_{\xi^{(n)}}$, we easily deduce from \eqref{eq:msn}, for all $m=1,2,\ldots$, that $\lim_{n\to \infty}M^{(n)}_m = M_m$, where $(M_m)_{m\geq1}$ is the sequence of negative entire moments of ${\rm{I}}_{\xi}$. These random variables being moment determinate, see Proposition \ref{prop:msp}, we conclude (a) by invoking \cite[Examples (b) p.269]{Feller-71}. The second case follows by applying a similar line of reasoning to the expression \eqref{eq:ms}. Finally, the case (c) is a straightforward consequence of \eqref{L1-1} of Theorem \ref{Lemma1}.
\end{proof}
Before stating our next result, we need to introduce the following notation. Let us first recall that the reflected processes
$\left(R^+_t=\sup_{0\leq s\leq t}\xi_s-\xi_t\right)_{t\geq 0}$ and $\left(R^-_t=\xi_t-\inf_{0\leq s\leq t}\xi_s\right)_{t\geq 0}$ are Feller processes in $[0,\infty)$ which possess local times $L^{\pm}=(L^{\pm}_t)_{t\geq0}$ at the level $0$. The ascending and descending ladder times, $l^{\pm}=(l^{\pm}(t))_{t\geq0}$, are defined as the right-continuous inverses of $L^{\pm}$, i.e. for any $t\geq0$, $l^{\pm}(t)=\inf\{s> 0;\: L^{\pm}_s>t\}$
and the ladder height processes
$H^+=(H^+(t))_{t\geq0}$ and $-H^-=(-H^-(t))_{t\geq0}$ by
$$H^+(t)=\xi_{l^{+}(t)}=\sup_{0\leq s\leq l^{+}(t)}\xi_s\,, \qquad \hbox{ whenever } l^{+}(t)<\infty\,,$$
$$-H^-(t)=\xi_{l^{-}(t)}=\inf_{0\leq s\leq l^{-}(t)}\xi_s\,, \qquad \hbox{ whenever } l^{-}(t)<\infty\,.$$
Here, we use the convention $\inf\{\varnothing\} =\infty$ and $H^{+}(t)=\infty$ when $L^{+}_{\infty}\leq t$ and $-H^{-}(t)=-\infty$ when $L^{-}_{\infty}\leq t$.
From \cite[p. 27]{Doney}, we have, for $\alpha,\beta\geq 0$,
\begin{equation}\label{BivLadder}
\log{ \mathbb{E}\left[e^{-\alpha l^{+}(1)-\beta H^{+}(1)}\right]} = -k(\alpha,\beta)=-k_{+}-\eta_{+}\alpha-\delta_{+}\beta-\int_{0}^{\infty}\int_{0}^{\infty}\Big(1-e^{-(\alpha y_{1}+\beta y_{2})}\Big)\mu_{+}(dy_{1},dy_{2}),
\end{equation}
where $\eta_{+}$ is the drift of the subordinator $l^{+}$ and $\mu_{+}(dy_{1},dy_{2})$ is the L\'{e}vy measure of the bivariate subordinator $(l^{+},H^{+})$. Similarly, for $\alpha,\beta\geq 0$,
\begin{equation}\label{BivLadder1}
\log \mathbb{E}\left[e^{-\left(\alpha l^{-}(1)-\beta H^{-}(1)\right)}\right]=-k_{*}(\alpha,\beta)=-\eta_{-}\alpha-\delta_{-}\beta-\int_{0}^{\infty}\int_{0}^{\infty}\Big(1-e^{-(\alpha y_{1}+\beta y_{2})}\Big)\mu_{-}(dy_{1},dy_{2}),
\end{equation}
where $\eta_{-}$ is the drift of the subordinator $l^{-}$ and $\mu_{-}(dy_1,dy_2)$ is the L\'{e}vy measure of the bivariate subordinator $(l^{-},-H^{-})$.
\begin{lemma}\label{Lemma3}
Let $\xi$ be a L\'{e}vy process with triplet $(a,\sigma,\Pi)$ and Laplace exponent $\psi$. Let, for any $n\geq 1$, $\xi^{(n)}$ be the L\'{e}vy process with Laplace exponent denoted by $\psi^{(n)}$ and triplet $(a,\sigma,\Pi^{(n)})$ such that $\Pi^{(n)}=\Pi$ on $\mathbb{R}_{-}$ and on $\mathbb{R}_{+}$
\[ \Pi^{(n)}(dy)= h^{(n)}(y)\Pi(dy),\]
where, for all $y>0$, $0\leq h^{(n)}(y) \uparrow 1$ as $n\rightarrow \infty$, and, for some $C\geq0$, $\limsup_{y\to 0}y^{-1}(1-h^{(n)}(y))\leq C$ uniformly in $n\geq 1$. Then,
\begin{equation}\label{eqn:Referee1}
\lim_{n\to\infty} \xi^{(n)}\stackrel{d}=\xi,
\end{equation}
and for all $\alpha\geq 0,\,\beta\geq0$, we have, as $n\rightarrow \infty $,
\begin{align}\label{LadderHeight}
k^{(n)}(\alpha,\beta) \rightarrow k (\alpha,\beta) ,\\
k_*^{(n)}(\alpha,\beta) \rightarrow k_* (\alpha,\beta), \nonumber
\end{align}
where $k^{(n)}(\alpha,\beta)$ and $k_{*}^{(n)}(\alpha,\beta)$ stand for the bivariate Laplace exponents of the ladder processes of $\xi^{(n)}$, normalized such that $k^{(n)}(1,0)=k_{*}^{(n)}(1,0)=1$. Also $k (\alpha,\beta)$ and $k_* (\alpha,\beta)$ stand for the bivariate Laplace exponents of the ladder processes of $\xi$, normalized such that $k(1,0)=k_{*}(1,0)=1$.
\end{lemma}
\begin{remark}
Denote by $\left( l^{+}_{(n)},H^+_{(n)}\right)$ $\left(\text{resp.~}\left( l^{-}_{(n)},-H^{-}_{(n)}\right)\right)$ the bivariate ascending (resp.~descending) ladder processes of $\xi^{(n)}$ and $\left( l^{+},H^+\right)$ $\left(\text{resp.~}\left( l^{-},-H^{-}\right)\right)$ the bivariate ascending (resp.~descending) ladder processes of $\xi$, then from the L\'evy continuity Theorem we deduce that as $n\rightarrow \infty $,
\begin{align*}
&\left( l^{+}_{(n)},H^+_{(n)}\right)\stackrel{d}{\rightarrow }\left( l^{+},H^{+}\right),\\
&\left( l^{-}_{(n)},-H^-_{(n)}\right)\stackrel{d}{\rightarrow }\left( l^{-},-H^{-}\right),
\end{align*}
where in the convergence the sequence of killing rates of the ladder height processes also converge to the killing rate of the limiting process.
\end{remark}
\begin{proof}
For the sake of completeness, and also to include both the compound Poisson case and the cases when $\alpha=0$ and/or $\beta=0$, we need to adapt the proof of Lemma 3.4.2 in \cite{Vigon}. First, since $\Pi^{(n)}(dx) \stackrel{v}{\rightarrow} \Pi(dx)$, where $\stackrel{v}{\rightarrow}$ stands for the vague convergence, we get \eqref{eqn:Referee1} from e.g. \cite[Theorem 13.14(i)]{Kallenberg}. We note the identity
\begin{equation}\label{eq:id-t}
\xi\stackrel{d}{=} \xi^{(n)}+\tilde{\xi}^{(n)},
\end{equation} where $\tilde{\xi}^{(n)}$ is a subordinator with L\'evy measure $\tilde{\Pi}^{(n)}(dy)=(1-h^{(n)}(y)) \Pi(dy)$ and no drift, since $1-h^{(n)}(y)=O(y)$ at zero. Then, when $\xi$ is a compound Poisson process we have that $\tilde{\xi}^{(n)}$ is a compound Poisson process and, for all $t>0$,
\[ \mathbb{P}\left(\xi^{(n)}_t=0\right)=\mathbb{P}\left(\xi_t=0, t<\tilde{T}^{(n)}\right)+\mathbb{P}\left(\xi^{(n)}_t=0, t\geq \tilde{T}^{(n)}\right),\]
where $\tilde{T}^{(n)}=\inf\{ s>0; \: \tilde{\xi}^{(n)}_s >0\}$. Since, for all $y>0$, $h^{(n)}(y) \uparrow 1$, we have $\mathbb{P}\left( t> \tilde{T}^{(n)}\right) \rightarrow 0$ as $n\rightarrow \infty$ and
\[ \mathbb{P}\left(\xi^{(n)}_t \in dy\right)\mathbb{I}_{\{y\geq0\}} \stackrel{v}{\rightarrow} \mathbb{P}\left(\xi_t \in dy\right)\mathbb{I}_{\{y\geq0\}}.\]
When $\xi$ is not a compound Poisson process, the law of $\xi^{(n)}$ does not charge $\{0\}$ and thus as $n\rightarrow \infty$
\[ \mathbb{P}\left(\xi^{(n)}_t \in dy\right)\mathbb{I}_{\{y>0\}} \stackrel{v}{\rightarrow} \mathbb{P}\left(\xi_t \in dy\right)\mathbb{I}_{\{y>0\}}.\]
Henceforth, from the expression
\begin{equation}\label{eq:def-biv} k^{(n)}(\alpha,\beta) = \exp\left(\int_0^{\infty}dt\int_0^{\infty}\left(e^{-t} -e^{-\alpha t -\beta y} \right)t^{-1}\mathbb{P}(\xi^{(n)}_t \in dy)\right)
\end{equation}
which holds for any $\alpha>0$ and $\beta>0$, see e.g.~\cite[Corollary VI.2.10]{Bertoin-96}, we deduce easily that for both cases
\begin{equation} \label{eq:cv-be}
\lim_{n\to \infty} k^{(n)}(\alpha,\beta) = k(\alpha,\beta).
\end{equation} Moreover, we can write
\begin{equation}\label{eq:def-biva}
k^{(n)}(\alpha,\beta) =k^{(n)}(0,0)+\tilde{k}^{(n)}(\alpha,\beta),
\end{equation}
where $\tilde{k}^{(n)}$ are the Laplace exponents of unkilled bivariate subordinators, see \cite[p.~27]{Doney}. Note from \eqref{eq:def-biv} that
\begin{equation*}k^{(n)}(0,0) = \exp\left(-\int_0^{\infty}\left(1-e^{-t}\right)\mathbb{P}\left(\xi^{(n)}_t \geq 0\right)\frac{dt}{t}\right).
\end{equation*}
Next from \eqref{eq:id-t} and the fact that $\tilde{\xi}^{(n)}$ is a subordinator, we have that $\mathbb{P}\left(\xi^{(n)}_t \geq 0\right) \leq \mathbb{P}\left(\xi_t \geq 0\right)$ and appealing to the monotone convergence theorem we get that $k^{(n)}(0,0) \downarrow k(0,0)$. Hence we deduce from \eqref{eq:cv-be} and \eqref{eq:def-biva} that for any $\alpha,\beta >0$, $\tilde{k}^{(n)}(\alpha,\beta) \rightarrow \tilde{k}(\alpha,\beta)$, where $\tilde{k}(\alpha,\beta)=k(\alpha,\beta) -k(0,0)$. From the L\'evy continuity theorem, we have, writing $\left(\tilde{l}_{(n)}^{+},\tilde{H}_{(n)}^{+}\right)$ for the unkilled versions of the ascending bivariate ladder processes, that $\left(\tilde{ l}^{+}_{(n)},\tilde{H}^+_{(n)}\right)\stackrel{d}{\rightarrow }\left(\tilde{l}^+,\tilde{H}^{+}\right)$, where $\left(\tilde{l}^+,\tilde{H}^{+}\right)$ stands for the unkilled version of $\left(l^+,H^{+}\right)$. These probability distributions being proper, we have that for all $\alpha,\beta \in \mathbb{R}$, $\tilde{k}^{(n)}(i\alpha,i\beta) \rightarrow \tilde{k}(i\alpha,i\beta)$, see \cite[Theorem XV.3.2]{Feller-71}. Hence $k^{(n)}(0,i\beta) \rightarrow k(0,i\beta)$ for all $\beta \in \mathbb{R}$, which completes the proof for the ascending ladder height processes. The proof of the convergence of the Laplace exponent of the bivariate descending ladder process follows readily from the identities
\begin{eqnarray*}
\psi^{(n)}(i\beta)-\alpha&=&-k^{(n)}(\alpha,-i\beta)k^{(n)}_*(\alpha,i\beta) \\
\psi(i\beta)-\alpha&=&-k(\alpha,-i\beta)k_*(\alpha,i\beta)
\end{eqnarray*}
and the convergence of $\psi^{(n)}$ to $\psi$ and $k^{(n)}$ to $k$.
\end{proof}
\subsubsection{The case ${\bf P}+$}
We first consider the case when $\xi$ satisfies both the conditions ${\bf P}+$ and $\mathbb{E}[\xi_1]>-\infty$. We start by showing that the condition ${\bf P}+$ implies that $\mu_+ \in \mathcal{P}$. To this end, we shall need the so-called \emph{equation amicale invers\'ee} derived by Vigon, for all $x>0$,
\begin{equation} \label{eq:ami-inv}
\bar{\mu}_+(x)=\int_{0}^{\infty}\overline{\Pi}_{+}(x+y)\mathcal{U}_-(dy),
\end{equation}
where $\mathcal{U}_-$ is the renewal measure corresponding to the subordinator $H^-$, see e.g. \cite[Theorem 5.16]{Doney}.
\begin{lemma}\label{Vigon}
Let us assume that $\overline{\Pi}_{+}(x)$ has a non-positive derivative $\pi_+(x)$ defined for all $x>0$ and such that $-\pi_+(x)$ is non-increasing. Then $\bar\mu_+(x)$ is differentiable with derivative $u(x)$ such that $-u(x)$ is non-increasing.
\end{lemma}
\begin{proof}
Fix $x>0$ and choose $0<h<x/3$. Then, using the non-increasing property of $-\pi_+(x)$ and the representation \eqref{eq:ami-inv} of $\bar{\mu}_+(x)$, we have the trivial bound
\begin{eqnarray*}
\frac{\left|\bar\mu_+(x\pm h)-\bar\mu_+(x)\right|}{h}&\leq& \int_{0}^{\infty}\frac{\left|\overline{\Pi}_{+}(x+y\pm h)-\overline{\Pi}_{+}(x+y)\right|}{h}\mathcal{U}_-(dy)\\ &\leq& \int_{0}^{\infty}\left(-\pi_+\left(x+y-h\right)\right)\mathcal{U}_-(dy)\\ &\leq& \int_{0}^{\infty}\left(-\pi_+\left(\frac{2x}{3}+y\right)\right)\mathcal{U}_-(dy).
\end{eqnarray*}
We show now that the last expression is finite. Note that
\[\int_{0}^{\infty}\left(-\pi_+\left(\frac{2x}{3}+y\right)\right)\mathcal{U}_-(dy)\leq \sum_{n\geq 0}-\pi_+\left(\frac{2x}{3}+n\right)\left(\mathcal{U}_-(n+1)-\mathcal{U}_-(n)\right).\]
From the trivial inequality $\mathcal{U}_-(n+1)-\mathcal{U}_-(n)\leq \mathcal{U}_-(1)$, see \cite[Chapter 2, p.~11]{Doney}, and since $-\pi_+(x)$ is the non-increasing density of $\overline{\Pi}_{+}(x)$, we have, with $C=\mathcal{U}_-(1)>0$,
\begin{eqnarray*}
\int_{0}^{\infty}-\pi_+\left(\frac{2x}{3}+y\right)\mathcal{U}_-(dy) & \leq & C\sum_{n\geq 0}-\pi_+\left(\frac{2x}{3}+n\right)\\
&\leq&
-C\pi_+\left(\frac{2x}{3}\right)+C\sum_{n\geq 1}\left(\overline{\Pi}_{+}\left(\frac{2x}{3}+n-1\right)-\overline{\Pi}_{+}\left(\frac{2x}{3}+n\right)\right)\\
&\leq& -C\pi_+\left(\frac{2x}{3}\right)+C\overline{\Pi}_{+}\left(\frac{2x}{3}\right)<\infty.
\end{eqnarray*}
Therefore, for all $x>0$, the dominated convergence theorem applies and gives
\[u(x)=\int_{0}^{\infty}\pi_+(x+y)\mathcal{U}_-(dy).\]
As $-\pi_+(x)$ is non-increasing we deduce that $-u(x)$ is non-increasing as well.
\end{proof}
In the case ${\bf P}+$, in contrast to the case ${\bf E}_+$, the process $\xi$ does not necessarily admit positive exponential moments.
To circumvent this difficulty we introduce the sequence of L\'{e}vy processes $\xi^{(n)}$ obtained from $\xi$ by the following construction: we keep the negative jumps intact and we discard some of the positive ones. More precisely, we thin the positive jumps of $\xi$ to get a L\'{e}vy process $\xi^{(n)}$ with $\overline{\Pi}_{+}^{(n)}$ whose density has the form
\begin{align}\label{modifiedPi}
&\pi^{(n)}_+(x)=\pi_+(x)\left(\mathbb{I}_{\{0<x\leq 1\}}+e^{-n^{-1}(x-1)}\mathbb{I}_{\{x>1\}}\right).
\end{align}
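As a concrete illustration of the thinning \eqref{modifiedPi}, the following sketch (with a purely hypothetical jump density $-\pi_+(x)=1/(1+x)^2$, chosen by us so that $\xi$ has no positive exponential moments; it is not a density coming from the text) verifies numerically the three properties used below: domination, monotonicity of $-\pi_+^{(n)}$, and pointwise convergence.

```python
import math

# Illustrative (hypothetical) jump density: -pi_+(x) = 1/(1+x)^2,
# which is non-increasing but has no positive exponential moments.
def neg_pi_plus(x):
    return 1.0 / (1.0 + x) ** 2

# Thinned density: pi_+^{(n)}(x) = pi_+(x) (1_{0<x<=1} + e^{-(x-1)/n} 1_{x>1}).
def neg_pi_plus_n(x, n):
    damp = 1.0 if x <= 1.0 else math.exp(-(x - 1.0) / n)
    return neg_pi_plus(x) * damp

xs = [0.1 * k for k in range(1, 200)]
for x in xs:
    # Thinning only discards mass, and leaves the density intact on (0,1].
    assert 0.0 <= neg_pi_plus_n(x, 5) <= neg_pi_plus(x)
    if x <= 1.0:
        assert neg_pi_plus_n(x, 5) == neg_pi_plus(x)

# -pi_+^{(n)} stays non-increasing, as required by the lemma above.
vals = [neg_pi_plus_n(x, 5) for x in xs]
assert all(a >= b for a, b in zip(vals, vals[1:]))

# Pointwise convergence pi_+^{(n)} -> pi_+ as n -> infinity.
assert abs(neg_pi_plus_n(10.0, 10**6) - neg_pi_plus(10.0)) < 1e-5
```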
Clearly, $-\pi^{(n)}_+(x)$ is non-increasing and $\mathbb{E}\left[e^{s\xi^{(n)}_{1}}\right]<\infty$, for $s\in(0,n^{-1})$, see \eqref{modifiedPi}. Moreover, since we have only thinned the positive jumps and pointwise $\lim_{n\to\infty}\pi^{(n)}_+(x)=\pi_+(x)$, see \eqref{modifiedPi},
\begin{equation}\label{convergence}
\lim_{n\to\infty}\xi^{(n)}\stackrel{a.s.}=\xi
\end{equation}
almost surely in the Skorohod space $\mathcal{D}(0,\infty)$.
Finally, since $-\infty<\mathbb{E}\left[\xi^{(n)}_{1}\right]<\mathbb{E}\left[\xi_{1}\right]<0$ and $-\pi^{(n)}_+(x)$ is non-increasing then Lemma \ref{Vigon} applies and we deduce that the L\'{e}vy measure of the ascending ladder height process of $\xi^{(n)}$ has a negative density whose absolute value is non-increasing in $x$. Then since, for each $n\geq1$, $\xi^{(n)}$ has some finite positive exponential moments, we have that
\begin{equation}\label{approx}
{\rm{I}}_{\xi^{(n)}}\stackrel{d}={\rm{I}}_{H_{(n)}^{-}} \times {\rm{I}}_{Y^{(n)}}.
\end{equation}
Since we thinned the positive jumps of $\xi$, for all $t\geq 0$, $\xi^{(n)}_{t}\leq \xi_{t}$ and the monotone convergence theorem together with \eqref{convergence} imply that
\begin{equation}\label{limit}
\lim_{n\to\infty}{\rm{I}}_{\xi^{(n)}}\stackrel{a.s.}={\rm{I}}_{\xi}.
\end{equation}
By the choice of the approximating sequence $\xi^{(n)}$ we can first use Lemma \ref{Lemma3} to get
\begin{equation}\label{Ingredient1}
\lim_{n\to\infty}H_{(n)}^{-}\stackrel{d}=H^{-}
\end{equation}
and then Lemma \ref{Lemma1} (b) to obtain that
\begin{equation}\label{ConvSub}
\lim_{n\to\infty}{\rm{I}}_{H_{(n)}^{-}}\stackrel{d}={\rm{I}}_{H^{-}}.
\end{equation}
Again from Lemma \ref{Lemma3} we deduce that $k^{(n)}(0,-s) \rightarrow k(0,-s)$, for all $s\geq0$, and $\lim_{n\to\infty}\mathbb{E}[Y^{(n)}_1]=-\lim_{n\to\infty}k^{(n)}(0,0)=\mathbb{E}[Y_1]$, so we can apply Lemma \ref{Lemma1} (a) to get that
\[\lim_{n\to\infty}{\rm{I}}_{Y^{(n)}}\stackrel{d}={\rm{I}}_{Y},\]
which completes the proof in this case. \hfill $\Box$
\subsubsection{The case ${\bf P}_{\pm}$}
First from the philanthropy theory developed by Vigon \cite{Vigon}, we know that the conditions $\mu_+ \in \mathcal{P}$ and $\mu_- \in \mathcal{P}$ ensure the existence of a L\'evy process $\xi$ with ladder processes $H^+$ and $H^-$ and such that the Wiener-Hopf factorization \eqref{eq:wh} holds on $i\mathbb{R}$. Since we also assume that $k_+>0$, this L\'evy process necessarily drifts to $-\infty$.
Next let us introduce the Laplace exponents
\begin{eqnarray}\label{lL-KLadder}
\phi^{(p)}_{+}(z)&=&
\delta_{+}z + \int_{(0,\infty)}({\rm e}^{zx}-1)\mu^{(p)}_+({\rm d} x)-k_{+}\,,\\
\phi_-^{(n)}(z)&=&
-\delta_{-}z -\int_{(0,\infty)}(1-{\rm e}^{-zx})\mu^{(n)}_-({\rm d} x),
\end{eqnarray}
where we set $\mu_+^{(p)}(dx)=e^{-x/p}\mu_+(dx),\,p>0$, and $\mu_-^{(n)}(dx)=e^{-x/n}\mu_-(dx),\,n>0$. Plainly, for any $p>0,\,n>0$, $\mu_+^{(p)}\in \mathcal{P}$ and $\mu_-^{(n)} \in \mathcal{P}$, hence there exists a L\'evy process $\xi^{(p,n)}$ with Laplace exponent $\Psi^{(p,n)}$ satisfying
\begin{equation}
\Psi^{(p,n)} (z) = -\phi^{(p)}_{+}(z)\phi^{(n)}_{-}(z),
\end{equation}
which is easily seen to be analytic on the strip $-1/n <\Re(z)<1/p$. Moreover, from \cite[Corollary 4.4.4]{Doney}, we have $\mathbb{E}[\xi_1^{(p,n)}] = -k_+ \left(\int_0^{\infty}xe^{-x/n}\mu_-(dx)+\delta_-\right) $, which is clearly finite and negative. Hence the conditions ${\bf E}_+$ are satisfied and we have, with the obvious notation, that
\[{\rm{I}}_{\xi^{(p,n)}}\stackrel{d}={\rm{I}}_{H_{(n)}^{-}}\times {\rm{I}}_{Y^{(p)}}\]
where for any $p>0$, $Y^{(p)}$ is a spectrally positive L\'evy process with Laplace exponent $\psi_+^{(p)}(-s)=-s\phi^{(p)}_{+}(-s),\: s\geq0$.
Let us first deal with the case $n\rightarrow \infty$. Since $ \phi_-^{(n)}(s) \rightarrow \phi_-(s)$, for all $s\geq0$, we have that
\[\lim_{n\rightarrow \infty}H_{(n)}^{-}\stackrel{d}=H^{-}\]
and from Lemma \ref{Lemma1} (b) we get that
\[\lim_{n\rightarrow \infty}{\rm{I}}_{H_{(n)}^{-}}\stackrel{d}={\rm{I}}_{H^{-}}.\]
Thus, we deduce that, for any fixed $p>0$, the sequence $({\rm{I}}_{\xi^{(p,n)}})_{n\geq1}$ is tight. Moreover, for any fixed $p>0$, we also have $\xi^{(p,n)}\stackrel{d}{\rightarrow}\xi^{(p)}$, as $n\rightarrow \infty$, where $\xi^{(p)}$ has a Laplace exponent $\Psi^{(p)}$ given by
\begin{equation}
\Psi^{(p)}(z) = -\phi^{(p)}_{+}(z)\phi_{-}(z).
\end{equation}
Indeed this is true by the philanthropy theory. Then from Lemma \ref{Lemma1} (c), we have that
\[\lim_{n\rightarrow \infty}{\rm{I}}_{\xi^{(p,n)}}\stackrel{d}={\rm{I}}_{\xi^{(p)}}\stackrel{d}={\rm{I}}_{H^{-}} \times {\rm{I}}_{Y^{(p)}},\]
which provides a proof of the statement in the case ${\bf P}_{\pm}$ together with the existence of some finite positive exponential moments. Next, as $p\rightarrow \infty,\: \phi_+^{(p)}(s) \rightarrow \phi_+(s)$, for all $s\geq0$, and we have that
\[\lim_{p\rightarrow \infty}Y^{(p)}\stackrel{d}=Y,\]
where $Y$ is a spectrally positive L\'evy process with Laplace exponent $\psi_+(-s)=-s\phi_{+}(-s)$. As $\mathbb{E}[Y^{(p)}_1] = \phi_+^{(p)}(0)=-k_+ $, we can use Lemma \ref{Lemma1} (a) to get
\[\lim_{p\rightarrow \infty}{\rm{I}}_{Y^{(p)}}\stackrel{d}={\rm{I}}_{Y}.\]
As above, we conclude from Lemma \ref{Lemma1} (c) that
\[\lim_{p\rightarrow \infty}{\rm{I}}_{\xi^{(p)}}\stackrel{d}={\rm{I}}_{\xi}\stackrel{d}={\rm{I}}_{H^{-}}\times{\rm{I}}_{Y},\]
which completes the proof of the theorem. \hfill $\Box$
\section{Proof of the corollaries} \label{proof_cons}
\subsection{Corollary \ref{Corollary1}}
First, since $\xi$ is spectrally negative and has a negative mean, it is well known that the function $\Psi$ admits an analytic extension to the right half-plane which is convex on $\mathbb{R}^+$ and tends to $\infty$, with $\Psi'(0^+)<0$, and thus there exists $\gamma>0$ such that $\Psi(\gamma)=0$. Moreover, the Wiener-Hopf factorization for spectrally negative L\'evy processes boils down to
\[ \Psi(s)=\frac{\Psi(s)}{s-\gamma}(s-\gamma),\: s>0.\]
It is not difficult to check that with $\phi_+(s)=s-\gamma$ and $\phi_-(s)=-\frac{\Psi(s)}{s-\gamma}$, we have $\mu_-,\,\mu_+ \in \mathcal{P}$. Observing that $\psi_+(s)=s^2-\gamma s$ is the Laplace exponent of a scaled Brownian motion with a negative drift $\gamma$, it is well-known, see e.g. \cite{Yor-01}, that
\[{\rm{I}}_Y \stackrel{d}{=}G_{\gamma}^{-1}.\]
The factorization follows then from Theorem \ref{MainTheorem} considered under the condition ${\bf P}_{\pm}$. Since the random variable $G_{\gamma}^{-1}$ is MSU, see \cite{Cuculescu}, we have that if ${\rm{I}}_{H^-}$ is unimodal then ${\rm{I}}_{\xi}$ is unimodal, which completes the proof of (1).
Next, (2) follows easily from the identity
\begin{eqnarray}
m_{{\xi}}(x)&=&\frac{1}{\Gamma(\gamma)}x^{-\gamma-1}\int_0^{\infty}e^{-y/x}y^{\gamma}m_{H^-}(y)dy \label{eq:dsn}
\end{eqnarray}
combined with an argument of monotone convergence.
Further, we recall that Chazal et al.~\cite[Theorem 4.1]{Chazal-al-10} showed that, for any $\beta \geq 0$, $\phi_{\beta}(s) = \frac{s}{s+\beta}\phi_-(s+\beta)$ is also the Laplace exponent of the negative of a subordinator and, with the obvious notation,
\begin{eqnarray} \label{eq:tbs}
m_{H^-_{\beta}}(x) &=& \frac{x^{\beta}m_{H^-}(x)}{\mathbb{E}[{\rm{I}}_{H^-}^\beta]},\quad x>0.
\end{eqnarray}
Then, assuming that $1/x<\lim_{u\rightarrow \infty} \Psi(u)/u$, we have, from \eqref{eq:ms}, \eqref{eq:dsn} and \eqref{eq:tbs},
\begin{eqnarray*}
m_{{\xi}}(x)&=&\frac{1}{\Gamma(\gamma)}x^{-\gamma-1}\sum_{n=0}^{\infty}(-1)^n \frac{x^{-n}}{n!}\int_0^{\infty}y^{n+\gamma}m_{H^-}(y)dy \\
&=&\frac{\mathbb{E}[{\rm{I}}_{H^-}^{\gamma}]}{\Gamma(\gamma)}x^{-\gamma-1}\sum_{n=0}^{\infty}(-1)^n \frac{x^{-n}}{n!}\frac{n!}{\prod_{k=1}^{n}-\frac{k}{k+\gamma}\phi_-(k+\gamma)} \\
&=&\frac{\mathbb{E}[{\rm{I}}_{H^-}^{\gamma}]}{\Gamma(\gamma)\Gamma(\gamma+1)}x^{-\gamma-1}\sum_{n=0}^{\infty}(-1)^n \frac{\Gamma(n+\gamma+1)}{\prod_{k=1}^{n}-k\phi_-(k+\gamma)} x^{-n}\\
&=&\frac{\mathbb{E}[{\rm{I}}_{H^-}^{\gamma}]}{\Gamma(\gamma)\Gamma(\gamma+1)}x^{-\gamma-1}\sum_{n=0}^{\infty}(-1)^n \frac{\Gamma(n+\gamma+1)}{\prod_{k=1}^{n}\Psi(k+\gamma)}x^{-n},
\end{eqnarray*}
where we used an argument of dominated convergence and the identity $-k\phi_-(k+\gamma)=\Psi(k+\gamma)$. Next, again from \eqref{eq:dsn}, we deduce that
\begin{eqnarray*}
x^{-\beta}m_{\xi}(x^{-1})&=&\frac{1}{\Gamma(\gamma)}x^{\gamma+1-\beta}\int_0^{\infty}e^{-xy}y^{\gamma}m_{H^-}(y)dy
\end{eqnarray*}
from where we easily see that, for any $\beta\geq \gamma+1$, the mapping $x\mapsto x^{-\beta}m_{\xi}(x^{-1})$ is completely monotone as the product of two Laplace transforms of positive measures. The proof of the Corollary is completed by invoking \cite[Theorem 51.6]{Sato-99} and noting that $\rm{I}^{-1}_{\xi}$ has a density given by $x^{-2}m_{\xi}(x^{-1})$, i.e. with $\beta=2$.
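As a sanity check on the series representation of $m_\xi$ obtained above, one can specialize to Dufresne's classical example (our own choice, not from the text): $\xi_t=2B_t-2\nu t$, so that $\Psi(s)=2s^2-2\nu s$ and $\gamma=\nu$, for which the series collapses to an exponential and $m_\xi$ is proportional to $x^{-\nu-1}e^{-1/(2x)}$, the density of $1/(2G_\nu)$.

```python
import math

# Dufresne's case (illustrative choice): Psi(s) = 2 s^2 - 2 nu s,
# whose positive root is gamma = nu.
nu = 1.5

def Psi(s):
    return 2.0 * s ** 2 - 2.0 * nu * s

assert abs(Psi(nu)) < 1e-12  # gamma = nu solves Psi(gamma) = 0

def series(x, terms=40):
    # sum_{n>=0} (-1)^n Gamma(n+gamma+1) / prod_{k=1}^n Psi(k+gamma) * x^{-n}
    total, prod = 0.0, 1.0
    for n in range(terms):
        if n > 0:
            prod *= Psi(n + nu)
        total += (-1) ** n * math.gamma(n + nu + 1) / prod * x ** (-n)
    return total

# Here the series collapses to Gamma(nu+1) * exp(-1/(2x)), so m_xi is
# proportional to x^{-nu-1} e^{-1/(2x)}, in agreement with Dufresne's identity.
for x in (0.5, 1.0, 3.0):
    assert abs(series(x) - math.gamma(nu + 1) * math.exp(-1.0 / (2.0 * x))) < 1e-8
```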
\subsection{Corollary \ref{Corollary2}}
We first observe from the equation \eqref{eq:ami-inv} that, in this case,
\begin{eqnarray*}
\bar{\mu}_+(x)&=&c e^{-\lambda x} \int_0^{\infty}e^{-\lambda y}\mathcal{U}_-(dy)\\
&=&c_-e^{-\lambda x},
\end{eqnarray*}
where the last identity follows from \cite{Doney} and we have set $c_-=\frac{c}{\phi_-( \lambda)}$. From \eqref{Phi}, we deduce that $Y$ is a spectrally positive L\'evy process with Laplace exponent given, for any $s<\lambda$, by
\begin{eqnarray*}
\psi_+(s)&=&\delta_+ s^2-k_+s + c_- \frac{s^{2}}{\lambda-s}\\
&=&\frac{s}{\lambda-s}\left(-\delta_+ s^2+(\delta_+\lambda+k_+ + c_-)s -k_+\lambda \right),
\end{eqnarray*}
where $\delta_+>0$ since $\sigma>0$, see \cite[Corollary 4.4.4]{Doney}. Thus, using the continuity and convexity of $\psi_+$ on $(-\infty, \lambda)$ and on $(\lambda, \infty)$, studying its asymptotic behavior on these intervals and the identity $\psi_+'(0)=-k_+<0$, we easily show that the equation $\psi_+(s)=0$ has 3 roots which are real, one is obviously $0$ and the two others $\theta_1$ and $\theta_2$ are such that $0<\theta_1<\lambda<\theta_2$. Thus,
\begin{eqnarray*}
\psi_+(-s)
&=&\frac{\delta_+ s}{\lambda+s}\left(s+\theta_1\right)\left(s+\theta_2\right),\: s>-\lambda\,.
\end{eqnarray*}
Next, from \eqref{eq:msn}, we have, with $C=k_+\frac{\Gamma(\lambda+1)}{\Gamma(\theta_1+1)\Gamma(\theta_2+1)}$ and for $m=2,3,\ldots$, that
\[ \mathbb{E}[{\rm{I}}_{Y}^{-m}] = C \delta_+^{m-1} \frac{ \Gamma(m+\theta_1) \Gamma(m+\theta_2)}{\Gamma(m+\lambda)}\]
from where we easily deduce \eqref{eq:hy} by moment identification. Note that a simple computation gives $\delta_+\theta_1\theta_2=\lambda k_+$, securing that the distribution of ${\rm{I}}_{Y}$ is proper. Next, the random variable ${\rm{I}}_{Y}^{-1}$ being moment determinate, we have, for $\Re(z)<\theta_1+1$, that
\[ \mathbb{E}[{\rm{I}}_{Y}^{z-1}] = C \delta_+^{-z} \frac{ \Gamma(-z+\theta_1+1) \Gamma(-z+\theta_2+1)}{\Gamma(-z+\lambda+1)}.\]
Applying the inverse Mellin transform, see e.g.~\cite[Section 3.4.2]{Paris}, we get
\begin{eqnarray}\label{Cor.2.4}
m_{Y}\left(\frac{x}{\delta_+}\right)&=& C \sum_{i=1}^2x^{-\theta_i-1} \mathcal{I}_i(- x^{-1}), \: x>0,
\end{eqnarray}
where $\mathcal{I}_i(x) = \sum_{n=0}^{\infty}b_{n,i}\frac{x^n}{n!}$, $b_{n,1}= \frac{\Gamma(\theta_2-\theta_1-n)}{\Gamma(\lambda-\theta_1-n)}$ and $ b_{n,2}=\frac{\Gamma(\theta_1-\theta_2-n)}{\Gamma(\lambda-\theta_2-n)}$.
The proof of the Corollary is completed by following a line of reasoning similar to the proof of Corollary \ref{Corollary1}.
\subsection{Corollary \ref{Corollary3}}
For any $\alpha \in (0,1)$, let us observe that, for any $s \geq 0$,
\begin{eqnarray}
\phi_-(-s)&=& \frac{\alpha s \Gamma(\alpha (s+1)+1)}{(1+s)\Gamma(\alpha s+1)} \\
&=&\int_0^{\infty}(1-e^{-sy})u_{\alpha}(y)dy \label{eq:lpp},
\end{eqnarray}
where $u_{\alpha}(y) =\frac{e^{-y}e^{-y/\alpha}}{\Gamma(1-\alpha)(1-e^{-y/\alpha})^{\alpha+1}}$.
We easily check that $u_{\alpha}(y)dy \in \mathcal{P}$ and hence $\Psi$ is a Laplace exponent of a L\'evy process which drifts to $-\infty$. Next, we know, see e.g. \cite{Patie-aff}, that \[{\rm{I}}_{\tilde{H}^-}\stackrel{d}=S_\alpha^{-\alpha},\]
where $\tilde{H}^{-}$ is the negative of the subordinator having Laplace exponent \[\tilde{\phi}_-(-s) = \frac{\alpha\Gamma(\alpha s+1)}{\Gamma(\alpha(s-1)+1)}.\]
Observing that $\phi_-(-s) = \frac{-s}{-s+1} \tilde{\phi}_-(-s+1)$, we deduce, from \eqref{eq:tbs}, that
\begin{equation} \label{eq:ds}
m_{H^{-}}(x)=\frac{x^{-1/\alpha}}{\alpha}g_{\alpha}\left(x^{-1/\alpha}\right) ,\: x>0,\end{equation}
from which we readily get the expression \eqref{eq:dis}. Then, we recall the following power series representation of positive stable laws, see e.g. \cite[Formula (14.31)]{Sato-99},
\[ g_{\alpha}(x)= \sum_{n=1}^{\infty} \frac{(-1)^n}{\Gamma(-\alpha n) n!}x^{-(1+\alpha n)},\: x>0.\]
Then, by means of an argument of dominated convergence justified by the condition $\lim_{s\rightarrow \infty}s^{\alpha-1}\phi_+(-s)=0$, we get, for all $x>0$, that
\begin{eqnarray*}
m_{{\xi}}(x)
&=&\frac{k_+}{\alpha}\sum_{n=1}^{\infty} \frac{(-1)^n}{\Gamma(-\alpha n) n!}x^{n} \int_0^{\infty} y^{-(n+1)}f_{Y}(y)dy \\
&=&\frac{k_+}{\alpha}\sum_{n=1}^{\infty} \frac{\prod_{k=1}^{n}\phi_+(-k)}{\Gamma(-\alpha n) n!}x^{n},
\end{eqnarray*}
where we used the identities \eqref{eq:msn}, $ \mathbb{E}[-Y_1]=k_+$ and $\psi_+(-k)=-k\phi_{+}(-k)$. The fact that the series is absolutely convergent is justified by classical criteria combined with Euler's reflection formula $\Gamma(1-z)\Gamma(z) \sin(\pi z)= \pi$ and the asymptotics
\begin{equation} \label{eq:ag}\frac{\Gamma(z+a)}{\Gamma(z+b)} = z^{a-b}\left(1+O\left(|z|^{-1}\right)\right) \quad \textrm{ as } z \to \infty, \: |\arg(z)|<\pi,\end{equation}
see e.g.~\cite[Chap.~1]{Lebedev-72}.
We complete the proof by mentioning that Simon \cite{Simon} proved recently that the positive stable laws are MSU if and only if $\alpha\leq 1/2$ which implies, from \eqref{eq:ds}, that ${\rm{I}}_{H^{-}}$ is also MSU in this case.
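As a numerical illustration of the stable series recalled above, one can rewrite $1/\Gamma(-\alpha n)$ via Euler's reflection formula, as in the convergence argument, and compare the truncated series at $\alpha=1/2$ with the closed-form density $g_{1/2}(x)=x^{-3/2}e^{-1/(4x)}/(2\sqrt{\pi})$ of the stable law normalized by $\mathbb{E}[e^{-\lambda S_{1/2}}]=e^{-\sqrt{\lambda}}$. The function name and truncation level below are our own choices.

```python
import math

# Power series for the positive stable density (Sato, Formula (14.31)),
# rewritten via the reflection formula:
# 1/Gamma(-alpha n) = -Gamma(1 + alpha n) sin(pi alpha n) / pi.
def g_series(alpha, x, terms=80):
    s = 0.0
    for n in range(1, terms):
        s += ((-1) ** (n + 1) / (math.pi * math.factorial(n))
              * math.gamma(alpha * n + 1) * math.sin(math.pi * alpha * n)
              * x ** (-alpha * n - 1))
    return s

# Sanity check at alpha = 1/2, where the stable density has the closed
# form g_{1/2}(x) = x^{-3/2} exp(-1/(4x)) / (2 sqrt(pi)).
for x in (0.5, 1.0, 2.0):
    closed = x ** (-1.5) * math.exp(-1.0 / (4.0 * x)) / (2.0 * math.sqrt(math.pi))
    assert abs(g_series(0.5, x) - closed) < 1e-8
```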
\section{Introduction}
Games are abstract models of decision-making in which
decision-makers (players) interact in a shared environment to
accomplish their goals. Several models have been proposed to analyze
a wide variety of applications in many disciplines, such as
mathematics, computer science, and even the political and social
sciences, among others.
Game Theory \cite{Games} has its roots in the work of von Neumann
and Morgenstern \cite{TGEB} and uses mathematics in order to model
and analyze games in which the decision-makers pursue rational
behavior in the sense that they choose their actions after some
process of optimization and take into account their knowledge or
expectations of the other players' behavior. Game Theory provides
general game definitions as well as reasonable solution concepts for
many kinds of situations in games. Typical examples of this kind of
research come from phenomena emerging from Markets, Auctions and
Elections.
Although historically Game Theory has been considered more suitable
for quantitative analyses than qualitative ones, there have
been many approaches that emphasize game analysis on a
qualitative basis, by using an adequate logic to express
games as well as their solution concepts. Some of the most
representative of these logics are: Coalitional Logic
\cite{PaulyThesis01}; Alternating-time Temporal Logic (ATL)
\cite{ATL} and its variation Counter-factual ATL (CATL)
\cite{WiebeCATL}; Game Logic \cite{GameLogicPaulyP03a}; Game Logic
with Preferences \cite{PreferencesGameLogicsOtterlooHW04};
Coalitional Game Logic (CGL) \cite{WiebeCGL}, which reasons about
coalitional games. For more details on the connections and
open problems between logic and games, we point to
\cite{OpenProblemsLogicGameVanBenthem}.
The technique of Model Checking \cite{SMV} is frequently employed in
computer science to accomplish formal validation of both software
and hardware \cite{Burck91}. Model Checking consists of performing
automatic verification and other forms of formal analysis of a
system's behavior. Many implementations are available in the
literature, such as Symbolic Model Verifier (SMV) \cite{Mc93Thesis},
SPIN \cite{Hol97} and MOCHA \cite{Mocha}. Other implementations
include specific modeling features: UPPAAL \cite{Uppaal}
works with real-time systems, HYTECH \cite{Hytech} with hybrid automata and
PRISM \cite{Parker02} with stochastic automata. Recently, model
checking has also been used to verify properties in games
\cite{Vasconcelos03,CheckersSMC01,PreferencesGameLogicsOtterlooHW04,VerificationGamesAAMAS06}.
There is a wide range of problems approachable by means of Game
Theory. Besides problems and models coming from economics, which
usually have quantitative features, as those normally present in
econometric models, there is also a range of problems that are
strongly related to Multi-Agent systems modeling and that can
consequently also be validated by means of well-known CAV tools.
However, the presence of intrinsic quantitative measures in this
kind of modeling prevents the standard use of the most
popular (and efficient) CAV tools, such as Model Checkers (MCs) based on
propositional logic languages. One could argue that MCs with a richer
operational semantics, such as SPIN, which is able to assign
computable meaning to transitions by means of programming-language
fragments used as coded assertions, might be
the right answer to the specification of this kind of modeling.
However, the SPIN logic language is not a first-order logic
language and hence cannot make assertions about the internal
structure of a state, mainly regarding the relationships between the
values assigned to the individuals in these states (worlds) and the
properties of the individuals themselves; thus generalizations and
existential assertions about a state and its individuals are, in
general, not expressible. Most of the solution concepts used in Game
Theory are expressed as general assertions about the relationships
between the individuals of a possible state of affairs in the game,
and the SPIN logic language is unable, in general, to express this
kind of concept either. Concerning the usefulness of an approach based on a
logic language more expressive than those presently used in CAV
tools, it is worth mentioning new directions in the MC community
towards the use of First-Order Logic (see \cite{CADE-17,MC-FOL-01}).
Thus, this article contributes to this kind of research in the
Formal Methods community by providing a first-order-based approach
to the problem of validating models that can be expressed by
game-theoretical means. The present approach is not so restricted,
since the authors have already presented a result showing how
Multi-Agent systems can be viewed as games, in such a way that the
main solution concepts on the game side represent important concepts
on the systems side (\cite{VasconcelosWRAC}).
The aim of this article is to present GAL (Game Analysis Logic), a
logic based on first-order CTL, in order to reason about games, in
which a model of GAL is a game and a formula of GAL is an analysis.
We illustrate our approach by showing that GAL is suitable to
express models of Game Theory as well as their solution concepts.
Precisely, we specify extensive games with perfect information by
means of models of GAL. We also express their main solution concepts
- namely Nash equilibrium and subgame perfect equilibrium - by means
of formulas of GAL. In \cite{VasconcelosThesis07}, we express the
standard noncooperative models (strategic games and the solution
concept of Nash equilibrium) and cooperative models (coalition games
and the solution concept of the Core). In this article, we focus on
extensive games and the solution concepts of Nash equilibrium and
subgame perfect equilibrium.
As GAL has a first-order apparatus, we are able to define many
concepts, such as utility, in an easier way when compared to the
logics mentioned above. Moreover, a first-order apparatus is
essential to model and reason about social problems that have been
modeled by Game Theory, econometric models, etc., as already said. It
is worth mentioning that the ATL logic, in which the operators of
CTL are parameterized by sets of players, can be seen as a fragment
of GAL, using the first-order feature of GAL; thus, there is no need
for such a parameterization in GAL. In addition, the CGL logic,
which is designed to reason about cooperative models, can also be
embedded in GAL. See \cite{VasconcelosThesis07} for the proofs that ATL
and CGL can be seen as fragments of GAL. We do not focus on such
proofs here.
We also provide a model checking algorithm for GAL in order to
demonstrate that GAL can be used in practice to analyze games
automatically. We have a prototype of a model checker for GAL that
has been developed according to the main intentions of the approach
advocated here. The model checker is available for download at
www.tecmf.inf.puc-rio.br/DaviRomero. All of the examples in this
article are implemented in the tool. We will show that, using our
prototype, we are able to find solution concepts of Game Theory and
to analyze players that are based on standard algorithms
\cite{Russell02} of the AI community.
This work is divided into six parts: Section 2 introduces Game
Analysis Logic; A model checking algorithm for GAL is presented in
Section 3. Standard concepts of Game Theory are expressed in GAL in
Section 4. Section 5 presents some experimental results using our
algorithm. Finally, Section 6 concludes this work.
\section{Game Analysis Logic (GAL)}
GAL is a many-sorted modal first-order logic based on the standard
Computation Tree Logic (CTL) \cite{Clarke81}. A game is a model of
GAL, called a game analysis logic structure, and an analysis is a
formula of GAL.
The \emph{games} that we model are represented by a set of states
$\mathcal{S}E$ and a set of actions $\mathcal{CA}$.
A \emph{state} is defined by both a first-order interpretation and a
set of players, where: 1- The first-order interpretation is used to
represent the choices and the consequences of the players'
decisions. For example, we can use a list to represent the history
of the players' choices up to a certain state; 2- The set of players
represents the players that have to decide simultaneously at a
state. This set must be a subset of the set of players of the game.
The other players cannot make a choice at this state. For instance,
we can model games such as auction games, where all players are in
all states, or even games such as Chess or turn-based synchronous game
structures, where only a single player has to make a choice at each
state. Notice that we may even have some states where none of the
players can make a decision; these can be seen as states of
nature.
An \emph{action} is a relation between two states $e_{1}$ and
$e_{2}$, where all players in the state $e_{1}$ have committed
themselves to move to the state $e_{2}$. Note that this is an
extensional view of how the players committed themselves to take a
joint action.
We refer to $(A_{k})_{k\in K}$ as a sequence of $A_{k}$'s with the
index $k\in K$. Sometimes we will use more than one index as in the
example $(A_{k,l})_{k,l\in K\times L}$. We can also use
$(A_{k},B_{l})_{k\in K, l\in L}$ to denote the sequence of
$(A_{k})_{k\in K}$ followed by the sequence $(B_{l})_{l\in L}$.
Throughout this article, when the sets of indexes are clear from the
context, we will omit them.
A \emph{path} $\pi(e)$ is a sequence of states (finite or infinite)
that could be reached through the set of actions from a given state
$e$ that has the following properties: 1- The first element of the
sequence is $e$; 2- If the sequence is infinite
$\pi(e)=(e_{k})_{k\in\mathbb{N}}$, then $\forall k\geq0$ we have
$\langle e_{k} ,e_{k+1}\rangle\in\mathcal{CA}$; 3- If the sequence
is finite $\pi(e)=(e_{0},\ldots,e_{l})$, then $\forall k$ such that
$0\leq k<l$ we have $\langle e_{k},e_{k+1}\rangle\in\mathcal{CA}$
and there is no $e^{\prime}$ such that $\langle
e_{l},e^{\prime}\rangle\in\mathcal{CA}$. The game behavior is
characterized by its paths that can be finite or infinite. Finite
paths end in a state where the game is over, while infinite ones
represent a game that will never end.
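The path structure above can be sketched concretely as follows; the function and variable names are ours, not part of GAL, and the depth bound only serves to cut off infinite paths.

```python
# Illustrative sketch: enumerate the maximal paths from a state over an
# explicit set of actions, truncating would-be infinite paths at a bound.
def paths(actions, e, depth):
    succs = [e2 for (e1, e2) in actions if e1 == e]
    if not succs:
        return [[e]]   # finite path: no action leaves e, the game is over
    if depth == 0:
        return [[e]]   # truncated prefix of an infinite path
    return [[e] + p for s in succs for p in paths(actions, s, depth - 1)]

actions = {(0, 1), (1, 1)}            # state 1 loops forever: infinite path
assert paths(actions, 0, 3) == [[0, 1, 1, 1]]
assert paths(set(), 5, 3) == [[5]]    # isolated state: a finite path
```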
Below we present the formal syntax and semantics of GAL. As usual,
we call the sets of sorts $S$, predicate symbols $P$, function
symbols $F$ and players $N$ a non-logic language, in contrast to
the logic language that contains the quantifiers and the
connectives. We define a term of a sort in the standard way. We denote
a term $t$ of sort $s$ as $t_{s}$. The modalities can be read as
follows.
\begin{itemize}
\item $[EX]\alpha$ - `exists a path $\alpha$ in the next state'
\item $[AX]\alpha$ - `for all paths $\alpha$ in the next
state'
\item $[EF]\alpha$ - `exists a path $\alpha$ in the future'
\item $[AF]\alpha$ - `for all paths $\alpha$ in the future'
\item $[EG]\alpha$ - `exists a path $\alpha$ globally'
\item $[AG]\alpha$ - `for all paths $\alpha$ globally'
\item $E(\alpha\mathcal{U}$$\beta)$ - `exists a path
$\alpha$ until $\beta$'
\item $A(\alpha\mathcal{U}$$\beta)$ - `for all paths $\alpha$ until $\beta$'
\end{itemize}
\begin{definition}[Syntax of GAL]
Let $\langle S,F,P,N\rangle $ be a non-logic language, and
$t_{s_{1}}^{1},...,t_{s_{n}}^{n}$ be terms, and $t_{s_{1}}^{\prime}$
be a term, and $p:s_{1}...s_{n}$ be a predicate symbol, and $i$ be a
player, and $x_{s}$ be a variable of sort $s$. The \textbf{logic
language of GAL} is generated by the following BNF definition:
\[\Phi::=
\top~|~\bot~|~i~|~p(t_{s_{1}}^{1},\ldots,t_{s_{n}}^{n})~|~(t_{s_{1}}^{1}\approx
t_{s_{1}}^{\prime})~|~(\lnot\Phi)~|~(\Phi\wedge\Phi)~|~(\Phi\vee\Phi)~|~(\Phi\rightarrow\Phi)\]
\[|~[EX]\Phi~|~[AX]\Phi~|~[EF]\Phi~|~[AF]\Phi~|~[EG]\Phi~|~[AG]\Phi~|~E(\Phi~\mathcal{U}~\Phi)~|~A(\Phi~\mathcal{U}~\Phi)\]
\[~|~\exists x_s\Phi~|~\forall x_s\Phi\]
\end{definition}
It is well-known that the operators
$\wedge,\vee,\bot,[EX],[AF],[EF],[AG],[EG]$ and $\forall x$ can be
given by the following usual abbreviations.
\begin{itemize}
\item
$\bot$ $\Longleftrightarrow$ $\lnot\top$
\item $\alpha\wedge\beta$ $\Longleftrightarrow$
$\lnot(\alpha\rightarrow\lnot\beta) $
\item $\alpha\vee\beta$
$\Longleftrightarrow$ $(\lnot\alpha\rightarrow\beta)$
\item $[EX]\alpha$ $\Longleftrightarrow$ $\lnot[AX]\lnot\alpha$
\item $[AF]\alpha$ $\Longleftrightarrow$
$A(\top~\mathcal{U}~\alpha)$
\item $[EF]\alpha$
$\Longleftrightarrow$ $E(\top~\mathcal{U}~\alpha)$
\item $[AG]\alpha$ $\Longleftrightarrow$ $\lnot
E(\top~\mathcal{U}~\lnot\alpha)$
\item $[EG]\alpha$
$\Longleftrightarrow$ $\lnot A(\top~\mathcal{U}~\lnot\alpha)$
\item $\forall x\alpha(x)$ $\Longleftrightarrow$ $\lnot\exists
x\lnot\alpha(x)$
\end{itemize}
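Since these abbreviations reduce the remaining modalities to $[AX]$ and $E(\cdot~\mathcal{U}~\cdot)$, a checker only needs a one-step operator and a least-fixpoint computation. The following explicit-state sketch (illustrative only; it is not the GAL model checker presented in Section 3) shows this propositional core over a finite set of actions:

```python
# EX is computed from one-step predecessors; E(a U b) is the least
# fixpoint of  X = sat_b  ∪  (sat_a ∩ EX(X)).
def EX(actions, target):
    return {e1 for (e1, e2) in actions if e2 in target}

def EU(actions, sat_a, sat_b):
    result = set(sat_b)
    while True:
        new = EX(actions, result) & sat_a
        if new <= result:
            return result
        result |= new

states = {0, 1, 2, 3}
actions = {(0, 1), (1, 2), (2, 2), (0, 3), (3, 3)}
# [EF]{2} = E(top U {2}): the states from which state 2 is reachable.
assert EU(actions, states, {2}) == {0, 1, 2}
# [EX]{3}: the states with a successor in {3}.
assert EX(actions, {3}) == {0, 3}
```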
\begin{definition}[Structure of GAL]
Let $\langle S,F,P,N\rangle $ be a non-logic language of GAL. A
\textbf{Game Analysis Logic Structure} for this non-logic language
is a tuple $\mathcal{G}=\langle
\mathcal{S}E,\mathcal{S}E_{o},\mathcal{CA},~(\mathcal{D}_{s}),$
$~(\mathcal{F}_{f,e}),~(\mathcal{P}_{p,e}),~(N_{e})\rangle$ such
that:
\begin{itemize}
\item $\mathcal{S}E$ is a non-empty set, called the set of
states.
\item $\mathcal{S}E_{o}$ is a set of initial states, where
$\mathcal{S}E_{o}\subseteq\mathcal{S}E$.
\item For each state $e\in\mathcal{S}E$, $N_{e}$ is a subset of
$N$.
\item $\mathcal{CA}\subseteq\mathcal{S}E\times\mathcal{S}E$,
called the set of actions of the game\footnote{This relation is not
required to be total, as it is in the CTL case, because we allow
finite games.}, in which, if there is at least one player in the
state $e_{1}$, then there exists a state $e_{2}$ such that $\langle
e_{1},e_{2}\rangle\in\mathcal{CA}$.
\item For each sort $s\in S$, $\mathcal{D}_{s}$ is a non-empty
set, called the domain of sort $s$\footnote{In algebraic terminology
$\mathcal{D}_{s}$ is a carrier for the sort $s$.}.
\item For each function symbol $f:s_1\times\ldots\times s_n\rightarrow s$ of $F$ and each
state $e\in \mathcal{S}E$, $\mathcal{F}_{f,e}$ is a function such
that $\mathcal{F}_{f,e}:\mathcal{D}_{s_{1}}\times\ldots\times
\mathcal{D}_{s_{n}}\rightarrow \mathcal{D}_{s}$.
\item For each predicate symbol $p:s_1\times\ldots\times s_n$ of $P$ and state $e\in
\mathcal{S}E$, $\mathcal{P}_{p,e}$ is a relation such that
$\mathcal{P}_{p,e}\subseteq \mathcal{D}_{s_{1}}\times \ldots\times
\mathcal{D}_{s_{n}}$.
\end{itemize}
\end{definition}
A \textbf{function or predicate is rigidly interpreted} if its
interpretation is the same for every state. A \textbf{GAL-structure
is finite} if the set of states $\mathcal{S}E$ and each set of
domains $D_{s}$ are finite. Otherwise, it is infinite. Note that
even when a GAL-structure is finite we might have infinite paths.
In order to provide the semantics of GAL, we define a valuation
function as a mapping $\sigma_{s}$ that assigns to each free
variable $x_{s}$ of sort $s$ some member $\sigma_{s}(x_{s})$ of the
domain $\mathcal{D}_{s}$. Since we use terms, we extend each
function $\sigma_{s}$, in the standard way, to a function
$\bar{\sigma}_{s}$ that maps a state and a term to an element of
sort $s$. When the valuation functions are not necessary, we omit
them.
\begin{definition}[Semantics of GAL]
Let
$\mathcal{G}=\langle\mathcal{S}E,\mathcal{S}E_{o},\mathcal{CA},(\mathcal{D}_{s}),
(\mathcal{F}_{f,e}),(\mathcal{P}_{p,e}),$ $(N_{e})\rangle $ be a
GAL-structure, and $(\sigma_{s})$ be valuation functions, and
$\alpha$ be a GAL-formula, where $s\in S, f\in F, p\in P$ and
$e\in\mathcal{S}E$. \textbf{We write
$\mathcal{G},(\sigma_{s})\models_{e}\alpha$ to indicate that the
state $e$ satisfies the formula $\alpha$ in the structure
$\mathcal{G}$ with valuation functions $(\sigma_{s})$}. The formal
definition of satisfaction $\models$ proceeds as follows:
\begin{itemize}
\item $\mathcal{G},(\sigma_{s})\models_{e}\top$.
\item $\mathcal{G},(\sigma_{s})\models_{e}i\Longleftrightarrow
i\in N_{e}$
\item $\mathcal{G},(\sigma_{s})\models_{e}p(t^{1}_{s_{1}},...,t^{n}_{s_{n}}
)\Longleftrightarrow\langle
\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}}),...,\bar{\sigma}_{s_{n}}
(e,t^{n}_{s_{n}})\rangle \in \mathcal{P}_{p,e}$
\item $\mathcal{G},(\sigma_{s})\models_{e}(t^{1}_{s_{1}}\approx
t^{\prime}_{s_{1}})\Longleftrightarrow
\bar{\sigma}_{s_1}(e,t^{1}_{s_1})=\bar{\sigma}_{s_1}(e,t^{\prime}_{s_1})$
\item $\mathcal{G},(\sigma_{s})\models_{e}\lnot\alpha$
$\Longleftrightarrow$ NOT
$\mathcal{G},(\sigma_{s})\models_{e}\alpha$
\item $\mathcal{G},(\sigma_{s})\models_{e}(\alpha\rightarrow\beta)$
$\Longleftrightarrow$ IF $\mathcal{G},(\sigma_{s})\models_{e}\alpha$
THEN $\mathcal{G},(\sigma_{s})\models_{e}\beta$
\item $\mathcal{G},(\sigma_{s})\models_{e}[AX]\alpha\Longleftrightarrow$
$\forall e^{\prime}\in\mathcal{S}E\ $such that $\langle
e,e^{\prime}\rangle \in\mathcal{CA}$ we have
$\mathcal{G},(\sigma_{s})\models_{e^{\prime}}\alpha$ (see Figure
\ref{figModalConectives}.a).
\item $\mathcal{G},(\sigma_{s})\models_{e}E(\alpha~\mathcal{U}$
$\beta)$ $\Longleftrightarrow$ there exists a finite (or infinite)
path $\pi(e)=(e_{0}e_{1}e_{2}...e_{i})$ and a $k\geq0$ such that
$\mathcal{G},(\sigma_{s})\models_{e_{k}}\beta$ and, for all $j$ with
$0\leq j<k$, $\mathcal{G},(\sigma_{s})\models_{e_{j}}\alpha$ (see
Figure \ref{figModalConectives}.b).
\item $\mathcal{G},(\sigma_{s})\models_{e} A(\alpha~\mathcal{U}$ $\beta)$
$\Longleftrightarrow$ for all finite (and infinite) paths
$\pi(e)=(e_{0}e_{1}e_{2}...e_{i})$, there exists a $k\geq0$ such
that $\mathcal{G},(\sigma_{s})\models_{e_{k}}\beta$ and, for all $j$
with $0\leq j<k$, $\mathcal{G},(\sigma_{s})\models_{e_{j}}\alpha$
(see Figure \ref{figModalConectives}.c).
\item $\mathcal{G},(\sigma_{s},\sigma_{s_{k}})\models_{e}\exists
x_{s_{k}}\alpha\Longleftrightarrow$ there exists $d\in
\mathcal{D}_{s_{k}}$ such that
$\mathcal{G},(\sigma_{s},\sigma_{s_{k}}(x_{s_{k}}|d))\models_{e}\alpha$,
where $\sigma_{s_{k}}(x_{s_{k}}|d)$ is the function which is exactly
like $\sigma_{s_{k}}$ except for one thing: At the variable
$x_{s_{k}}$ it assumes the value $d$. This can be expressed by the
equation:
\[ \sigma_{s_{k}}(x_{s_{k}}|d)(y)=\left\{
\begin{array}
[c]{l} \sigma_{s_{k}}(y), \textrm{ if }\ y\neq x_{s_{k}} \\ d, \qquad
\textrm{ if }\ y=x_{s_{k}}
\end{array}
\right. \]
\end{itemize}
\end{definition}
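The satisfaction relation above can be prototyped directly. Below is a minimal Python sketch for a small fragment (a unary predicate applied to a variable, negation, $[AX]$ and $\exists$); the tuple encoding of formulas and the dictionary-based representation of the structure are assumptions of this sketch, for illustration only:

```python
def holds(G, sigma, e, phi):
    """Check G,(sigma) |=_e phi for a small fragment of GAL.
    G = (CA, P, D): actions, per-state predicate interpretation, domains."""
    CA, P, D = G
    op = phi[0]
    if op == 'pred':                     # ('pred', p, x): p applied to variable x
        _, p, x = phi
        return sigma[x] in P[(p, e)]
    if op == 'not':
        return not holds(G, sigma, e, phi[1])
    if op == 'AX':                       # every successor satisfies phi[1]
        return all(holds(G, sigma, t, phi[1]) for (s, t) in CA if s == e)
    if op == 'exists':                   # ('exists', x, sort, body)
        _, x, sort, body = phi
        return any(holds(G, {**sigma, x: d}, e, body) for d in D[sort])
    raise ValueError(op)

# Two states 0 -> 1; the predicate p holds of the value 5 only at state 1.
G = ({(0, 1)}, {('p', 0): set(), ('p', 1): {5}}, {'s': {4, 5}})
print(holds(G, {}, 0, ('AX', ('exists', 'x', 's', ('pred', 'p', 'x')))))  # True
```

The quantifier case implements exactly the updated valuation $\sigma_{s_{k}}(x_{s_{k}}|d)$ of the definition, here as a dictionary update.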
\begin{figure}[h]
\begin{tabular}[l]{ccc}
\raisebox{-0pt}{
\includegraphics[width=.27\textwidth]{forallnext.pdf
}
&
\raisebox{-0pt}{
\includegraphics[width=.27\textwidth]{existsalfabeta.pdf
}
&
\raisebox{-0pt}{
\includegraphics[width=.27\textwidth]{forallalfabeta.pdf}
} \\
(a) - $[AX]\alpha$ & (b) - $E(\alpha~\mathcal{U}\beta)$ & (c) - $A(\alpha~\mathcal{U}\beta)$
\end{tabular}
\caption{Modal Connectives of GAL.}\label{figModalConectives}
\end{figure}
\section{Satisfiability and Model Checking for GAL}
\label{sectionGALV} It is well-known that there is no sound and
complete system for a first-order CTL
\cite{UndecidableFOCTLMontagnaPT02}. Thus, GAL is also
non-axiomatizable. However, we argue that, using model checking for
GAL, we can reason about games as well. Besides that, we can also
define an incomplete axiomatization of GAL in order to cope with
proofs of interesting results, such as the existence of mixed Nash
equilibria in strategic games, but we do not focus on this in this
article. In the sequel we state the model checking problem for GAL
and briefly discuss a model checking algorithm for it.
Let $\mathcal{G}=\langle
\mathcal{S}E,\mathcal{S}E_{o},\mathcal{CA},(\mathcal{D}_{s}),$
$(\mathcal{F}_{f,e}),(\mathcal{P}_{p,e}),(N_{e})\rangle $ be a
GAL-structure with the non-logic language $\langle S,F,P,N\rangle $,
and $(\sigma_{s})$ be valuation functions and $\alpha$ be a
GAL-formula. The GAL model checking problem is to find the set of
states that satisfies the formula $\alpha$.
\[\{e\in\mathcal{S}E \textrm{ }| \textrm{ }\mathcal{G},(\sigma_{s})\models_{e}\alpha\}\]
In order to have a model checking algorithm for GAL, we assume that
all of the games are finite; however, we might still have infinite
behavior.
The algorithm for solving the GAL model checking problem uses an
explicit representation of the GAL-structure as a labelled, directed
graph. The nodes represent the states $\mathcal{S}E$, the arcs in
the graph provide the set of actions $\mathcal{CA}$ and the labels
associated with the nodes describe both the players' set $N_{e}$ and
the first-order interpretation (the interpreted functions' set
$(\mathcal{F}_{f,e})$ and the interpreted predicates' set
$(\mathcal{P}_{p,e})$). The algorithm also uses the functions
$\mathcal{D}:S\rightarrow \mathcal{D}_{s}$,
$\mathcal{N}:\mathcal{S}E\rightarrow N_{e}$,
$\mathcal{F}:F\times\mathcal{S}E\rightarrow\mathcal{F}_{f,e}$ and
$\mathcal{P}:P\times\mathcal{S}E\rightarrow\mathcal{P}_{p,e}$ in
order to provide an implicit representation of the domains' set
$(\mathcal{D}_{s})$, the players' set $N_{e}$, the functions
$(\mathcal{F}_{f,e})$ and the relations $(\mathcal{P}_{p,e})$,
respectively. Thus, we only evaluate them on demand.
The algorithm is similar to the CTL model checking algorithm
\cite{SMV} that operates by labelling each state $e\in\mathcal{S}E$
with the set of $labels(e)$ of sub-formulas of $\alpha$ which are
true in $e$. The algorithm starts with the set $labels(e)$ as the
empty set\footnote{The CTL model checking algorithm starts the set
of labels(e) as the set of propositions in $e$. In our algorithm we
just evaluate the predicates and functions on demand.} and then goes
by a series of steps (the number of operators in $\alpha$). At each
step $k$, sub-formulas with $k-1$ nested GAL operators are
processed. When a formula is processed, it is added to the labelling
of the state in which it is true. Thus,
$\mathcal{G},(\sigma_{s})\models_{e}\alpha\Longleftrightarrow\alpha\in
labels(e)$.
As GAL-formulas are represented in terms of $i$,
$p(t^{1}_{s_{1}},...,t_{s_{n}}^{n})$,
$(t^{1}_{s_{1}}$$\approx$$t_{s_{1}}^{\prime})$, $(\lnot\alpha)$,
$(\alpha\rightarrow\beta)$, $\exists x_{s_k}\alpha$, $[AX]\alpha$,
$E(\alpha\mathcal{U}\beta)$, $A(\alpha\mathcal{U}\beta)$, it is
sufficient to handle these cases. The cases $(\lnot\alpha)$,
$(\alpha\rightarrow\beta)$, $[AX]\alpha$,
$E(\alpha\mathcal{U}\beta)$ and $A(\alpha\mathcal{U}\beta)$ are
similar to the CTL model checking algorithm and we do not present
them here (see \cite{Mc93Thesis} for more details). Below we present and
give the time complexity of the other procedures. In order to
guarantee termination of the algorithm, the functions
$(\mathcal{F}_{f,e})$ and the relations $(\mathcal{P}_{p,e})$ must
terminate; since the model is finite, this is guaranteed. We use the
notation $\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}})$ as the function
that interprets the term $t^{1}_{s_{1}}$ at the state~$e$. We take
its complexity as an upper bound on the implementation of
$\bar{\sigma}_{s_{1}}$ taking all states into account. We refer to
this upper bound as $|\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}})|$.
\begin{itemize}
\item Case $i$:
The procedure \emph{verifyPlayer} (see Algorithm \ref{procPlayer})
labels all states $e\in\mathcal{S}E$ with the player $i$ if the
player $i$ belongs to the set of players in $e$. This procedure
requires time $O(|\mathcal{S}E|)$.
\item Case $p(t^{1}_{s_{1}},...,t^{n}_{s_{n}})$: The procedure
\emph{verifyPredicate} (see Algorithm \ref{procPred}) labels all
states $e\in\mathcal{S}E$ in which the interpretation of the
predicate $p$ with the interpretation of terms
$t^{1}_{s_{1}},...,t^{n}_{s_{n}}$ is true in $e$. This procedure
requires time
$O((|\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}})|+...+|\bar{\sigma}_{s_{n}}
(e,t^{n}_{s_{n}})|)\times |\mathcal{S}E|)$. \footnote{Notice that
the evaluation of the terms and the predicate is done in all states
and their time complexity might not be polynomial.}
\item Case $t^{1}_{s_{1}}$$\approx$$t^{\prime}_{s_{1}}$: The
procedure \emph{verifyEquality} (see Algorithm \ref{procEqual})
labels all states $e\in\mathcal{S}E$ in which the interpretations of
the terms $t^{1}_{s_{1}}$ and $t^{\prime}_{s_{1}}$ are equal. The
time complexity is
$O((|\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}})|+|\bar{\sigma}_{s_{1}}
(e,t^{\prime}_{s_{1}})|)\times |\mathcal{S}E|)$.
\item Case $\exists x_{s_{k}}\alpha$: The
procedure \emph{verifyExists} (see Algorithm \ref{procExists})
labels all states $e\in\mathcal{S}E$ in which the formula $\alpha$,
with all occurrences of the variable $x_{s_{k}}$ substituted by at
least one element of the domain, is true. We use the notation
$\alpha[x_{s_k}\leftarrow d]$ for the function that substitutes all
occurrences of $x_{s_k}$ by $d$ in $\alpha$. This procedure requires
$O(|\mathcal{D}_{s_{k}}|\times |\mathcal{S}E|)$.
\end{itemize}
Thus, the complexity of the algorithm depends on: 1- the size of the
domains' set; 2- the size of the states' set; 3- the size of the
actions' set; 4- the complexity of both functions and predicates in
each state.
\begin{algorithm}
\caption{procedure verifyPlayer(i)} \label{procPlayer}
\begin{algorithmic}
\FORALL { $e\in\mathcal{S}E$}
\IF{$i\in \mathcal{N}(e)$}
\STATE $label(e):=label(e)\cup\{i\}$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm} \begin{algorithm}
\caption{procedure
verifyPredicate($p(t^{1}_{s_{1}},...,t^{n}_{s_{n}}))$}
\label{procPred}
\begin{algorithmic}
\FORALL { $e\in\mathcal{S}E$}
\IF{$\langle\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}}),...,\bar{\sigma}_{s_{n}}
(e,t^{n}_{s_{n}})\rangle \in \mathcal{P}(p,e)$}
\STATE $label(e):=label(e)\cup\{p(t^{1}_{s_{1}},...,t^{n}_{s_{n}})\}$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{procedure verifyEquality$(t^{1}_{s_{1}}\approx
t^{\prime}_{s_{1}})$} \label{procEqual}
\begin{algorithmic}
\FORALL { $e\in\mathcal{S}E$}
\IF{$\bar{\sigma}_{s_{1}}(e,t^{1}_{s_{1}})=\bar{\sigma}_{s_{1}}(e,t^{\prime}_{s_{1}})$}
\STATE $label(e):=label(e)\cup\{t^{1}_{s_{1}}\approx t^{\prime}_{s_{1}}\}$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{procedure verifyExists$(\exists x_{s_{k}}\alpha)$}
\label{procExists}
\begin{algorithmic}
\FORALL { $d\in\mathcal{D}({s_{k}})$} \STATE $T := \{e \textrm{ } |
\textrm{ } \alpha[x_{s_{k}}\leftarrow d]\in label(e)\}$
\FORALL { $e\in T$}
\IF{$\exists x_{s_{k}}\alpha \not\in
label(e)$}
\STATE $label(e):=label(e)\cup\{\exists
x_{s_{k}}\alpha\}$
\ENDIF
\ENDFOR \ENDFOR
\end{algorithmic}
\end{algorithm}
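Algorithm \ref{procExists} translates almost line by line into executable code. A Python sketch follows; the representation of labels as per-state sets and of instantiated sub-formulas is an assumption of this sketch:

```python
def verify_exists(states, domain, labels, inst, exists_label):
    """Label e with the existential formula whenever some instance
    alpha[x <- d] already labels e (cf. procedure verifyExists)."""
    for d in domain:
        T = {e for e in states if inst(d) in labels[e]}
        for e in T:
            labels[e].add(exists_label)
    return labels

# alpha[x <- d] is named ('alpha', d); only state 0 satisfies an instance.
labels = verify_exists({0, 1}, {'a', 'b'},
                       {0: {('alpha', 'a')}, 1: set()},
                       lambda d: ('alpha', d), 'exists_x_alpha')
print(labels[0], labels[1])
```

As in the complexity analysis above, the loop visits each domain element once and each state once per element, giving $O(|\mathcal{D}_{s_{k}}|\times |\mathcal{S}E|)$.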
Below we consider a simpler version of GAL, for the sake of a
simpler presentation of the algorithm's complexity. The non-logical
language has only one sort $D$ and one unary predicate $p:D$. Let
$\mathcal{G}_{S}$ be a GAL-structure, where: 1- The predicate $p$ is
interpreted as constant for all states and its time complexity is
represented by $O(p)$; 2- The size of sort D's domain is
$|\mathcal{D}|$; 3- The size of the states' set is $|\mathcal{S}E|$;
4- The size of the actions' set is $|\mathcal{CA}|$. Let $\alpha$ be
a GAL-formula for this language, where $\alpha_{M}$ and $\alpha_{D}$
are the number of modal connectives and the number of quantifier
connectives, respectively, in the formula $\alpha$. The time
complexity to verify $\alpha$ for $\mathcal{G}_{S}$ is
\[O(|\mathcal{D}|^{\alpha_{D}}\times |\alpha_{M}|\times((|\mathcal{S}E|\times
O(p))+|\mathcal{CA}|))\]
We have a prototype, namely Game Analysis Logic Verifier (GALV),
that was written as \emph{framework} in Java. GALV is available for
download at http://www.tecmf.inf.puc-rio.br/DaviRomero. All of the
examples that we will show in this article are implemented in our
prototype. The main advantages of this model checker are: 1- It
allows the use of abstract data types, for example, a list can be
used to represent the history of the game; 2- It might use the large
collection of libraries that are available in Java; 3- Functions and
predicates might be used to analyze games, such as the evaluation
functions that are used in the AI community to provide an estimate
of the expected utility of the game from a given position; 4- GALV
allows computational aspects to define the players' actions, for
example, a \emph{minimax} algorithm can be used to define the
actions of a certain player, while the other players might use
different algorithms. So, the time complexity to generate a game
might not be polynomial, i.e., it depends on the algorithms that
have been used to define the players' actions.
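As mentioned above, GALV can drive a player with a \emph{minimax} algorithm. For concreteness, here is plain textbook minimax over an explicit game tree; this sketch is not GALV's actual Java player interface, and the tree encoding is hypothetical:

```python
def minimax(node, maximizing, children, value):
    """Return the game value of `node` under optimal alternating play."""
    if not children.get(node):           # leaf: return its stored value
        return value[node]
    vals = [minimax(c, not maximizing, children, value)
            for c in children[node]]
    return max(vals) if maximizing else min(vals)

# Root r has an opponent node a (leaves c, d) and a leaf b.
children = {'r': ['a', 'b'], 'a': ['c', 'd']}
value = {'b': 1, 'c': 0, 'd': 2}
print(minimax('r', True, children, value))  # max(min(0, 2), 1) = 1
```

A depth bound and an evaluation function, as in the AI usage mentioned above, would replace the leaf test by a cutoff.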
\section{Game Theory in Game Analysis Logic}\label{sectionGTinGAL}
We can model both the standard models and the standard solution
concepts of Game Theory using GAL. In this section we show that the
standard models correspond to GAL-structures and the standard
solution concepts correspond to GAL-formulas. Precisely, we present
the correspondence between the extensive games and the
GAL-structures, as well as between the solution concepts of Nash
equilibrium (NE) and subgame perfect equilibrium (SPE) and the
formulas of GAL.
For more details about the rationale of the
definitions related to Game Theory see \cite{Games}. In the
sequel, we write down the definitions and theorems used in this
article.
An extensive game is a model in which each player can consider his
or her plan of action at every time of the game at which he or she
has to make a choice. There are two kinds of models: game with
perfect information; and games with imperfect information. For the
sake of simplicity we restrict the games to models of perfect
information. A general model that allows imperfect information is
straightforward. Below we present the formal definition and the
example depicted in Figure \ref{ExtensiveGameFigure}.a.
\begin{definition}\label{extensiveDefinition}
An \textbf{extensive game with perfect information} is a tuple
$\langle
\textbf{N},\textbf{H},\textbf{P},(\textbf{u$_{\textbf{i}}$})
\rangle$, where
\begin{itemize}
\item \textbf{N} is a set, called the set of players.
\item \textbf{H} is a set of sequences of actions (finite or infinite),
called the set of histories, that satisfies the following properties
\begin{itemize}
\item the empty sequence is a history, i.e. $\emptyset\in H$.
\item if $(a_{k})_{k\in K}\in H$, where $K\subseteq\mathbb{N}$, then for all $l\leq |K|$ we have $(a_{k})_{k=0,\ldots,l}\in
H$ (i.e., $H$ is closed under prefixes).
\item if $(a_{0}\ldots a_{k})\in H$ for all $k\in\mathbb{N}$, then
the infinite sequence $(a_{0}a_{1}\ldots)\in H$.
\end{itemize}
A history $h$ is
\textbf{terminal} if it is infinite or it has no action $a$ such
that $(h,a)\in H$. We refer to $\textbf{T}$ as the set of terminal histories.
\item $\textbf{P}$ is a function that assigns to each non-terminal history a
player.
\item For each player $i\in N$, a utility function $\textbf{u$_{\textbf{i}}$}$
on $T$.
\end{itemize}
\end{definition}
\begin{example}\label{extensiveGameExample}
An example of a two-player extensive game $\langle
\textbf{N},\textbf{H},\textbf{P},(\textbf{u$_{\textbf{i}}$})
\rangle$, where:
\begin{itemize}
\item $\textbf{N}=\{1,2\}$;
\item $\textbf{H}=\{\emptyset,(A),(B),(A,L),(A,R)\}$;
\item $\textbf{P}(\emptyset)=1$ and $\textbf{P}((A))=2$;
\item $\textbf{u$_{\textbf{1}}$}((B))=1,$ $\textbf{u$_{\textbf{1}}$}((A,L))=0,$ $\textbf{u$_{\textbf{1}}$}((A,R))=2$;
\item $\textbf{u$_{\textbf{2}}$}((B))=2,\textbf{u$_{\textbf{2}}$}((A,L))=0,\textbf{u$_{\textbf{2}}$}((A,R))=1$.
\end{itemize}
\end{example}
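For readers who prefer executable notation, the example above can be written down directly, with histories as tuples; this encoding is our own choice, not part of the formal definition:

```python
# The two-player extensive game of the example, histories as tuples.
N = {1, 2}
H = {(), ('A',), ('B',), ('A', 'L'), ('A', 'R')}
P = {(): 1, ('A',): 2}                 # player to move at each non-terminal
u = {1: {('B',): 1, ('A', 'L'): 0, ('A', 'R'): 2},
     2: {('B',): 2, ('A', 'L'): 0, ('A', 'R'): 1}}

# Terminal histories: those with no one-step extension inside H.
T = {h for h in H
     if not any(len(g) == len(h) + 1 and g[:len(h)] == h for g in H)}
print(sorted(T, key=len))
```

The computed set $T$ is $\{(B),(A,L),(A,R)\}$, matching the histories to which the utility functions assign values.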
A \emph{strategy of player i} is a function that assigns an action
for each non-terminal history for each $P(h)=i$. For the purpose of
this article, we represent a strategy as a tuple. In order to avoid
confusion when we refer to the strategies or the histories, we use
`$\langle$' and `$\rangle$' to the strategies and `$($' and `$)$' to
the histories. In Example \ref{extensiveGameExample}, player $1$ has
to make a decision only after the initial state and he or she has
two strategies $\langle A\rangle$ and $\langle B\rangle$. Player $2$
has to make a decision after the history $(A)$ and he or she has two
strategies $\langle L\rangle$ and $\langle R\rangle$. We denote
$\textbf{S}_{\textbf{i}}$ as the set of player i's strategies. We
denote $s=(s_{i})$ as a \textbf{strategy profile}. We refer to
$\textbf{O(s}_{\textbf{1}},\ldots,\textbf{s}_{\textbf{n}}\textbf{)}$
as an outcome that is the terminal history when each player follows
his or her strategy $s_{i}$. In Example \ref{extensiveGameExample},
$\langle \langle B\rangle,\langle L\rangle\rangle$ is a strategy
profile in which the player $1$ chooses $B$ after the initial state
and the player 2 chooses $L$ after the history $(A)$, and $O(\langle
B\rangle,\langle L\rangle)$ is the outcome $(B)$. In a similar way,
we refer to
$\textbf{O}_{\textbf{h}}\textbf{(h,}\textbf{s}_{\textbf{1}},\ldots,\textbf{s}_{\textbf{n}}\textbf{)}$
as the outcome when each player follows his or her strategy $s_{i}$
from history $\textbf{h}$. In Example \ref{extensiveGameExample},
$O_h((A),\langle B\rangle,\langle L\rangle)$ is the outcome $(A,L)$
and $u_1((A),\langle B\rangle,\langle L\rangle)=u_1((A,L))=0$.
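The outcome functions $O$ and $O_h$ are straightforward to compute: follow the strategy profile until a terminal history is reached. A Python sketch on the encoding of the example; the dictionary representation of strategies (player $\mapsto$ history $\mapsto$ action) is an assumption of this sketch:

```python
P = {(): 1, ('A',): 2}                       # player function of the example

def O_h(h, profile):
    """Outcome from history h when each player follows profile[i]."""
    while h in P:                            # non-terminal: someone moves
        h = h + (profile[P[h]][h],)
    return h

def O(profile):
    return O_h((), profile)

profile = {1: {(): 'B'}, 2: {('A',): 'L'}}   # the profile <<B>, <L>>
print(O(profile))                            # ('B',)
print(O_h(('A',), profile))                  # ('A', 'L')
```

This reproduces the computations in the text: $O(\langle B\rangle,\langle L\rangle)=(B)$ and $O_h((A),\langle B\rangle,\langle L\rangle)=(A,L)$.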
\begin{figure}[h]
\setlength{\unitlength}{1 cm}
\begin{picture}(4,5.0)
\put(3.3,0.8){$0$,$0$}
\put(4.9,0.8){$2$,$1$}
\put(6.5,2.2){$1$,$2$}
\put(4.3,3.2){A}
\put(6.3,3.2){B}
\put(3.5,1.7){L}
\put(4.7,1.7){R}
\put(5.5,4){\circle{0.5}}
\put(5.4,3.9){1}
\put(5.5,3.75){\vector(1,-1){1.2}}
\put(5.5,3.75){\vector(-1,-1){1.2}}
\put(4.3,2.3){\circle{0.5}}
\put(4.2,2.2){2}
\put(4.3,2.05){\vector(1,-1){0.8}}
\put(4.3,2.05){\vector(-1,-1){0.8}}
\put(2,0){(a) - Extensive form representation}
\put(8.3,0){(b) - A GAL representation}
\put(11.5,4.5){\oval(1.7,1)}
\put(11.1,4.6){$h=\emptyset$}
\put(11.3,4.2){$\{1\}$}
\put(11.5,4.0){\vector(-2,-1){1}}
\put(11.5,4.0){\vector(2,-1){1}}
\put(10.5,3){\oval(2.0,1)}
\put(9.8,3.1){$h=(A)$}
\put(10.3,2.7){$\{2\}$}
\put(10.5,2.5){\vector(-3,-1){1.2}}
\put(10.5,2.5){\vector(3,-1){1.2}}
\put(9.2,1.5){\oval(2.2,1.1)}
\put(8.3,1.6){$h=(A,L)$}
\put(9.1,1.2){$\{\}$}
\put(11.5,1.5){\oval(2.2,1.1)}
\put(10.6,1.6){$h=(A,R)$}
\put(11.4,1.2){$\{\}$}
\put(12.6,3){\oval(2.0,1)}
\put(11.9,3.1){$h=(B)$}
\put(12.4,2.7){$\{\}$}
\end{picture}
\caption{\emph{Mapping an extensive game into a GAL model.
}}\label{ExtensiveGameFigure}
\end{figure}
We can model an extensive game $\Gamma=\langle N,H,P,(u_{i})\rangle$
as a GAL-structure in the following way. Each history $h\in H$ (from
the extensive game) is represented by a state, in which a 0-ary
symbol $h$ designates a history of $\Gamma$ (the one that the state
is coming from), so $h$ is a non-rigid designator. The set of the
actions of the GAL-structure is determined by the set of actions of
each history, i.e., given a history $h\in H$ and an action $a$ such
that $(h,a)\in H$, then the states namely $h$ and $(h,a)$ are in the
set of actions of the GAL-structure, i.e. $\langle
h,(h,a)\rangle\in\mathcal{CA}$. Function $P$ determines the player
that has to make a choice at every state, i.e. $N_{h}=\{P(h)\}$. The
utility functions are rigidly defined as in the extensive game.
The initial state is the state represented by the initial history of
the extensive game, i.e. $H_o=\{\emptyset\}$. Sorts $H$ and $T$ are
interpreted as the histories and terminal histories of the extensive
game, respectively, i.e., $\mathcal{D}_{H}=H$ and
$\mathcal{D}_{T}=T$. Sort $U$ represents the utility values and is
interpreted as the set of all possible utility values of the
extensive game\footnote{Note that this set is finite if the game is
finite.}. In order to define the solution concept of the subgame
perfect equilibrium and the Nash equilibrium, we add to this
structure the sets of players' strategies $(\mathcal{D}_{S_{i}})$
and functions $O$ and $O_h$. To summarize, a \textbf{GAL-structure
for an extensive game with perfect information} $\Gamma =\langle
N,P,H,(u_{i})\rangle$ is the tuple $\langle
H,H_{o},\mathcal{CA},\textbf{(}\mathcal{D}_{H},\mathcal{D}_{T},\mathcal{D}_{S_{i}},\mathcal{D}_{U}\textbf{)},
\textbf{(}u_{i},h_{h},O,O_h\textbf{)},~(\geq)~,(N_{h})\rangle$ with
non-logic language $\langle (H,T,S_i,U)$ $,(h:\rightarrow
H,u_i:T\rightarrow U,O:S\rightarrow T,O_h:H\times S\rightarrow T)$
$,(\geq:U\times U),N\rangle$. The example below is the GAL-structure
(see Figure \ref{ExtensiveGameFigure}.b) of Example
\ref{extensiveGameExample} (see Figure \ref{ExtensiveGameFigure}.a).
\begin{example}\label{exampleExtenGameGal2}
The GAL-structure of Example \ref{extensiveGameExample} is $\langle
H,H_{o},\mathcal{CA},$ $\textbf{(}\mathcal{D}_{H},\mathcal{D}_{T},\mathcal{D}_{S_{1}},\mathcal{D}_{S_{2}},\mathcal{D}_{U}\textbf{)},\textbf{(}h_{h},u_{1},u_{2},$
$O,O_h\textbf{)},(\geq),\textbf{(}N_{h}\textbf{)}\rangle$
with non-logic language $\langle(H,T,S_1,S_2,U),(h:\rightarrow H,u_{1}:T\rightarrow U,
u_{2}:T\rightarrow U,O:S_1\times S_2\rightarrow T,$ $O_h:H\times S_1\times S_2\rightarrow T),(\geq:U \times U),\{1,2\}\rangle$
where
\begin{itemize}
\item
$H=\{\emptyset,~(A),~(B),~(A,L),~(A,R)\}$ and $H_{o}=\{\emptyset\}$.
\item $\mathcal{CA}=\{\langle\emptyset,~(A)\rangle,~\langle\emptyset,(B)\rangle,~\langle(A),(A,L)\rangle,~\langle(A),(A,R)\rangle\}$.
\item $\mathcal{D}_{S_{1}}=\{\langle A\rangle,\langle B\rangle\}$, $\mathcal{D}_{S_{2}}=\{\langle L\rangle,\langle R\rangle\}$ and $\mathcal{D}_{U}=\{0,1,2\}$.
\item $\mathcal{D}_{H}=\{\emptyset,~(A),~(B),~(A,L),~(A,R)\}$ and $\mathcal{D}_T=\{(B),~(A,L),~(A,R)\}$.
\item $h_{\emptyset}=\emptyset$, $h_{(A)}=(A)$, $h_{(B)}=(B)$, $h_{(A,L)}=(A,L)$,
$h_{(A,R)}=(A,R)$.
\item $N_{\emptyset}=\{1\}$, $N_{(A)}=\{2\}$,
$N_{(B)}=N_{(A,L)}=N_{(A,R)}=\{\}$.
\item Functions $O$, $O_h$, $u_1$ and $u_2$ are rigidly defined as in the
extensive game.
\end{itemize}
\end{example}
The most used solution concepts for extensive games are Nash
equilibrium (NE) and subgame perfect equilibrium (SPE). The solution
concept of NE requires that each player's strategy be optimal, given
the other players' strategies. And, the solution concept of SPE
requires that the action prescribed by each player's strategy be
optimal, given the other players' strategies, after every history.
In the SPE concept, the structure of the extensive game is taken into
account explicitly, while, in the solution concept of NE, the
structure is taken into account only implicitly in the definition of
the strategies. Below we present the SPE definition in a standard
way. The NE definition below refers to the structure of an
extensive game, yet it is equivalent to the standard one.
\begin{definition}\label{defSubgame1}
A \textbf{subgame perfect equilibrium (SPE)} of an extensive game
$\Gamma=\langle N,H,P,(u_i)\rangle$ is a strategy profile
$s^{*}=\langle s_1^{*},\ldots,s_n^{*}\rangle$ such that for every
player $i\in N$ and every history $h\in H$ for which $P(h)=i$ we
have
\[u_i(O_h(h,s^*_{1},\ldots,s^*_n))\geq
u_i(O_h(h,s^*_{1},\ldots,s_i,\ldots,s^*_n)),\] for every strategy
$s_{i}\in S_{i}$.
\end{definition}
\begin{definition}\label{defNash1}
A \textbf{Nash equilibrium (NE)} of an extensive game
$\Gamma=\langle N,H,P,(u_i)\rangle$ is a strategy profile
$s^*=\langle s_1^{*},\ldots,s_n^{*}\rangle$ such that for every
player $i\in N$ and every history on the path of the strategy
profile $s^*$ (i.e. $h\in O(s^*)$) for which $P(h)=i$ we have
\[u_i(O_h(h,s^*_{1},\ldots,s^*_n))\geq
u_i(O_h(h,s^*_{1},\ldots,s_i,\ldots,s^*_n)),\] for every strategy
$s_i\in S_{i}$.
\end{definition}
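Definitions \ref{defSubgame1} and \ref{defNash1} can be checked by brute force on the running example; a self-contained Python sketch (the data layout is an assumption, and this is not the GAL model-checking formulation):

```python
P = {(): 1, ('A',): 2}
u = {1: {('B',): 1, ('A', 'L'): 0, ('A', 'R'): 2},
     2: {('B',): 2, ('A', 'L'): 0, ('A', 'R'): 1}}
S = {1: [{(): 'A'}, {(): 'B'}], 2: [{('A',): 'L'}, {('A',): 'R'}]}

def O_h(h, profile):
    while h in P:
        h = h + (profile[P[h]][h],)
    return h

def is_spe(profile):
    """No profitable deviation at ANY history where a player moves."""
    return all(u[i][O_h(h, profile)] >= u[i][O_h(h, {**profile, i: s})]
               for h, i in P.items() for s in S[i])

def on_path(h, profile):
    g = ()
    while True:
        if g == h:
            return True
        if g not in P:
            return False
        g = g + (profile[P[g]][g],)

def is_ne(profile):
    """No profitable deviation at histories ON the profile's path."""
    return all(u[i][O_h(h, profile)] >= u[i][O_h(h, {**profile, i: s})]
               for h, i in P.items() if on_path(h, profile) for s in S[i])

spe = [(s1[()], s2[('A',)]) for s1 in S[1] for s2 in S[2]
       if is_spe({1: s1, 2: s2})]
ne = [(s1[()], s2[('A',)]) for s1 in S[1] for s2 in S[2]
      if is_ne({1: s1, 2: s2})]
print(spe)  # only <A, R>
print(ne)   # <A, R> and <B, L>
```

The output agrees with the discussion below: $\langle\langle A\rangle,\langle R\rangle\rangle$ and $\langle\langle B\rangle,\langle L\rangle\rangle$ are Nash equilibria, but only the former is subgame perfect.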
We invite the reader to verify that the strategy profiles $\langle
\langle A\rangle ,\langle R\rangle\rangle$ and $\langle \langle
B\rangle ,\langle L\rangle\rangle$ are the Nash equilibria in
Example \ref{extensiveGameExample}. Game theorists can argue that
the solution $\langle \langle B\rangle ,\langle L\rangle\rangle$ is
not reasonable when the players take the sequence of actions into
account. To see this, the reader must observe that after the history
$(A)$ there is no way for player 2 to commit himself or herself to
choose $L$ instead of $R$, since he or she will be better off
choosing $R$ (his or her utility is 1 instead of 0). Thus, player 2
has an incentive to deviate from the equilibrium, so this solution
is not a subgame perfect equilibrium. On the other hand, we invite
the reader to verify that the solution $\langle \langle A\rangle
,\langle R\rangle\rangle$ is the only subgame perfect equilibrium.
Consider formulas \ref{formSPE1} and \ref{formNash1} as expressing
subgame perfect equilibrium definition \ref{defSubgame1} and Nash
equilibrium definition \ref{defNash1}, respectively. A strategy
profile $s^{*}=\langle s_1^{*},\ldots,s_n^{*}\rangle$ is a SPE (or
NE) if and only if formula \ref{formSPE1} (or formula
\ref{formNash1}) holds at the initial state $\emptyset$, where each
$\sigma_{S_i}(v_{s_i}^{*})=s_{i}^{*}$.
\begin{small}
\begin{equation}[AG]\left({\textstyle\bigwedge\limits_{i\in N}}
i\rightarrow\forall v_{s_{i}}
\left(u_{i}(O_h(h,v_{s_{1}}^{*},\ldots,v_{s_{n}}^{*}))\geq
u_{i}(O_h(h,v_{s_{1}}^{*},\ldots,v_{s_{i}},\ldots,v_{s_{n}}^{*}))\right)\right)\label{formSPE1}
\end{equation}
\begin{equation}[EG]\left(
\begin{array}{c}
h\in O(v_{s_{1}}^{*},\ldots,v_{s_{n}}^{*})~~~~\wedge
\\ \left({\textstyle\bigwedge\limits_{i\in N}} i\rightarrow\forall
v_{s_{i}} \left(u_{i}(O_h(h,v_{s_{1}}^{*},\ldots,v_{s_{n}}^{*}))\geq
u_{i}(O_h(h,(v_{s_{1}}^{*},\ldots,v_{s_{i}},\ldots,v_{s_{n}}^{*})))\right)\right)\end{array}\right)\label{formNash1}
\end{equation}
\end{small}
In order to guarantee the correctness of the representation of both
subgame perfect equilibrium and Nash equilibrium, we state the
theorem below. The proof is provided in Appendix \ref{appendix}.
\begin{theorem}\label{teorema}
Let $\Gamma$ be an extensive game, and $\mathcal{G}_{\Gamma}$ be a
GAL-structure for $\Gamma$, and $\alpha$ be a subgame perfect
equilibrium formula for $\mathcal{G}_{\Gamma}$ as defined in Equation
\ref{formSPE1}, and $\beta$ be a Nash equilibrium formula as defined
in Equation \ref{formNash1}, and $(s_{i}^{*})$ be a strategy
profile, and $(\sigma_{S_{i}})$ be valuation functions for sorts
$(S_{i})$.
\begin{itemize}
\item $\textrm{A strategy profile } (s_{i}^{*}) \textrm{ is a SPE of }\Gamma\Longleftrightarrow
\mathcal{G}_{\Gamma},\!(\sigma_{S_i}\!)\!\!\models_{\emptyset}\alpha$, where
each $\sigma_{S_i}(v_{s_{i}}^{*})=s_{i}^{*}$
\item $\textrm{A strategy profile } (s_{i}^{*}) \textrm{ is a NE of }\Gamma
\Longleftrightarrow\mathcal{G}_{\Gamma},\!(\sigma_{S_i}\!)\!\!\models_{\emptyset}\beta$,
where each $\sigma_{S_i}(v_{s_{i}}^{*})=s_{i}^{*}$
\end{itemize}
\end{theorem}
\section{Experimental Results}\label{sectionExper}
In this section we show the performance of the GAL model checking
algorithm against other algorithms. The algorithm was written in
Java and the experiments were executed on a 2.4GHz Celeron with 512
MBytes of RAM, running Windows XP Home Edition.
Several algorithms for the problem of finding a Nash equilibrium are
proposed in the literature (see \cite{mckelvey96computation} for a
survey). Most of them compute a mixed Nash equilibrium. Gambit
\cite{GambitManual} is the best-known Game Theory software that
implements most of all algorithms. We use both Gambit (with its
\emph{EnumPureSolve} method) and our algorithm in order to compute
the pure Nash Equilibria. Figure \ref{figStratGame} shows the
running times (in seconds) of several two-player games in which the
payoffs of the games were randomly generated (Figure
\ref{figStratGame}.a) or were taken as the constant value 0 (Figure
\ref{figStratGame}.b). The difference between the games in Figure
\ref{figStratGame}.a and Figure \ref{figStratGame}.b relies on the
size of the set of equilibria. Our algorithm took almost the same
time to find the solution concept regardless of the size of
equilibria. On the other hand, Gambit's performance was much more
dependent on the size of equilibria as shown in Figure
\ref{figStratGame}.
\noindent\begin{figure}[h]
\begin{tabular}[l]{cc}
\raisebox{-0pt}{
\includegraphics[width=.45\textwidth]{gambitNashRandom.pdf}
}
&
\raisebox{-0pt}{
\includegraphics[width=.45\textwidth]{nash.pdf}
}
\\ (a) - Randomized payoffs & (b) - Constant payoffs
\end{tabular}
\caption{Two-player games.}\label{figStratGame}
\end{figure}
In \cite{Vasconcelos03,Vasconcelos03Laptec}, a metalanguage to
describe games, namely \emph{RollGame}, is proposed, together with a
translation into the input language of the well-known SMV model
checker \cite{Mc93Thesis}, in order to reason about games. In this
section, we take Tic-Tac-Toe game in order to provide an example
that an explicit representation of such a game can be more efficient
than using an OBDD approach as in SMV. It is worth mentioning that
SMV uses a propositional logic (CTL), so it cannot express many
solution concepts as defined in Section \ref{sectionGTinGAL}.
Moreover, it does not allow the use of abstract data types, and the
usage of integers is prohibitive in many situations, such as when one
wants to use utility values.
In \cite{Vasconcelos03,Vasconcelos03Laptec}, a version of a
Tic-Tac-Toe game is modeled and analyzed. In this version, one of
the players (PlayerX) uses a certain strategy, while the other
player (PlayerO) spreads all possible actions. It is also shown that
the strategy of PlayerX never reaches a losing position in the game.
This property is expressed by the CTL formula defined in Equation
\ref{formAFWinX} below, which states that PlayerX will always win or
draw. We also model this game with the same strategy using our
algorithm, and the performance of verifying this formula is much
better in our algorithm (0.001 seconds) than using the SMV model
checker (45.211 seconds). However, we should also take into account
the time to generate this game in order to compare our algorithm
with SMV. The required time was 0.289 seconds; so our algorithm took
0.290 seconds to generate and analyze this version of Tic-Tac-Toe
game\footnote{Here, we refer to the average (arithmetic mean) time
of 10 runs of each approach. The standard deviation with SMV and our
algorithm were 1.333 and 0.009, respectively.}.
\begin{equation}
[AF](winX \vee Draw)\label{formAFWinX}
\end{equation}
As we have claimed at the end of Section \ref{sectionGALV}, one of
the main advantages of the GALV model checker is that it allows
computational aspects in the modeling language. Thus, we are able to
use standard algorithms of the AI community to model and analyze a
game. We take Tic-Tac-Toe as an example again, and we define one of
the players (PlayerX), using a \emph{minimax} algorithm with maximal
depth (9), while the other player (PlayerO) spreads all the possible
actions. The required time to generate the game was 14.718 seconds
and to analyze the GAL formula defined in Equation \ref{formAFWinX}
was 0.001 seconds. Note that this approach is not possible using a
standard model checker, such as SMV or SPIN.
\section{Conclusion and Future Works}
In this work, we have presented a first-order modal logic (GAL) to
model and analyze games. We have also provided a model checking
algorithm for GAL to achieve automatic verification for finite
games. We have illustrated in Section \ref{sectionGTinGAL} that
standard concepts of Game Theory can be modeled in GAL. Using our
prototype of a GAL model checker, we have performed case studies in
at least two directions: as a tool to find solution concepts of Game
Theory; and as a tool to analyze games that are based on standard
algorithms of the AI community, such as \emph{minimax} algorithm.
Despite the fact that our algorithm uses an explicit representation,
it outperforms the SMV model-checker as shown in Section
\ref{sectionExper}. This might suggest that an explicit
representation is better suited to games than a symbolic
representation such as OBDDs. However, a general conclusion cannot
yet be drawn. Several directions for future work remain:
\begin{itemize}
\item Define an adequate and sound proof system for GAL that is able to prove
formal theorems of Game Theory, such as the existence of mixed Nash
equilibria in strategic games.
\item Implement a player of a game using formulas of GAL, such as the subgame perfect
equilibrium formula shown in Section \ref{sectionGTinGAL}. This
approach might use evaluation functions and be limited to a certain
depth, as in a \emph{minimax} procedure. As this is a heuristic
approach, we argue that defining other solution concepts in a logical
framework is easier than implementing new algorithms. For instance,
we can define the strategy of a player as the conjunction of
the subgame perfect formula and a Pareto optimal formula.
\item Improve
the performance of the GAL model checker, since it currently uses an
explicit representation. We cannot use an OBDD-based approach, since
in GAL we deal with a first-order interpretation that may vary over
the states of the game.
\end{itemize}
\section{Introduction}
Accurate numerical simulations of wave propagation through complex media are becoming increasingly important in seismology, especially as modern computational resources make the use of high fidelity subsurface models feasible for seismic imaging and full waveform inversion. A host of different numerical methods are currently in use, the most popular of which are high order finite difference methods \cite{virieux2011review}. While finite difference methods tend to perform excellently for simple geometries and smoothly varying data, their accuracy is degraded for heterogeneous media with interfaces or sharp gradients \cite{symes2009interface}.
In order to address these issues, high order finite element methods for wave propagation have been considered as alternatives to finite difference methods. A drawback of using continuous finite elements for time-domain simulations using explicit timestepping is the inversion of a global mass matrix system at each timestep. Spectral Element Methods (SEM) \cite{komatitsch1998spectral} address this issue by diagonalizing this mass matrix system through the use of mass-lumping, which co-locates interpolation nodes for Lagrange basis functions and Gauss-Legendre-Lobatto quadrature points. Since SEM is limited to unstructured hexahedral meshes, which are less geometrically flexible than tetrahedral meshes, triangular and tetrahedral mass-lumped spectral element methods have been investigated as alternatives \cite{chin1999higher, cohen2001higher, zhebel2014comparison}. However, due to a mismatch between the number of natural quadrature nodes and the dimension of polynomial approximation spaces on simplices, these methods necessitate additional nodes in the interior of the element to construct sufficiently accurate nodal points suitable for mass-lumping. Additionally, mass-lumpable nodal points on tetrahedra have only been determined for polynomial bases of degree four or less \cite{chin1999higher}.
High order discontinuous Galerkin (DG) methods have been considered as an alternative to Spectral Element Methods for seismic wave propagation \cite{dumbser2006arbitrary, dumbser2007arbitrary, de2008interior, etienne2010hp}. Instead of using mass-lumping to arrive at a diagonal mass matrix, DG methods naturally induce a block diagonal mass matrix through the use of arbitrary-order approximation spaces which are discontinuous across element boundaries. Weak continuity of approximate solutions in such spaces is enforced through numerical fluxes on shared faces. The local nature and fixed communication patterns of DG methods also make them well-suited for parallelization, and the scalability of DG methods for time-domain wave propagation problems has been demonstrated on hundreds of thousands of cores \cite{wilcox2010high}. Additionally, the computational structure of DG methods has been shown to be well-suited to many-core and accelerator architectures such as Graphics Processing Units (GPUs). DG implementations on a single GPU have demonstrated significant speedups over conventional architectures \cite{klockner2009nodal,fuhry2014discontinuous}, while implementations using multiple GPUs still demonstrate high scalability \cite{godel2010scalability,modave2015accelerated}.
A limitation of many implementations of DG is that the wavespeed is assumed to be piecewise constant over each element, which can lead to spurious reflections and loss of high order accuracy. In order to accommodate locally heterogeneous models over each element, Castro et al.\ discretize a pseudo-conservative form of the wave equation \cite{castro2010seismic}. However, this requires including additional source terms to account for local spatial variation of material parameters, which makes it difficult to prove energy stability or high order accuracy. An alternative approach was taken by Mercerat and Glinsky in \cite{mercerat2015nodal}, where the spatial variation of the wavespeed is incorporated into local elemental mass matrices as a weighting function. This approach can be shown to be energy stable; however, since the wavespeed can vary from element to element, this necessitates either expensive on-the-fly solutions of dense matrix equations or the storage of factorizations/inverses of local mass matrices. This presents a challenge for GPU implementations, as the former is computationally expensive and not well-suited to the fine-grain parallelism of GPUs, while the latter greatly increases storage costs for high order approximations. Storage costs are especially problematic for GPU implementations of DG, due to limited global memory on accelerator architectures. Efficient implementations have also typically relied on the fact that, for affinely mapped tetrahedra and triangles, each block of the mass matrix is identical up to a constant scaling of a single reference mass matrix. Additionally, since GPUs require sufficiently large problem sizes for peak efficiency, increased storage costs can decrease the efficiency of GPU-based implementations.
Since similar storage issues are encountered for DG methods on non-affine elements, the Low-Storage Curvilinear DG (LSC-DG) method was introduced in \cite{warburton2010low,warburton2013low} to reduce the asymptotic storage costs for high order DG methods on curvilinear meshes by incorporating locally varying geometric factors into the basis functions on each element. When coupled with an \textit{a priori} stable quadrature-based variational formulation, the LSC-DG method can be shown to be both energy stable and high order accurate. It is straightforward to adapt LSC-DG to reduce storage costs for DG in the presence of heterogeneous wavespeeds; however, doing so forfeits the computational advantages available under specific choices of basis, such as nodal or Bernstein-Bezier polynomials \cite{hesthaven2007nodal, chan2015bbdg}.
This work addresses these issues by introducing a weight-adjusted DG (WADG) method for heterogeneous media. In particular, the weight-adjusted DG method is energy stable and high order convergent, while maintaining much of the computational structure of existing DG methods for isotropic media. The techniques in this work resemble those used in quadrature-free DG methods for hyperbolic problems \cite{atkins1998quadrature}, though the implementations presented in this work still rely explicitly on quadrature for a low-storage implementation. The main idea of the WADG method is to replace the weighted mass matrices of Mercerat and Glinsky \cite{mercerat2015nodal} with an equivalent weight-adjusted mass matrix which yields a low-storage inversion. The structure of this paper is as follows: Section~\ref{sec:form} introduces standard DG methods for wave propagation in heterogeneous media based on the use of weighted $L^2$ inner products \cite{mercerat2015nodal}. Section~\ref{sec:ip} introduces operators used to define an equivalent weight-adjusted inner product, and Section~\ref{sec:wadg} introduces the weight-adjusted DG method, along with discussions of local conservation and an \textit{a priori} error analysis. Finally, Section~\ref{sec:num} provides numerical experiments which validate theoretical estimates.
\section{Mathematical notation}
\label{sec:notation}
We begin with the assumption that the domain $\Omega$ is Lipschitz, and is represented exactly by a triangulation $\Omega_h$ consisting of elements $D^k$, where each element is the image of a reference element under the elemental mapping
\[
\bm{x}^k = \bm{\Phi}^k \widehat{\bm{x}},
\]
where $\bm{x}^k = \LRc{x^k,y^k,z^k}$ are physical coordinates on the $k$th element and $\widehat{\bm{x}} = \LRc{\widehat{x},\widehat{y},\widehat{z}}$ are coordinates on the reference element. We denote the Jacobian of the transformation for the element $D^k$ as $J^k$.
Over each element $D^k \in \Omega_h$, the approximation space $V_h\LRp{D^k}$ is defined as
\[
V_h\LRp{D^k} = \bm{\Phi}^k \circ V_h\LRp{\widehat{D}},
\]
where $V_h\LRp{\widehat{D}}$ is an approximation space over the reference element. In this work, $\widehat{D}$ is taken to be the reference bi-unit triangle or tetrahedron, while $V_h\LRp{\widehat{D}}$ is taken to be the space of total degree $N$ polynomials on the reference triangle
\[
V_h\LRp{\widehat{D}} = P^N\LRp{\widehat{D}} = \LRc{ \widehat{x}^i \widehat{y}^j, \quad 0 \leq i + j \leq N},
\]
or on the reference tetrahedron
\[
V_h\LRp{\widehat{D}} = P^N\LRp{\widehat{D}} = \LRc{ \widehat{x}^i \widehat{y}^j \widehat{z}^k, \quad 0 \leq i + j + k \leq N}.
\]
However, the analysis and methods are readily extendible to other affinely mapped element types and approximation spaces, such as tensor product degree $N$ polynomials on quadrilaterals and hexahedra. The global approximation space is taken to be the direct sum of approximation spaces over each element
\[
V_h\LRp{\Omega_h} = \bigoplus_{D^k} V_h\LRp{D^k}.
\]
We define $\Pi_N$ as the $L^2$ projection onto $P^N\LRp{D^k}$ such that
\[
\LRp{\Pi_N u,v}_{L^2\LRp{D^k}} = \LRp{u,v}_{L^2\LRp{D^k}}, \qquad v\in P^N\LRp{D^k},
\]
where $\LRp{\cdot,\cdot}_{L^2\LRp{D^k}}$ denotes the $L^2$ inner product over $D^k$.
We also introduce the standard Lebesgue $L^p$ norms over a general domain $\Omega$
\begin{align*}
\nor{u}_{L^p\LRp{\Omega}} &= \LRp{\int_{\Omega} \LRb{u}^p}^{1/p} \qquad 1 \leq p < \infty \\
\nor{u}_{L^{\infty}\LRp{\Omega}} &= \inf\LRc{C \geq 0: \LRb{u\LRp{\bm{x}}} \leq C \quad \forall \bm{x}\in \Omega},
\end{align*}
and the associated $L^p$ spaces
\begin{align*}
L^p\LRp{\Omega} &= \LRc{u: \Omega\rightarrow \mathbb{R}, \quad \nor{u}_{L^p\LRp{\Omega}} < \infty} \qquad 1\leq p < \infty \\
L^{\infty}\LRp{\Omega} &= \LRc{u: \Omega\rightarrow \mathbb{R}, \quad \nor{u}_{L^{\infty}\LRp{\Omega}} < \infty}.
\end{align*}
The $L^p$ Sobolev seminorms and norms of degree $s$ are then defined
\begin{align*}
\LRb{u}_{W^{s,p}\LRp{\Omega}} &= \LRp{\sum_{\LRb{\alpha}= s} \nor{ D^{\alpha} u}_{L^p\LRp{\Omega}}^p}^{1/p}, \qquad \LRb{u}_{W^{s,\infty}\LRp{\Omega}} = \max_{\LRb{\alpha}= s} \nor{D^{\alpha}u}_{L^{\infty}\LRp{\Omega}}\\
\nor{u}_{W^{s,p}\LRp{\Omega}} &= \LRp{\sum_{\LRb{\alpha}\leq s} \nor{ D^{\alpha} u}_{L^p\LRp{\Omega}}^p}^{1/p}, \qquad \nor{u}_{W^{s,\infty}\LRp{\Omega}} = \max_{\LRb{\alpha}\leq s} \nor{D^{\alpha}u}_{L^{\infty}\LRp{\Omega}}.
\end{align*}
where $\alpha = \LRc{\alpha_1,\alpha_2,\alpha_3}$ is a multi-index such that
\[
D^{\alpha}u = \pd{^{\alpha_1}}{x^{\alpha_1}}\pd{^{\alpha_2}}{y^{\alpha_2}}\pd{^{\alpha_3}}{z^{\alpha_3}} u,
\]
\section{Discontinuous Galerkin methods for the acoustic wave equation}
\label{sec:form}
We introduce the jump and average of $u\in V_h\LRp{\Omega_h}$ as follows: let $f$ be a shared face between two elements $D^{k^-}$ and $D^{k^+}$, and let $u$ and $\bm{u}$ be scalar and vector valued functions, respectively. The jumps and averages of $u, \bm{u}$ are defined as
\[
\jump{u} = u^+ - u^-, \qquad \avg{u} = \frac{u^+ + u^-}{2}, \qquad \jump{\bm{u}} = \bm{u}^+ - \bm{u}^-, \qquad \avg{\bm{u}} = \frac{\bm{u}^+ + \bm{u}^-}{2}.
\]
In this work, we consider the acoustic wave equation as a model problem. In first order form, this is given by
\begin{align*}
\frac{1}{\rho c^2}\pd{p}{t}{} + \Div \bm{u} &= 0,\\
\rho\pd{\bm{u}}{t}{} + \Grad p &= 0,
\end{align*}
where $t$ is time, $p$ is pressure, $\bm{u}$ is the vector velocity, and $\rho$ and $c^2$ are density and wavespeed, respectively. \note{
We will assume that $c^2$ is bounded from above and below
\[
0 < c_{\min}\leq c^2(\bm{x})\leq c_{\max} < \infty.
\]
}
We adopt the discontinuous Galerkin variational formulation of \cite{warburton2013low}, which is given over each element $D^k$ by
\begin{align}
\int_{D^k} \frac{1}{\rho c^2}\pd{p}{t}{}v \diff x &= -\int_{D^k} \Div\bm{u}v \diff x + \int_{\partial D^k} \frac{1}{2}\LRp{\tau_p\jump{p} - \bm{n}\cdot \jump{\bm{u}} }v^- \diff x, \nonumber \\
\int_{D^k} \rho\pd{\bm{u}}{t}{}\bm{\tau} \diff x &= - \int_{D^k} \Grad p \cdot \bm{\tau} \diff x + \int_{\partial D^k} \frac{1}{2}\LRp{\tau_u \jump{\bm{u}}\cdot \bm{n}^- - \jump{p}}\bm{\tau}^-\cdot \bm{n}^- \diff x.
\label{eq:form}
\end{align}
where $\bm{n}$ is the outward unit normal vector, $\tau_p = 1/\avg{\rho c}$, and $\tau_u = \avg{\rho c}$. We refer to this DG method as the standard DG method for the remainder of this work. Finally, we note that the weight-adjusted DG method proposed in this paper impacts only the computation of mass matrices, and thus is not tied to a single choice of DG formulation or numerical flux.
The formulation (\ref{eq:form}) can be shown to be energy stable for any choice of $\tau_p, \tau_u \geq 0$ \cite{warburton2013low}, and this specific choice of $\tau_p, \tau_u$ reduces the numerical flux to the upwind flux (as determined by the solution of a Riemann problem) for constant $\rho, c$. For the remainder of this work, we assume $\rho = 1$ for simplicity, though it is straightforward to adapt the results to non-constant $\rho$.
Finally, for this work, we assume homogeneous Dirichlet boundary conditions $p=0$ on $\partial \Omega$. These are enforced through reflection conditions at boundary faces $f \in \partial \Omega$
\[
\left.p^+\right|_{f} = -\left.p^-\right|_{f}, \qquad \left.\bm{n}^+\cdot\bm{u}^+\right|_{f} = \left.\bm{n}^-\cdot\bm{u}^-\right|_{f}.
\]
\subsection{Discrete formulation}
Assuming that $V_h\LRp{\widehat{D}}$ is spanned by the basis $\LRc{\phi_i}_{i=1}^{N_p}$, the discrete formulation of the DG method is given most simply in terms of mass, (weak) differentiation, and face mass matrices. The mass matrix $\bm{M}^k$, weighted mass matrix $\bm{M}_{1/c^2}^k$, and face mass matrix $\bm{M}^{k}_f$ for the element $D^k$ are defined as
\begin{align*}
\LRp{\bm{M}^k}_{ij} &= \int_{D^k} \phi_j \phi_i = \int_{\widehat{D}}{ \phi_j \phi_i} J^k,\\
\LRp{\bm{M}^k_{1/c^2}}_{ij} &= \int_{D^k} \frac{1}{c^2}\phi_j \phi_i = \int_{\widehat{D}}{ \frac{1}{c^2}\phi_j \phi_i} J^k,\\
\LRp{\bm{M}^{k}_f}_{ij} &= \int_{\partial D^{k}_f} \phi_j \phi_i = \int_{\widehat{D}_f} \phi_j \phi_i J^{k}_f.
\end{align*}
where $J^{k}_f$ is the Jacobian of the mapping from the face of a reference element $\widehat{D}_f$ to the face of a physical element $D^{k}_f$. We also define weak differentiation matrices $\bm{S}_x, \bm{S}_y, \bm{S}_z$ with entries
\begin{align*}
\LRp{\bm{S}_x}_{ij} = \int_{\widehat{D}} \pd{\phi_j}{x} \phi_i J^k, \qquad \LRp{\bm{S}_y}_{ij} = \int_{\widehat{D}} \pd{\phi_j}{y} \phi_i J^k, \qquad \LRp{\bm{S}_z}_{ij} = \int_{\widehat{D}} \pd{\phi_j}{z} \phi_i J^k.
\end{align*}
The discrete standard DG formulation is then given in terms of these matrices. For succinctness, we relabel subscripts $x,y,z$ as $1,2,3$ such that
\[
\LRc{\bm{S}^k_x, \bm{S}^k_y, \bm{S}^k_z} = \LRc{\bm{S}^k_1, \bm{S}^k_2, \bm{S}^k_3}, \qquad \bm{n} = \LRc{n_x, n_y, n_z} = \LRc{n_1, n_2, n_3}
\]
Then, the discrete formulation is
\begin{align*}
\bm{M}_{w}^k\td{\bm{p}}{t} &= -\sum_{j = 1,2,3}\bm{S}_{j}^k \bm{U}_j + \sum_{f=1}^{N_{\text{faces}}}\bm{M}^k_f F_p(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+),\\
\bm{M}^k\td{\bm{U}_i}{t} &= -\bm{S}_{i}^k \bm{p} + \sum_{f=1}^{N_{\text{faces}}} {n}_{i}\bm{M}^k_f F_{u}(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+), \qquad i = 1,2,3.
\end{align*}
where $w = 1/c^2$, $\bm{U}_i$ and $\bm{p}$ are degrees of freedom for $\bm{u}_i$ and $p$, and the superscripts $-$ and $+$ indicate degrees of freedom for functions on $D^k$ and its neighbor across face $f$, respectively. $F_p,F_u$ are defined such that
\begin{align*}
\LRp{ \bm{M}^k_f F_p(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+)}_i &= \int_{f_{D^k}} \frac{1}{2}\LRp{\tau_p \jump{p} - \bm{n}^-\cdot\jump{\bm{u}}}\phi_i^-,\\
\LRp{ \bm{n}_i \bm{M}^k_f F_u(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+)}_i &= \int_{f_{D^k}} \frac{1}{2}\LRp{\tau_u\jump{\bm{u}} \cdot \bm{n}^- - \jump{p}}\phi_i^- {n}_i^-.
\end{align*}
Inverting $\bm{M}^k_{1/c^2},\bm{M}^k$ produces a system of ODEs which can be solved using standard time-integration techniques.
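As a concrete illustration of how these matrices are assembled by quadrature, the following one-dimensional sketch (our own illustration, not the implementation used in this work) builds $\bm{M}^k$ and $\bm{M}^k_{1/c^2}$ in a Legendre modal basis; the function name and the default quadrature rule size are our choices.

```python
import numpy as np

def mass_matrices(N, c2, J=1.0, nq=None):
    """Reference mass matrix M^k and weighted mass matrix M^k_{1/c^2} for a 1D
    element with constant Jacobian J, using a degree-N Legendre modal basis on
    [-1, 1] and Gauss quadrature.  `c2` is a callable returning c^2(x)."""
    nq = nq or 2 * N + 2                            # heuristic rule size for smooth weights
    xq, wq = np.polynomial.legendre.leggauss(nq)
    V = np.polynomial.legendre.legvander(xq, N)     # phi_j evaluated at quadrature points
    M = (V * (J * wq)[:, None]).T @ V               # M_ij     = sum_q w_q J phi_i phi_j
    Mw = (V * (J * wq / c2(xq))[:, None]).T @ V     # (M_w)_ij = sum_q w_q J phi_i phi_j / c^2
    return M, Mw
```

For piecewise constant $c^2$, $\bm{M}^k_{1/c^2}$ is a scalar multiple of the reference mass matrix; for locally varying $c^2$, each element carries a distinct dense weighted mass matrix, which is the storage issue discussed in the next subsection.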
\subsection{Energy stability in a weighted $L^2$ norm}
\label{sec:energy}
When the factor $1/c^2$ is incorporated into the mass matrix, it is straightforward to show that the discrete DG formulation is energy stable (in the sense that an appropriate norm of the solution is non-increasing in time). This can be shown by taking $v = p, \bm{\tau} = \bm{u}$ in the local DG formulation. Integrating the divergence term of the pressure equation by parts gives
\begin{align*}
\int_{D^k} \frac{1}{ c^2}\pd{p}{t}{}p \diff x &= \int_{D^k} \bm{u}\cdot\Grad p \diff x + \int_{\partial D^k} \LRp{\frac{\tau_p}{2}\jump{p} - \bm{n}\cdot \avg{\bm{u}} }p \diff x, \nonumber \\
\int_{D^k} \pd{\bm{u}}{t}{}\bm{u} \diff x &= - \int_{D^k} \Grad p \cdot \bm{u} \diff x + \int_{\partial D^k} \frac{1}{2}\LRp{\tau_u \jump{\bm{u}}\cdot \bm{n}^- - \jump{p}}\bm{u}\cdot \bm{n}^- \diff x.
\end{align*}
Then, adding the pressure and velocity equations together and summing over all elements $D^k$ gives
\begin{align}
\frac{1}{2}\pd{}{t}\sum_{k}\int_{D^k} \frac{1}{c^2}p^2 + \LRb{\bm{u}}^2 = \frac{1}{2}\pd{}{t}\sum_{k} \wip{p,p}{1/c^2} + \LRp{\bm{u},\bm{u}} = - \sum_{k}\frac{1}{2}\int_{\partial D^k} \tau_p \jump{p}^2 + \tau_u \LRp{\bm{n}\cdot\jump{\bm{u}}}^2 \leq 0.
\label{eq:stability}
\end{align}
where we have introduced the weighted $L^2$ inner product over $D^k$
\[
\wip{u,v}{w} \coloneqq \LRp{wu,v}_{L^2\LRp{D^k}} = \int_{D^k} w u v.
\]
Assuming that the squared wavespeed is bounded from above and below by $0 < c_{\min} \leq c^2 \leq c_{\max} < \infty$, the quantity
\begin{align}
\sum_{k} \LRp{\frac{p}{c^2},p}_{L^2\LRp{D^k}} + \LRp{\bm{u},\bm{u}}_{L^2\LRp{D^k}}
\label{eq:weightedL2}
\end{align}
defines a squared norm on $\LRp{p,\bm{u}}$, and (\ref{eq:stability}) implies that this weighted $L^2$ norm of the solution is non-increasing in time. Thus, incorporating wavespeed into the left hand side of the DG formulation (and into the mass matrices of the discrete formulation) results in an energy stable method. This approach is taken by Mercerat and Glinsky \cite{mercerat2015nodal} to develop a nodal DG method for elastic wave propagation in heterogeneous media. However, this also greatly increases storage costs if $c$ varies locally over each element.
Consider the case when all elements $D^k$ are planar simplices (implying that the mapping $\bm{\Phi}^k$ is affine and $J^k$ is constant) and $c$ is piecewise constant over each element $D^k$. Then, the mass matrices $\bm{M}^k_{1/c^2}, \bm{M}^k$ satisfy
\[
\bm{M}^k_{1/c^2} = \frac{1}{c^2}J^k \widehat{\bm{M}}, \qquad \bm{M}^k = J^k \widehat{\bm{M}}.
\]
Under these assumptions, all mass matrices are simply scalings of the reference mass matrix. Inversion of the mass matrix can be dealt with by pre-multiplying reference matrices by the inverse of the reference mass matrix \cite{hesthaven2007nodal}. However, when $c$ varies locally over an element, each mass matrix is distinct, requiring either iterative solvers or storage of dense matrices/factorizations to apply the inverse.
Several approaches can be taken to address these storage costs. Castro et al.\ \cite{castro2010seismic} multiply the pressure equation on both sides by $c^2$ to remove the variation of $c$ from the mass matrix. However, this rewrites the wave equation in a non-conservative form, which does not lend itself readily to an energy stable DG formulation. Castro et al.\ introduce new source terms into the formulation to overcome this difficulty, rewriting the wave equation in a pseudo-conservative form. However, it is not obvious whether this formulation is energy stable. It is also possible to build the variation of $c$ into the basis, as is done with spatially varying Jacobian factors $J^k$ for non-affine elements in \cite{warburton2013low}. However, this introduces rational basis functions, which require explicit quadrature-based \textit{a priori} stable variational formulations for energy stability. We propose an alternative approach in this work, which allows for the use of polynomial basis functions while maintaining a low-storage implementation based on a weight-adjusted inner product.
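To make the storage comparison concrete, consider a rough per-element count for degree-$N$ tetrahedra. This is our own back-of-the-envelope illustration; the number of stored wavespeed quadrature values, taken here as $(N+1)^3$, is an assumption, as actual quadrature rules vary.

```python
def Np(N):
    """Dimension of the space P^N of total degree-N polynomials on a tetrahedron."""
    return (N + 1) * (N + 2) * (N + 3) // 6

for N in [2, 4, 6]:
    dense = Np(N) ** 2       # doubles per element to store a dense inverse (or
                             # factorization) of the local weighted mass matrix
    weights = (N + 1) ** 3   # assumed number of stored quadrature values of c^2
    print(f"N={N}: Np={Np(N)}, dense storage={dense}, wavespeed values={weights}")
```

For $N=4$, $N_p = 35$, so a stored per-element inverse costs $1225$ doubles, versus on the order of a hundred stored wavespeed values for a low-storage approach.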
\section{Approximating weighted $L^2$ inner products}
\label{sec:ip}
Before introducing the new DG method, we define a new inner product under which the proposed method is energy stable. The construction of this inner product is based on operators $T_w, T^{-1}_w$, which approximate multiplication and division of a polynomial by a weight $w$, respectively. Intuitively, this inner product approximates the weighted $L^2$ inner product (\ref{eq:weightedL2}) under which the DG method is shown to be energy stable in Section~\ref{sec:energy}.
\subsection{Approximating polynomial multiplication and division}
Let $w(\bm{x})$ be a scalar weight defined on the domain $\Omega$ which is bounded from above and below
\[
0 < w_{\min} \leq w \leq w_{\max} < \infty.
\]
We define the operator $T_w: L^2\LRp{D^k} \rightarrow P^N\LRp{D^k}$
\[
T_w u = \Pi_N\LRp{wu}.
\]
Since $T_w$ also satisfies
\begin{align*}
\LRp{T_wu,v}_{L^2\LRp{D^k}} &= \LRp{\Pi_N(wu),v}_{L^2\LRp{D^k}}= \LRp{u,wv}_{L^2\LRp{D^k}}\\
&= \LRp{u,\Pi_N\LRp{wv}}_{L^2\LRp{D^k}} = \LRp{u,T_wv}_{L^2\LRp{D^k}},
\end{align*}
it is self-adjoint and positive definite, and induces a weighted inner product $\wip{\cdot,\cdot}{w}$ over $D^k$
\[
\wip{u,v}{w} \coloneqq \LRp{wu,v}_{L^2\LRp{D^k}}.
\]
For $u,v \in P^N\LRp{D^k}$, this weighted inner product reduces to the weighted $L^2$ inner product
\[
\wip{u,v}{w} = \LRp{T_wu,v}_{L^2\LRp{D^k}} = \LRp{\Pi_N\LRp{wu},v}_{L^2\LRp{D^k}}= \LRp{wu,v}_{L^2\LRp{D^k}}, \qquad u,v \in P^N\LRp{D^k} .
\]
We also define an operator $T_w^{-1}$ as
\[
T_w^{-1}: L^2\LRp{D^k} \rightarrow P^N\LRp{D^k}, \qquad \LRp{w T_w^{-1}u,v}_{{D}^k} = \LRp{u,v}_{{D}^k}, \qquad \forall v\in P^N\LRp{D^k}.
\]
$T_w^{-1}$ can be considered the inverse of $T_w$ in the following sense:
\begin{lemma}
$T_w^{-1}T_w = T_w T_w^{-1} = \Pi_N.$
\label{lemma:properT1}
\end{lemma}
\begin{proof}
By the definitions of $T_w, T_w^{-1}$,
\begin{align*}
\LRp{T_w T_w^{-1} u,v}_{L^2\LRp{D^k}} &=\LRp{ wT_w^{-1} u,v}_{L^2\LRp{D^k}} = \LRp{u,v}_{L^2\LRp{D^k}}, \qquad \forall v\in P^N\LRp{D^k},\\
\LRp{w T_w^{-1} T_w u,v}_{L^2\LRp{D^k}} &=\LRp{ \Pi_N \LRp{wu},v}_{L^2\LRp{D^k}} = \LRp{ wu,v}_{L^2\LRp{D^k}}, \qquad \forall v\in P^N\LRp{D^k}.
\end{align*}
These imply that, when the domain of $T_w$ is restricted to $P^N\LRp{D^k}$, $T_w^{-1}$ satisfies $T_w^{-1}T_w = T_wT_w^{-1} = I$. More generally, when the domain of $T_w, T_w^{-1}$ is $L^2\LRp{D^k}$,
\[
T_w^{-1}T_w = T_wT_w^{-1} = \Pi_N.
\]
\end{proof}
We also have the following properties of the operator $T_w^{-1}$
\begin{lemma}
The weighted operator $T^{-1}_w$ satisfies
\[
\Pi_N T^{-1}_w = T^{-1}_w \Pi_N = T^{-1}_w, \qquad \nor{T^{-1}_w u }_{L^2} \leq \frac{1}{w_{\min}} \nor{u}_{L^2}.
\]
\label{lemma:properT}
\end{lemma}
\begin{proof}
The first equality is simply because $T^{-1}_w u \in P^N\LRp{D^k}$ and $\Pi_N$ restricted to $P^N\LRp{D^k}$ is the identity map. The second equality is verified by using the definition of $T^{-1}_w, \Pi_N$ and showing that
\[
\LRp{wT^{-1}_w \Pi_N u,v}_{D^k} = \LRp{\Pi_N u,v}_{D^k} = \LRp{u,v}_{D^k} = \LRp{wT^{-1}_w u,v}_{D^k}.
\]
The norm of $\nor{T^{-1}_wu}_{L^2}$ can be bounded by noting
\begin{align*}
\nor{T^{-1}_w u}_{L^2} &\leq \LRp{\frac{1}{w_{\min}} \LRp{wT^{-1}_w u,T^{-1}_w u}}^{1/2} \leq\LRp{\frac{1}{w_{\min}} \LRp{u,T^{-1}_w u}}^{1/2},\\
& \leq \LRp{\frac{1}{w^2_{\min}} \LRp{u, wT^{-1}_w u}}^{1/2} = \LRp{\frac{1}{w^2_{\min}} \LRp{u, u}}^{1/2}.
\end{align*}
\end{proof}
This also implies that $\nor{T_w^{-1}}_{L^2\LRp{D^k}} \leq \frac{1}{w_{\min}}$.
\subsection{A weight-adjusted inner product}
The introduction of the weight-adjusted DG method relies on an approximation of the weighted $L^2$ inner product
\[
\LRp{wu,v}_{L^2\LRp{D^k}} = \LRp{T_w u,v}_{L^2\LRp{D^k}} = \wip{u,v}{w}
\]
by an equivalent inner product, based on the observation that
\[
T_{w}u \approx T^{-1}_{1/w}u.
\]
In other words (for appropriate weighting functions $w$) the projected multiplication operator $T_w$ is well-approximated by the inverse of the projected polynomial division operator $T^{-1}_{1/w}$. This weight ``adjustment'' will make it possible to approximate the inverse of the weighted mass matrix in a low-storage, matrix-free manner.
We introduce the map $\waip{\cdot,\cdot}{1/w}: L^2\LRp{D^k} \times L^2\LRp{D^k} \rightarrow \mathbb{R}$ using $T_{1/w}^{-1}$
\[
\waip{u,v}{1/w} \coloneqq \LRp{T^{-1}_{1/w} u,v}_{{D}^k}.
\]
For positive weight function $w$, this map defines an inner product, which we refer to as the weight-adjusted inner product:
\begin{lemma}
\label{lemma:equiv}
$\waip{u,v}{1/w}$ defines an inner product on $P^N\LRp{D^k}\times P^N\LRp{D^k}$ with induced norm $\wanor{u}{w}$. Additionally, $\wanor{u}{w}$ is equivalent to the $L^2$ norm over ${D}^k$ with equivalence constants
\[
{\sqrt{w_{\min}}} \nor{u}_{L^2\LRp{{D}^k}} \leq \wanor{u}{w} \leq {\sqrt{w_{\max}}} \nor{u}_{L^2\LRp{{D}^k}}.
\]
\end{lemma}
\begin{proof}
It is straightforward to show that $\waip{u,v}{1/w}$ is bilinear. Symmetry follows from the self-adjoint nature of $T_{1/w}$ and Lemma~\ref{lemma:properT1}
\begin{align*}
\waip{u,v}{1/w} &= \LRp{T_{1/w}^{-1} u, v}_{L^2\LRp{D^k}} = \LRp{T_{1/w}^{-1} u,T_{1/w} T_{1/w}^{-1}v}_{L^2\LRp{D^k}} = \LRp{u,T_{1/w}^{-1} v}_{L^2\LRp{D^k}},
\end{align*}
while positive definiteness is a result of
\[
\waip{u,u}{1/w} = \LRp{T^{-1}_{1/w} u,u}_{L^2\LRp{D^k}} \geq {w_{\min}} \LRp{ \frac{1}{w}T^{-1}_{1/w} u,u}_{L^2\LRp{{D}^k}} = {w_{\min}} \LRp{u,u}_{L^2\LRp{{D}^k}}.
\]
To show equivalence of the norm, all that remains is showing the upper bound
\begin{align*}
\wanor{u}{w}^2 &= \LRp{T^{-1}_{1/w} u,u}_{L^2\LRp{D^k}} = \LRp{\frac{1}{w} w T^{-1}_{1/w} u,u}_{L^2\LRp{{D}^k}}\\
&\leq {w_{\max}} \LRp{\frac{1}{w}T^{-1}_{1/w} u,u}_{L^2\LRp{{D}^k}} = {w_{\max}} \LRp{u,u}_{L^2\LRp{{D}^k}}.
\end{align*}
\end{proof}
For $w$ constant, $\waip{u,v}{1/w}$ reduces to a scaling of the standard $L^2$ inner product by $w$.
We also note that the equivalence constants in this case are the same as for the weighted $L^2$ inner product $\wip{\cdot,\cdot}{w}$ over $P^N\LRp{D^k} \times P^N\LRp{D^k}$
\[
{\sqrt{w_{\min}}} \nor{u}_{L^2\LRp{{D}^k}} \leq \sqrt{\LRp{{w}u,u}_{L^2\LRp{{D}^k}}} = \sqrt{\wip{u,u}{w}} \leq {\sqrt{w_{\max}}} \nor{u}_{L^2\LRp{{D}^k}},
\]
which appears in the standard DG formulation for spatially varying wavespeed.
\subsection{Estimates for $T_{w}, T_{1/w}^{-1}$, and $\waip{\cdot,\cdot}{1/w}$}
\label{sec:estimates}
Intuitively, both $T_{w}u$ and $T^{-1}_{1/w} u$ approximate ${w} u$, and we can quantify the accuracy of this approximation by bounding $\nor{{u}{w}-T_{w} u}_{{D}^k}$ and $\nor{{u}{w}-T^{-1}_{1/w} u}_{{D}^k}$ for weights $w$ which are sufficiently regular. These regularity requirements are made explicit using Sobolev norms introduced in Section~\ref{sec:notation}.
To bound the difference between $uw$ and $T_wu, T^{-1}_{1/w} u$, we require the standard interpolation estimate
\begin{align*}
\nor{u-\Pi_N u}_{{D}^k} &\leq Ch^{N+1} \nor{u}_{W^{N+1,2}\LRp{{D}^k}},
\end{align*}
which assumes $u \in W^{N+1,2}\LRp{D^k}$ and follows from the Bramble-Hilbert lemma and a scaling assumption \cite{brenner2007mathematical, warburton2013low}.
We also make use of an estimate for a weighted projection, adapted from Theorem 3.1 in \cite{warburton2013low} for an affinely mapped element:
\begin{theorem}
\label{thm:wproj}
Let $D^k$ be a quasi-regular element with representative size $h = {\rm diam}\LRp{D^k}$. For $N \geq 0$, $w\in W^{N+1,\infty}\LRp{D^k}$, and $u\in W^{N+1,2}\LRp{D^k}$,
\[
\nor{u - \frac{1}{w} \Pi_N\LRp{{u}{w}}}_{L^2\LRp{D^k}} \leq C h^{N+1}\nor{\frac{1}{w}}_{L^{\infty}\LRp{D^k}} \nor{w}_{W^{N+1,\infty}\LRp{D^k}} \nor{u}_{W^{N+1,2}\LRp{D^k}}.
\]
\end{theorem}
\begin{proof}
By the Bramble-Hilbert lemma \cite{brenner2007mathematical},
\begin{align*}
\nor{u - \frac{1}{w} \Pi_N\LRp{{u}{w}}}_{L^2\LRp{D^k}} &\leq \sqrt{J^k}\nor{\frac{1}{w}}_{L^{\infty}\LRp{\widehat{D}}} \nor{uw - \Pi_N \LRp{uw}}_{L^2\LRp{\widehat{D}}} \\
&\leq \sqrt{J^k}\nor{\frac{1}{w}}_{L^{\infty}\LRp{\widehat{D}}} \LRb{uw}_{W^{N+1,2}\LRp{\widehat{D}}}.
\end{align*}
For quasi-regular elements, a scaling argument gives
\[
\LRb{uw}_{W^{N+1,2}\LRp{\widehat{D}}} \leq C_1 h^{N+1} \frac{1}{\sqrt{J^k}} \nor{uw}_{W^{N+1,2}\LRp{D^k}}.
\]
Finally, the Sobolev norm of $uw$ may be bounded by the product of the norms of $u,w$ using the Leibniz product rule and H\"older's inequality \cite{burenkov1998sobolev}
\[
\nor{uw}_{W^{N+1,2}\LRp{D^k}} \leq C_2 \nor{w}_{W^{N+1,\infty}\LRp{D^k}}\nor{u}_{W^{N+1,2}\LRp{D^k}}.
\]
Combining these gives the desired bound.
\end{proof}
We can now prove the following bounds:
\begin{theorem}
Let $D^k$ be a quasi-regular element with representative size $h = {\rm diam}\LRp{D^k}$. For $N \geq 0$, $w\in W^{N+1,\infty}\LRp{D^k}$, and $u\in W^{N+1,2}\LRp{D^k}$,
\begin{align}
\nor{{u}{w}-T_{w}u}_{L^2\LRp{D^k}} &\leq C_wh^{N+1} \nor{u}_{W^{N+1,2}\LRp{D^k}},\\
\nor{{u}{w}-T^{-1}_{1/w}u}_{L^2\LRp{D^k}} &\leq C_wh^{N+1} \nor{u}_{W^{N+1,2}\LRp{D^k}}.
\end{align}
where $C_w$ depends on $w$ as follows:
\[
C_w = C\nor{w}_{L^{\infty}\LRp{D^k}}\nor{\frac{1}{w}}_{L^{\infty}\LRp{D^k}} \nor{w}_{W^{N+1,\infty}\LRp{D^k}}.
\]
\label{lemma:mult}
\end{theorem}
\begin{proof}
The first bound is a direct application of Theorem~\ref{thm:wproj} to
\[
\nor{{u}{w}-T_{w}u}_{L^2\LRp{D^k}} \leq \nor{w}_{L^{\infty}\LRp{D^k}} \nor{{u}-\frac{1}{w}\Pi_N\LRp{{u}{w}}}_{L^2\LRp{D^k}}.
\]
The second bound is derived by bounding both the projection error of $uw$ and the deviation of $T^{-1}_{1/w} u$ from $\Pi_N\LRp{{u}{w}}$. The introduction of $\Pi_N \LRp{{u}{w}}$ allows us to use the fact that $T_{1/w}^{-1} T_{1/w} = I$ over $P^N$.
\[
\nor{{u}{w}-T^{-1}_{1/w} u}_{L^2\LRp{D^k}} \leq \nor{{u}{w}-\Pi_N\LRp{{u}{w}}}_{L^2\LRp{D^k}} + \nor{\Pi_N\LRp{{u}{w}}-T^{-1}_{1/w} u}_{L^2\LRp{D^k}}
\]
The former term is bounded by the standard interpolation estimate and regularity of $u$ and $w$. The latter term can be bounded as follows:
\begin{align*}
&\nor{T^{-1}_{1/w} u-\Pi_N\LRp{{u}{w}}}_{L^2\LRp{D^k}} = \nor{T^{-1}_{1/w} {\Pi_N\LRp{{u}}} - T_{1/w}^{-1}T_{1/w}\Pi_N \LRp{{u}{w}}}_{L^2\LRp{D^k}}\\
&\leq \nor{T_{1/w}^{-1}} \nor{\Pi_N \LRp{{u}} - \Pi_N \LRp{\frac{1}{w}\Pi_N\LRp{{u}{w}}}}_{L^2\LRp{D^k}}\\
&\leq \nor{w}_{L^{\infty}\LRp{D^k}} \nor{\Pi_N}_{L^2\LRp{D^k}} \nor{u - {\frac{1}{w}\Pi_N\LRp{{u}{w}}}}_{L^2\LRp{D^k}} \\
&\leq {C} h^{N+1}\nor{w}_{L^{\infty}\LRp{D^k}}\nor{\frac{1}{w}}_{L^{\infty}\LRp{D^k}} \nor{w}_{W^{N+1,\infty}\LRp{D^k}} \nor{u}_{W^{N+1,2}\LRp{D^k}},
\end{align*}
where we have used Lemma~\ref{lemma:properT} and the fact that $\nor{\Pi_N}_{L^2} = 1$ for affinely mapped elements.
\end{proof}
Finally, we give an estimate for moments of the difference between the weighted and weight-adjusted inner products:
\begin{theorem}
Let $u\in W^{N+1,2}\LRp{D^k}$, $w\in W^{N+1,\infty}\LRp{D^k}$, and $v \in P^M\LRp{D^k}$ for $0 \leq M\leq N$; then
\begin{align*}
&\LRb{\LRp{{w}u,v}_{L^2\LRp{D^k}} - \waip{u,v}{1/w}} \\%&= \LRb{\LRp{{w}u - T^{-1}_{1/w}u,v}_{L^2\LRp{D^k}}}\\
&\leq Ch^{2N+2 - M} \nor{{w}}_{L^{\infty}\LRp{D^k}}\nor{\frac{1}{w}}_{L^{\infty}\LRp{D^k}}^2 \nor{w}^2_{W^{N+1,\infty}\LRp{D^k}} \nor{u}_{W^{N+1,2}\LRp{D^k}} \nor{v}_{L^{\infty}\LRp{D^k}}.
\end{align*}
\label{lemma:cons}
\end{theorem}
\begin{proof}
Over each element $D^k$, the weight-adjusted inner product gives
\[
\waip{u,v}{1/w} = \LRp{T^{-1}_{1/w} u,v}_{L^2\LRp{D^k}} = \LRp{\frac{1}{w}wT^{-1}_{1/w} u,v}_{L^2\LRp{D^k}}.
\]
If ${w}$ is a polynomial of degree $N-M$, then $\LRp{\frac{1}{w}T^{-1}_{1/w} u,{v}{w}}_{L^2} = \LRp{u,{v}{w}}_{L^2}$ and the moment of the difference is zero. If ${w} \not\in P^{N-M}\LRp{D^k}$, then $\LRp{\frac{1}{w}T^{-1}_{1/w} u,{v}{w}}_{L^2} \neq \LRp{u,{v}{w}}_{L^2}$. To bound the difference, we add and subtract the projection of ${vw}$ onto $P^N$
\begin{align*}
&\LRp{\frac{1}{w}wT^{-1}_{1/w} u,v}_{L^2\LRp{D^k}} \\
&= \LRp{\frac{1}{w}T^{-1}_{1/w} u, {v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}} + \LRp{ \frac{1}{w}T^{-1}_{1/w} u, \Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}} \\
&= \LRp{\frac{1}{w}T^{-1}_{1/w} u, {v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}} + \LRp{ u,\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}}.
\end{align*}
The difference then becomes
\begin{align*}
&\LRb{ \LRp{u,{v}{w}}_{L^2\LRp{D^k}} - \waip{u,v}{1/w}} \\
&= \LRb{\LRp{u,{v}{w} - \Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}} - \LRp{\frac{1}{w}T_{1/w}^{-1} u,{v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}}}\\
&= \LRb{\LRp{u - \frac{1}{w}T^{-1}_{1/w} u,{v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}}} \\
&\leq \nor{u-\frac{1}{w}T^{-1}_{1/w} u}_{L^2\LRp{D^k}} \nor{{v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}}.
\end{align*}
For $vw$ sufficiently regular, the Bramble-Hilbert lemma implies
\[
\nor{{v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}} \leq C\sqrt{J^k} \LRb{{v}{w}}_{W^{N+1,2}\LRp{\widehat{D}}}.
\]
By quasi-regularity of $D^k$ and the Leibniz product rule, the seminorm can be bounded by
\[
\LRb{{v}{w}}_{W^{N+1,2}\LRp{\widehat{D}}} \leq C\frac{1}{\sqrt{J^k}}h^{N+1}\nor{v}_{W^{N+1,2}\LRp{D^k}}\nor{w}_{W^{N+1,\infty}\LRp{D^k}}.
\]
Applying a scaling argument for $v\in P^{M}\LRp{D^k}$ and Bernstein's inequality \cite{sarantopoulos1991bounds} then yields
\[
\nor{v}_{W^{N+1,2}\LRp{D^k}} \leq C_B h^{-M} \nor{v}_{L^{\infty}\LRp{D^k}},
\]
where $C_B$ is a constant depending on $N$. This implies that
\[
\nor{{v}{w}-\Pi_N\LRp{{v}{w}}}_{L^2\LRp{D^k}} \leq C h^{N+1-M}\nor{w}_{W^{N+1,\infty}\LRp{D^k}}\nor{v}_{L^{\infty}\LRp{D^k}}.
\]
We can then use Theorem~\ref{lemma:mult} to bound the remaining term
\begin{align*}
&\nor{u-\frac{1}{w}T^{-1}_{1/w}u}_{L^2\LRp{D^k}}\\
&= \nor{\frac{1}{w}\LRp{{u}{w}-T^{-1}_{1/w}u}}_{L^2\LRp{D^k}} \leq \nor{\frac{1}{w}}_{L^{\infty}\LRp{D^k}}\nor{{u}{w} - T^{-1}_{1/w} u}_{L^2\LRp{D^k}}\\
&\leq C h^{N+1}\nor{{w}}_{L^{\infty}\LRp{D^k}}\nor{\frac{1}{w}}^2_{L^{\infty}\LRp{D^k}} \nor{w}_{W^{N+1,\infty}\LRp{D^k}} \nor{u}_{W^{N+1,2}\LRp{D^k}}.
\end{align*}
Combining these two estimates
gives the desired bound.
\end{proof}
\section{A low storage weight-adjusted DG method}
\label{sec:wadg}
Using the weight-adjusted inner product, we can now introduce the weight-adjusted DG method. Recall the DG formulation of the pressure equation introduced in Section~\ref{sec:form}
\begin{align*}
\int_{D^k} \frac{1}{c^2}\pd{p}{t}{}v \diff x &= -\int_{D^k} \Div\bm{u}v \diff x + \int_{\partial D^k} \frac{1}{2}\LRp{\tau_p\jump{p} - \bm{n}\cdot \jump{\bm{u}} }v^- \diff x, \quad \forall v\in P^N\LRp{D^k}.
\end{align*}
The standard DG method is energy stable with respect to the $L^2$ norm weighted by $1/c^2$, which appears on the left-hand side of the pressure equation and corresponds to the weighted $L^2$ inner product
\[
\wip{p,v}{w} = \int_{D^k} T_{w} p v = \int_{D^k} w p v, \qquad \forall v\in P^N\LRp{D^k},
\]
where
\[
w(\bm{x}) = 1/c^2(\bm{x}).
\]
For the remainder of this paper, we will assume this specific definition of $w(\bm{x})$ for the acoustic wave equation. Motivated by the fact that $T_{1/w}^{-1}u \approx wu$, the weight-adjusted DG method approximates the weighted inner product on the left-hand side of the DG pressure equation with the weight-adjusted inner product of Section~\ref{sec:ip}
\begin{align*}
\int_{D^k} T^{-1}_{1/w}\LRp{\pd{p}{t}} v\diff x &= -\int_{D^k} \LRp{\Div\bm{u}}v \diff x + \int_{\partial D^k} \frac{1}{2}\LRp{\tau_p\jump{p} - \bm{n}\cdot \jump{\bm{u}} }v^- \diff x.
\end{align*}
We note that the constants appearing in the bounds for Theorem~\ref{lemma:mult} are identical for both $T_{w}$ and $T^{-1}_{1/w}$, which suggests that the behavior of the weight-adjusted DG method should be very similar to that of the standard DG method.
A crucial aspect of the weight-adjusted DG method is that it is energy stable, due to the use of an equivalent inner product in the DG pressure equation.
Repeating the analysis in Section~\ref{sec:energy} for the weight-adjusted DG method gives that
\begin{align}
\pd{}{t}\sum_{k}\int_{D^k} \LRp{T_{1/w}^{-1}p}p + \LRb{\bm{u}}^2 = - \sum_{k}\frac{1}{2}\int_{\partial D^k} \tau_p \jump{p}^2 + \tau_u \jump{\bm{u}}^2 \leq 0,
\label{eq:stability2}
\end{align}
and since
\[
\sum_{k}\int_{D^k} \LRp{T_{1/w}^{-1}p} p = \sum_{k}\LRp{T_{1/w}^{-1}p,p}_{L^2\LRp{D^k}} = \sum_k \waip{p,p}{1/w} > 0
\]
for $w = 1/c^2$, the left-hand side of (\ref{eq:stability2}) defines a squared norm on $\LRp{p,\bm{u}}$ which is non-increasing in time. Additionally, by Lemma~\ref{lemma:equiv}, this normed quantity is equivalent to the $L^2$ norm of $p,\bm{u}$ with the same equivalence constants as the weighted $L^2$ inner product used in (\ref{eq:stability}) for the standard DG method.
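This equivalence can also be checked discretely. The sketch below (our own illustration; the profile of $c^2$ is an arbitrary smooth choice with values in $[1,3]$) verifies that the Gram matrix of the weight-adjusted inner product on a one-dimensional reference element is symmetric positive definite, with generalized eigenvalues relative to the unweighted mass matrix contained in $[1/c^2_{\max}, 1/c^2_{\min}]$:

```python
import numpy as np

# Check positivity and spectral equivalence of the weight-adjusted Gram matrix
# Mt = M M_{c^2}^{-1} M on the reference interval [-1, 1]. With 1 <= c^2 <= 3,
# the generalized eigenvalues of (Mt, M) should lie in [1/3, 1].
N = 4
r, wr = np.polynomial.legendre.leggauss(2 * (N + 1))
V = np.polynomial.legendre.legvander(r, N)
c2 = 2.0 + np.cos(np.pi * r)                  # values of c^2 at quadrature points

M    = V.T @ (wr[:, None] * V)                # reference mass matrix
M_c2 = V.T @ ((wr * c2)[:, None] * V)         # M_{1/w} with w = 1/c^2
Mt   = M @ np.linalg.solve(M_c2, M)           # weight-adjusted Gram matrix

# generalized eigenvalues of (Mt, M) via a Cholesky congruence transform
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
eigs = np.linalg.eigvalsh(Linv @ Mt @ Linv.T)
print(eigs.min(), eigs.max())  # expect values within [1/3, 1]
```

The bounds follow from $c^2_{\min}$, $c^2_{\max}$ evaluated at the quadrature points, so they hold to machine precision at the discrete level.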
By replacing the weighted inner product on the left hand side with an approximation, a different mass matrix $\tilde{\bm{M}}^k$ is induced
\[
\LRp{\tilde{\bm{M}}^k}_{ij} = \waip{\phi_j,\phi_i}{1/w}.
\]
For polynomial functions $u$ on an element $D^k$ with expansion coefficients $\bm{u}$, computing $u_w = T_{1/w}^{-1}u$ reduces to a square matrix multiplication
\[
\bm{u}_w = \LRp{\bm{M}^k_{1/w}}^{-1} \bm{M}^k \bm{u},
\]
where $\bm{u}_w$ are the expansion coefficients of $u_w$ and $\bm{M}_{1/w}^k$ is defined entrywise
\[
\LRp{\bm{M}_{1/w}^k}_{ij} = \int_{D^k} \frac{1}{w}\phi_j \phi_i.
\]
Thus, the Gram matrix associated with the weight-adjusted inner product has the form
\[
\tilde{\bm{M}}^k = \bm{M}^k\LRp{\bm{M}^k_{1/w}}^{-1}{\bm{M}^k},
\]
resulting in a discrete formulation for the weight-adjusted DG method
\begin{align*}
\bm{M}^k\LRp{\bm{M}^k_{1/w}}^{-1}{\bm{M}^k}\td{\bm{p}}{t} &=\sum_{i = 1,2,3}\bm{S}_{i}^k \bm{U}_i + \sum_{f=1}^{N_{\text{faces}}}\bm{M}^k_f F_p(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+),\\
\bm{M}^k\td{\bm{U}_{x_i}}{t} &= \bm{S}_{i}^k \bm{p} + \sum_{f=1}^{N_{\text{faces}}} \bm{n}_{i}\bm{M}^k_f F_{u}(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+), \qquad i = 1,2,3.
\end{align*}
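The application of the weight-adjusted operator $\bm{u}_w = \LRp{\bm{M}^k_{1/w}}^{-1}\bm{M}^k\bm{u}$ can be sketched in one dimension as follows (an illustration under assumed choices of $u$ and $w$, using a Legendre basis on the reference interval); since $T_{1/w}^{-1}u \approx wu$, the result should approximate $wu$ pointwise:

```python
import numpy as np

# One-dimensional sketch of u_w = M_{1/w}^{-1} M u on [-1, 1]. The weight w and
# function u are arbitrary smooth choices, not taken from the paper.
N = 8
xq, wq = np.polynomial.legendre.leggauss(2 * (N + 1))
V = np.polynomial.legendre.legvander(xq, N)     # Legendre basis at quad points

w = lambda x: 1.0 + 0.5 * np.sin(np.pi * x)     # positive weight
u = lambda x: np.exp(x)

M    = V.T @ (wq[:, None] * V)                  # mass matrix (phi_j, phi_i)
M_iw = V.T @ ((wq / w(xq))[:, None] * V)        # weighted matrix (phi_j / w, phi_i)

ucoef  = np.linalg.solve(M, V.T @ (wq * u(xq))) # coefficients of Pi_N u
uwcoef = np.linalg.solve(M_iw, M @ ucoef)       # u_w = M_{1/w}^{-1} M u

err = np.max(np.abs(V @ uwcoef - w(xq) * u(xq)))
print(err)  # small, since w*u is well approximated in P^8
```

The pointwise error is governed by how well $wu$ is approximated in $P^N$, consistent with Theorem~\ref{lemma:mult}.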
For hexahedral elements with quadrature-based mass-lumping, this procedure reduces to collocation of $w(x) = 1/c^2(x)$ at quadrature points. For tetrahedral elements (which do not admit high order mass lumped schemes under a $P^N$ approximation space \cite{chin1999higher,cohen2001higher}), this method provides a low storage implementation through the fact that
\[
\LRp{\bm{M}^k\LRp{\bm{M}^k_{1/w}}^{-1}{\bm{M}^k} }^{-1} = \LRp{\bm{M}^k}^{-1}{\bm{M}^k_{1/w}}\LRp{\bm{M}^k}^{-1}.
\]
For planar tetrahedra (and other affinely mapped elements), $\LRp{\bm{M}^k}^{-1} = \frac{1}{J^k} \widehat{\bm{M}}^{-1}$, requiring storage of only the reference array $\widehat{\bm{M}}^{-1}$. The application of $\bm{M}_{1/w}^k$ can be done in a matrix-free manner: for $u \in P^N$ with expansion coefficients $\bm{u}$,
\[
\LRp{\bm{M}_{1/w}^k \bm{u}}_i = \int_{\widehat{D}} \frac{1}{w} u \phi_i J^k.
\]
Each integral can be computed over the reference element using quadrature, requiring only $O(N^3)$ storage for values of ${c^2}$ at nodal or quadrature points.
We introduce the weak differentiation matrices and lift matrices $\bm{L}^k_f$ for the face $f$ of $D^k$
\begin{align*}
\bm{D}_x = \LRp{\bm{M}^k}^{-1}\bm{S}_x, \qquad
\bm{D}_y = \LRp{\bm{M}^k}^{-1}\bm{S}_y, \qquad
\bm{D}_z = \LRp{\bm{M}^k}^{-1}\bm{S}_z, \qquad
\bm{L}^k_f = \LRp{\bm{M}^k}^{-1}\bm{M}^k_f.
\end{align*}
For planar tetrahedra, these differentiation and lift matrices can be computed from linear combinations and scalings of reference derivative and lift matrices \cite{hesthaven2007nodal}. The matrix form of the semi-discrete weight-adjusted DG pressure equation can then be written as
\begin{align}
\td{\bm{p}}{t} =\LRp{\bm{M}^k}^{-1}{\bm{M}^k_{1/w}} \LRp{\sum_{i = 1,2,3}\bm{D}_{i}^k \bm{U}_i + \sum_{f=1}^{N_{\text{faces}}}\bm{L}^k_f F_p(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+)},
\label{eq:WadgDiscretePressure}
\end{align}
where we have referred to the weak differentiation matrices $\LRc{\bm{D}_x,\bm{D}_y,\bm{D}_z}$ as $\LRc{\bm{D}_1,\bm{D}_2,\bm{D}_3}$ for succinctness. We note that (for an appropriate choice of flux $F_p$) the contribution
\begin{align}
\LRp{\sum_{i = 1,2,3}\bm{D}_{x_i}^k \bm{U}_{x_i} + \sum_{f=1}^{N_{\text{faces}}}\bm{L}^k_f F_p(\bm{p}^-,\bm{p}^+,\bm{U}^-,\bm{U}^+)}
\label{eq:wadgDiv}
\end{align}
is simply the DG discretization of the divergence operator, i.e.\ the DG right-hand side contribution of the pressure equation for wave propagation in homogeneous media. This illustrates the fact that implementation of the weight-adjusted DG method is relatively non-invasive. For example, a time-domain DG code with explicit timestepping for homogeneous media typically involves one step to compute right-hand side contributions and one step to evolve the solution in time using a time integration scheme. For such a code, the weight-adjusted DG method for heterogeneous media could be implemented by adding only a single additional step which applies $\LRp{\bm{M}^k}^{-1}{\bm{M}^k_{1/w}}$ to the right-hand side (for homogeneous media) before time integration.
\subsection{Consistent scaling by $c^2$}
The strong form of the pressure equation can be rescaled by $c^2$
\begin{align}
\pd{p}{t} + c^2 \Div \bm{u} = 0.
\label{eq:repressure}
\end{align}
However, since this is in non-conservative form, it is non-trivial to derive appropriate formulations and numerical fluxes which result in an energy stable DG method.
As suggested by (\ref{eq:WadgDiscretePressure}) and (\ref{eq:wadgDiv}), the weight-adjusted DG method can be interpreted as a way to consistently rescale by $c^2$ while maintaining the conservative form of the wave equation. Recall the definition of the lift operator $L^k_f$ for a given face $f$ of the element $D^k$
\[
\LRp{ L^k_f(u), v}_{ D^k} = \LRp{ u, v}_{\partial D^k_f}, \qquad v \in V_h\LRp{D^k}.
\]
The weight-adjusted DG formulation can then be expressed using $L_f^k$ as
\begin{align}
\int_{D^k} T_{1/w}^{-1}{\pd{p}{t}} v\diff x &+ \int_{D^k} \LRp{\Div\bm{u} -\sum_{f} L^k_f\LRp{F_p({p}^-,{p}^+,\bm{u}^-,\bm{u}^+)}}v \diff x = 0 \nonumber \\
\int_{D^k} \pd{\bm{u}}{t}{}\bm{\tau} \diff x &+ \int_{D^k} \LRp{\Grad p - \sum_f L^k_f\LRp{F_{u}({p}^-,{p}^+,\bm{u}^-,\bm{u}^+)} \bm{n}^- }\bm{\tau} \diff x = 0.
\end{align}
This is sometimes written in a more compact form
\begin{align}
\LRp{T_{1/w}^{-1}\pd{p}{t},v }_{L^2\LRp{D^k}} &+ \LRp{ \Grad_h \cdot \bm{u},v}_{L^2\LRp{D^k}}=0\\
\LRp{\pd{\bm{u}}{t},\bm{\tau}}_{L^2\LRp{D^k}} &+\LRp{ \Grad_h p,\bm{\tau}}_{L^2\LRp{D^k}}=0, \qquad (v,\bm{\tau}) \in V_h \times \LRp{V_h}^d.
\end{align}
where we have introduced the weak DG gradient and divergence $\Grad_h, \Grad_h\cdot$. These weak DG differential operators are defined such that their restriction to an element $D^k$ yields
\begin{align}
\left.\Grad_h \cdot \bm{u}\right|_{D^k} &= \left.\LRp{\Div \bm{u}}\right|_{D^k} - \sum_{f} L^k_f\LRp{F_p(p,\bm{u})}\nonumber\\
\left.\Grad_h p\right|_{D^k} &= \left.\LRp{\Grad p}\right|_{D^k} - \sum_{f} \bm{n}^-L^k_f\LRp{F_u(p,\bm{u})},
\label{eq:consistentrescale}
\end{align}
where $F_p(p,\bm{u}), F_u(p,\bm{u})$ are the numerical fluxes over a face $f$.
The weight-adjusted DG method can be derived using the weak DG divergence in (\ref{eq:consistentrescale}) instead of the exact divergence. Replacing the strong divergence in (\ref{eq:repressure}) with the weak DG divergence, then multiplying both sides by a test function in $V_h$ and integrating, results in the weight-adjusted DG formulation. This incorporates the scaling by $c^2$ in a consistent manner, multiplying terms within volume integrals only. Without introducing the lift operator, it is not immediately clear how to incorporate the scaling by $c^2$ within surface integrals.
\subsection{Convergence}
With the estimates in Section~\ref{sec:estimates} and consistency of the formulation, it is possible to derive \textit{a priori} error estimates for the weight-adjusted DG method. We follow the approach of \cite{warburton2013low} to obtain an $O\LRp{h^{N+1/2}}$ bound on the $L^2$ error.
For functions $u \in L^2\LRp{\Omega}$ such that $\left.u\right|_{D^k} \in W^{N+1,2}\LRp{D^k}$, we define the norm
\[
\nor{u}_{W^{N+1,p}\LRp{\Omega_h}} = \LRp{ \sum_k \nor{u}_{W^{N+1,p}\LRp{D^k}}^2}^{1/2}.
\]
We consider solutions $\LRp{p,\bm{u}} \in W^{N+1,2}\LRp{\Omega_h} \times \LRp{W^{N+1,2}\LRp{\Omega_h}}^d$ over the time interval $[0,T]$ such that
\begin{align*}
\sup_{t' \in [0,T]}\nor{p}_{W^{N+1,2}\LRp{\Omega_h}} &< \infty, \qquad \sup_{t' \in [0,T]}\nor{\bm{u}}_{W^{N+1,2}\LRp{\Omega_h}} < \infty,\\
\sup_{t' \in [0,T]}\nor{\pd{p}{t}}_{W^{N+1,2}\LRp{\Omega_h}} &< \infty, \qquad \sup_{t' \in [0,T]}\nor{\pd{\bm{u}}{t}}_{W^{N+1,2}\LRp{\Omega_h}} < \infty.
\end{align*}
Under these regularity assumptions,\footnote{These assumptions may be relaxed somewhat using techniques from \cite{grote2007interior}.} the following theorem holds for $p$ and the components $\bm{u}_i$ of the velocity:
\begin{theorem}[Theorem 3.3 of \cite{warburton2013low}]
\begin{align*}
\nor{p - \Pi_N p}_{\partial D^k} &\leq C h^{N+1/2}\nor{p}_{W^{N+1,2}(D^k)}\\
\nor{\bm{u}\cdot \bm{n} - \Pi_N \bm{u}\cdot \bm{n}}_{\partial D^k} &\leq C h^{N+1/2}\nor{\bm{u}}_{W^{N+1,2}(D^k)}.
\end{align*}
\label{thm:tracereg}
\end{theorem}
\note{
We will also use the following modified Gronwall's inequality
\begin{lemma}[Lemma 1.10 in \cite{dolejvsi2015discontinuous}]
Suppose that $a,b,c,d \in C[0,T]$ are non-negative functions and that
\[
a^2(t) + b(t) \leq c(t) + 2\int_0^t d(s) a(s) \diff{s}, \qquad \forall t\in [0,T].
\]
Then, for any $t\in [0,T]$,
\[
\sqrt{a^2(t) + b(t)} \leq \sup_{s\in[0,t]} \sqrt{c(s)} + \int_0^t d(s) \diff{s}.
\]
\label{lemma:gron}
\end{lemma}
Then, we have the following \textit{a priori} estimate for the weight-adjusted DG solution $\LRp{p_h,\bm{u}_h}$ at time $T$
\begin{theorem}
\begin{align*}
&\nor{\LRp{p(\bm{x},T),\bm{u}(\bm{x},T)} -\LRp{p_h(\bm{x},T),\bm{u}_h(\bm{x},T)}}_{L^2\LRp{\Omega}} \leq \\
&\LRp{C_1 + C_2 T} h^{N+1/2}\sup_{t'\in [0,T]}\LRp{ \nor{\LRp{p,\bm{u}}}_{W^{N+1,2}(\Omega_h)} + h^{1/2}\nor{\frac{1}{c^2}}_{W^{N+1,\infty}\LRp{\Omega_h}}\nor{\pd{}{t}\LRp{p,\bm{u}}}_{W^{N+1,2}(\Omega_h)}},
\end{align*}
where $C_2$ depends on $c_{\min},c_{\max}$.
\end{theorem}
}
\begin{proof}
We introduce group variables $U = \LRp{p,\bm{u}}$ and $V = \LRp{v,\bm{\tau}}$ to rewrite the variational formulation as
\[
\LRp{\pd{U}{t},V}_{w,\Omega} + a(U,V) + b(U,V) = 0,
\]
where $\LRp{U,V}_{w,\Omega}$ is
\[
\LRp{U,V}_{w,\Omega} = \sum_{k} \waip{p,v}{c^2} + \LRp{\bm{u},\bm{\tau}}_{L^2\LRp{D^k}}.
\]
The volume and surface contributions to the formulation are given by
\begin{align*}
a(U,V) &= \sum_k \int_{D^k} -\bm{u}\cdot\Grad v + \Grad p\cdot \bm{\tau}\\
b(U,V) &= \sum_{k}\int_{\partial D^k} \LRp{\frac{\tau_p}{2}\jump{p} - \avg{\bm{u}}\cdot\bm{n}^-} v + \frac{1}{2}\LRp{{\tau_u}\jump{\bm{u}}\cdot\bm{n}^- - \jump{p}} \bm{\tau}\cdot \bm{n}^-.
\end{align*}
The proof of energy stability implies that $b(U,V)$ is positive semi-definite, and that
\begin{align*}
b(U,U) &= \frac{1}{2}\sum_k \tau_p \nor{\jump{p}}^2_{L^2\LRp{\partial D^k}} + \tau_u \nor{\jump{\bm{u}}\cdot \bm{n}}^2_{L^2\LRp{\partial D^k}}\\
\frac{1}{2}\pd{}{t}(U,U)_{w,\Omega} &= -b(U,U).
\end{align*}
Let $\Pi_h: L^2\LRp{\Omega} \rightarrow \bigoplus_{k} P^N\LRp{D^k}$ be the $L^2$ projection onto the space of degree $N$ polynomials over the triangulation $\Omega_h$. The error $E$ between the exact solution $U$ and the weight-adjusted DG solution $U_h = \LRp{p_h,\bm{u}_h}$ can be defined in terms of the interpolation error $\epsilon$ and discretization error $\eta$
\[
E = U-U_h = \LRp{U-\Pi_h U} + \LRp{\Pi_h U - U_h} = \epsilon + \eta.
\]
Since the interpolation error $\epsilon$ can be bounded by regularity assumptions, what remains is to bound the discretization error $\eta = \Pi_h\LRp{U-U_h}$ at time $T$.
\note{
Assuming sufficient regularity \cite{brenner2007mathematical, hesthaven2007nodal}, the exact solution at time $T$ satisfies the DG formulation (\ref{eq:form}) with weighted $L^2$ inner product
\begin{align*}
\LRp{\frac{1}{c^2}\pd{p}{t},v}_{\Omega} + \LRp{\pd{\bm{u}}{t},\bm{\tau}}_{\Omega} + a(U,V) + b(U,V) &= 0, \qquad \forall V\in V_h,
\end{align*}
while the discrete solution satisfies the WADG formulation
\begin{align*}
\LRp{\pd{U_h}{t},V}_{w,\Omega} + a(U_h,V) + b(U_h,V) &= 0, \qquad \forall V\in V_h.
\end{align*}
Taking $V = \eta$, subtracting these two equations and rearranging yields the error equation
\begin{equation}
\LRp{\frac{1}{c^2}\pd{p}{t},\eta_p}_{\Omega} + \LRp{\pd{\bm{u}}{t},\bm{\eta}_u}_{\Omega} - \LRp{\pd{U_h}{t},\eta}_{w,\Omega} + b(\eta,\eta) = a(\epsilon,\eta) + b(\epsilon,\eta).
\label{eq:erroreq}
\end{equation}
where we have used $a(\eta,\eta) = 0$ by skew-symmetry. Integrating by parts gives
\[
a(\epsilon,\eta) = \sum_k \int_{D^k} -\bm{\epsilon}_u\cdot\Grad \eta_p - \epsilon_p \Div \bm{\eta}_u + \int_{\partial D^k} \epsilon_p^- \bm{\eta}_u\cdot \bm{n},
\]
where $\epsilon_p, \bm{\epsilon}_u$ and $\eta_p,\bm{\eta}_u$ are the $p$ and $\bm{u}$ components of the interpolation and discretization error, respectively. For affinely mapped elements, $\Div\bm{\eta}_u, \Grad\eta_p$ are polynomial, and volume terms disappear through orthogonality of the $L^2$ projection to polynomials up to degree $N$. We can then bound the contribution by combining contributions over shared faces and applying the arithmetic-geometric mean inequality
\begin{align*}
a(\epsilon,\eta) + b(\epsilon,\eta) &= \frac{1}{2} \sum_{k}\int_{\partial D^k} \LRp{\frac{\tau_p}{2}\jump{\epsilon_p} - \avg{\bm{\epsilon}_u}\cdot\bm{n}^-}\jump{ \eta_p} + \LRp{\frac{\tau_u}{2}\jump{\bm{\epsilon}_u}\cdot\bm{n}^- - \avg{\epsilon_p}} \jump{\bm{\eta}_u}\cdot \bm{n}^- \\
&\leq \tilde{C}_\tau \sum_{k}\int_{\partial D^k} \LRp{\jump{\epsilon_p} - \avg{\bm{\epsilon}_u}\cdot\bm{n}^-} \frac{\tau_p}{2}\jump{ \eta_p} + \LRp{\jump{\bm{\epsilon}_u}\cdot\bm{n}^- - \avg{\epsilon_p}} \frac{\tau_u}{2}\jump{\bm{\eta}_u}\cdot \bm{n}^- \\
&\leq C_\tau \sum_k \nor{\epsilon}_{L^2\LRp{\partial D^k}} \LRp{ \frac{\tau_p}{2}\nor{\jump{ \eta_p}}^2_{L^2\LRp{\partial D^k}} + \frac{\tau_u}{2}\nor{\jump{\bm{\eta}_u}\cdot \bm{n}}^2_{L^2\LRp{\partial D^k}} }^{1/2}.
\end{align*}
Applying Young's inequality with $\alpha = C_\tau/2$ then gives
\[
\LRb{a(\epsilon,\eta) + b(\epsilon,\eta)} \leq b(\eta,\eta) + \frac{C_\tau^2}{4} \sum_k \nor{{\epsilon}}^2_{L^2\LRp{\partial D^k}}.
\]
Terms involving the time derivative of pressure can be controlled by introducing the $L^2$ projection and using properties of $T_{c^2}^{-1}$
\begin{align*}
\LRp{\frac{1}{c^2}\pd{p}{t}-T_{c^2}^{-1} \pd{p_h}{t},\eta_p}_{\Omega} &= \LRp{\frac{1}{c^2}\pd{p}{t} - T_{c^2}^{-1} \Pi_h \pd{p}{t},\eta_p}_{\Omega} + \LRp{T_{c^2}^{-1} \Pi_h\pd{p}{t}-T_{c^2}^{-1} \pd{p_h}{t},\eta_p}_{\Omega} \\
&= \LRp{\pd{\delta_p}{t},\eta_p}_{\Omega} + \LRp{T_{c^2}^{-1}\pd{\eta_p}{t} ,\eta_p}_{\Omega} = \LRp{\pd{\delta_p}{t},\eta_p}_{\Omega} + \frac{1}{2}\pd{}{t}\LRp{T_{c^2}^{-1}\eta_p ,\eta_p}_{\Omega},
\end{align*}
where $\delta_p = \frac{1}{c^2}p - T_{c^2}^{-1} \Pi_h p= \frac{1}{c^2}p - T_{c^2}^{-1} p$ is the WADG consistency error in the pressure variable. Terms involving time derivatives of velocity satisfy
\[
\LRp{\pd{\bm{u}}{t},\bm{\eta}_u}_{\Omega} - \LRp{\pd{\bm{u}_h}{t},\bm{\eta}_u}_{\Omega} = \LRp{\pd{\bm{\eta}_u}{t},\bm{\eta}_u}_{\Omega} + \LRp{\pd{\bm{\epsilon}_u}{t},\bm{\eta}_u}_{\Omega}.
\]
Combining these gives
\begin{align*}
\LRp{\frac{1}{c^2}\pd{p}{t},\eta_p}_{\Omega} + \LRp{\pd{\bm{u}}{t},\bm{\eta}_u}_{\Omega} - \LRp{\pd{U_h}{t},\eta}_{w,\Omega} &= \pd{}{t}\frac{1}{2}\LRp{T_{c^2}^{-1}\eta,\eta}_{\Omega} + \LRp{\pd{\delta}{t},\eta}_{\Omega},
\end{align*}
where
\[
\LRp{\pd{\delta}{t},\eta}_{\Omega} = \LRp{\pd{\delta_p}{t},\eta_p}_{\Omega} + \LRp{\pd{\bm{\epsilon}_u}{t},\bm{\eta}_u}_\Omega.
\]
Substituting these expressions into the error equation (\ref{eq:erroreq}) gives
\begin{align*}
\pd{}{t}\frac{1}{2} (T_{c^2}^{-1}\eta,\eta)_\Omega + b(\eta,\eta) &\leq \LRb{\LRp{\pd{\delta}{t},\eta}_{\Omega}} + b(\eta,\eta) + \frac{C_\tau^2}{4} \sum_{k} \nor{\epsilon}_{L^2\LRp{\partial D^k}}^2.
\end{align*}
Canceling $b(\eta,\eta)$ from both sides and eliminating the factor of $\frac{1}{2}$, then integrating over $[0,T]$, applying Theorem~\ref{lemma:equiv}, and using Cauchy-Schwarz yields
\[
\frac{1}{c_{\max}} \nor{\eta}_{L^2\LRp{\Omega}}^2 \leq \int_0^T \nor{\eta}_{L^2\LRp{\Omega}}\nor{\pd{\delta}{t}}_{L^2\LRp{\Omega}} + \frac{C_\tau^2}{2} \sum_{k} \nor{\epsilon}_{L^2\LRp{\partial D^k}}^2.
\]
The modified Gronwall inequality then yields a bound on $\nor{\eta}_{L^2\LRp{\Omega}}$
\begin{align*}
\nor{\eta}_{L^2\LRp{\Omega}} &\leq \tilde{C}\int_0^T \nor{\pd{\delta}{t}}_{L^2\LRp{\Omega}} + \sup_{t'\in [0,T]} \sqrt{\int_0^T \frac{C_\tau^2}{2} \sum_{k} \nor{\epsilon}_{L^2\LRp{\partial D^k}}^2}\\
&\leq CT \sup_{t'\in [0,T]} \LRp{ \nor{\pd{\delta}{t}}_{L^2\LRp{\Omega}} + \sqrt{\sum_{k} \nor{\epsilon}_{L^2\LRp{\partial D^k}}^2}},
\end{align*}
where $C$ depends on $c_{\max}$ and the penalty parameters.
The right hand side terms are then bounded using regularity assumptions. The time derivative term is bounded using Theorem~\ref{lemma:mult}
\begin{align*}
\nor{\pd{\delta}{t}}_{L^2\LRp{\Omega}} &\leq \nor{\pd{\delta_p}{t}}_{L^2\LRp{\Omega}} + \nor{\pd{\bm{\epsilon}_u}{t}}_{L^2\LRp{\Omega}}\\
&\leq \nor{\pd{\delta_p}{t}}_{L^2\LRp{\Omega}} + Ch^{N+1} \nor{\pd{\bm{u}}{t}}_{W^{N+1,2}\LRp{\Omega_h}}\\
&\leq C\frac{c_{\max}}{c_{\min}} \nor{\frac{1}{c^2}}_{W^{N+1,\infty}\LRp{\Omega_h}} h^{N+1} \nor{\pd{}{t}(p,\bm{u})}_{W^{N+1,2}\LRp{\Omega_h}},
\end{align*}
while the trace term is bounded using Theorem~\ref{thm:tracereg}
\[
\sqrt{\sum_k \nor{\epsilon}_{\partial D^k}^2} \leq \sqrt{C \sum_k h^{2N+1}\nor{\LRp{p,\bm{u}}}^2_{W^{N+1,2}(D^k)}} \leq Ch^{N+1/2} \nor{\LRp{p,\bm{u}}}_{W^{N+1,2}(\Omega_h)}.
\]
Taking the supremum over $[0,T]$ and applying the triangle inequality to $U-U_h = \epsilon + \eta$ completes the proof.
}
\end{proof}
\subsection{Local conservation}
\label{sec:conservation}
While standard DG methods are locally conservative, the use of the weight-adjusted mass matrix does not preserve local conservation of the same quantities conserved by the standard DG method. However, Theorem~\ref{lemma:cons} gives an estimate which implies a higher order $O(h^{2N+2})$ convergence of the conservation error for smooth solutions. Since conservation conditions for DG are recovered by testing with piecewise constant test functions \cite{ellis2014locally}, we define the local conservation error as the $M=0$ moment of the error between the standard DG and weight-adjusted DG inner products for polynomial $u$, summed over all elements $D^k$
\begin{align*}
&\sum_k \LRb{\LRp{\frac{1}{c^2}u,1}_{L^2\LRp{D^k}} - \waip{u,1}{c^2}} \\
&\leq Ch^{2N+2} \nor{c^2}^2_{L^{\infty}\LRp{\Omega_h}}\sup_k\nor{\frac{1}{c^2}}^2_{W^{N+1,\infty}\LRp{D^k}} \nor{u}_{W^{N+1,2}\LRp{\Omega_h}}.
\end{align*}
We note that the above bound depends on the regularity of both $c^2$ and the solution $u$. As noted in the proof of Theorem~\ref{lemma:cons}, it is possible to restore local conservation by replacing $c^2$ with its degree $N$ polynomial projection or interpolant on each element, though this can introduce an error if $c^2$ is poorly approximated by $P^N\LRp{D^k}$.
Alternatively, it is also simple to restore conservation through a rank-one update to the mass matrix. Let $\bm{e}$ be the vector of degrees of freedom representing a constant; then, we seek a rank-one correction $\alpha \bm{v}\bm{v}^T$ such that
\[
{\LRp{\bm{M}^k\LRp{\bm{M}^k_{c^2}}^{-1}\bm{M}^k + \alpha \bm{v}\bm{v}^T}\bm{e} - \bm{M}^k_{1/c^2}\bm{e}} = 0.
\]
This implies that $\bm{v}$ is the conservation error up to a scaling constant. This constant can be determined as follows: define
\[
\bm{v} = \LRp{\bm{M}^k\LRp{\bm{M}^k_{c^2}}^{-1}\bm{M}^k - \bm{M}^k_{1/c^2}}\bm{e}.
\]
Multiplying by $\bm{e}^T$ on the left gives
\[
\bm{e}^T \bm{v} = \bm{e}^T\LRp{\bm{M}^k\LRp{\bm{M}^k_{c^2}}^{-1}\bm{M}^k- \bm{M}^k_{1/c^2}}\bm{e} = -\alpha \LRp{\bm{v}^T\bm{e}}^2.
\]
Solving for $\alpha$ then gives $\alpha = -1/\LRp{\bm{v}^T\bm{e}}$, so that the rank-one correction $\alpha \bm{v}\bm{v}^T$ is sufficient to enforce conservation. Since $\LRp{\bm{v}^T\bm{e}}$ can be very small, $\alpha$ can be set to zero if $\LRb{\bm{v}^T\bm{e}} \leq \delta \nor{\bm{v}}$ for some tolerance $\delta$ to ensure numerical stability. The inverse of this conservative mass matrix can be applied using the Sherman-Morrison formula. Define $\tilde{\bm{v}} = \LRp{\bm{M}^k}^{-1}\bm{M}^k_{c^2}\LRp{\bm{M}^k}^{-1} \bm{v}$; assuming that ${1 + \alpha\bm{v}^T\tilde{\bm{v}}} \neq 0$,
\[
\LRp{\bm{M}^k\LRp{\bm{M}_{c^2}^k}^{-1}\bm{M}^k + \alpha \bm{v}\bm{v}^T}^{-1} = \LRp{\bm{M}^k}^{-1}\bm{M}^k_{c^2}\LRp{\bm{M}^k}^{-1} - \frac{\alpha\tilde{\bm{v}} \tilde{\bm{v}}^T}{1 + \alpha\bm{v}^T\tilde{\bm{v}}},
\]
requiring only $O(N^3)$ additional storage per element.
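The rank-one correction and its Sherman-Morrison inverse can be verified on a one-dimensional reference element. In the sketch below (our own illustration; the profile of $c^2$ is an arbitrary smooth choice), we take $\alpha = -1/\LRp{\bm{v}^T\bm{e}}$ directly, which enforces the conservation condition exactly:

```python
import numpy as np

# Rank-one conservation fix on a 1D reference element. A = M M_{c^2}^{-1} M is
# the weight-adjusted mass matrix, e holds the Legendre coefficients of the
# constant function 1, and v is the conservation defect. The corrected matrix
# reproduces the action of M_{1/c^2} on constants; its inverse is applied via
# the Sherman-Morrison formula.
N = 4
r, wr = np.polynomial.legendre.leggauss(2 * (N + 1))
V = np.polynomial.legendre.legvander(r, N)       # Legendre basis at quad points
c2 = 2.0 + np.sin(3.0 * r)                       # assumed wavespeed profile

M     = V.T @ (wr[:, None] * V)                  # mass matrix
M_c2  = V.T @ ((wr * c2)[:, None] * V)           # weighted by c^2
M_ic2 = V.T @ ((wr / c2)[:, None] * V)           # weighted by 1/c^2

A = M @ np.linalg.solve(M_c2, M)                 # weight-adjusted mass matrix
e = np.zeros(N + 1)
e[0] = 1.0                                       # coefficients of the constant 1

v = (A - M_ic2) @ e                              # conservation defect
alpha = -1.0 / (v @ e)
A_cons = A + alpha * np.outer(v, v)
cons_err = np.max(np.abs(A_cons @ e - M_ic2 @ e))   # zero up to roundoff

# Sherman-Morrison: A^{-1} = M^{-1} M_{c^2} M^{-1} for the uncorrected matrix
Ainv = np.linalg.solve(M, M_c2 @ np.linalg.inv(M))
vt = Ainv @ v
A_cons_inv = Ainv - alpha * np.outer(vt, vt) / (1.0 + alpha * (v @ vt))
inv_err = np.max(np.abs(A_cons_inv @ A_cons - np.eye(N + 1)))
print(cons_err, inv_err)  # both near machine precision
```

Note that $\bm{v}^T\bm{e}$ is negative here (the weight-adjusted constant moment underestimates the weighted one), so $\alpha > 0$ and the Sherman-Morrison denominator stays away from zero in this example.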
For nonlinear hyperbolic problems with non-smooth solutions such as shocks, a non-conservative scheme can lead to incorrect shock speeds \cite{leveque2002finite}. The exact enforcement of local conservation is especially important in this context, since Theorem~\ref{lemma:cons} suggests that conservation errors otherwise depend on the regularity of $u$.
\section{Numerical examples}
\label{sec:num}
In this section, we give numerical examples confirming the estimates in Section~\ref{sec:estimates}, as well as numerical verification of convergence for the weight-adjusted DG method. Numerical experiments are performed using a nodal DG method \cite{hesthaven2007nodal}; however, the weight-adjusted DG method is agnostic to the choice of basis used.
\subsection{Comparisons between weighted and weight-adjusted inner products}
The DG method of Mercerat and Glinsky \cite{mercerat2015nodal} is energy stable with respect to the scaled $L^2$ norm induced by the inner product
\[
\int_{D^k} {w} p v + \bm{u}\cdot\bm{\tau} = \wip{p,v}{w} + \LRp{\bm{u},\bm{\tau}}_{L^2\LRp{D^k}}
\]
with $w = 1/c^2$. The weight-adjusted DG method approximates this using the weight-adjusted inner product
\[
\int_{D^k} T_{1/w}^{-1}p v + \bm{u}\cdot\bm{\tau} = \waip{ p,v}{1/w} + \LRp{\bm{u},\bm{\tau}}_{L^2\LRp{D^k}}.
\]
We perform a numerical study to assess the quality of this approximation, which will influence how much the behavior of the weight-adjusted DG method will deviate from that of the standard DG method.
Consider $u_{w,1}, u_{w,2}$ defined by the two scaled projection problems
\begin{align*}
\wip{u_{w,1},v}{w} &= \LRp{u,v}_{L^2\LRp{D^k}}, \qquad v \in V_h\LRp{D^k} \\
\waip{u_{w,2},v}{1/w} &= \LRp{u,v}_{L^2\LRp{D^k}}, \qquad v \in V_h\LRp{D^k}.
\end{align*}
Both $u_{w,1}$ and $u_{w,2}$ approximate $u/w$. If $u_{w,1}$ and $u_{w,2}$ are very close, the two projection problems are close to equivalent for that choice of $w$, and we expect the standard DG and weight-adjusted DG methods to behave similarly. Polynomial expansion coefficients for $u_{w,1}, u_{w,2}$ are computed over each element by solving the matrix equations
\begin{align}
\bm{M}^k_{w}\bm{u}_{w,1} &= \bm{b} \label{eq:proj1}\\
\bm{M}^k \LRp{\bm{M}^k_{1/w}}^{-1}\bm{M}^k \bm{u}_{w,2} &= \bm{b}\label{eq:proj2},
\end{align}
where $\bm{b}_i = \int_{D^k} u \phi_i$. We also examine the convergence of $u_{w,3}$ to $u/w$, where coefficients for $u_{w,3}$ are computed by solving
\begin{align}
\LRp{\bm{M}^k \LRp{\bm{M}^k_{1/w}}^{-1}\bm{M}^k + \alpha \bm{v}\bm{v}^T}\bm{u}_{w,3} = \bm{b}. \label{eq:proj3}
\end{align}
Here, $\alpha$ and $\bm{v}$ define the rank-1 correction used to restore local conservation in Section~\ref{sec:conservation}.
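A one-dimensional analogue of this comparison can be sketched as follows (the choices of $u$ and $w$ are illustrative and differ from the two-dimensional test below):

```python
import numpy as np

# 1D analogue of the projection comparison: u_{w,1} solves the weighted system
# M_w u1 = b, u_{w,2} solves the weight-adjusted system M M_{1/w}^{-1} M u2 = b,
# and both approximate u/w. The functions below are illustrative choices.
N = 8
xq, wq = np.polynomial.legendre.leggauss(2 * (N + 1))
V = np.polynomial.legendre.legvander(xq, N)

w = lambda x: 1.0 + 0.25 * np.sin(np.pi * x)
u = lambda x: np.exp(x)

M    = V.T @ (wq[:, None] * V)
M_w  = V.T @ ((wq * w(xq))[:, None] * V)
M_iw = V.T @ ((wq / w(xq))[:, None] * V)
b    = V.T @ (wq * u(xq))                     # b_i = (u, phi_i)

u1 = np.linalg.solve(M_w, b)                          # weighted projection
u2 = np.linalg.solve(M @ np.linalg.solve(M_iw, M), b) # weight-adjusted version

target = u(xq) / w(xq)
e1 = np.max(np.abs(V @ u1 - target))
e2 = np.max(np.abs(V @ u2 - target))
print(e1, e2)  # the two errors should be comparable in size
```

As in the tables below, the two errors are very close, reflecting the equivalence of the weighted and weight-adjusted inner products.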
\subsubsection{Regular solutions and weighting functions}
Table~\ref{table:projrates} shows $L^2$ errors for $\nor{{u_{w,1}} - u/{w}}_{L^2\LRp{\Omega}}$, $\nor{{u_{w,2}} - u/{w}}_{L^2\LRp{\Omega}}$, and $ \nor{{u_{w,3}} - u/w}_{L^2\LRp{\Omega}}$ on a sequence of uniform triangular meshes, with
\[
u(x,y) = e^{x+y}, \qquad w(x,y) = 1 + \frac{1}{2}\sin(\pi x)\sin(\pi y).
\]
In all cases, the errors are very similar, though the error for $u_{w,1}$ (corresponding to the weighted $L^2$ inner product used in the standard DG method) appears to be consistently smaller than the errors for $u_{w,2}, u_{w,3}$. Interestingly, the error for $u_{w,3}$, defined using the conservation-corrected mass matrix in (\ref{eq:proj3}), is smaller than the error for $u_{w,2}$ which does not include the rank-1 correction.
\begin{table}
\centering
\begin{tabular}{|c|c||c|c|c|c||c|}
\hline
&& $h = 1$ & $h = 1/2$ & $h = 1/4$ & $h = 1/8$ & Est.\ rate \\
\hline\hline
&$\nor{u_{w,1} - u/w}_{L^2}$ & 1.3920e-01 & 3.9460e-02 & 1.0207e-02 & 2.5739e-03 &1.922190 \\
$N = 1$&$\nor{u_{w,2} - u/w}_{L^2}$ & 1.4259e-01 & 3.9672e-02 & 1.0221e-02 & 2.5748e-03 &1.933027 \\
&$\nor{u_{w,3} - u/w}_{L^2}$ &1.4042e-01 & 3.9517e-02 & 1.0213e-02 & 2.5743e-03 &1.926034 \\
\hline\hline
&$\nor{u_{w,1} - u/w}_{L^2}$ & 3.1823e-02 & 4.5986e-03 & 5.9382e-04 & 7.4836e-05 &2.914944 \\
$N = 2$&$\nor{u_{w,2} - u/w}_{L^2}$ & 3.2454e-02 & 4.6209e-03 & 5.9455e-04 & 7.4859e-05 &2.923835 \\
&$\nor{u_{w,3} - u/w}_{L^2}$ & 3.2037e-02 & 4.6037e-03 & 5.9400e-04 & 7.4842e-05 &2.917925 \\
\hline\hline
&$\nor{u_{w,1} - u/w}_{L^2}$ & 6.2528e-03 & 4.0795e-04 & 2.5978e-05 & 1.6317e-06 &3.968489 \\
$N = 3$&$\nor{u_{w,2} - u/w}_{L^2}$ & 6.4703e-03 & 4.1129e-04 & 2.6034e-05 & 1.6326e-06 &3.983907 \\
&$\nor{u_{w,3} - u/w}_{L^2}$ & 6.2660e-03 & 4.0852e-04 & 2.5985e-05 & 1.6318e-06 &3.969530 \\
\hline\hline
&$\nor{u_{w,1} - u/w}_{L^2}$ & 7.9047e-04 & 2.8889e-05 & 9.3214e-07 & 2.9371e-08 &4.910195 \\
$N = 4$ &$\nor{u_{w,2} - u/w}_{L^2}$ & 7.9446e-04 & 2.8996e-05 & 9.3304e-07 & 2.9378e-08 &4.912661 \\
&$\nor{u_{w,3} - u/w}_{L^2}$ & 7.9433e-04 & 2.8902e-05 & 9.3226e-07 & 2.9377e-08 &4.912262 \\
\hline
\end{tabular}
\caption{$L^2$ errors and estimated rates of convergence for approximations $u_{w,1}, u_{w,2}, u_{w,3}$ of $u/w$ (defined by (\ref{eq:proj1}), (\ref{eq:proj2}), and (\ref{eq:proj3}) respectively) under uniform mesh refinement. In this case, $u$ and $w$ are taken to be regular functions.}
\label{table:projrates}
\end{table}
\subsubsection{Solutions and weighting functions with decreased regularity}
It is worth noting that the results of Section~\ref{sec:estimates} involve terms $\nor{w}_{W^{N+1,\infty}},\nor{1/w}_{W^{N+1,\infty}}$ which depend on the regularity of $w$ over $D^k$. For this reason, we expect the approximations $u_{w,1}, u_{w,2}, u_{w,3} \approx u/w$ resulting from the solutions of (\ref{eq:proj1})--(\ref{eq:proj3}) to degenerate in quality as $w$ becomes less regular. To test this, we take
\[
c^2(x,y) = 1 + \sqrt{x^2 + y^2 + a}, \qquad a \in [0,\infty),
\]
which produces a non-differentiable cone as $a\rightarrow 0$.\footnote{Since typical quadratures are designed for more regular integrands, we double the quadrature strength when evaluating integrands with $a \approx 0$. One-dimensional numerical experiments which compare increased quadrature strength with adaptive quadrature achieve qualitatively similar results. Irregular weighting functions may also be dealt with using techniques from immersed DG methods \cite{adjerid2007higher}.} Figure~\ref{fig:nonsmoothrates} shows the effect of decreasing the regularity of $w$ on the convergence of $u_{w,1}, u_{w,2}, u_{w,3}$ for $N = 3$ and $N = 4$. While we do observe increases in error as $w$ loses regularity, $u_{w,1}, u_{w,2}, u_{w,3}$ all behave very similarly regardless of the regularity of $w$. Along with the results of Theorem~\ref{lemma:mult}, this implies that the behavior of the weight-adjusted DG method should be very close to that of the standard DG method for both smooth and irregular $w$. Interestingly, as $w$ approaches a non-differentiable function, the convergence of $u_{w,1}, u_{w,2}$, and $u_{w,3}$ to $u/w$ reduces to $O(h^2)$ for all orders $N$ tested.
\begin{figure}
\centering
\subfloat[$N=3$]{
\begin{tikzpicture}
\begin{loglogaxis}[
legend cell align=left,
width=.475\textwidth,
xlabel={Mesh size $h$},
ylabel={$L^2$ error},
xmin=.01, xmax=1.5,
ymin=1e-10, ymax=5e-3,
legend pos=south east,
xmajorgrids=true,
ymajorgrids=true,
grid style=dashed,
]
\addplot[color=magenta,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000811841)(0.25,5.2479e-05)(0.125,3.34793e-06)(0.0625,2.10171e-07)};
\addplot[color=magenta,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000815045)(0.25,5.25257e-05)(0.125,3.34867e-06)(0.0625,2.10183e-07)};
\addplot[color=magenta,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000813198)(0.25,5.25014e-05)(0.125,3.34828e-06)(0.0625,2.10177e-07)};
\node at (axis cs:.03,2.1e-07) {$a = 10^{-1}$};
\addplot[color=black,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00118692)(0.25,0.000190298)(0.125,1.821e-05)(0.0625,1.27298e-06)};
\addplot[color=black,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00119612)(0.25,0.000190761)(0.125,1.82156e-05)(0.0625,1.27306e-06)};
\addplot[color=black,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00118848)(0.25,0.000190313)(0.125,1.82102e-05)(0.0625,1.27299e-06)};
\node at (axis cs:.03,1.27e-06) {$a = 10^{-2}$};
\addplot[color=red,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00123752)(0.25,0.000276383)(0.125,6.84113e-05)(0.0625,1.03818e-05)};
\addplot[color=red,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00124408)(0.25,0.00027695)(0.125,6.84631e-05)(0.0625,1.03834e-05)};
\addplot[color=red,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00123967)(0.25,0.000276403)(0.125,6.84115e-05)(0.0625,1.03818e-05)};
\node at (axis cs:.03,.75e-05) {$a = 10^{-3}$};
\addplot[color=blue,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.0012956)(0.25,0.000301894)(0.125,7.66855e-05)(0.0625,1.9284e-05)};
\addplot[color=blue,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.0013018)(0.25,0.000302283)(0.125,7.67179e-05)(0.0625,1.92874e-05)};
\addplot[color=blue,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00129769)(0.25,0.000301911)(0.125,7.66858e-05)(0.0625,1.92853e-05)};
\node at (axis cs:.03,2.25e-05) {$a = 10^{-4}$};
\legend{$u_{w,1}$, $u_{w,2}$, $u_{w,3}$}
\end{loglogaxis}
\end{tikzpicture}
}
\subfloat[$N=4$]{
\begin{tikzpicture}
\begin{loglogaxis}[
legend cell align=left,
width=.475\textwidth,
xlabel={Mesh size $h$},
ylabel={$L^2$ error},
xmin=.01, xmax=1.5,
ymin=1e-10, ymax=5e-3,
legend pos=south east,
xmajorgrids=true,
ymajorgrids=true,
grid style=dashed,
]
\addplot[color=magenta,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,7.68472e-05)(0.25,3.6451e-06)(0.125,1.25564e-07)(0.0625,4.00732e-09)};
\addplot[color=magenta,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,7.71628e-05)(0.25,3.64757e-06)(0.125,1.25583e-07)(0.0625,4.00747e-09)};
\addplot[color=magenta,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,7.69985e-05)(0.25,3.6457e-06)(0.125,1.25573e-07)(0.0625,4.00743e-09)};
\node at (axis cs:.03,4.e-09) {$a = 10^{-1}$};
\addplot[color=black,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000379655)(0.25,4.1344e-05)(0.125,2.53271e-06)(0.0625,1.42708e-07)};
\addplot[color=black,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.00038317)(0.25,4.1436e-05)(0.125,2.53325e-06)(0.0625,1.42717e-07)};
\addplot[color=black,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000379681)(0.25,4.13447e-05)(0.125,2.53276e-06)(0.0625,1.42708e-07)};
\node at (axis cs:.03,1.42e-07) {$a = 10^{-2}$};
\addplot[color=red,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000461945)(0.25,0.000126468)(0.125,2.25822e-05)(0.0625,1.63882e-06)};
\addplot[color=red,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000465012)(0.25,0.000126831)(0.125,2.26025e-05)(0.0625,1.639e-06)};
\addplot[color=red,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000462034)(0.25,0.000126466)(0.125,2.25821e-05)(0.0625,1.63882e-06)};
\node at (axis cs:.03,1.5e-06) {$a = 10^{-3}$};
\addplot[color=blue,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000483624)(0.25,0.000126702)(0.125,3.34245e-05)(0.0625,8.06305e-06)};
\addplot[color=blue,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000485939)(0.25,0.000126923)(0.125,3.34475e-05)(0.0625,8.06499e-06)};
\addplot[color=blue,mark=x,semithick, mark options={fill=markercolor}]
coordinates{(0.5,0.000483704)(0.25,0.000126703)(0.125,3.34244e-05)(0.0625,8.06305e-06)};
\node at (axis cs:.03,8.5e-06) {$a = 10^{-4}$};
\legend{$u_{w,1}$, $u_{w,2}$, $u_{w,3}$}
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Convergence of $L^2$ errors for solutions $u_{w,1}, u_{w,2}, u_{w,3}$ of (\ref{eq:proj1}), (\ref{eq:proj2}), (\ref{eq:proj3}) under uniform mesh refinement for $N = 3,4$. In this case, $w$ is taken to be a function whose regularity decreases as $a\rightarrow 0$. }
\label{fig:nonsmoothrates}
\end{figure}
\subsection{Local conservation errors}
Section~\ref{sec:conservation} discusses the fact that the weight-adjusted DG method does not locally conserve the same quantities conserved by the standard DG method. However, estimates show that for sufficiently regular $u$ and $w$, the conservation error converges at $O(h^{2N+2})$.
\subsubsection{Regular solutions and weighting functions}
We test this first for regular $u,w$ by taking
\[
u(x,y) = e^{x+y}, \qquad w(x,y) = 1 + \frac{1}{2}\sin(\pi x)\sin(\pi y),
\]
and computing the conservation errors for $u_{w,2}, u_{w,3}$. For $u_{w,2}$, this error is defined as
\[
\sum_k \LRp{\int_{D^k} \frac{u_{w,1}}{c^2} - \int_{D^k}T_{1/w}^{-1} u_{w,2}},
\]
for $u_{w,1}, u_{w,2}$ as defined in (\ref{eq:proj1}) and (\ref{eq:proj2}), respectively. For $u_{w,3}$, since the conservation-corrected mass matrix does not have a clear inner product analogue, we measure the conservation error via
\[
\sum_k \bm{e}^T\bm{M}_{1/c^2}^k\bm{u}_{w,1} - \bm{e}^T\bm{M}^k\LRp{\bm{M}^{k}_{c^2}}^{-1} \bm{M}^k\bm{u}_{w,3},
\]
where $\bm{e}$ are the polynomial expansion coefficients for the constant $1$ over $D^k$.
In all experiments, $\alpha$ is set to zero if $\LRb{\bm{v}^T\bm{e}} \leq \delta \nor{\bm{v}}$ for $\delta = 10^{-8}$. Table~\ref{table:smoothuw} shows the conservation errors
\[
\LRb{\overline{u_{w,1}/c^2} - \overline{T^{-1}_{c^2}u_{w,2}}}, \qquad \LRb{\overline{u_{w,1}/c^2} - \overline{T^{-1}_{c^2}u_{w,3}}}
\]
for $u_{w,2}$ and $u_{w,3}$ respectively. The estimated rate of convergence for $u_{w,2}$ is also reported. As predicted in Section~\ref{sec:conservation}, the conservation error for $u_{w,2}$ is observed to converge at a rate of $O(h^{2N+2})$, while $u_{w,3}$ is observed to reduce conservation error to machine precision values.
\begin{table}
\centering
\begin{tabular}{|c|c||c|c|c|c||c|}
\hline
&& $h = 1$ & $h = 1/2$ & $h = 1/4$ & $h = 1/8$ & Est.\ rate \\
\hline
$N = 1$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,2}/c^2}}$ & 9.5935e-03 & 7.9155e-04 & 5.2323e-05 & 3.2990e-06 &3.953251 \\
$N = 1$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,3}/c^2}}$ & 2.7409e-16 & 2.7712e-16 & 2.5468e-16 & 2.5320e-16 & \\
\hline\hline
$N = 2$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,2}/c^2}}$ & 4.4236e-04 & 1.4430e-05 & 2.3578e-07 & 3.7821e-09 &5.948822 \\
$N = 2$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,3}/c^2}}$ & 2.9046e-16 & 3.1423e-16 & 3.3770e-16 & 3.4679e-16 & \\
\hline\hline
$N = 3$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,2}/c^2}}$ & 7.7600e-05 & 3.5645e-07 & 1.5276e-09 & 6.2161e-12 &7.903656 \\
$N = 3$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,3}/c^2}}$ & 3.6527e-16 & 2.9679e-16 & 3.5446e-16 & 3.5605e-16 & \\
\hline\hline
$N = 4$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,2}/c^2}}$ & 2.5627e-06 & 7.8864e-09 & 1.2094e-11 & 1.3714e-14 &9.566707 \\
$N = 4$& $\LRb{\overline{u_{w,1}/c^2}-\overline{u_{w,3}/c^2}}$ & 3.2904e-16 & 2.9661e-16 & 3.2352e-16 & 3.3249e-16 & \\
\hline
\end{tabular}
\caption{Conservation errors at different orders of approximation $N$ under uniform mesh refinement for solutions $u_{w,2}, u_{w,3}$ to (\ref{eq:proj2}), (\ref{eq:proj3}). In this case, $u,w$ are taken to be regular functions. Estimated orders of convergence are also reported for $u_{w,2}$. }
\label{table:smoothuw}
\end{table}
\subsubsection{Solutions and weighting functions with decreased regularity}
We also investigate how the regularity of $u,w$ affects local conservation errors. We consider $u,w$ given by both a smooth exponential and a regularized cone
\begin{align*}
u(x,y) = e^{x+y}, \qquad w(x,y) = 1 + \sqrt{x^2 + y^2 + a}, \quad a \in [0,\infty),\\
u(x,y) = 1 + \sqrt{x^2 + y^2 + a}, \quad a \in [0,\infty), \qquad w(x,y) = e^{x+y}.
\end{align*}
Figure~\ref{fig:conserrregularity} shows the effects of decreasing the regularity of $w$ and $u$ separately on the conservation errors. Decreasing the regularity of $w$ is observed to reduce convergence of conservation errors to $O(h^4)$. Interestingly, decreasing only the regularity of $u$ affects conservation errors far less than decreasing only the regularity of $w$, suggesting that the bound in Theorem~\ref{lemma:cons} may not be sharp. Additionally, for less regular $u$ and discontinuous $u$, we observe numerically that conservation errors decrease at a rate of $O(h^{N+2})$. Both of these behaviors are better than expected from Theorem~\ref{lemma:cons}, and suggest that conservation errors do not depend strongly on the regularity of $u$.
\begin{figure}
\centering
\subfloat[Conservation errors for less-regular $w$]{
\begin{tikzpicture}
\begin{loglogaxis}[
legend cell align=left,
width=.49\textwidth,
xlabel={Mesh size $h$},
xmin=.01, xmax=1.5,
ymin=1e-15, ymax=1e-5,
legend pos=south east,
xmajorgrids=true,
ymajorgrids=true,
grid style=dashed,
]
\addplot[color=blue,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.65584e-07)(0.25,8.21601e-10)(0.125,3.86473e-12)(0.0625,1.61111e-14)};
\node at (axis cs:.03,1.6e-14) {$a = 10^{-1}$};
\addplot[color=red,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.37583e-06)(0.25,5.34628e-08)(0.125,4.48005e-10)(0.0625,2.10452e-12)};
\node at (axis cs:.03,2.1e-12) {$a = 10^{-2}$};
\addplot[color=black,mark=triangle*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.65946e-06)(0.25,1.0755e-07)(0.125,5.90813e-09)(0.0625,1.26104e-10)};
\node at (axis cs:.03,1e-10) {$a = 10^{-3}$};
\addplot[color=magenta,mark=diamond*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.92368e-06)(0.25,1.24548e-07)(0.125,7.16551e-09)(0.0625,4.17446e-10)};
\node at (axis cs:.03,4.6e-10) {$a = 10^{-4}$};
\end{loglogaxis}
\end{tikzpicture}
}
\subfloat[Conservation errors for less-regular $u$]{
\begin{tikzpicture}
\begin{loglogaxis}[
legend cell align=left,
width=.49\textwidth,
xlabel={Mesh size $h$},
xmin=.01, xmax=1.5,
ymin=1e-15, ymax=1e-5,
legend pos=south east,
xmajorgrids=true,
ymajorgrids=true,
grid style=dashed,
]
\addplot[color=blue,mark=*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.93878e-07)(0.25,7.97778e-10)(0.125,3.16898e-12)(0.0625,1.34386e-14)};
\node at (axis cs:.03,.5e-14) {$a = 10^{-1}$};
\addplot[color=red,mark=square*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,2.12529e-07)(0.25,1.28147e-09)(0.125,5.42966e-12)(0.0625,2.17705e-14)};
\node at (axis cs:.03,1.5e-14) {$a = 10^{-2}$};
\addplot[color=black,mark=triangle*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.91045e-07)(0.25,1.29447e-09)(0.125,8.90727e-12)(0.0625,4.98508e-14)};
\node at (axis cs:.03,4.5e-14) {$a = 10^{-3}$};
\addplot[color=magenta,mark=diamond*,semithick, mark options={fill=markercolor}]
coordinates{(0.5,1.84712e-07)(0.25,1.16594e-09)(0.125,7.75168e-12)(0.0625,5.98593e-14)};
\node at (axis cs:.03,1.5e-13) {$a = 10^{-4}$};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Convergence of conservation errors for solution $u_{w,2}$ to (\ref{eq:proj2}) under uniform mesh refinement. In this case, $u$, $w$ are taken to be functions whose regularity decreases as $a\rightarrow 0$. Results are shown for $N = 3$. }
\label{fig:conserrregularity}
\end{figure}
\subsection{Convergence of DG for heterogeneous wavespeed}
In this section, we examine the convergence of high order standard and weight-adjusted DG methods to manufactured and reference solutions for a wavespeed which varies spatially within each element.
\subsubsection{Convergence to a manufactured solution}
For the acoustic wave equation with smoothly varying wavespeed, there are few analytic reference solutions in higher dimensions. For this reason the method of manufactured solutions is often used to analyze the convergence of methods for wave propagation in heterogeneous media \cite{castro2010seismic, mercerat2015nodal}. The method of manufactured solutions chooses expressions for $p, \bm{u}$ and determines a source term $f\LRp{\bm{x},t}$ such that the inhomogeneous acoustic wave equations
\begin{align}
\frac{1}{c^2}\pd{p}{t}{} + \Div \bm{u} &= f \nonumber\\
\rho\pd{\bm{u}}{t}{} + \Grad p &= 0,
\label{eq:dgconv}
\end{align}
have solution $p,\bm{u}$. Table~\ref{table:manusol} shows the convergence of $L^2$ errors for both standard DG and weight-adjusted DG on a sequence of 2D uniform triangular meshes for
\[
c^2(x,y) = 1 + \frac{1}{2}\sin\LRp{\pi x} \sin\LRp{\pi y}, \qquad p(x,y,t) = \cos\LRp{\frac{\pi}{2} x}\cos\LRp{\frac{\pi}{2} y}\cos\LRp{\frac{\pi}{2}\sqrt{2}t}.
\]
A triangular quadrature from Xiao and Gimbutas \cite{xiao2010quadrature} (chosen to be exact for polynomials up to degree $3N$) is used to compute the weighted mass matrix for standard DG and to apply the weight-adjusted mass matrix for weight-adjusted DG. We do not correct the mass matrix with $\alpha \bm{v}\bm{v}^T$ to enforce local conservation in the following numerical experiments.
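As a small sanity check of the manufactured-solution construction, the source $f$ can be evaluated in closed form for the pressure above: integrating the momentum equation with $\rho=1$ gives the corresponding velocity, and with $c=1$ the source vanishes, since $p$ is then an exact solution. The particular velocity reconstruction below is our own derivation, not code from the paper:

```python
import math

a = math.pi / 2
w = math.sqrt(2.0) * a  # temporal frequency: p solves the c = 1 wave equation

def div_u(x, y, t):
    # u obtained by integrating rho * u_t = -grad p in time (rho = 1)
    return 2 * a * a / w * math.cos(a * x) * math.cos(a * y) * math.sin(w * t)

def source(x, y, t, c2):
    # f = (1/c^2) p_t + div u for the manufactured pressure p
    p_t = -w * math.cos(a * x) * math.cos(a * y) * math.sin(w * t)
    return p_t / c2(x, y) + div_u(x, y, t)

homogeneous = lambda x, y: 1.0
heterogeneous = lambda x, y: 1.0 + 0.5 * math.sin(math.pi * x) * math.sin(math.pi * y)

# With c = 1 the manufactured p is exact, so f = 0;
# with the heterogeneous wavespeed a nonzero source is required.
print(source(0.3, 0.2, 0.7, homogeneous))
print(source(0.3, 0.2, 0.7, heterogeneous))
```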
\begin{table}
\centering
\subfloat[Standard DG $L^2$ errors, manufactured solution]{
\begin{tabular}{|c||c|c|c|c|}
\hline
$N$ & $h = 1$ & $h = 1/2$ & $h = 1/4$ & $h = 1/8$\\
\hline
$ 1$ & 2.13e-01 & 6.25e-02 & 1.64e-02 & 4.19e-03 \\
\hline
$ 2$ & 3.01e-02 & 3.60e-03 & 4.21e-04 & 5.07e-05 \\
\hline
$ 3$ & 6.10e-03 & 3.33e-04 & 2.04e-05 & 1.22e-06 \\
\hline
$ 4$ & 6.61e-04 & 2.12e-05 & 6.39e-07 & 1.94e-08 \\
\hline
\end{tabular}
}
\subfloat[Weight-adjusted DG $L^2$ errors, manufactured solution]{
\begin{tabular}{|c||c|c|c|c|}
\hline
$N$ & $h = 1$ & $h = 1/2$ & $h = 1/4$ & $h = 1/8$\\
\hline
$ 1$ & 2.05e-01 & 5.99e-02 & 1.62e-02 & 4.18e-03 \\
\hline
$ 2$ & 2.89e-02 & 3.54e-03 & 4.18e-04 & 5.07e-05 \\
\hline
$ 3$ & 8.69e-03 & 3.47e-04 & 2.03e-05 & 1.22e-06 \\
\hline
$ 4$ & 1.09e-03 & 2.27e-05 & 6.30e-07 & 1.93e-08 \\
\hline
\end{tabular}
}\\
\subfloat[Standard DG $L^2$ errors, reference solution]{
\begin{tabular}{|c||c|c|c|c|}
\hline
$N$ & $h = 1$ & $h = 1/2$ & $h = 1/4$ & $h = 1/8$\\
\hline
$ 1$ & 2.48e-01 & 7.58e-02 & 1.69e-02 & 4.46e-03 \\
\hline
$ 2$ & 5.95e-02 & 9.95e-03 & 1.10e-03 & 1.22e-04 \\
\hline
$ 3$ & 2.29e-02 & 1.98e-03 & 9.52e-05 & 6.56e-06 \\
\hline
$ 4$ & 4.90e-03 & 3.01e-04 & 1.78e-05 & 7.27e-07 \\
\hline
\end{tabular}
}
\subfloat[Weight-adjusted DG $L^2$ errors, reference solution]{
\begin{tabular}{|c||c|c|c|c|}
\hline
$N$ & $h = 1$ & $h = 1/2$ & $h = 1/4$ & $h = 1/8$\\
\hline
$ 1$ & 2.50e-01 & 7.72e-02 & 1.69e-02 & 4.47e-03 \\
\hline
$ 2$ & 6.09e-02 & 1.02e-02 & 1.10e-03 & 1.22e-04 \\
\hline
$ 3$ & 1.98e-02 & 1.98e-03 & 9.52e-05 & 6.56e-06 \\
\hline
$ 4$ & 4.64e-03 & 3.02e-04 & 1.78e-05 & 7.28e-07 \\
\hline
\end{tabular}
}
\caption{Convergence of $L^2$ errors for standard and weight-adjusted DG solutions to (\ref{eq:dgconv}) for manufactured and reference solutions at time $T = 1$ under uniform triangular mesh refinement.}
\label{table:manusol}
\end{table}
\subsubsection{Convergence to a reference solution}
We also compare the convergence of DG for heterogeneous media in a more realistic setting by computing the error with respect to a fine-grid reference solution computed using a spectral method over the bi-unit square $[-1,1]^2$ with $N = 100$. The timestep for the reference solution is taken sufficiently small as to make temporal errors negligible. The same wavespeed $c$ used for the manufactured solution is used again for the reference solution, with an initial condition of $p(x,y,0) = \cos\LRp{\frac{\pi}{2} x}\cos\LRp{\frac{\pi}{2} y}$. Table~\ref{table:rates} shows estimated rates of convergence for both standard and weight-adjusted DG. For both methods, rates of convergence between $O(h^{N+1/2})$ and $O(h^{N+1})$ are observed for $N = 1,\ldots,4$. In all cases, the errors for the standard and weight-adjusted DG methods are nearly identical on all but the coarsest mesh.
\begin{table}
\centering
\subfloat[Rates of convergence to manufactured solution]{
\begin{tabular}{|c||c|c|c|c|}
\hline
& $N = 1$ & $N = 2$ & $N = 3$ & $N = 4$\\
\hline
DG& 1.9220 & 3.0752 & 4.0440 & 5.0446 \\
\hline
WADG & 1.9211 & 3.0629 & 4.0752 & 5.0990\\
\hline
\end{tabular}
}
\subfloat[Rates of convergence to reference solution]{
\begin{tabular}{|c||c|c|c|c|}
\hline
& $N = 1$ & $N = 2$ & $N = 3$ & $N = 4$\\
\hline
DG & 1.8256 & 3.1796 & 3.8589 & 4.6171 \\
\hline
WADG & 1.8425 & 3.1807 & 3.8583 & 4.6128 \\
\hline
\end{tabular}
}
\caption{Estimated rates of convergence of standard and weight-adjusted DG solutions of (\ref{eq:dgconv}) to both manufactured and reference solutions at $T=1$.}
\label{table:rates}
\end{table}
Finally, Figure~\ref{fig:snap} shows a comparison of the standard and weight-adjusted DG method for the discontinuous wavespeed
\begin{align}
c^2(x,y) = \begin{cases}
1 + \frac{1}{2}\sin\LRp{2\pi x}\sin\LRp{2\pi y}, \qquad y \leq 0\\
2 + \frac{1}{2}\sin\LRp{2\pi x}\sin\LRp{2\pi y}, \qquad y > 0.
\end{cases}
\label{eq:cdisc}
\end{align}
The initial condition is taken to be a Gaussian pulse centered at $\LRp{0,1/4}$. For $N = 4$, $h = 1/8$, and $T = .5$, the standard DG and weight-adjusted DG solutions are indistinguishable.
\begin{figure}
\centering
\subfloat[Standard DG]{
\includegraphics[width=.375\textwidth]{cmass_wave.png}
}
\vspace{1em}
\subfloat[Weight-adjusted DG]{
\includegraphics[width=.375\textwidth]{cproj_wave.png}
}
\caption{Snapshot of standard and weight-adjusted DG solutions of the acoustic wave equation with $c^2$ defined by (\ref{eq:cdisc}). The order of approximation is $N = 4$, and the final time is taken to be $T=.5$. The initial condition is a Gaussian pulse centered around $(0,.25)$, and $c^2$ varies spatially with a discontinuity at $y = 0$. }
\label{fig:snap}
\end{figure}
\subsection{Effect of reduced quadrature}
It was noted in \cite{warburton2013low} that, for the LSC-DG formulation, it is possible to reduce the order of the quadrature used to evaluate the variational formulation significantly without compromising the estimated order of convergence implied by theory. This can be attributed to two facts: first, that stability of the LSC-DG formulation does not depend on quadrature strength, and secondly, that errors for a degree $2N$ quadrature rule are of the same order as the discretization error.
Similarly, the weight-adjusted DG method is energy stable so long as the weight-adjusted inner product (computed using quadrature) induces a norm. Numerical experiments indicate that quadrature rules which integrate degree $2N+1$ polynomials exactly are sufficient, and that increasing quadrature strength beyond this degree does not offer any significant advantages. Table~\ref{table:quad} shows the effect of varying the quadrature strength $N_q$ from degree $2N-1$ to $3N$ for an $N = 4$ discretization. While the error decreases very slightly when the degree of quadrature is increased from $2N-1$ to $2N$ or $2N+1$, no significant change in error is observed when the degree of quadrature is increased beyond $2N+1$. Results are not reported for quadratures of degree lower than $2N-1$, as these generate numerically singular mass matrices.
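The underlying exactness principle can be illustrated in one dimension (our illustration; the triangular quadratures used in the experiments are from \cite{xiao2010quadrature}): an $n$-point Gauss--Legendre rule integrates polynomials up to degree $2n-1$ exactly, which is why a minimum quadrature degree relative to $N$ is needed before errors stagnate.

```python
import numpy as np

def gauss_integrate(f, npts):
    """Integrate f over [-1, 1] with an npts-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(npts)
    return float(w @ f(x))

exact = 2.0 / 7.0  # integral of x^6 over [-1, 1]
# 3 points: exact only up to degree 2*3-1 = 5, so x^6 is integrated inexactly
print(abs(gauss_integrate(lambda x: x**6, 3) - exact))
# 4 points: exact up to degree 2*4-1 = 7 >= 6, so the error is roundoff-level
print(abs(gauss_integrate(lambda x: x**6, 4) - exact))
```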
\begin{table}
\centering
\subfloat[Manufactured solution]{
\begin{tabular}{|c||c|c|}
\hline
$N_q$ & $L^2$ error (DG) & $L^2$ error (WADG) \\
\hline
7 & 1.0102e-07 & 2.9122e-08\\
\hline
8 & 2.1710e-08 & 2.1709e-08\\
\hline
9 & 1.9548e-08 &1.9544e-08\\
\hline
10 & 1.9443e-08 & 1.9544e-08\\
\hline
11 & 1.9443e-08 &1.9324e-08\\
\hline
12 & 1.9443e-08 & 1.9324e-08\\
\hline
\end{tabular}
}
\subfloat[Reference solution]{
\begin{tabular}{|c||c|c|}
\hline
$N_q$ & $L^2$ error (DG) & $L^2$ error (WADG) \\
\hline
7 & 7.7932e-07 & 8.3296e-07\\
\hline
8 & 7.6739e-07 & 7.6732e-07 \\
\hline
9 & 7.6568e-07 & 7.6553e-07 \\
\hline
10 & 7.6504e-07 & 7.6410e-07\\
\hline
11 & 7.6410e-07 & 7.6502e-07\\
\hline
12 & 7.6501e-07 & 7.6412e-07 \\
\hline
\end{tabular}
}
\caption{Effect of varying quadrature degree from $2N-1$ to $3N$ on $L^2$ errors for the standard and weight-adjusted DG solution of (\ref{eq:dgconv}). Results are for $N = 4$ and a uniform $h = 1/8$ mesh.}
\label{table:quad}
\end{table}
\section{Conclusions and future work}
This work introduces a weight-adjusted DG (WADG) method for the simulation of wave propagation in heterogeneous media which is both provably energy stable and high order accurate for heterogeneous media with wavespeeds which are locally smooth over each element. Additionally, the implementation of the WADG method is non-invasive, and can be incorporated into a DG code for wave propagation in isotropic media with only minor modifications.
The WADG method relies on an approximation of the weighted mass matrix by an equivalent weight-adjusted mass matrix, which implies that, unlike the standard DG method, the method is no longer Galerkin consistent or locally conservative (for non-polynomial wavespeeds). However, the method is shown to be asymptotically consistent and high order accurate, while conservation errors are shown to superconverge at a rate of $O(h^{2N+2})$ for smooth solutions and wavespeeds. Finally, numerical experiments also indicate that a low-rank correction to the mass matrix can be used to recover exact conservation properties in the case of non-polynomial wavespeed.
Future work will involve the efficient implementation of the WADG method on GPUs for more realistic velocity models in three dimensions, as well as the extension of the WADG method to curvilinear meshes, which can be used to control interface errors resulting from the approximation of non-planar interfaces by piecewise planar surfaces \cite{wang2009discontinuous}. We note that while the implementation of the WADG method for curvilinear meshes is relatively similar, the analysis differs from the case of affine elements.
\section{Acknowledgments}
The authors thank TOTAL for permission to publish. JC and TW are funded by a grant from TOTAL E\&P Research and Technology USA.
\bibliographystyle{unsrt}
Let $\S=\left\{ \rho_{\theta};\,\theta\in\Theta\subset\R^{d}\right\} $
be a smooth parametric family of density operators on a Hilbert space
$\H$. An estimator is represented by a pair $(M,\hat{\theta})$ of
a POVM $M$ taking values on any finite set $\X$ and a map $\hat{\theta}:\X\to\Theta$.
An estimator $(M,\hat{\theta})$ is called unbiased if
\begin{equation}
E_{\theta}[M,\hat{\theta}]=\sum_{x\in\X}\hat{\theta}(x)\Tr\rho_{\theta}M_{x}=\theta\label{eq:unbias}
\end{equation}
is satisfied for all $\theta\in\Theta$. An estimator $(M,\hat{\theta})$
is called locally unbiased\cite{holevo} at a given point $\theta_{0}\in\Theta$
if the condition (\ref{eq:unbias}) is satisfied around $\theta_{0}$
up to the first order of the Taylor expansion, i.e.,
\begin{align}
\sum_{x\in\X}\hat{\theta}^{i}(x)\Tr\rho_{\theta_{0}}M_{x} & =\theta_{0}^{i}\qquad(i=1,\dots,d),\\
\sum_{x\in\X}\hat{\theta}^{i}(x)\Tr\partial_{j}\rho_{\theta_{0}}M_{x} & =\delta_{j}^{i}\qquad(i,j=1,\dots,d),
\end{align}
where $\partial_{j}\rho_{\theta_{0}}=\left.\frac{\partial}{\partial\theta^{j}}\rho_{\theta}\right|_{\theta=\theta_{0}}$.
It is well-known that the covariance matrix $V_{\theta_{0}}[M,\hat{\theta}]$
of a locally unbiased estimator $(M,\hat{\theta})$ at $\theta_{0}$
satisfies the following inequalities:
\begin{equation}
V_{\theta_{0}}[M,\hat{\theta}]\geq J_{\theta_{0}}^{(S)^{-1}},
\end{equation}
\begin{equation}
V_{\theta_{0}}[M,\hat{\theta}]\geq J_{\theta_{0}}^{(R)^{-1}},
\end{equation}
where $J_{\theta_{0}}^{(S)}:=\left[{\rm Re}\,(\Tr\rho_{\theta_{0}}L_{i}^{(S)}L_{j}^{(S)})\right]_{ij}$
is the symmetric logarithmic derivative (SLD) Fisher information matrix
at $\theta_{0}$ with SLDs $L_{i}^{(S)}$ ($1\leq i\leq d$) defined
by
\begin{equation}
\partial_{i}\rho_{\theta_{0}}=\frac{1}{2}\left(\rho_{\theta_{0}}L_{i}^{(S)}+L_{i}^{(S)}\rho_{\theta_{0}}\right),
\end{equation}
and $J_{\theta_{0}}^{(R)}:=\left[\Tr L_{i}^{(R)^{*}}\rho_{\theta_{0}}L_{j}^{(R)}\right]_{ij}$
is the right logarithmic derivative (RLD) Fisher information matrix
at $\theta_{0}$ with RLDs $L_{i}^{(R)}$ ($1\leq i\leq d$) defined
by
\begin{equation}
\partial_{i}\rho_{\theta_{0}}=\rho_{\theta_{0}}L_{i}^{(R)}.
\end{equation}
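Both logarithmic derivatives can be computed numerically for a full-rank state: the SLD by solving its Lyapunov-type defining equation in the eigenbasis of $\rho_{\theta_{0}}$, and the RLD directly from $\partial_{i}\rho_{\theta_{0}}=\rho_{\theta_{0}}L_{i}^{(R)}$. The sketch below is our illustration; for the commuting one-parameter qubit family used there, both Fisher informations reduce to the classical value $1/(1-\theta^{2})$:

```python
import numpy as np

def sld(rho, drho):
    """Solve drho = (rho L + L rho)/2 for the Hermitian SLD L (rho full rank)."""
    lam, U = np.linalg.eigh(rho)
    dtil = U.conj().T @ drho @ U
    L = 2 * dtil / (lam[:, None] + lam[None, :])
    return U @ L @ U.conj().T

def rld(rho, drho):
    """Solve drho = rho L for the (generally non-Hermitian) RLD L."""
    return np.linalg.solve(rho, drho)

# One-parameter qubit family rho(theta) = diag((1+theta)/2, (1-theta)/2)
theta = 0.3
rho = np.diag([(1 + theta) / 2, (1 - theta) / 2]).astype(complex)
drho = np.diag([0.5, -0.5]).astype(complex)

Ls, Lr = sld(rho, drho), rld(rho, drho)
J_S = np.real(np.trace(rho @ Ls @ Ls))           # SLD Fisher information
J_R = np.real(np.trace(Lr.conj().T @ rho @ Lr))  # RLD Fisher information
print(J_S, J_R)  # both equal the classical value 1/(1 - theta**2) here
```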
These matrix inequalities imply
\begin{equation}
\Tr GV_{\theta_{0}}[M,\hat{\theta}]\geq\Tr GJ_{\theta_{0}}^{(S)^{-1}}=:C_{\theta_{0},G}^{(S)},
\end{equation}
\begin{equation}
\Tr GV_{\theta_{0}}[M,\hat{\theta}]\geq\Tr GJ_{\theta_{0}}^{(R)^{-1}}+\Tr\left|\sqrt{G}{\rm Im}J_{\theta_{0}}^{(R)^{-1}}\sqrt{G}\right|=:C_{\theta_{0},G}^{(R)},
\end{equation}
for any $d\times d$ real positive matrix $G$, because
\begin{equation}
\min_{V}\{\Tr GV;\,V\geq J,V\text{ is a real matrix}\}=\Tr GJ+\Tr\left|\sqrt{G}{\rm Im}J\sqrt{G}\right|\label{eq:trabs}
\end{equation}
for any positive complex matrix $J$ (see Appendix \ref{sec:trabs_proof}
for the proof).
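Identity (\ref{eq:trabs}) can also be checked numerically: a minimizer is $V^{*}={\rm Re}\,J+G^{-1/2}\left|\sqrt{G}\,{\rm Im}\,J\sqrt{G}\right|G^{-1/2}$, which is real, satisfies $V^{*}\geq J$, and attains the right-hand side. The construction below is our sketch, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Random positive (Hermitian) complex J and real positive definite G
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
J = B.conj().T @ B
A = rng.standard_normal((d, d))
G = A @ A.T + d * np.eye(d)

def herm_sqrt(M):
    lam, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(lam)) @ U.conj().T

def herm_abs(M):
    lam, U = np.linalg.eigh(M)
    return U @ np.diag(np.abs(lam)) @ U.conj().T

Gh = herm_sqrt(G)
Ghi = np.linalg.inv(Gh)
K = Gh @ np.imag(J) @ Gh           # sqrt(G) Im(J) sqrt(G), real skew-symmetric
absK = herm_abs(1j * K)            # |sqrt(G) Im(J) sqrt(G)| via Hermitian i K

# Candidate minimizer of min {Tr(G V) : V real, V >= J}
Vstar = np.real(np.real(J) + Ghi @ absK @ Ghi)

lhs = np.trace(G @ Vstar).real
rhs = (np.trace(G @ np.real(J)) + np.trace(absK)).real  # closed form (trabs)
feas = np.linalg.eigvalsh(Vstar - J).min()              # should be >= 0
print(lhs, rhs, feas)
```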
A tighter lower bound of $\Tr GV_{\theta_{0}}[M,\hat{\theta}]$ than
the SLD bound $C_{\theta_{0},G}^{(S)}$ and the RLD bound $C_{\theta_{0},G}^{(R)}$
is known as the Holevo bound \cite{holevo} defined by
\begin{align}
C_{\theta_{0},G}^{(H)} & :=\min_{V,B}\left\{ \Tr GV;\,V\text{ is a real matrix such that }V\geq Z(B),\,Z_{ij}(B)=\Tr\rho_{\theta_{0}}B_{j}B_{i},\right.\\
& \qquad B_{1},\dots,B_{d}\text{ are Hermitian operators on \ensuremath{\H\ }such that }\Tr\partial_{i}\rho_{\theta_{0}}B_{j}=\delta_{ij}\},\label{eq:holevo_bound}
\end{align}
(see Appendix \ref{sec:Holevo_proof} for the derivation and the proof)
and it satisfies
\begin{equation}
\Tr GV_{\theta_{0}}[M,\hat{\theta}]\geq C_{\theta_{0},G}^{(H)}\geq\max(C_{\theta_{0},G}^{(S)},C_{\theta_{0},G}^{(R)}).
\end{equation}
It is known that the Holevo bound is asymptotically achievable in
the theory of quantum local asymptotic normality \cite{YFG,qlan2,guta}.
Note that the minimization problem over $V$ in (\ref{eq:holevo_bound})
is explicitly solved by using (\ref{eq:trabs}), and
\begin{align}
C_{\theta_{0},G}^{(H)} & =\min_{B}\left\{ \Tr GZ(B)+\Tr\left|\sqrt{G}{\rm Im}Z(B)\sqrt{G}\right|;\,Z_{ij}(B)=\Tr\rho_{\theta_{0}}B_{j}B_{i},\right.\label{eq:holevo_bound2}\\
& \qquad B_{1},\dots,B_{d}\text{ are Hermitian operators on \ensuremath{\H\ }such that }\Tr\partial_{i}\rho_{\theta_{0}}B_{j}=\delta_{ij}\}.\nonumber
\end{align}
However, the minimization problem over $B$ in (\ref{eq:holevo_bound2})
is not trivial in general. Suzuki\cite{suzukiHolevo} showed that,
when $\dim\H=2$ and $d=2$, the Holevo bound can be represented explicitly
by using the SLD bound and the RLD bound as
\begin{equation}
C_{\theta_{0},G}^{(H)}=\begin{cases}
C_{\theta_{0},G}^{(R)} & \text{if }C_{\theta_{0},G}^{(R)}\geq\frac{C_{\theta_{0},G}^{(Z)}+C_{\theta_{0},G}^{(S)}}{2},\\
C_{\theta_{0},G}^{(R)}+S_{\theta_{0},G} & \text{otherwise, }
\end{cases}\label{eq:suzuki_bound_ori}
\end{equation}
where $C_{\theta_{0},G}^{(Z)}$ and $S_{\theta_{0},G}$ are positive
values defined by
\begin{equation}
C_{\theta_{0},G}^{(Z)}:=\Tr GZ(L^{(S)})+\Tr\left|\sqrt{G}{\rm Im}Z(L^{(S)})\sqrt{G}\right|,
\end{equation}
\begin{equation}
S_{\theta_{0},G}:=\frac{\left[\frac{1}{2}(C_{\theta_{0},G}^{(Z)}+C_{\theta_{0},G}^{(S)})-C_{\theta_{0},G}^{(R)}\right]^{2}}{C_{\theta_{0},G}^{(Z)}-C_{\theta_{0},G}^{(R)}}.
\end{equation}
In this paper, we focus on a logarithmic derivative $L_{i}^{(\beta)}$
that lies between the SLD $L_{i}^{(S)}$ and the RLD $L_{i}^{(R)}$, defined
by
\begin{equation}
\partial_{i}\rho_{\theta_{0}}=\frac{(1+\beta)}{2}\rho_{\theta_{0}}L_{i}^{(\beta)}+\frac{(1-\beta)}{2}L_{i}^{(\beta)}\rho_{\theta_{0}}
\end{equation}
with $\beta\in[0,1]$. When $\beta=0$, $L_{i}^{(\beta)}$ coincides
with SLD $L_{i}^{(S)}$, and when $\beta=1$, $L_{i}^{(\beta)}$ coincides
with RLD $L_{i}^{(R)}$. The Fisher information matrix with respect
to $\left\{ L_{i}^{(\beta)}\right\} _{i=1}^{d}$ is
\begin{equation}
J_{\theta_{0}}^{(\beta)}:=\left[\Tr\partial_{i}\rho_{\theta_{0}}L_{j}^{(\beta)}\right]_{ij},
\end{equation}
and we show that the inequalities
\begin{equation}
V_{\theta_{0}}[M,\hat{\theta}]\geq J_{\theta_{0}}^{(\beta)^{-1}}
\end{equation}
and
\begin{equation}
\Tr GV_{\theta_{0}}[M,\hat{\theta}]\geq\Tr GJ_{\theta_{0}}^{(\beta)^{-1}}+\Tr\left|\sqrt{G}{\rm Im}J_{\theta_{0}}^{(\beta)^{-1}}\sqrt{G}\right|=:C_{\theta_{0},G}^{(\beta)},
\end{equation}
hold for the covariance matrix $V_{\theta_{0}}[M,\hat{\theta}]$ of any
locally unbiased estimator $(M,\hat{\theta})$ at $\theta_{0}$. We
call
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}\label{eq:max_prob}
\end{equation}
a maximum logarithmic derivative bound.
More generally, monotone metrics
introduced by Petz can also induce Fisher information matrices and
lower bounds of $\Tr GV_{\theta_{0}}[M,\hat{\theta}]$\cite{monotone,cptni}.
We show that the maximum logarithmic derivative bound is the largest
bound among them.
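The defining equation for $L_{i}^{(\beta)}$ is a Sylvester-type equation and can be solved explicitly in the eigenbasis of a full-rank $\rho_{\theta_{0}}$, since in that basis it decouples entrywise. The following numerical sketch is our illustration (not part of the original text); it verifies the defining relation for several $\beta$ and that $\beta=1$ recovers the RLD:

```python
import numpy as np

def log_derivative(rho, drho, beta):
    """Solve drho = (1+beta)/2 rho L + (1-beta)/2 L rho (rho full rank)."""
    lam, U = np.linalg.eigh(rho)
    dtil = U.conj().T @ drho @ U
    denom = 0.5 * (1 + beta) * lam[:, None] + 0.5 * (1 - beta) * lam[None, :]
    L = dtil / denom
    return U @ L @ U.conj().T

# Full-rank qubit state with a non-commuting derivative direction
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho = 0.5 * (np.eye(2) + 0.4 * sz)
drho = 0.5 * sx

for beta in (0.0, 0.5, 1.0):  # beta = 0: SLD, beta = 1: RLD
    L = log_derivative(rho, drho, beta)
    recon = 0.5 * (1 + beta) * rho @ L + 0.5 * (1 - beta) * L @ rho
    print(beta, np.linalg.norm(recon - drho))  # residuals ~ 0
```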
The maximization problem (\ref{eq:max_prob}) is also not trivial
in general. However, when the model $\S=\left\{ \rho_{\theta};\,\theta\in\Theta\subset\R^{d}\right\} $
has a $d+1$ dimensional real space $\tilde{\T}\supset{\rm span}_{\R}\{L_{i}^{(S)}\}_{i=1}^{d}$
such that $\D_{\rho_{\theta_{0}}}(\tilde{\T})\subset\tilde{\T}$ at
$\theta_{0}\in\Theta$, we show that $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
has the explicit solution:
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=\begin{cases}
C_{\theta_{0},G}^{(1)} & \text{if }\hat{\beta}\geq1,\\
C_{\theta_{0},G}^{(\hat{\beta})} & \text{\text{otherwise}},
\end{cases}\label{eq:mld_expli}
\end{equation}
with
\begin{equation}
\hat{\beta}=\begin{cases}
\frac{\Tr\left|\sqrt{G}{\rm Im}J_{\theta_{0}}^{(R)^{-1}}\sqrt{G}\right|}{2\Tr G\left\{ J_{\theta_{0}}^{(S)^{-1}}-{\rm Re}(J_{\theta_{0}}^{(R)^{-1}})\right\} } & \text{if }J_{\theta_{0}}^{(S)^{-1}}\not={\rm Re}(J_{\theta_{0}}^{(R)^{-1}}),\\
\infty & \text{otherwise},
\end{cases}
\end{equation}
where $\D_{\rho_{\theta_{0}}}$ is the commutation operator (see Section
\ref{sec:rewrite_holevo}). Furthermore, when $d=2$, we show that
the maximization problem (\ref{eq:max_prob}) is the Lagrangian dual
of the minimization problem defining the Holevo bound, and thus
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=C_{\theta_{0},G}^{(H)}.
\end{equation}
Actually, the explicit solution (\ref{eq:mld_expli}) is a generalization
of the solution (\ref{eq:suzuki_bound_ori}) for $\dim\H=2$
{\color{black}(see Appendix \ref{sec:suzuki_mld})}.
This paper is organized as follows. In Section \ref{sec:mld}, we
introduce logarithmic derivatives and Fisher information matrices
induced by monotone metrics, and we derive the maximum logarithmic
derivative bound. In Section \ref{sec:rewrite_holevo}, we introduce
a commutation operator $\D$, and the Holevo bound is rewritten in
simpler form by using a $\D$ invariant space of Hermitian operators.
In Section \ref{sec:codimension}, we show that $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
has the explicit solution (\ref{eq:mld_expli}) when the $d$ dimensional
model has a $d+1$ dimensional $\D$ invariant extension of the SLD tangent
space. Further, we show that the maximum logarithmic derivative bound is
the same as the Holevo bound if $d=2$. At the end of the section,
we give examples of families of quantum states to which our theory
can be applied not only for $\dim\H=2$. Section \ref{sec:Conclusion}
is the conclusion. For the reader’s convenience, some additional material
is presented in the Appendix. In Appendix \ref{sec:trabs_proof},
a proof of (\ref{eq:trabs}) is given. In Appendix \ref{sec:Holevo_proof},
a brief proof and derivation of the Holevo bound is presented. In Appendix
\ref{sec:schur}, the Schur complement, which plays an important role,
is introduced.
In Appendix \ref{sec:Dinv}, details of the commutation
operator $\D$ as a tool for multiple inner products are described.
{\color{black} In Appendix \ref{sec:suzuki_mld},
it is shown that the explicit form (\ref{eq:suzuki_bound_ori})
for $\dim\H=2$ and $d=2$ can be derived from (\ref{eq:mld_expli}).
}
\section{Maximum logarithmic derivative bound\label{sec:mld}}
Let $\S=\left\{ \rho_{\theta};\,\theta\in\Theta\subset\R^{d}\right\} $
be a smooth parametric family of density operators on a finite dimensional
Hilbert space $\H$. The covariance matrix $V_{\theta_{0}}[M,\hat{\theta}]$
of a locally unbiased estimator $(M,\hat{\theta})$ at $\theta_{0}$
satisfies the classical Cram\'{e}r--Rao inequality,
\begin{equation}
V_{\theta_{0}}[M,\hat{\theta}]\geq J_{\theta_{0}}^{(M)^{-1}}
\end{equation}
where
\begin{equation}
J_{\theta_{0}}^{(M)}=\left[\sum_{x\in\X}\frac{\left(\Tr\partial_{i}\rho_{\theta_{0}}M_{x}\right)\left(\Tr\partial_{j}\rho_{\theta_{0}}M_{x}\right)}{\Tr\rho_{\theta_{0}}M_{x}}\right]_{ij}
\end{equation}
is the classical Fisher information matrix with respect to the POVM
$M$. The equality is achieved when
\begin{equation}
\hat{\theta}^{i}(x)=\theta_{0}^{i}+\sum_{j=1}^{d}\left[J_{\theta_{0}}^{(M)^{-1}}\right]^{ij}\frac{\Tr\partial_{j}\rho_{\theta_{0}}M_{x}}{\Tr\rho_{\theta_{0}}M_{x}}.
\end{equation}
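As a concrete numerical illustration of these formulas (our own sketch, not part of the original derivation), the classical Fisher information matrix $J_{\theta_{0}}^{(M)}$ of a POVM can be evaluated directly from its definition:

```python
import numpy as np

def classical_fisher(rho, drho, povm):
    """Classical Fisher information matrix J^(M) at rho.

    rho  : (n, n) density matrix
    drho : list of d Hermitian matrices, the partial derivatives of rho
    povm : list of POVM elements M_x summing to the identity
    """
    d = len(drho)
    J = np.zeros((d, d))
    for M in povm:
        p = np.trace(rho @ M).real               # outcome probability Tr(rho M_x)
        if p < 1e-12:                            # skip outcomes of probability zero
            continue
        s = np.array([np.trace(dr @ M).real for dr in drho])
        J += np.outer(s, s) / p                  # (Tr d_i rho M_x)(Tr d_j rho M_x) / Tr(rho M_x)
    return J
```

For a one-parameter diagonal family $\rho={\rm diag}(p,1-p)$ measured in the computational basis, this reproduces the familiar value $1/p+1/(1-p)$.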
Thus the minimization of $\Tr GV_{\theta_{0}}[M,\hat{\theta}]$ is
reduced to the minimization of $\Tr GJ_{\theta_{0}}^{(M)^{-1}}$ for
any $d\times d$ real positive matrix $G$. In this section, we consider
lower bounds of $\Tr GJ_{\theta_{0}}^{(M)^{-1}}$ directly induced
by monotone metrics.
Let $P:(0,\infty)\rightarrow(0,\infty)$ be an operator monotone function
such that $P(1)=1$ \cite{Bhatia}. Let $\B(\H)$ be the set of all
linear operators on $\H$. A monotone metric at $\theta_{0}\in\Theta$
is an inner product $K_{\rho_{\theta_{0}}}^{(P)}(\cdot,\cdot)$ on
$\B(\H)$ defined by
\begin{equation}
K_{\rho_{\theta_{0}}}^{(P)}(X,Y)=\Tr X^{*}\left[(\bR_{\rho_{\theta_{0}}}P(\bL_{\rho_{\theta_{0}}}\bR_{\rho_{\theta_{0}}}^{-1}))^{-1}Y\right],
\end{equation}
where $\bL_{\rho_{\theta_{0}}}$ and $\bR_{\rho_{\theta_{0}}}$ are
super operators on $\B(\H)$ defined by
\begin{align}
\bL_{\rho_{\theta_{0}}}(X) & =\rho_{\theta_{0}}X,\\
\bR_{\rho_{\theta_{0}}}(X) & =X\rho_{\theta_{0}},
\end{align}
with a strictly positive operator $\rho_{\theta_{0}}\in\S$ \cite{monotone,cptni}.
Note that $\bL_{\rho_{\theta_{0}}}$ and $\bR_{\rho_{\theta_{0}}}$
are commutative so the super operator $(\bR_{\rho_{\theta_{0}}}P(\bL_{\rho_{\theta_{0}}}\bR_{\rho_{\theta_{0}}}^{-1}))^{-1}$
is well-defined. The monotone metric $K_{\rho_{\theta_{0}}}^{(P)}(\cdot,\cdot)$
has the monotonicity
\begin{equation}
K_{\rho_{\theta_{0}}}^{(P)}(X,X)\geq K_{T(\rho_{\theta_{0}})}^{(P)}(T(X),T(X))\label{eq:monotonicity}
\end{equation}
under any channel $T:\B(\H)\to\B(\H')$ mapping to another Hilbert
space $\H'$.
The logarithmic derivative $L_{i}^{(P)}$ and the Fisher information
matrix $J_{\theta_{0}}^{(P)}$ with respect to $P$ are
\begin{align}
L_{i}^{(P)} & =(\bR_{\theta_{0}}P(\bL_{\theta_{0}}\bR_{\theta_{0}}^{-1}))^{-1}\partial_{i}\rho_{\theta_{0}}\qquad(i=1,\dots,d),\\
J_{\theta_{0}}^{(P)} & =\left[K_{\rho_{\theta_{0}}}^{(P)}(\partial_{i}\rho_{\theta_{0}},\partial_{j}\rho_{\theta_{0}})\right]_{ij}=\left[\Tr\partial_{i}\rho_{\theta_{0}}L_{j}^{(P)}\right]_{ij}.
\end{align}
Because the {\color{black} linear} map $T^{(M)}:X\mapsto{\rm Diag}(\Tr XM_{1},\Tr XM_{2},\dots,\Tr XM_{|\X|})$
($X\in\B(\H)$) is a quantum channel for any POVM $M$ taking values
on $\X=\{1,2,\dots,|\X|\}$, the monotonicity (\ref{eq:monotonicity})
of $K_{\rho_{\theta_{0}}}^{(P)}(\cdot,\cdot)$ induces the matrix inequality
\begin{equation}
J_{\theta_{0}}^{(P)}\geq\left[K_{T^{(M)}(\rho_{\theta_{0}})}^{(P)}(T^{(M)}(\partial_{i}\rho_{\theta_{0}}),T^{(M)}(\partial_{j}\rho_{\theta_{0}}))\right]_{ij}=J_{\theta_{0}}^{(M)},\label{eq:monotonicity_POVM}
\end{equation}
where ${\rm Diag}(\cdots)$ indicates a diagonal matrix.
The inequality
(\ref{eq:monotonicity_POVM}) implies
\begin{align}
\Tr GJ_{\theta_{0}}^{(M)^{-1}} & \geq\min\{\Tr GV;\,V\geq J_{\theta_{0}}^{(P)^{-1}},V\text{ is a real matrix}\}\\
& =\Tr GJ_{\theta_{0}}^{(P)^{-1}}+\Tr\left|\sqrt{G}{\rm Im}J_{\theta_{0}}^{(P)^{-1}}\sqrt{G}\right|=:C_{\theta_{0},G}^{(P)},
\end{align}
due to (\ref{eq:trabs}).
{\color{black}
To obtain a tighter bound, we consider
maximizing $C_{\theta_{0},G}^{(P)}$ with respect to $P$.
In existing studies, such maximization was considered
in several models,
and SLD or RLD bounds were derived\cite{block_pure}.
In this section, we consider maximization of
$C_{\theta_{0},G}^{(P)}$ in general models.
}
In quantum state estimation,
a family of linear functions
\begin{equation}
\F=
\left\{ P^{(\beta)}(x)=\frac{1+\beta}{2}x+\frac{1-\beta}{2};\,\beta\in[-1,1]\right\} \label{eq:beta_family},
\end{equation}
is particularly important for monotone metrics among operator monotone
functions
{\color{black}
because the operator monotone function $P$ that
maximizes the lower bound $C^{(P)}_{\theta_0,G}$ always lies
in the family $\F$,
as we will show in Theorem \ref{thm:monotone_bound}.
}
We write $K_{\rho_{\theta_{0}}}^{(\beta)}$, $L_{i}^{(\beta)}$,
$J_{\theta_{0}}^{(\beta)}$, and $C_{\theta_{0},G}^{(\beta)}$ instead
of $K_{\rho_{\theta_{0}}}^{(P^{(\beta)})}$, $L_{i}^{(P^{(\beta)})}$,
$J_{\theta_{0}}^{(P^{(\beta)})}$, and $C_{\theta_{0},G}^{(P^{(\beta)})}$.
The $\beta$ logarithmic derivative $L_{i}^{(\beta)}$ is defined
by
\begin{equation}
\partial_{i}\rho_{\theta_{0}}=\frac{(1+\beta)}{2}\rho_{\theta_{0}}L_{i}^{(\beta)}+\frac{(1-\beta)}{2}L_{i}^{(\beta)}\rho_{\theta_{0}}.\label{eq:Lbeta_def}
\end{equation}
Let $\left\langle \cdot,\cdot\right\rangle ^{(\beta)}$ be an inner
product on $\B(\H)$ defined by
\begin{equation}
\left\langle X,Y\right\rangle ^{(\beta)}=\frac{1}{2}\Tr X^{*}\left\{ (1+\beta)\rho_{\theta_{0}}Y+(1-\beta)Y\rho_{\theta_{0}}\right\} .\label{eq:beta_inner}
\end{equation}
By using this inner product, the $\beta$ Fisher information matrix
can be written as
\begin{equation}
J_{\theta_{0}}^{(\beta)}=\left[K_{\rho_{\theta_{0}}}^{(\beta)}(\partial_{i}\rho_{\theta_{0}},\partial_{j}\rho_{\theta_{0}})\right]_{ij}=\left[\left\langle L_{i}^{(\beta)},L_{j}^{(\beta)}\right\rangle ^{(\beta)}\right]_{ij}.\label{eq:beta_fisher}
\end{equation}
Note that $L_{i}^{(0)}$ coincides with SLD $L_{i}^{(S)}$, and $L_{i}^{(1)}$
coincides with RLD $L_{i}^{(R)}$.
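Equation (\ref{eq:Lbeta_def}) is a Sylvester-type equation that decouples entrywise in the eigenbasis of $\rho_{\theta_{0}}$; a minimal numpy sketch (our own illustration, valid for a strictly positive state):

```python
import numpy as np

def beta_log_derivative(rho, drho, beta):
    """Solve drho = (1+b)/2 * rho L + (1-b)/2 * L rho for L.

    In the eigenbasis rho = U diag(lam) U^*, the equation decouples:
    L_{jk} = drho_{jk} / ((1+b)/2 * lam_j + (1-b)/2 * lam_k).
    """
    lam, U = np.linalg.eigh(rho)
    D = U.conj().T @ drho @ U                 # derivative in the eigenbasis
    W = 0.5 * ((1 + beta) * lam[:, None] + (1 - beta) * lam[None, :])
    return U @ (D / W) @ U.conj().T
```

Here $\beta=0$ returns the (Hermitian) SLD and $\beta=1$ the RLD.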
{\color{black} Let us prove the optimality of the family $\F$ of functions.
Because any operator monotone function
$P:(0,\infty)\rightarrow(0,\infty)$ is differentiable and concave\cite{Bhatia},
there always exists $\beta\in[-1,1]$ such that
$\left.\frac{{\rm d}}{{\rm d}x}P(x)\right|_{x=1}=\frac{1+\beta}{2}$
if $P(1)=1$.
Since the line $P^{(\beta)}(x)$ is a tangent to the concave function $P(x)$ at $x=1$,
\begin{equation}
P(x)\leq P^{(\beta)}(x)
\end{equation}
for any $x\in(0,\infty)$,
and this implies
\begin{equation}
J_{\theta_{0}}^{(P)}\geq J_{\theta_{0}}^{(\beta)}
\end{equation}
and
\begin{equation}
C_{\theta_{0},G}^{(\beta)}\geq C_{\theta_{0},G}^{(P)}.
\end{equation}
Further, $C_{\theta_{0},G}^{(\beta)}=C_{\theta_{0},G}^{(-\beta)}$
because $L_{i}^{(\beta)}$ is the conjugate transpose of $L_{i}^{(-\beta)}$.
Therefore, we do not need to consider operator monotone functions
other than $P^{(\beta)}$ for $\beta\in[0,1]$.
}
Collecting these results, we have the following theorem.
\begin{thm}\label{thm:monotone_bound}
For any locally unbiased estimator $(M,\hat{\theta})$ at $\theta_{0}$,
a $d\times d$ real positive matrix $G$, and an operator monotone
function $P:(0,\infty)\to(0,\infty)$ such that $P(1)=1$ and $\left.\frac{{\rm d}}{{\rm d}x}P(x)\right|_{x=1}=\frac{1+\beta}{2}$
with $\beta\in[-1,1]$,
\begin{equation}
\Tr GV_{\theta_{0}}[M,\hat{\theta}]\geq\Tr GJ_{\theta_{0}}^{(M)^{-1}}\geq C_{\theta_{0},G}^{(\left|\beta\right|)}\geq C_{\theta_{0},G}^{(P)}.
\end{equation}
\end{thm}
From this theorem, we have an inequality
\begin{equation}
\Tr GV_{\theta_{0}}[M,\hat{\theta}]\geq\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)},
\end{equation}
and we call the RHS of this inequality the maximum logarithmic derivative
bound.
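For small systems the maximization over $\beta$ can be carried out by a simple grid search. The following sketch (our own illustration, with a hypothetical qubit model) computes $C_{\theta_{0},G}^{(\beta)}$ from the defining equations and compares the maximum with the SLD ($\beta=0$) and RLD ($\beta=1$) endpoints:

```python
import numpy as np

def beta_bound(rho, drho, G, beta):
    """C^(beta) = Tr G J^{-1} + Tr|sqrt(G) Im(J^{-1}) sqrt(G)| at a strictly positive rho."""
    lam, U = np.linalg.eigh(rho)
    W = 0.5 * ((1 + beta) * lam[:, None] + (1 - beta) * lam[None, :])
    Ls = [U @ ((U.conj().T @ dr @ U) / W) @ U.conj().T for dr in drho]  # beta log derivatives
    d = len(drho)
    J = np.array([[np.trace(drho[i] @ Ls[j]) for j in range(d)] for i in range(d)])
    Jinv = np.linalg.inv(J)
    w, V = np.linalg.eigh(G)
    sqrtG = V @ np.diag(np.sqrt(w)) @ V.T
    trace_norm = np.linalg.svd(sqrtG @ Jinv.imag @ sqrtG, compute_uv=False).sum()
    return np.trace(G @ Jinv).real + trace_norm

# hypothetical qubit model rho(theta) = (I + theta_1 sx + theta_2 sy)/2 at theta_0 = (0.3, 0.2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
rho0 = 0.5 * (np.eye(2) + 0.3 * sx + 0.2 * sy)
drho = [0.5 * sx, 0.5 * sy]
G = np.eye(2)
mld = max(beta_bound(rho0, drho, G, b) for b in np.linspace(0.0, 1.0, 1001))
```

By construction the grid maximum is at least as large as both endpoint bounds.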
\section{Equivalent expressions of Holevo bound\label{sec:rewrite_holevo}}
In this section, we give a simpler form of the Holevo bound by using
a commutation operator. Let $\S=\left\{ \rho_{\theta};\,\theta\in\Theta\subset\R^{d}\right\} $
be a smooth parametric family of density operators on a finite dimensional
Hilbert space $\H$. Let $\D_{\rho_{\theta_{0}}}:\B(\H)\to\B(\H)$
be the commutation operator with respect to a faithful state $\rho_{\theta_{0}}\in\S$
on the set of linear operators $\B(\H)$ on $\H$ defined by
\begin{equation}
\D_{\rho_{\theta_{0}}}(X)\rho_{\theta_{0}}+\rho_{\theta_{0}}\D_{\rho_{\theta_{0}}}(X)=\sqrt{-1}(X\rho_{\theta_{0}}-\rho_{\theta_{0}}X),\label{eq:D_def}
\end{equation}
for $X\in\B(\H)$ \cite{holevo}. The commutation operator can also
be defined by
\begin{equation}
\D_{\rho_{\theta_{0}}}=\frac{1}{\ii}(\bL_{\rho_{\theta_{0}}}-\bR_{\rho_{\theta_{0}}})(\bL_{\rho_{\theta_{0}}}+\bR_{\rho_{\theta_{0}}})^{-1}.
\end{equation}
When $X$ is a Hermitian operator, $\D_{\rho_{\theta_{0}}}(X)$
is also a Hermitian operator. Through the commutation operator, the
$\beta$ logarithmic derivatives $\{L_{i}^{(\beta)}\}_{i=1}^{d}$
and the corresponding inner product are linked by the following relations:
\begin{align}
L_{i}^{(\beta)} & =(I+\beta\sqrt{-1}\D_{\rho_{\theta_{0}}})^{-1}(L_{i}^{(0)})\qquad(i=1,\dots,d),\label{eq:bLD_SLD}\\
\left\langle A,B\right\rangle^{(\beta)} & =\left\langle A,(I+\beta\sqrt{-1}\D_{\rho_{\theta_{0}}})(B)\right\rangle ^{(0)}\qquad(A,B\in\B(\H))\label{eq:bLD_SLD_inner}
\end{align}
for $\beta\in[0,1]$. Note that $I+\beta\ii\D_{\rho_{\theta_{0}}}$
is invertible for $\rho_{\theta_{0}}>0$ since the operator norm of
$\D_{\rho_{\theta_{0}}}$ is \\
$\max\{\left|\frac{\lambda-\mu}{\lambda+\mu}\right|;\lambda,\mu\text{ are eigenvalues of }\text{\ensuremath{\rho_{\theta_{0}}}}\}<1$.
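In the eigenbasis of $\rho_{\theta_{0}}$ the commutation operator acts entrywise, which gives a direct way to compute it numerically; a minimal sketch (our own illustration):

```python
import numpy as np

def commutation_operator(rho, X):
    """D_rho(X), defined by D(X) rho + rho D(X) = i (X rho - rho X).

    In the eigenbasis rho = U diag(lam) U^*, the definition decouples to
    D(X)_{jk} = i (lam_k - lam_j) / (lam_j + lam_k) * X_{jk}.
    """
    lam, U = np.linalg.eigh(rho)
    Y = U.conj().T @ X @ U
    F = (lam[None, :] - lam[:, None]) / (lam[None, :] + lam[:, None])
    return U @ (1j * F * Y) @ U.conj().T
```

The entrywise factors $(\lambda_{k}-\lambda_{j})/(\lambda_{j}+\lambda_{k})$ make the stated norm bound evident, and Hermiticity of $X$ is preserved.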
For details about the commutation operator $\D_{\rho_{\theta_{0}}}$,
see Appendix \ref{sec:Dinv}. By considering a $\D_{\rho_{\theta_{0}}}$
invariant extension $\tilde{\T}\supset\T$ of the SLD tangent space
$\T:={\rm span}_{\R}\,\{L_{i}^{(S)}\}_{i=1}^{d}$, the minimization
problem to define the Holevo bound is simplified as follows.
\begin{thm}
\label{thm:rewrite_holevo}Suppose that a quantum statistical model
$\S=\left\{ \rho_{\theta};\theta\in\Theta\subset\R^{d}\right\} $
on $\H$ has a $\D_{\rho_{\theta_{0}}}$ invariant extension $\tilde{\T}$
of the SLD tangent space of $\S$ at $\theta=\theta_{0}$. Let $\{D_{j}^{(S)}\}_{j=1}^{r}$
be a basis of $\tilde{\T}$. The Holevo bound defined by (\ref{eq:holevo_bound})
is rewritten as
\begin{align}
C_{\theta_{0},G}^{(H)} & =\min_{F}\left\{ \Tr GZ+\Tr\left|\sqrt{G}{\rm Im}Z\sqrt{G}\right|;\,\,Z= F^{\intercal}\Sigma F,\right.\\
& \qquad F\text{ is an \ensuremath{r\times d} real matrix satisfying } F^{\intercal}{\rm \,Re}(\tau)=I\},\label{eq:min_F}
\end{align}
where $\Sigma$ and $\tau$ are the $r\times r$ and $r\times d$ complex
matrices whose $(i,j)$th entries are given by $\Sigma_{ij}=\Tr\rho_{\theta_{0}}D_{j}^{(S)}D_{i}^{(S)}$
and $\tau_{ij}=\Tr\rho_{\theta_{0}}L_{j}^{(S)}D_{i}^{(S)}$.
\end{thm}
\begin{proof}
Let $\tilde{\T}^{\perp}$ be the orthogonal complement of $\tilde{\T}$
in the set $\B_{h}(\H)$ of Hermitian operators with respect to the
inner product $\left\langle \cdot,\cdot\right\rangle ^{(0)}$, and
let $\P:\B_{h}(\H)\to\tilde{\T}$ and $\P^{\perp}:\B_{h}(\H)\to\tilde{\T}^{\perp}$
be the projections associated with the decomposition $\B_{h}(\H)=\tilde{\T}\oplus\tilde{\T}^{\perp}$.
For $X\in\tilde{\T}^{\perp}$ and $Y\in\tilde{\T}$,
\begin{align}
\Tr X\rho_{\theta_{0}}Y & =\left\langle X,Y\right\rangle ^{(1)}=\left\langle X,(I+\sqrt{-1}\D_{\rho_{\theta_{0}}})(Y)\right\rangle ^{(0)}=0.\label{eq:SLD_RLD_orth}
\end{align}
Let $\{B_{j}\}_{j=1}^{d}$ be observables achieving the minimum in
(\ref{eq:holevo_bound}). $\{\P(B_{j})\}_{j=1}^{d}$ also satisfies
the local unbiasedness condition
\begin{equation}
\Tr\partial_{i}\rho_{\theta_{0}}\P(B_{j})=\left\langle L_{i}^{(S)},\P(B_{j})\right\rangle ^{(0)}=\left\langle L_{i}^{(S)},B_{j}\right\rangle ^{(0)}=\delta_{ij}.
\end{equation}
Further, because of (\ref{eq:SLD_RLD_orth}),
\begin{align}
Z_{ij}(B) & =\Tr B_{i}\rho_{\theta_{0}}B_{j}=\Tr\left\{ \P(B_{i})+\P^{\perp}(B_{i})\right\} \rho_{\theta_{0}}\left\{ \P(B_{j})+\P^{\perp}(B_{j})\right\} \\
& =\Tr\P(B_{i})\rho_{\theta_{0}}\P(B_{j})+\Tr\P^{\perp}(B_{i})\rho_{\theta_{0}}\P^{\perp}(B_{j})=Z_{ij}(\P(B))+Z_{ij}(\P^{\perp}(B)).
\end{align}
This decomposition implies $Z(B)\geq Z(\P(B))$; thus we may assume $\{B_{j}\}_{j=1}^{d}\subset\tilde{\T}$.
Observables $\{B_{j}\}_{j=1}^{d}\subset\tilde{\T}$ can be expressed
by $B_{j}=\sum_{k}F_{j}^{k}D_{k}^{(S)}$ with an $r\times d$ real
matrix $F$. By using $F$, the local unbiasedness condition in (\ref{eq:holevo_bound})
is written as
\begin{equation}
\left\langle L_{i}^{(S)},B_{j}\right\rangle ^{(0)}=F_{j}^{k}\left\langle L_{i}^{(S)},D_{k}^{(S)}\right\rangle ^{(0)}=F_{j}^{k}\left({\rm Re}\tau\right)_{ik}=\delta_{ij},
\end{equation}
and $Z(B)$ is written as $Z(B)=F^{\intercal}\Sigma F$.
\end{proof}
Due to this theorem, we can easily see $C_{\theta_{0},G}^{(H)}\{\rho_{\theta}^{\otimes n}\}=\frac{1}{n}C_{\theta_{0},G}^{(H)}\{\rho_{\theta}\}$.
In this paper, we use a further rewriting of the Holevo bound, as follows.
\begin{cor}
\label{cor:easy_holevo2}Suppose $D_{i}^{(S)}=L_{i}^{(S)}$ for $1\leq i\leq d$
in Theorem \ref{thm:rewrite_holevo}, and let $R=\left({\rm Re}\Sigma\right)^{-1}\Sigma\left({\rm Re}\Sigma\right)^{-1}=\begin{pmatrix}R_{1} & R_{2}^{*}\\
R_{2} & R_{3}
\end{pmatrix}$ with $d\times d$, $(r-d)\times d$, and $(r-d)\times(r-d)$ block
matrices $R_{1},R_{2},$ and $R_{3}$. The Holevo bound is rewritten
as
\begin{align}
C_{\theta_{0},G}^{(H)} & =\min_{f}\left\{ \Tr GZ(f)+\Tr\left|\sqrt{G}{\rm Im}Z(f)\sqrt{G}\right|;\right.\\
& \qquad f\text{ is an \ensuremath{(r-d)\times d} real matrix}\},
\end{align}
where
\begin{equation}
Z(f)=(I,f^{\intercal})R\begin{pmatrix}I\\
f
\end{pmatrix}=R_{1}+R_{2}^{*}f+f^{*}R_{2}+f^{*}R_{3}f.
\end{equation}
\end{cor}
\begin{proof}
Let $\Sigma=\begin{pmatrix}\Sigma_{1} & \Sigma_{2}^{*}\\
\Sigma_{2} & \Sigma_{3}
\end{pmatrix}$ be partitioned in the same manner as $R$. For an $r\times d$ real
matrix $F$, the condition $F^{\intercal}{\rm \,Re}(\tau)=I$ implies
\begin{equation}
{\rm Re}(\Sigma)F=\begin{pmatrix}I\\
f
\end{pmatrix}
\end{equation}
with an $(r-d)\times d$ real matrix $f$ because $\tau=\begin{pmatrix}\Sigma_{1}\\
\Sigma_{2}
\end{pmatrix}$. By using $f$, $F^{\intercal}\Sigma F$ in Theorem \ref{thm:rewrite_holevo}
can be written as
\begin{align}
F^{\intercal}\Sigma F & =(I,f^{\intercal})\left({\rm Re}\Sigma\right)^{-1}\Sigma\left({\rm Re}\Sigma\right)^{-1}\begin{pmatrix}
I\\
f
\end{pmatrix}=(I,f^{\intercal})R\begin{pmatrix}
I\\
f
\end{pmatrix}.
\end{align}
\end{proof}
Actually, $R$ in Corollary \ref{cor:easy_holevo2} coincides with
the inverse RLD Fisher information matrix of a supermodel $\tilde{\S}\supset\S$
of $\S$ that has SLDs $\{D_{i}^{(S)}\}_{i=1}^{r}$ due to Lemma
\ref{lem:Dinv} in Appendix \ref{sec:Dinv}. Furthermore, the $\beta$
Fisher information matrix can be calculated directly via the Schur complement
of ${\rm Re}R+\beta\ii{\rm Im}R$, as follows.
\begin{lem}
Let $D_{i}^{(\beta)}=(I+\beta\sqrt{-1}\D_{\rho_{\theta_{0}}})^{-1}(D_{i}^{(S)})$
be the $\beta$ logarithmic derivative with respect to the extended
SLD $D_{i}^{(S)}$ ($1\leq i\leq r$) at $\theta_{0}$ given in Corollary
\ref{cor:easy_holevo2}, and let
\begin{equation}
\tilde{J}_{\theta_{0}}^{(\beta)}=\left[\left\langle D_{i}^{(\beta)},D_{j}^{(\beta)}\right\rangle ^{(\beta)}\right]_{1\leq i,j\leq r}
\end{equation}
be the extended $\beta$ Fisher information matrix. Then $R$ given
in Corollary \ref{cor:easy_holevo2} satisfies
\begin{equation}
\tilde{J}_{\theta_{0}}^{(\beta)^{-1}}={\rm Re}R+\beta\ii{\rm Im}R.\label{eq:ex_invbJ}
\end{equation}
Further, the inverse $\beta$ Fisher information matrix $J_{\theta_{0}}^{(\beta)^{-1}}$ can
be represented by $R$ as
\begin{equation}
J_{\theta_{0}}^{(\beta)^{-1}}=R_{1}^{(\beta)}-R_{2}^{(\beta)^{*}}R_{3}^{(\beta)^{-1}}R_{2}^{(\beta)},\label{eq:R_bLDFisher}
\end{equation}
where $R_{1}^{(\beta)}={\rm Re}R_{1}+\beta\ii{\rm Im}R_{1}$, $R_{2}^{(\beta)}={\rm Re}R_{2}+\beta\ii{\rm Im}R_{2}$,
$R_{3}^{(\beta)}={\rm Re}R_{3}+\beta\ii{\rm Im}R_{3}.$
\end{lem}
\begin{proof}
The proof of (\ref{eq:ex_invbJ}) is given in Lemma \ref{lem:Dinv}.
The proof of (\ref{eq:R_bLDFisher}) is immediate, because $J_{\theta_{0}}^{(\beta)}$
is the $(1,1)$ block of
\begin{equation}
\tilde{J}_{\theta_{0}}^{(\beta)}=\begin{pmatrix}R_{1}^{(\beta)} & R_{2}^{(\beta)^{*}}\\
R_{2}^{(\beta)} & R_{3}^{(\beta)}
\end{pmatrix}^{-1},
\end{equation}
and it is the same as the inverse of $\tilde{J}_{\theta_{0}}^{(\beta)^{-1}}/R_{3}^{(\beta)}=R_{1}^{(\beta)}-R_{2}^{(\beta)^{*}}R_{3}^{(\beta)^{-1}}R_{2}^{(\beta)}$,
where $\tilde{J}_{\theta_{0}}^{(\beta)^{-1}}/R_{3}^{(\beta)}$ is
the Schur complement given in Appendix \ref{sec:schur}.
\end{proof}
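The block-inverse identity used here, $(M^{-1})_{11}=(M/M_{22})^{-1}$ for a positive definite $M$, is easy to confirm numerically (a spot check with hypothetical matrices, our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
r, d = 4, 2                                   # hypothetical extended and original dimensions
X = rng.normal(size=(r, r)) + 1j * rng.normal(size=(r, r))
M = X @ X.conj().T + np.eye(r)                # Hermitian positive definite, in the role of R^(beta)
M1, M2s, M2, M3 = M[:d, :d], M[:d, d:], M[d:, :d], M[d:, d:]

Jtilde = np.linalg.inv(M)                     # in the role of the extended Fisher matrix
block11 = Jtilde[:d, :d]                      # its (1,1) block
schur = M1 - M2s @ np.linalg.inv(M3) @ M2     # Schur complement M / M3
```

The inverse of the $(1,1)$ block of $M^{-1}$ agrees with the Schur complement.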
From this lemma, a relation between the Holevo bound and $C_{\theta_{0},G}^{(\beta)}$
can be obtained directly.
\begin{lem}
For any $\beta\in[0,1]$,
\begin{equation}
C_{\theta_{0},G}^{(H)}\geq C_{\theta_{0},G}^{(\beta)}.
\end{equation}
\end{lem}
\begin{proof}
Let $Z^{(\beta)}(f):=(I,f^{\intercal})({\rm Re}R+\beta\ii{\rm Im}R)\begin{pmatrix}I\\
f
\end{pmatrix}.$ Then we see
\begin{align}
C_{\theta_{0},G}^{(H)} & =\min_{f}\left\{ \Tr GZ(f)+\Tr\left|\sqrt{G}{\rm Im}Z(f)\sqrt{G}\right|;\right.\\
& \qquad f\text{ is an \ensuremath{(r-d)\times d} real matrix}\},\\
& \geq\min_{f}\left\{ \Tr GZ^{(\beta)}(f)+\Tr\left|\sqrt{G}{\rm Im}Z^{(\beta)}(f)\sqrt{G}\right|;\right.\\
& \qquad f\text{ is an \ensuremath{(r-d)\times d} real matrix}\}\\
& \geq\min_{f}\left\{ \Tr GZ^{(\beta)}(f)+\Tr\left|\sqrt{G}{\rm Im}Z^{(\beta)}(f)\sqrt{G}\right|;\right.\\
& \qquad f\text{ is an \ensuremath{(r-d)\times d} complex matrix}\}=C_{\theta_{0},G}^{(\beta)}.
\end{align}
The last equality is obtained from
\begin{align}
Z^{(\beta)}(f) & =R_{1}^{(\beta)}-R_{2}^{(\beta)^{*}}R_{3}^{(\beta)^{-1}}R_{2}^{(\beta)}+(f^{*}+R_{2}^{(\beta)^{*}}R_{3}^{(\beta)^{-1}})R_{3}^{(\beta)}(f+R_{3}^{(\beta)^{-1}}R_{2}^{(\beta)})\\
& \geq R_{1}^{(\beta)}-R_{2}^{(\beta)^{*}}R_{3}^{(\beta)^{-1}}R_{2}^{(\beta)}=J_{\theta_{0}}^{(\beta)^{-1}}
\end{align}
and the minimum is achieved when $f=-R_{3}^{(\beta)^{-1}}R_{2}^{(\beta)}$.
\end{proof}
\textcolor{black}{For a numerical computation of the Holevo bound, it
has been proposed to apply a linear semi-definite program\cite{sdp}. The
minimization problem given in Corollary \ref{cor:easy_holevo2} can
be rewritten as a linear semi-definite program:
\begin{equation}
{\rm minimize}_{f,V}\,\Tr GV
\end{equation}
\begin{equation}
\text{subject to }\begin{pmatrix}V & \begin{pmatrix}I_{d} & f^{\intercal}\end{pmatrix}\sqrt{R}\\
\sqrt{R}\begin{pmatrix}I_{d}\\
f
\end{pmatrix} & I_{r}
\end{pmatrix}\geq0,\label{eq:sdp2}
\end{equation}
where $V$ is a $d\times d$ real matrix, and $I_{d}$ and $I_{r}$ are
the identity matrices of size $d$ and $r$. This reformulation holds because
the inequality $V\geq Z(f)=\begin{pmatrix}I_{d} & f^{\intercal}\end{pmatrix}R\begin{pmatrix}I_{d}\\
f
\end{pmatrix}$ is equivalent to (\ref{eq:sdp2}), as seen by taking the Schur complement
of (\ref{eq:sdp2}).}
\textcolor{black}{
The relationship among the bounds introduced in
this paper is
\begin{equation}
2C_{\theta_{0},G}^{(S)}\geq C_{\theta_{0},G}^{(H)}\geq\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}\geq\max\{C_{\theta_{0},G}^{(S)},C_{\theta_{0},G}^{(R)}\}.
\end{equation}
In the first inequality, $2C_{\theta_{0},G}^{(S)}$ is known as an
upper bound\cite{upper1,upper2} of the Holevo bound.
This inequality can be shown as follows.
In Corollary \ref{cor:easy_holevo2}, it can be seen that
\begin{equation}
\Tr G\,{\rm Re}Z(f^{(S)})\leq C_{\theta_{0},G}^{(H)}\leq\Tr G\,{\rm Re}Z(f^{(S)})+\Tr\left|\sqrt{G}{\rm Im}Z(f^{(S)})\sqrt{G}\right|,\label{eq:fac2proof}
\end{equation}
where $f^{(S)}:=-({\rm Re}R_{3})^{-1}({\rm Re}R_{2})$. Because
\begin{equation}
{\rm Re}Z(f^{(S)})=J_{\theta_{0}}^{(S)^{-1}}\geq\ii{\rm Im}Z(f^{(S)}),
\end{equation}
we obtain the inequality
\begin{equation}
2C_{\theta_{0},G}^{(S)}\geq C_{\theta_{0},G}^{(H)}.\label{eq:fac2}
\end{equation}
}
\textcolor{black}{For any sufficiently smooth model $\left\{ \rho_{\theta};\theta\in\Theta\subset\R^{d}\right\} $,
it is known that a sequence of i.i.d. extension models $\left\{ \rho_{\theta_{0}+h/\sqrt{n}}^{\otimes n};h\in\R^{d}\right\} $
with a local parameter $h\in\R^{d}$ has a sequence of estimators
that achieves the Holevo bound $C_{\theta_{0},G}^{(H)}$ asymptotically
by using the theory of the quantum local asymptotic normality \cite{YFG,qlan2,guta}.
On the other hand, $C_{\theta_{0},G}^{(S)},C_{\theta_{0},G}^{(R)}$,
and $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$ cannot
always be achieved by considering i.i.d. extensions. Therefore, these
bounds are informative only when they coincide with the Holevo
bound. It is immediate from Theorem \ref{thm:rewrite_holevo}
that $C_{\theta_{0},G}^{(H)}=C_{\theta_{0},G}^{(R)}$ if the SLDs are
$\D_{\rho_{\theta_{0}}}$ invariant. It can also be seen from (\ref{eq:fac2proof})
that $C_{\theta_{0},G}^{(H)}=C_{\theta_{0},G}^{(S)}$ if and only
if ${\rm Im}Z(f^{(S)})=0$. Since the maximum logarithmic derivative
bound $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$ is at least as large
as both the SLD bound and the RLD bound, if the Holevo bound $C_{\theta_{0},G}^{(H)}$
is equal to the SLD bound or the RLD bound, then $C_{\theta_{0},G}^{(H)}=\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
also holds. In Section \ref{sec:codimension}, we provide another
case in which $C_{\theta_{0},G}^{(H)}=\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$ holds
that is different from the SLD and RLD cases and admits an explicit solution.
Further, we give examples of models that can achieve $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
(see Example \ref{exa:gauss} and \ref{exa:pure}).}
\textcolor{black}{When $\rho_{\theta_{0}}$ is not strictly positive,
the $\beta$ logarithmic derivatives $\{L_{i}^{(\beta)}\}_{i=1}^{d}$
that satisfy (\ref{eq:Lbeta_def}) for $-1<\beta<1$ can be defined
on the quotient space $\B(\H)/\sim_{\rho_{\theta_{0}}}$ with respect
to an equivalence relation defined by
\begin{equation}
A\sim_{\rho_{\theta_{0}}}B\Leftrightarrow A-B\in{\rm Ker}\bL_{\rho_{\theta_{0}}}\cap{\rm Ker}\bR_{\rho_{\theta_{0}}}.
\end{equation}
The inner product $\left\langle \cdot,\cdot\right\rangle ^{(\beta)}$
on $\B(\H)/\sim_{\rho_{\theta_{0}}}$ and the $\beta$ Fisher information
matrix $J_{\theta_{0}}^{(\beta)}$ can be also defined by
(\ref{eq:beta_inner})
and
(\ref{eq:beta_fisher}).
The commutation operator
$\D_{\rho_{\theta_{0}}}$
is defined by (\ref{eq:D_def}) as a super operator on $\B(\H)/\sim_{\rho_{\theta_{0}}}$.
For $\beta=1$, the RLDs $\{L_{i}^{(R)}\}_{i=1}^{d}$ cannot be defined;
however, $\left\langle \cdot,\cdot\right\rangle ^{(1)}$ can be defined
as a pre-inner product on $\B(\H)/\sim_{\rho_{\theta_{0}}}$ and (\ref{eq:bLD_SLD_inner})
is valid.
Theorem \ref{thm:rewrite_holevo} and Corollary \ref{cor:easy_holevo2}
also hold in a similar way by working with $\B(\H)/\sim_{\rho_{\theta_{0}}}$
instead of $\B(\H)$. }
\section{\label{sec:codimension}$\protect\D_{\rho_{\theta_{0}}}$ invariant
extension by one dimension}
In Corollary \ref{cor:easy_holevo2}, if $\{D_{j}^{(S)}\}_{j=d+1}^{r}$
are orthogonal to $\T={\rm span}_{\R}\{D_{i}^{(S)}\}_{i=1}^{d}$ with respect
to the inner product $\left\langle \cdot,\cdot\right\rangle ^{(0)}$,
then ${\rm Re}R_{2}=0$. Further, if $r=d+1$ and $\left\langle D_{r}^{(S)},D_{r}^{(S)}\right\rangle ^{(0)}=1$,
then $R$ takes the form
\begin{equation}
R=\begin{pmatrix}A & \sqrt{-1}\ket b\\
-\sqrt{-1}\bra b & 1
\end{pmatrix}\label{eq:D-1model}
\end{equation}
with a real vector $\ket b\in\R^{d}$. In this case,
\begin{equation}
J_{\theta_{0}}^{(\beta)^{-1}}={\rm Re}A+\beta\ii{\rm Im}A-\beta^{2}\ket b\bra b
\end{equation}
due to (\ref{eq:R_bLDFisher}), and
\begin{equation}
C_{\theta_{0},G}^{(\beta)}=\Tr G{\rm Re}A+\beta\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|-\beta^{2}\bra bG\ket b.
\label{eq:bLD_Ab}
\end{equation}
Therefore $A$ and $\ket b\bra b$ can be expressed by $J_{\theta_{0}}^{(R)^{-1}}$
and $J_{\theta_{0}}^{(S)^{-1}}$ as
\begin{equation}
A=J_{\theta_{0}}^{(S)^{-1}}+\sqrt{-1}{\rm Im}(J_{\theta_{0}}^{(R)^{-1}})\label{eq:A_relation}
\end{equation}
\begin{equation}
\ket b\bra b=J_{\theta_{0}}^{(S)^{-1}}-{\rm Re}(J_{\theta_{0}}^{(R)^{-1}}).\label{eq:b_relation}
\end{equation}
Let us calculate the maximum logarithmic derivative bound
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=\max_{0\leq\beta\leq1}\Tr G{\rm Re}A+\beta\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|-\beta^{2}\bra bG\ket b.\label{eq:max_program}
\end{equation}
If $\ket b\not=0$, the quadratic function
\begin{equation}
g_{1}:\beta\mapsto\Tr G{\rm Re}A+\beta\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|-\beta^{2}\bra bG\ket b
\end{equation}
is maximized at
\begin{equation}
\beta=\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b}>0.
\end{equation}
If $\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b}\geq1$,
$C_{\theta_{0},G}^{(\beta)}$ is maximized at $\beta=1$, thus
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=g_{1}(1)=\Tr G{\rm Re}A+\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|-\bra bG\ket b.\label{eq:max1}
\end{equation}
If $\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b}<1$,
$C_{\theta_{0},G}^{(\beta)}$ is maximized at $\beta=\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b}$,
thus
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=g_{1}(\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b})=\Tr G{\rm Re}A+\frac{\left\{ \Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|\right\} ^{2}}{4\bra bG\ket b}.\label{eq:max2}
\end{equation}
When $\ket b=0$, $g_{1}$ is a linear function and $C_{\theta_{0},G}^{(\beta)}$
is maximized at $\beta=1$, so
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=g_{1}(1)=\Tr G{\rm Re}A+\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|.
\end{equation}
Collecting these results, we have the following theorem.
\begin{thm}
\label{thm:mld_1}When the model has a $d+1$ dimensional $\D_{\rho_{\theta_{0}}}$
invariant extended SLD tangent space, the maximum logarithmic derivative
bound is
\begin{equation}
\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}=\begin{cases}
C_{\theta_{0},G}^{(1)} & \text{if }\hat{\beta}\geq1,\\
C_{\theta_{0},G}^{(\hat{\beta})} & \text{otherwise},
\end{cases}
\end{equation}
where
\begin{equation}
\hat{\beta}=\begin{cases}
\frac{\Tr\left|\sqrt{G}{\rm Im}J_{\theta_{0}}^{(R)^{-1}}\sqrt{G}\right|}{2\Tr G\left\{ J_{\theta_{0}}^{(S)^{-1}}-{\rm Re}(J_{\theta_{0}}^{(R)^{-1}})\right\} } & \text{if }J_{\theta_{0}}^{(S)^{-1}}\not={\rm Re}(J_{\theta_{0}}^{(R)^{-1}}),\\
\infty & \text{otherwise}.
\end{cases}
\end{equation}
\end{thm}
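A direct numerical evaluation of Theorem \ref{thm:mld_1} via (\ref{eq:bLD_Ab}) is straightforward; the following sketch (our own illustration, with hypothetical inputs $G$, $A$, and $\ket b$) implements the clipped-quadratic maximization:

```python
import numpy as np

def mld_bound(G, ReA, ImA, b):
    """max over beta in [0,1] of Tr(G ReA) + beta Tr|sqrtG ImA sqrtG| - beta^2 <b|G|b>."""
    w, V = np.linalg.eigh(G)
    sqrtG = V @ np.diag(np.sqrt(w)) @ V.T
    c0 = np.trace(G @ ReA)
    c1 = np.linalg.svd(sqrtG @ ImA @ sqrtG, compute_uv=False).sum()  # trace norm
    c2 = b @ G @ b
    if c2 < 1e-15:                       # |b> = 0: linear in beta, maximized at beta = 1
        return c0 + c1
    beta = min(c1 / (2.0 * c2), 1.0)     # hat(beta), clipped to the feasible interval
    return c0 + beta * c1 - beta**2 * c2
```

The two branches of the theorem correspond to the clipped ($\hat\beta\geq1$) and interior ($\hat\beta<1$) cases of the quadratic.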
In general, even when $R$ takes the form (\ref{eq:D-1model}),
equality in
\begin{equation}
C_{\theta_{0},G}^{(H)}\geq\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}
\end{equation}
is not always achieved. However, when $d=2$, the two bounds
coincide.
\begin{thm}
\label{thm:holevo_mld}When $d=2$ and the model has a three dimensional
$\D_{\rho_{\theta_{0}}}$ invariant extended SLD tangent space, the
Holevo bound $C_{\theta_{0},G}^{(H)}$ is the same as $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
which is given explicitly in Theorem \ref{thm:mld_1}.
\end{thm}
\begin{proof}
By using Corollary \ref{cor:easy_holevo2}, the Holevo bound is
\begin{eqnarray}
C_{\theta_{0},G}^{(H)} & = & \min_{f}\left\{ \Tr G\,Z(f)+\Tr\left|\sqrt{G}{\rm Im}Z(f)\sqrt{G}\right|\right\} ,\label{eq:min_f_2}
\end{eqnarray}
where
\begin{equation}
Z(f)={\rm Re}A+\ket f\bra f+\sqrt{-1}\left({\rm Im}A+\ket b\bra f-\ket f\bra b\right).
\end{equation}
Letting
\begin{equation}
\ket{\hat{b}}:=\sqrt{G}\ket b,
\end{equation}
\begin{equation}
\ket{\hat{f}}:=\sqrt{G}\ket f,
\end{equation}
and
\begin{equation}
\hat{A}:=\sqrt{G}{\rm Im}A\sqrt{G}=a\begin{pmatrix}0 & -1\\
1 & 0
\end{pmatrix}
\end{equation}
with $a\in\R$, then
\begin{eqnarray}
C_{\theta_{0},G}^{(H)} & = & \min_{\hat{f}}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{f}}{\hat{f}}+\Tr\left|\hat{A}+\ket{\hat{b}}\bra{\hat{f}}-\ket{\hat{f}}\bra{\hat{b}}\right|\right\} \\
& = & \min_{\hat{f}}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{f}}{\hat{f}}+\Tr\left|(a+\hat{b}_{2}\hat{f}_{1}-\hat{b}_{1}\hat{f}_{2})\begin{pmatrix}0 & -1\\
1 & 0
\end{pmatrix}\right|\right\} \\
& = & \min_{\hat{f}}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{f}}{\hat{f}}+2\left|a+\hat{b}_{2}\hat{f}_{1}-\hat{b}_{1}\hat{f}_{2}\right|\right\} .
\end{eqnarray}
By representing $\ket{\hat{f}}$ as $\ket{\hat{f}}=s\begin{pmatrix}\hat{b}_{1}\\
\hat{b}_{2}
\end{pmatrix}+t\begin{pmatrix}\hat{b}_{2}\\
-\hat{b}_{1}
\end{pmatrix}$ with $s,t\in\R$, we have
\begin{align}
C_{\theta_{0},G}^{(H)} & =\min_{s,t}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{b}}{\hat{b}}s^{2}+\braket{\hat{b}}{\hat{b}}t^{2}+2\left|a+\braket{\hat{b}}{\hat{b}}t\right|\right\}\\
& =\min_{t}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{b}}{\hat{b}}t^{2}+2\left|a+\braket{\hat{b}}{\hat{b}}t\right|\right\}\\
& =\min_{t}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{b}}{\hat{b}}t^{2}+2\left|\left|a\right|+\braket{\hat{b}}{\hat{b}}t\right|\right\}\\
& \leq\min_{\left|a\right|+\braket{\hat{b}}{\hat{b}}t\geq0}\left\{ \Tr G\,{\rm Re}A+\braket{\hat{b}}{\hat{b}}t^{2}+2\left|a\right|+2\braket{\hat{b}}{\hat{b}}t\right\} .\label{eq:min_program}
\end{align}
Let
\begin{equation}
g_{2}(t)=\Tr G\,{\rm Re}A+\braket{\hat{b}}{\hat{b}}t^{2}+2\left|a\right|+2\braket{\hat{b}}{\hat{b}}t.
\end{equation}
Because of (\ref{eq:max1}), (\ref{eq:max2}), and $C_{\theta_{0},G}^{(H)}\geq\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$,
we have
\begin{equation}
g_{2}(t)\geq\begin{cases}
\Tr G\,{\rm Re}A+2\left|a\right|-\braket{\hat{b}}{\hat{b}}, & \text{if }\left|a\right|\geq\braket{\hat{b}}{\hat{b}},\\
\Tr G\,{\rm Re}A+\frac{a^{2}}{\braket{\hat{b}}{\hat{b}}} & \text{otherwise},
\end{cases}
\end{equation}
where the equality is achieved at $t=\max\{-\frac{\left|a\right|}{\braket{\hat{b}}{\hat{b}}},-1\}$.
\end{proof}
Let us consider the Lagrangian dual of the quadratic program
(\ref{eq:min_program}). The Lagrangian function is
\begin{align}
\mathscr{L}(t,\lambda) & =\Tr G\,{\rm Re}A+2\left|a\right|+2\braket{\hat{b}}{\hat{b}}t+\braket{\hat{b}}{\hat{b}}t^{2}-\lambda(\left|a\right|+\braket{\hat{b}}{\hat{b}}t)\\
& =\Tr G\,{\rm Re}A+2\left|a\right|-\lambda\left|a\right|+\braket{\hat{b}}{\hat{b}}((2-\lambda)t+t^{2}).
\end{align}
For any fixed $\lambda\in\R,$
$\mathscr{L}(t,\lambda)$ is minimized
at $t=\frac{\lambda-2}{2}$, and the Lagrangian dual function is
\begin{align}
g(\lambda) & =\min_{t}\mathscr{L}(t,\lambda)=\mathscr{L}(\frac{\lambda-2}{2},\lambda)=\Tr G\,{\rm Re}A-(\lambda-2)\left|a\right|-\braket{\hat{b}}{\hat{b}}\frac{(\lambda-2)^{2}}{4}\\
& =g_{1}(\frac{2-\lambda}{2}).
\end{align}
Hence the Lagrangian dual program is
\begin{equation}
\max_{\lambda\geq0}g_{1}(\frac{2-\lambda}{2}).
\end{equation}
The solution of this maximization is the same as (\ref{eq:max_program}).
It is known that for convex quadratic programming, strong duality holds
and the Lagrangian dual problem attains the same optimal value. This is the reason why the two bounds coincide.
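As a quick numerical illustration of this duality, the primal program and the dual above can be solved for concrete constants. The values of $\Tr G\,{\rm Re}A$, $a$, and $\braket{\hat{b}}{\hat{b}}$ below are illustrative assumptions chosen for the sketch, not values from the paper:

```python
import numpy as np

# Illustrative constants (assumptions for this sketch, not values from the paper):
# c = Tr G Re A, a = the off-diagonal scale, bb = <b|b>.
c, a, bb = 1.0, 0.3, 0.8

# Primal: minimize c + bb*t^2 + 2|a| + 2*bb*t  subject to  |a| + bb*t >= 0.
ts = np.linspace(-abs(a) / bb, 5.0, 200001)   # grid over the feasible region
primal = float(np.min(c + bb * ts**2 + 2 * abs(a) + 2 * bb * ts))

# Closed form from the proof: the minimum is attained at t* = max(-|a|/bb, -1).
t_star = max(-abs(a) / bb, -1.0)
closed = c + bb * t_star**2 + 2 * abs(a) + 2 * bb * t_star

# Dual: maximize g(lambda) = c - (lambda-2)|a| - bb*(lambda-2)^2/4 over lambda >= 0.
lams = np.linspace(0.0, 10.0, 200001)
dual = float(np.max(c - (lams - 2) * abs(a) - bb * (lams - 2) ** 2 / 4))

print(round(primal, 6), round(closed, 6), round(dual, 6))  # all three coincide
```

All three values agree, consistent with strong duality for this convex quadratic program.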
\textcolor{black}{
The optimal observables $B_{1},B_{2}$ for the minimization
of (\ref{eq:holevo_bound2}) that defines the Holevo bound can be described
by $\beta^{*}:={\rm argmax}_{0\leq\beta\leq1}\,C_{\theta_{0},G}^{(\beta)}$
given in Theorem \ref{thm:mld_1} as follows.
From the proof of Theorem \ref{thm:holevo_mld},
we see that the minimization of
(\ref{eq:min_f_2})
is achieved when
\begin{equation}
\ket f=\ket{f^{(\beta^{*})}}:=\beta^{*}\sqrt{G}^{-1}\begin{pmatrix}0 & -1\\
1 & 0
\end{pmatrix}\sqrt{G}\ket b.\label{eq:beta2f}
\end{equation}
This means the minimization of
(\ref{eq:min_F})
is achieved when
$F=F^{(\beta^{*})}:=\begin{pmatrix}I\\
\bra{f^{(\beta^{*})}}
\end{pmatrix}({\rm Re}\Sigma)^{-1}$, and the minimization of (\ref{eq:holevo_bound2}) is achieved when
\begin{equation}
B_{i}=B_{i}^{(\beta^{*})}:=\sum_{j=1}^{3}F_{ji}^{(\beta^{*})}D_{j}\qquad(i=1,2).\label{eq:beta2B}
\end{equation}
}
When $\dim\H=2$ and $d=2$, any model $\S$ has two SLDs $L_{1}$
and $L_{2}$ at any point $\theta_{0}$, and $\tilde{\T}=\{X\in\B_{h}(\H);\Tr\rho_{\theta_{0}}X=0\}\supset{\rm span}_{\R}\{L_{1},L_{2}\}$
is a $\D_{\rho_{\theta_{0}}}$-invariant three-dimensional space. Therefore,
$R$ takes the form of (\ref{eq:D-1model}), and thus Theorem \ref{thm:mld_1}
and Theorem \ref{thm:holevo_mld} are applicable.
This is the essential
reason why the Holevo bound can be expressed by (\ref{eq:suzuki_bound_ori}).
\textcolor{black}{
More generally, if a two-dimensional smooth parametric
family $\{\sigma_{\xi};\xi\in\Xi\subset\R^{2}\}$ of density operators
on a Hilbert space $\H$ with an open set $\Xi\subset\R^{2}$ is $\D_{\xi}$
invariant, a three-dimensional smooth parametric family $\{\tilde{\rho}_{(\xi,\eta)}=\eta\sigma_{\xi}+(1-\eta)\frac{1}{\dim\H}I;\,\xi\in\Xi\subset\R^{2},0<\eta<1\}$
is also $\D_{(\xi,\eta)}$ invariant.
Therefore, Theorem \ref{thm:mld_1}
and \ref{thm:holevo_mld} are applicable for any two-dimensional submodel
of $\left\{ \tilde{\rho}_{(\xi,\eta)}\right\} _{(\xi,\eta)}$, and
the maximum logarithmic derivative bound and the Holevo bound can
be calculated explicitly. We show examples below. }
\begin{example}
\label{exa:dim2}Let
\begin{equation}
\left\{ \rho_{\theta}=a(1-|\theta|)\frac{1}{2}(\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}+\sqrt{1-|\theta|^{2}}\sigma_{3})+(1-a(1-|\theta|))\frac{I}{2};|\theta|<1\right\}
\end{equation}
be a family of density operators on $\H=\C^{2}$ parameterized by
$\theta=(\theta^{1},\theta^{2})$ with fixed $0<a<1$, where $\sigma_{1},\sigma_{2},\sigma_{3}$
are Pauli matrices. Let $D_{1}=\partial_{1}\rho_{\theta}$, $D_{2}=\partial_{2}\rho_{\theta}$,
$D_{3}=\rho_{\theta}-\frac{I}{2}$. A linear space of observables
${\rm span}_{\R}\left\{ D_{i}\right\} _{i=1}^{d}=\{X\in\B_{h}(\H);\Tr X=0\}$
is $\D_{\theta}$ invariant at any $\theta$. The extended RLD Fisher
information matrix is calculated by $\tilde{J}_{\theta,ij}^{(R)}=\Tr D_{i}\rho_{\theta}^{-1}D_{j}$
and its inverse is
\begin{equation}
\tilde{J}_{\theta,ij}^{(R)^{-1}}=\frac{1}{a^{2}(1-r)}\begin{pmatrix}1+r & -\ii a\sqrt{1-r^{2}} & \frac{1+r}{1-r}\\
\ii a\sqrt{1-r^{2}} & \frac{1}{(1-r)} & \frac{\ii a(1+r)}{\sqrt{1-r^{2}}}\\
\frac{1+r}{1-r} & -\frac{\ii a(1+r)}{\sqrt{1-r^{2}}} & a^{2}(x-1)+\frac{2}{(1-r)^{2}}
\end{pmatrix},
\end{equation}
at $\theta=(r,0)$ with $0\leq r<1$. The inverse $\beta$ Fisher
information matrix $J_{\theta}^{(\beta)^{-1}}$ is the Schur complement
of ${\rm Re}\tilde{J}_{\theta,ij}^{(R)^{-1}}+\beta\ii{\rm Im}\tilde{J}_{\theta,ij}^{(R)^{-1}}$
due to (\ref{eq:R_bLDFisher}). Let us consider lower bounds of $\Tr GV_{\theta_{0}}[M,\hat{\theta}]$
with a SLD weight $G=J_{\theta}^{(S)}$. The $\beta$ bound is
\begin{equation}
C_{\theta_{0},G}^{(\beta)}=2+\frac{2a\sqrt{(1-a^{2}(1-r)^{2})(2-a^{2}(1-r)^{3})(1-r)^{3}}}{2-a^{2}(1-r)^{3}}\beta-\frac{a^{2}(1-r)^{2}(1+r)}{2-a^{2}(1-r)^{3}}\beta^{2}.
\end{equation}
By using Theorem \ref{thm:mld_1}, we see that the maximum of $C_{\theta_{0},G}^{(\beta)}$
is achieved by
\begin{equation}
\beta=\min\left\{ 1,\frac{1}{a(1+r)}\sqrt{\frac{(1-a^{2}(1-r)^{2})(2-a^{2}(1-r)^{3})}{1-r}}\right\} .
\end{equation}
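Since the $\beta$ bound above is a concave quadratic in $\beta$, the stated maximizer is $\min\{1, c_1/(2c_2)\}$ for the linear and quadratic coefficients $c_1, c_2$, and can be checked numerically. The sketch below (with illustrative values of $a$ and $r$, an assumption for this check) compares the closed-form $\beta$ against a brute-force grid search:

```python
import numpy as np

a, r = 0.95, 0.3  # illustrative parameter values (assumptions for this sketch)

d = 2 - a**2 * (1 - r) ** 3
c1 = 2 * a * np.sqrt((1 - a**2 * (1 - r) ** 2) * d * (1 - r) ** 3) / d
c2 = a**2 * (1 - r) ** 2 * (1 + r) / d

def C_beta(beta):
    # The beta bound of the example: 2 + c1*beta - c2*beta^2.
    return 2 + c1 * beta - c2 * beta**2

# Stated closed-form maximizer.
beta_star = min(1.0, np.sqrt((1 - a**2 * (1 - r) ** 2) * d / (1 - r)) / (a * (1 + r)))

# Brute-force maximizer over beta in [0, 1].
betas = np.linspace(0.0, 1.0, 100001)
beta_grid = betas[np.argmax(C_beta(betas))]

print(round(beta_star, 4), round(beta_grid, 4))  # the two maximizers agree
```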
In Fig \ref{fig:beta_plot}(left), the behavior of the optimal $\beta$
is plotted as a function of $r$ when $a=0.95$. Due to Theorem \ref{thm:holevo_mld},
$\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$ is the same as
the Holevo bound $C_{\theta_{0},G}^{(H)}$. This result illustrates
a principle behind the explicit expression of the Holevo bound (\ref{eq:suzuki_bound_ori}).
\end{example}
\begin{example}
\label{exa:dim4}Here we show an example of the case when $\dim\H>2$.
Let
\begin{equation}
\left\{ \rho_{\theta}=a(1-|\theta|)\left\{ \frac{1}{2}(\theta^{1}\sigma_{1}+\theta^{2}\sigma_{2}+\sqrt{1-|\theta|^{2}}\sigma_{3})\right\} ^{\otimes2}+(1-a(1-|\theta|))\frac{I}{4};|\theta|<1\right\}
\end{equation}
be a family of density operators on $\H=\C^{4}$ parameterized by
$\theta=(\theta^{1},\theta^{2})$ with fixed $0<a<1$. Let $D_{1}=\partial_{1}\rho_{\theta}$,
$D_{2}=\partial_{2}\rho_{\theta}$, $D_{3}=\rho_{\theta}-\frac{I}{4}$.
A linear space of observables ${\rm span}_{\R}\left\{ D_{i}\right\} _{i=1}^{d}$
is $\D_{\theta}$ invariant at any $\theta$. The extended RLD Fisher
information matrix is calculated by $\tilde{J}_{\theta,ij}^{(R)}=\Tr D_{i}\rho_{\theta}^{-1}D_{j}$.
By a calculation similar to that in Example \ref{exa:dim2}, we see that
the maximum of $C_{\theta_{0},G}^{(\beta)}$ is achieved by
\begin{equation}
\beta=\min\left\{ 1,\frac{1}{3a(1+r)}\sqrt{\frac{(1-a(1-r))(1+3a(1-r))(7-r-a(1-r)(-11+12a(1-r)^{2}+5r))}{1-r}}\right\}
\end{equation}
for a SLD weight $G=J_{\theta}^{(S)}$ at $\theta=(r,0)$ with $0\leq r<1$.
In Fig \ref{fig:beta_plot}(right), the behavior of the optimal $\beta$
is plotted as a function of $r$ when $a=0.95$. Due to Theorem \ref{thm:holevo_mld},
$\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$ is the same as
the Holevo bound $C_{\theta_{0},G}^{(H)}$. This example is not included
in the result of (\ref{eq:suzuki_bound_ori}) for $\dim\H=2$.
\begin{figure}
\begin{centering}
\includegraphics{beta1.pdf}\includegraphics{beta2.pdf}
\par\end{centering}
\caption{\label{fig:beta_plot}The behavior of the optimal $\beta$ as a function
of $r$ for Example \ref{exa:dim2} (left) and Example \ref{exa:dim4}
(right) with $a=0.95$.}
\end{figure}
\end{example}
\textcolor{black}{Next, let us show examples of models that can achieve
the maximum logarithmic derivative bound. It is known that the Holevo
bound can be achieved for quantum Gaussian shift models\cite{holevo}
and pure-state models\cite{matsumoto_pure}. The Holevo bound can
also be achieved as the SLD bound for models that have commutative SLDs.
We can derive a similar property by combining the above models that
have achievable Holevo bounds. The following examples show models
built as a tensor product of a one-dimensional model with a quantum
Gaussian shift model or a pure-state model, for which the maximum
logarithmic derivative bound is achievable.}
\begin{example}
\textcolor{black}{\label{exa:gauss}Let
\begin{equation}
\left\{ \sigma_{\eta}^{(1)};\text{\ensuremath{\eta_{1}<\eta<\eta_{2}}}\right\}
\end{equation}
be any one-dimensional family of density operators on a Hilbert space
$\H_{1}$ parameterized by $\eta\in\R$, and let
\begin{equation}
\left\{ \sigma_{\xi}^{(2)};\,\xi\in\R^{2}\right\}
\end{equation}
be a two-dimensional family of quantum Gaussian states, where $\sigma_{\xi}^{(2)}$
is a quantum Gaussian state\cite{holevo,YFG} represented on a Hilbert
space $\H_{2}$ defined by a characteristic function
\begin{equation}
\varphi_{\xi}^{(2)}(\zeta)=\Tr\sigma_{\xi}^{(2)}e^{\ii\zeta^{i}X_{i}}=e^{-\frac{1}{2}s^{2}|\xi|^{2}+\ii\xi^{\intercal}\zeta}\qquad(\zeta\in\R^{2})
\end{equation}
with $s\geq1$ and canonical observables $X_{1},X_{2}$ such that
\begin{equation}
[X_{1},X_{2}]=2\ii I.
\end{equation}
Let us consider a three-dimensional quantum statistical model
\begin{equation}
\left\{ \tilde{\rho}_{(\eta,\xi)}=\sigma_{\eta}^{(1)}\otimes\sigma_{\xi}^{(2)};\text{\ensuremath{\eta_{1}<\eta<\eta_{2}},\,\ensuremath{\xi\in\R^{2}}}\right\} .
\end{equation}
Since it is known that $\tilde{D}_{i}^{(2)}=\frac{1}{s^{2}}X_{i}$
($i=1,2$) are the SLDs of $\sigma_{\xi}^{(2)}$ and their tangent
space is $\D_{\xi}$ invariant, the SLD tangent space of this three-dimensional
model is $\D_{(\eta,\xi)}$ invariant at every $(\eta,\xi)$. Therefore,
Theorem \ref{thm:mld_1} and \ref{thm:holevo_mld} are applicable
for any two-dimensional submodel $\left\{ \rho_{\theta};\,\theta\in\Theta\subset\R^{2}\right\} $
of $\left\{ \tilde{\rho}_{(\eta,\xi)}\right\} _{(\eta,\xi)}$, and
the maximum logarithmic derivative bound and the Holevo bound can
be calculated explicitly. Furthermore, we can show that the maximum
logarithmic derivative bound can be achieved. Let $D_{1},D_{2}$ be
SLDs of $\rho_{\theta}$ at $\theta=\theta_{0}$, let $D_{3}$ and
$R$ be an observable and a $3\times3$ matrix obtained in the same
way as (\ref{eq:D-1model}). Note that $D_{1},D_{2},D_{3}$ are in
${\rm span}_{\R}\{\tilde{D}^{(1)}\otimes I,I\otimes\tilde{D}_{1}^{(2)},I\otimes\tilde{D}_{2}^{(2)}\}$,
where $\tilde{D}^{(1)}$ is the SLD of $\sigma_{\eta}^{(1)}$ and
$\tilde{D}_{i}^{(2)}=\frac{1}{s^{2}}X_{i}$ ($i=1,2$) are the SLDs
of $\sigma_{\xi}^{(2)}$. Due to Theorem \ref{thm:mld_1}, the maximum
of $C_{\theta_{0},G}^{(\beta)}$ is achieved when $\beta=\beta^{*}:=\min\{1,\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b}\}$
for any weight matrix $G$. By using (\ref{eq:beta2B}), it can be
seen that the minimization of (\ref{eq:holevo_bound2}) is achieved
when $B_{i}=B_{i}^{(\beta^{*})}:=\sum_{j=1}^{3}F_{ij}^{(\beta^{*})}D_{j}$
($i=1,2$). Note that $B_{1},B_{2}$ satisfy a commutation relation
\begin{equation}
[B_{1},B_{2}]=-2\ii{\rm Im}Z(B)_{12}I.
\end{equation}
Let $\sigma^{(3)}$ be an ancillary Gaussian state defined by
a characteristic function
\begin{equation}
\varphi^{(3)}(\zeta)=\Tr\sigma^{(3)}e^{\ii\zeta^{i}Y_{i}}=e^{-\frac{1}{2}\zeta^{\intercal}V^{(3)}\zeta}\qquad(\zeta\in\R^{2})
\end{equation}
with canonical observables $Y_{1},Y_{2}$ such that
\begin{equation}
[Y_{1},Y_{2}]=2\ii{\rm Im}Z(B)_{12}I
\end{equation}
and a real positive matrix
\begin{equation}
V^{(3)}=\sqrt{G}^{-1}\left|\sqrt{G}{\rm Im}Z(B)\sqrt{G}\right|\sqrt{G}^{-1}.
\end{equation}
It can be seen that two observables $\hat{B}_{i}:=\theta_{0}^{i}+B_{i}\otimes I+I\otimes Y_{i}$
($i=1,2$) can be measured simultaneously because they are commutative.
Further, these observables satisfy locally unbiased conditions and
achieve the Holevo bound, i.e.,
\begin{align}
\Tr(\rho_{\theta}\otimes\sigma^{(3)})\hat{B}_{i} & =\theta_{0}^{i}\qquad(1\leq i\leq2)\\
\Tr(\partial_{i}\rho_{\theta}\otimes\sigma^{(3)})\hat{B}_{j} & =\delta_{ij}\qquad(1\leq i,j\leq2)\\
\Tr(\rho_{\theta}\otimes\sigma^{(3)})\left(\hat{B}_{i}-\theta_{0}^{i}\right)\left(\hat{B}_{j}-\theta_{0}^{j}\right) & =({\rm Re}Z(B)+V^{(3)})_{ij}\qquad(1\leq i,j\leq2).
\end{align}
}
\end{example}
\begin{example}
\textcolor{black}{\label{exa:pure}Let
\begin{equation}
\left\{ \sigma_{\eta}^{(1)};\text{\ensuremath{\eta_{1}<\eta<\eta_{2}}}\right\}
\end{equation}
be any one-dimensional family of density operators on a Hilbert space
$\H_{1}$ parameterized by $\eta\in\R$, and let
\begin{equation}
\left\{ \sigma_{\xi}^{(2)}=\ket{\psi_{\xi}}\bra{\psi_{\xi}};\,\xi\in\Xi\subset\R^{2}\right\}
\end{equation}
be a two-dimensional family of pure states on a Hilbert space $\H_{2}$
with an open set $\Xi\subset\R^{2}$. Let us consider a three-dimensional
quantum statistical model
\begin{equation}
\left\{ \tilde{\rho}_{(\eta,\xi)}=\sigma_{\eta}^{(1)}\otimes\sigma_{\xi}^{(2)};\text{\ensuremath{\eta_{1}<\eta<\eta_{2}},\,\ensuremath{\xi\in\Xi\subset\R^{2}}}\right\} .
\end{equation}
Suppose ${\rm span}_{\R}\left\{ \partial_{i}\sigma_{\xi}^{(2)}=\ket{\partial_{i}\psi_{\xi_{0}}}\bra{\psi_{\xi_{0}}}+\ket{\psi_{\xi_{0}}}\bra{\partial_{i}\psi_{\xi_{0}}}\right\} _{i=1}^{2}$
is $\D_{\xi_{0}}$ invariant at a fixed point $\xi_{0}$. It can be
seen that $\D_{\xi_{0}}$ invariance for $\left\{ \sigma_{\xi}^{(2)}\right\} _{\xi}$
is equivalent to $\ket{\partial_{2}\psi_{\xi_{0}}}\in{\rm span}_{\R}\left\{ \ket{\partial_{1}\psi_{\xi_{0}}},\ii\ket{\partial_{1}\psi_{\xi_{0}}}\right\} $.
Since this three-dimensional model is also $\D_{(\eta_{0},\xi_{0})}$
invariant at a fixed point $(\eta_{0},\xi_{0})$, Theorem \ref{thm:mld_1}
and \ref{thm:holevo_mld} are applicable for two-dimensional submodel
$\left\{ \rho_{\theta};\,\theta\in\Theta\subset\R^{2}\right\} $ of
$\left\{ \tilde{\rho}_{(\xi,\eta)}\right\} _{(\xi,\eta)}$ at $\theta_{0}$
such that $\rho_{\theta_{0}}=\tilde{\rho}_{(\eta_{0},\xi_{0})}$,
and the maximum logarithmic derivative bound and the Holevo bound
can be calculated explicitly. Furthermore, we can show that the maximum
logarithmic derivative bound can be achieved. Let $D_{1},D_{2}$ be
SLDs of $\rho_{\theta}$ at $\theta=\theta_{0}$, let $D_{3}$ and
$R$ be an observable and a $3\times3$ matrix obtained in the same
way as (\ref{eq:D-1model}). Due to Theorem \ref{thm:mld_1}, the
maximum of $C_{\theta_{0},G}^{(\beta)}$ is achieved when $\beta=\beta^{*}:=\min\{1,\frac{\Tr\left|\sqrt{G}{\rm Im}A\sqrt{G}\right|}{2\bra bG\ket b}\}$
for any weight matrix $G$. By using (\ref{eq:beta2B}), it can be
seen that the minimization of (\ref{eq:holevo_bound2}) is achieved
when $B_{i}=B_{i}^{(\beta^{*})}:=\sum_{j=1}^{3}F_{ji}^{(\beta^{*})}D_{j}$
($i=1,2$). Because $D_{1},D_{2},D_{3}$ are in ${\rm span}_{\R}\{\tilde{D}^{(1)}\otimes I,I\otimes\tilde{D}_{1}^{(2)},I\otimes\tilde{D}_{2}^{(2)}\}$,
where $\tilde{D}^{(1)}$ is the SLD of $\sigma_{\eta}^{(1)}$ and
$\tilde{D}_{i}^{(2)}=2\partial_{i}\sigma_{\xi_{0}}^{(2)}$ ($i=1,2$)
are the SLDs of $\sigma_{\xi}^{(2)}$, there exist a $1\times2$ real matrix $F^{(1)}$ and a
$2\times2$ real matrix $F^{(2)}$ such that $B_{i}^{(\beta^{*})}=F_{1i}^{(1)}\tilde{D}^{(1)}\otimes I+\sum_{j=1}^{2}F_{ji}^{(2)}I\otimes\tilde{D}_{j}^{(2)}$.
Let $B_{i}^{(1)}:=F_{1i}^{(1)}\tilde{D}^{(1)}\in\B(\H_{1})$ and $B_{i}^{(2)}:=\sum_{j=1}^{2}F_{ji}^{(2)}\tilde{D}_{j}^{(2)}\in\B(\H_{2})$,
and let $Z_{ij}^{(1)}=\Tr\sigma_{\eta_{0}}^{(1)}B_{j}^{(1)}B_{i}^{(1)}$
and $Z_{ij}^{(2)}=\Tr\sigma_{\xi_{0}}^{(2)}B_{j}^{(2)}B_{i}^{(2)}$.
Because $\left\{ B_{i}^{(1)}\otimes I\right\} _{i=1}^{2}$ and $\left\{ I\otimes B_{i}^{(2)}\right\} _{i=1}^{2}$
are independent, $Z(B^{(\beta^{*})})=Z^{(1)}+Z^{(2)}.$ Note that
$Z^{(1)}$ is a real matrix. Let $\ket{\psi^{(3)}},\ket{l_{1}^{(3)}},\ket{l_{2}^{(3)}}\in\H_{3}$
be vectors in a Hilbert space $\H_{3}=\C^{2}$ such that $\braket{\psi^{(3)}}{l_{1}^{(3)}}=\braket{\psi^{(3)}}{l_{2}^{(3)}}=0$,
\begin{equation}
\braket{l_{j}^{(3)}}{l_{i}^{(3)}}=\left(V^{(3)}-\ii{\rm Im}Z^{(2)}\right)_{ij},
\end{equation}
and $\left\Vert \ket{\psi^{(3)}}\right\Vert =1$ with a positive
real matrix $V^{(3)}=\sqrt{G}^{-1}\left|\sqrt{G}{\rm Im}Z^{(2)}\sqrt{G}\right|\sqrt{G}^{-1}.$
Because
\begin{align}
\ket{\hat{\psi}^{(3)}} & :=\ket{\psi_{\xi_{0}}}\otimes\ket{\psi^{(3)}}\\
\ket{\hat{l}_{i}^{(3)}} & :=B_{i}^{(2)}\ket{\psi_{\xi_{0}}}\otimes\ket{\psi^{(3)}}+\ket{\psi_{\xi_{0}}}\otimes\ket{l_{i}^{(3)}}\qquad(i=1,2)
\end{align}
satisfy $\braket{\hat{\psi}^{(3)}}{\hat{l}_{1}^{(3)}}=\braket{\hat{\psi}^{(3)}}{\hat{l}_{2}^{(3)}}=0$
and $\braket{\hat{l}_{j}^{(3)}}{\hat{l}_{i}^{(3)}}=\left({\rm Re}Z^{(2)}+V^{(3)}\right)_{ij}\in\R$,
there exists an orthonormal basis $\{\ket k\}_{k=1}^{\dim\H_{2}\otimes\H_{3}}$
of $\H_{2}\otimes\H_{3}$ such that $\braket k{\hat{\psi}^{(3)}},\braket k{\hat{l}_{1}^{(3)}},\braket k{\hat{l}_{2}^{(3)}}$
are real numbers and $\braket k{\hat{\psi}^{(3)}}\not=0$ for $1\leq k\leq\dim\H_{2}\otimes\H_{3}$.
It can be seen that two observables
\begin{equation}
\hat{B}_{i}=\theta_{0}^{i}+B_{i}^{(1)}\otimes I+I\otimes\left[\sum_{k}\frac{\braket k{\hat{l}_{i}^{(3)}}}{\braket k{\hat{\psi}^{(3)}}}\ket k\bra k\right]
\end{equation}
($i=1,2$) can be measured simultaneously, and they satisfy locally
unbiased conditions and achieve the Holevo bound, i.e.,
\begin{align}
\Tr(\rho_{\theta_{0}}\otimes\sigma^{(3)})\hat{B}_{i} & =\theta_{0}^{i}\qquad(1\leq i\leq2)\\
\Tr(\partial_{i}\rho_{\theta_{0}}\otimes\sigma^{(3)})\hat{B}_{j} & =\delta_{ij}\qquad(1\leq i,j\leq2)\\
\Tr(\rho_{\theta_{0}}\otimes\sigma^{(3)})\left(\hat{B}_{i}-\theta_{0}^{i}\right)\left(\hat{B}_{j}-\theta_{0}^{j}\right) & =({\rm Re}Z(B)+V^{(3)})_{ij}\qquad(1\leq i,j\leq2),
\end{align}
where $\sigma^{(3)}=\ket{\psi^{(3)}}\bra{\psi^{(3)}}$.}
\end{example}
\section{\label{sec:Conclusion}Conclusion}
In this paper, we focused on a logarithmic derivative $L_{i}^{(\beta)}$
that lies between the SLD $L_{i}^{(S)}$ and the RLD $L_{i}^{(R)}$ with $\beta\in[0,1]$
to obtain lower bounds on the weighted trace of the covariance $\Tr GV_{\theta_{0}}[M,\hat{\theta}]$
of a locally unbiased estimator $(M,\hat{\theta})$ at $\theta_{0}$
of a parametric family of quantum states. We showed that all monotone
metrics induce lower bounds of $\Tr GV_{\theta_{0}}[M,\hat{\theta}]$,
and that the maximum logarithmic derivative bound $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
is the largest bound among them. We showed that $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
has an explicit solution when the $d$-dimensional model has a $(d+1)$-dimensional
real space $\tilde{\T}\supset{\rm span}_{\R}\{L_{i}^{(S)}\}_{i=1}^{d}$
such that $\D_{\rho_{\theta_{0}}}(\tilde{\T})\subset\tilde{\T}$ at
$\theta_{0}\in\Theta$. Furthermore, when $d=2$, we showed that the
maximization problem $\max_{0\leq\beta\leq1}C_{\theta_{0},G}^{(\beta)}$
is the Lagrangian dual of the minimization problem defining the Holevo
bound, and coincides with the Holevo bound.
This explicit solution is a generalization of the solution \eqref{eq:suzuki_bound_ori} given for a two-dimensional Hilbert space.
\section*{Acknowledgment}
The author is grateful to Prof. A. Fujiwara for valuable comments.
\section{Motivation and Related Work} \label{sec.intro}
Color histograms are an expressive and convenient representation of an image's color content. Color histograms are routinely used by conventional color transfer methods (e.g., \cite{reinhard2001color, xiao2006color, nguyen2014illuminant, faridul2016colour}).
These color transfer methods aim to manipulate the colors in an input image to match those of a target image, such that the images share a similar ``look and feel''. In the color transfer literature, there are various forms of color histograms used to represent the color distribution of an image, such as a direct 3D histogram~\cite{reinhard2001color, xiao2006color,faridul2016colour}, 2D histogram \cite{avi2020deephist, CCC, afifi2019color, afifi2019sensor}, color palette \cite{chang2015palette, zhang2017palette, afifi2019image} or color triad \cite{shugrina2020nonlinear}. Despite the effectiveness of color histograms for color transfer, recent deep learning methods almost exclusively rely on image-based examples to control colors. While image exemplars impact the final colors of generative adversarial network (GAN)-generated images and deep recolored images, these methods -- that mostly target image style transfer -- also affect other style attributes, such as texture information and tonal values~\cite{gatys2015neural, gatys2016image, johnson2016perceptual, ulyanov2016instance, isola2017image, luan2017deep, sheng2018avatar}.
Consequently, the quality of the results produced by these methods often depends on the semantic similarity between the input and target images, or between a target image and a particular domain~\cite{sheng2018avatar, he2019progressive}.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{model_GAN.pdf}
\vspace{-2mm}
\caption{We inject our histogram into StyleGAN \cite{karras2020analyzing} to control the generated image colors. (A) and (B) are simplified versions of the StyleGAN's first and last blocks. We modified the last two blocks of the StyleGAN by projecting our histogram feature into each block's latent space, as shown in (C). The parameter $m$ controls the capacity of the model.}
\label{fig:GAN-design}
\end{figure*}
In this paper, our attention is focused explicitly on controlling only the color attributes of images---this can be considered a sub-category of image style transfer. Specifically, our method does not require shared semantic content between the input/GAN-generated images and a target image or guide image. Instead, our method aims to assist the deep network through color histogram information only\footnote{Project page: \href{https://github.com/mahmoudnafifi/HistoGAN}{https://github.com/mahmoudnafifi/HistoGAN}}. With this motivation, we first explore using color histograms to control the colors of images generated by GANs.
\vspace{-2mm}
\paragraph{Controlling Color in GAN-Generated Images}
GANs are often used as ``black boxes'' that can transform samples from a simple distribution to a meaningful domain distribution without an explicit ability to control the details/style of the generated images \cite{goodfellow2014generative, radford2015unsupervised, karras2017progressive, arjovsky2017wasserstein, liu2019wasserstein}.
Recently, methods have been proposed to control the style of the GAN-generated images.
For example, StyleGAN~\cite{karras2019style, karras2020analyzing} proposed the idea of ``style mixing'', where different latent style vectors are progressively fed to the GAN to control the style and appearance of the output image.
To transfer a specific style in a target image to GAN-generated images, an optimization process can be used to project the target image to the generator network's latent space to generate images that share some properties with the target image~\cite{abdal2019image2stylegan, karras2020analyzing}.
However, this process requires expensive computations to find the latent code of the target image.
Another direction is to jointly train an encoder-generator network to learn this projection \cite{pidhorskyi2020adversarial, li2020mixnmatch, choi2020stargan}.
More recently, methods have advocated different approaches to control the output of GANs, such as using the normalization flow \cite{abdal2020styleflow}, latent-to-domain-specific mapping \cite{choi2020stargan}, deep classification features \cite{shocher2020semantic}, few-shot image-to-image translation \cite{saito2020coco}, and a single-image training strategy \cite{shaham2019singan}.
Despite the performance improvements, most of these methods are limited to work with a single domain of both target and GAN-generated images \cite{li2020mixnmatch, pidhorskyi2020adversarial}.
We seek to control GAN-generated images using color histograms as our specified representation of image style.
Color histograms enable our method to accept target images taken from \textit{any} arbitrary domain.
Figure \ref{fig:teaser}-top shows GAN-generated examples using our method.
As shown in Fig.~\ref{fig:teaser}, our generated images share the same color distribution as the target images without being restricted to, or influenced by, the semantic content of the target images.
\vspace{-2mm}
\paragraph{Recoloring Real Images}
In addition to controlling the GAN-generated images, we seek to extend our approach to perform image recoloring within the GAN framework. In this context, our method accepts a real input image and a target histogram to produce an output image with the fine details of the input image but with the same color distribution given in the target histogram.
Our method is trained in a fully unsupervised fashion, where no ground-truth recolored image is required.
Instead, we propose a novel adversarial-based loss function to train our network to extract and consider the color information in the given target histogram while producing realistic recolored images.
One of the key advantages of using the color histogram representation as our target colors can be shown in Fig.\ \ref{fig:teaser}-bottom, where we can {\it automatically recolor} an image without directly having to specify a target color histogram.
Auto-image recoloring is a less explored research area with only a few attempts in the literature (e.g., \cite{laffont2014transient, deshpande2017learning, afifi2019image, anokhin2020high}).
\section{HistoGAN} \label{sec.method}
We begin by describing the histogram feature used by our method (Sec.\ \ref{subsec.histoblock}). Afterwards, we discuss the proposed modification to StyleGAN \cite{karras2020analyzing} to incorporate our histogram feature into the generator network (Sec.\ \ref{subsec.method-coloring-GAN-images}). Lastly, we explain how this method can be expanded to control colors of real input images to perform image recoloring~(Sec.\ \ref{subsec.method-recoloring}).
\subsection{Histogram feature} \label{subsec.histoblock}
The histogram feature used by HistoGAN is borrowed from the color constancy literature \cite{CCC, afifi2019color, afifi2019sensor} and is constructed to be a differentiable histogram of colors in the log-chroma space due to better invariance to illumination changes~\cite{finlayson2001color, eibenberger2012importance}. The feature is a 2D histogram of an image's colors projected into a log-chroma space.
This 2D histogram is parameterized by $uv$ and conveys an image's color information while being more compact than a typical 3D histogram defined in RGB space. A log-chroma space is defined by the intensity of one channel, normalized by the other two, giving three possible options of how it is defined.
Instead of selecting only one such space, all three options can be used to construct three different histograms, which are combined together into a histogram feature, $\mathbf{H}$, as an $h\!\times\!h\!\times 3$ tensor~\cite{afifi2019color}.
The histogram is computed from a given input image, $\mathbf{I}$, by first converting it into the log-chroma space.
For instance, selecting the $\textrm{R}$ color channel as primary and normalizing by $\textrm{G}$ and $\textrm{B}$ gives:
\begin{equation}
\resizebox{0.9\hsize}{!}{
$\mathbf{I}_{uR}(\mathbf{x}) = \log{\left(\frac{ \mathbf{I}_{\textrm{R}}(\mathbf{x})+ \epsilon}{\mathbf{I}_{\textrm{G}}(\mathbf{x})+ \epsilon}\right)} \textrm{ , } \mathbf{I}_{vR}(\mathbf{x}) = \log{\left(\frac{ \mathbf{I}_{\textrm{R}}(\mathbf{x})+ \epsilon}{\mathbf{I}_{\textrm{B}}(\mathbf{x})+ \epsilon}\right)},$}
\end{equation}
where the $\textrm{R}, \textrm{G}, \textrm{B}$ subscripts refer to the color channels of the image $\mathbf{I}$, $\epsilon$ is a small constant added for numerical stability, $\mathbf{x}$ is the pixel index, and $(uR, vR)$ are the $uv$ coordinates based on using $\textrm{R}$ as the primary channel.
The other components $\mathbf{I}_{uG}$, $\mathbf{I}_{vG}$, $\mathbf{I}_{uB}$, $\mathbf{I}_{vB}$ are computed similarly by projecting the $\textrm{G}$ and $\textrm{B}$ color channels to the log-chroma space.
In \cite{afifi2019color}, the RGB-$uv$ histogram is computed by thresholding colors to a bin and computing the contribution of each pixel based on the intensity $\mathbf{I}_y(\mathbf{x}) = \sqrt{\mathbf{I}_{\textrm{R}}^{2}(\mathbf{x}) + \mathbf{I}_{\textrm{G}}^{2}(\mathbf{x}) + \mathbf{I}_{\textrm{B}}^{2}(\mathbf{x})}$. In order to make the representation differentiable, \cite{afifi2019sensor} replaced the thresholding operator with a kernel weighted contribution to each bin. The final unnormalized histogram is computed as:
\begin{equation}
\mathbf{H}(u,v,c) \propto \sum_{\mathbf{x}} k(\mathbf{I}_{uc}(\mathbf{x}), \mathbf{I}_{vc}(\mathbf{x}), u, v) \mathbf{I}_{y}(\mathbf{x}),
\end{equation}
where $c \in {\{\textrm{R}, \textrm{G}, \textrm{B}\}}$ and $k(\cdot)$ is a pre-defined kernel.
While a Gaussian kernel was originally used in \cite{afifi2019sensor}, we found that the inverse-quadratic kernel significantly improved training stability. The inverse-quadratic kernel is defined as:
\begin{multline}
k(\mathbf{I}_{uc}, \mathbf{I}_{vc}, u, v) = \left(1+\left(\left| \mathbf{I}_{uc} - u \right|/\tau\right)^2\right)^{-1} \\
\times \left(1 + \left(\left| \mathbf{I}_{vc} - v \right|/\tau\right)^2\right)^{-1},
\end{multline}
\noindent
where $\tau$ is a fall-off parameter to control the smoothness of the histogram's bins.
Finally, the histogram feature is normalized to sum to one, i.e., $\sum_{u,v,c}\mathbf{H}(u,v,c)=1$.
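The construction above can be sketched in a few lines of NumPy. This is a re-implementation from the equations, not the authors' code; the log-chroma bin range, bin count $h$, and fall-off $\tau$ used here are illustrative assumptions:

```python
import numpy as np

def rgb_uv_histogram(img, h=64, tau=0.02, eps=1e-6):
    """NumPy sketch of the RGB-uv histogram; img is (N, 3) RGB in (0, 1]."""
    bins = np.linspace(-3.0, 3.0, h)      # illustrative log-chroma bin centers
    Iy = np.sqrt((img ** 2).sum(axis=1))  # per-pixel intensity weight
    H = np.zeros((h, h, 3))
    # For each primary channel c, normalize by the remaining two channels.
    for c, (p, q) in enumerate([(1, 2), (0, 2), (0, 1)]):
        Iu = np.log((img[:, c] + eps) / (img[:, p] + eps))
        Iv = np.log((img[:, c] + eps) / (img[:, q] + eps))
        # Inverse-quadratic kernel: soft contribution of each pixel to each bin.
        ku = 1.0 / (1.0 + (np.abs(Iu[:, None] - bins[None, :]) / tau) ** 2)
        kv = 1.0 / (1.0 + (np.abs(Iv[:, None] - bins[None, :]) / tau) ** 2)
        H[:, :, c] = np.einsum('n,nu,nv->uv', Iy, ku, kv)
    return H / H.sum()  # normalize so the histogram sums to one

pixels = np.random.default_rng(0).uniform(0.05, 1.0, size=(1000, 3))
H = rgb_uv_histogram(pixels)
print(H.shape, round(H.sum(), 6))  # (64, 64, 3) 1.0
```

Because every operation is smooth in the pixel values, the same computation expressed in an autodiff framework yields the differentiable histogram used for training.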
\begin{figure}
\centering
\includegraphics[width=\linewidth]{analysis.pdf}
\vspace{-6.5mm}
\caption{Progressively generated images using the HistoGAN modifications.}
\label{fig:analysis}
\end{figure}
\subsection{Color-controlled Image Generation}
\label{subsec.method-coloring-GAN-images}
Our histogram feature is incorporated into an architecture based on StyleGAN~\cite{karras2020analyzing}.
Specifically, we modified the original design of StyleGAN (Fig.\ \ref{fig:GAN-design}-[A] and [B]) such that we can ``inject'' the histogram feature into the progressive construction of the output image.
The last two blocks of the StyleGAN (Fig.\ \ref{fig:GAN-design}-[B]) are modified by replacing the fine-style vector with the color histogram feature.
The histogram feature is then projected into a lower-dimensional representation by a ``histogram projection'' network (Fig.\ \ref{fig:GAN-design}-[C]).
This network consists of eight fully connected layers with a leaky ReLU (LReLU) activation function \cite{maas2013rectifier}.
The first layer has 1,024 units, while each of the remaining seven layers has 512.
The ``to-latent'' block, shown in orange in Fig.\ \ref{fig:GAN-design}, maps the projected histogram to the latent space of each block. This ``to-latent'' block consists of a single fully connected layer with $2^n m$ output neurons, where $n$ is the block number, and $m$ is a parameter used to control the entire capacity of the network.
To encourage generated images to match the target color histogram, a color matching loss is introduced to train the generator.
Because of the differentiability of our histogram representation, the loss function, $C(\mathbf{H}_g,\mathbf{H}_t)$, can be any differentiable metric of similarity between the generated and target histograms $\mathbf{H}_g$ and $\mathbf{H}_t$, respectively.
For simplicity, we use the Hellinger distance defined as:
\begin{equation}\label{eq:hellinger-distance}
C\left(\mathbf{H}_g, \mathbf{H}_t\right) = \frac{1}{\sqrt{2}} \left\Vert \mathbf{H}_g^{1/2} - \mathbf{H}_t^{1/2} \right\Vert_2,
\end{equation}
where $\Vert \cdot \Vert_2$ is the standard Euclidean norm and $\mathbf{H}^{1/2}$ is an element-wise square root. Note that the Hellinger distance is closely related to the Bhattacharyya coefficient, $B(\cdot)$, where $C\left(\mathbf{H}_g, \mathbf{H}_t\right) = \left(1-B\left(\mathbf{H}_g, \mathbf{H}_t\right)\right)^{1/2}$.
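The Hellinger loss and its relation to the Bhattacharyya coefficient can be computed directly. The sketch below uses random normalized histograms as stand-ins for $\mathbf{H}_g$ and $\mathbf{H}_t$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hist(shape=(64, 64, 3)):
    # Random stand-in for a normalized RGB-uv histogram.
    H = rng.uniform(size=shape)
    return H / H.sum()

Hg, Ht = random_hist(), random_hist()

# Hellinger distance: ||sqrt(Hg) - sqrt(Ht)||_2 / sqrt(2) over the flattened bins.
hellinger = np.linalg.norm(np.sqrt(Hg) - np.sqrt(Ht)) / np.sqrt(2.0)

# Bhattacharyya coefficient and the stated identity C = sqrt(1 - B).
bhattacharyya = np.sum(np.sqrt(Hg * Ht))
print(np.isclose(hellinger, np.sqrt(1.0 - bhattacharyya)))  # True
```

Identical histograms give a distance of zero, and the distance is bounded above by one, which keeps the loss term well scaled against the discriminator term.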
This color-matching histogram loss function is combined with the discriminator to give the generator network loss:
\begin{equation}\label{eq:gan-loss}
{\mathcal{L}}_g = D\left(\mathbf{I}_g\right) + \alpha C\left(\mathbf{H}_g, \mathbf{H}_t\right),
\end{equation}
where $\mathbf{I}_g$ is the GAN-generated image, $D\left(\cdot\right)$ is our discriminator network that produces a scalar feature given an image (see supp.\ materials for more details), $\mathbf{H}_t$ is the target histogram feature (injected into the generator network), $\mathbf{H}_g$ is the histogram feature of $\mathbf{I}_g$, $C\left(\cdot\right)$ is our histogram loss function, and $\alpha$ is a scale factor to control the strength of the histogram loss term.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{model_recoloring.pdf}
\vspace{-6.5mm}
\caption{Our Recoloring-HistoGAN (ReHistoGAN) network. We map the input image into the HistoGAN's latent space using an encoder-decoder network with skip connections between each encoder and decoder blocks. Additionally, we pass the latent feature of the first two encoder blocks to our GAN's head after processing it with the histogram's latent feature.}\vspace{-2mm}
\label{fig:recoloring-design}
\end{figure}
As our histogram feature is computed by a set of differentiable operations, our loss function (Eqs. \ref{eq:hellinger-distance} and \ref{eq:gan-loss}) can be optimized using SGD.
During training, different target histograms $\mathbf{H}_t$ are required. To generate these for each generated image, we randomly select two images from the training set, compute their histograms $\mathbf{H}_1$ and $\mathbf{H}_2$, and then randomly interpolate between them. Specifically, for each generated image during training, we generate a random target histogram as follows:
\begin{equation}
\label{eq.target_hist}
\mathbf{H}_t = \delta \mathbf{H}_1 + \left(1 - \delta \right) \mathbf{H}_2,
\end{equation}
where $\delta \sim U(0,1)$ is sampled uniformly. The motivation behind this interpolation process is to expand the variety of histograms during training. This is a form of data augmentation for the histograms with the implicit assumption of the convexity of the histogram distribution in the target domain (e.g., face images). We found this augmentation helped reduce overfitting to the histograms of the training images and ensured robustness at test time. We note that this assumption does not hold true for target domains with high diversity where the target histograms span a broad range in the log-chroma space and can be multimodal (e.g., landscape images). Nonetheless, we found that even in those cases the augmentation was still beneficial to the training.
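The histogram augmentation of Eq.\ \ref{eq.target_hist} amounts to a one-line convex combination; the following sketch (with toy three-bin histograms) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_target_histogram(hist_1, hist_2):
    """Randomly interpolate between two training histograms (Eq. above),
    with delta sampled uniformly from U(0, 1)."""
    delta = rng.uniform(0.0, 1.0)
    return delta * hist_1 + (1.0 - delta) * hist_2

# Toy normalized histograms of two randomly selected training images.
hist_1 = np.array([0.2, 0.3, 0.5])
hist_2 = np.array([0.6, 0.1, 0.3])
hist_t = sample_target_histogram(hist_1, hist_2)
```

A convex combination of normalized histograms remains normalized, so the sampled target is always a valid histogram.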
With this modification to the original StyleGAN architecture, our method can control the colors of generated images using our color histogram features.\ Figure~\ref{fig:analysis} shows the progressive construction of the generated image by HistoGAN. As can be seen, the outputs of the last two blocks are adjusted to consider the information conveyed by the target histogram to produce output images with the same color distribution represented in the target histogram.
\begin{figure}
\includegraphics[width=\linewidth]{variance_loss.pdf}
\vspace{-6.5mm}
\caption{Results of training ReHistoGAN with and without the variance loss term described in Eq. \ref{eq.variance-loss}.}\vspace{-2mm}
\label{fig:variance_loss}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{comparison_with_projection.pdf}
\vspace{-6.5mm}
\caption{Results of image recoloring using the encoder-GAN reconstruction without skip connections and our ReHistoGAN using our proposed loss function.}\vspace{-2mm}
\label{fig:comparison_with_projection}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{qualitative_GAN_results.pdf}
\vspace{-6.5mm}
\caption{Images generated by HistoGAN. For each input image shown in the left, we computed the corresponding target histogram (shown in the upper left corner of the left column) and used it to control colors of the generated images in each row.}\vspace{-2mm}
\label{fig:GAN_results}
\end{figure*}
\subsection{Image Recoloring} \label{subsec.method-recoloring}
We can also extend HistoGAN to recolor an input image, as shown in Fig.\ \ref{fig:teaser}-bottom. Recoloring an existing input image, $\mathbf{I}_i$, is not straightforward because the randomly sampled noise and style vectors are not available as they are in a GAN-generated scenario. As shown in Fig.\ \ref{fig:analysis}, the head of HistoGAN (i.e., the last two blocks) are responsible for controlling the colors of the output image.
Instead of optimizing for noise and style vectors that could be used to generate a given image $\mathbf{I}_i$, we propose to train an encoding network that maps the input image into the necessary inputs of the head of HistoGAN.
With this approach, the head block can be given different histogram inputs to produce a wide variety of recolored versions of the input image.
We dub this extension the ``Recoloring-HistoGAN'' or ReHistoGAN for short.
The architecture of ReHistoGAN is shown in Fig.\ \ref{fig:recoloring-design}.
The ``encoder'' has a U-Net-like structure \cite{ronneberger2015u} with skip connections.
To ensure that fine details are preserved in the recolored image, $\mathbf{I}_r$, the early latent feature produced by the first two U-Net blocks are further provided as input into the HistoGAN's head through skip connections.
The target color information is passed to the HistoGAN head blocks as described in Sec.\ \ref{subsec.method-coloring-GAN-images}. Additionally, we allow the target color information to flow through the skip connections that go from the first two U-Net-encoder blocks to the HistoGAN's head.
We add an additional histogram projection network, along with a ``to-latent'' block, to project our target histogram to a latent representation.
This latent code of the histogram is processed by weight modulation-demodulation operations \cite{karras2020analyzing} and is then convolved over the skipped latent of the U-Net-encoder's first two blocks.
We modified the HistoGAN block, described in Fig.\ \ref{fig:GAN-design}, to accept this passed information (see supp.\ materials for more information).
This leakage of the target color information helps ReHistoGAN consider both the input image content and the target histogram in the recoloring process.
We initialize our encoder-decoder network using He's initialization \cite{he2015delving}, while the weights of the HistoGAN head are initialized based on a previously trained HistoGAN model (trained in Sec.\ \ref{subsec.method-coloring-GAN-images}).
The entire ReHistoGAN is then jointly trained to minimize the following loss function:
\begin{equation}
\label{eq.recoloring-loss}
{\mathcal{L}}_r = \beta R\left(\mathbf{I}_i, \mathbf{I}_r\right) + \gamma D\left(\mathbf{I}_r\right) + \alpha C\left(\mathbf{H}_r, \mathbf{H}_t\right)
\end{equation}
where $R\left(\cdot\right)$ is a reconstruction term, which encourages the preservation of image structure, and $\alpha$, $\beta$, and $\gamma$ are hyperparameters used to control the strength of each loss term (see supp.\ materials for the associated ablation study).
The reconstruction loss term, $R\left(\cdot\right)$, computes the L1 norm between the second-order derivatives of the input and recolored images:
\begin{equation}
R\left(\mathbf{I}_i, \mathbf{I}_r\right) = \left\Vert \mathbf{I}_i \ast \mathbf{L} - \mathbf{I}_r \ast \mathbf{L} \right\Vert_1
\end{equation}
where $\ast \mathbf{L}$ denotes the application of the Laplacian operator.
The idea of employing image derivatives was originally used to achieve seamless image cloning \cite{perez2003poisson}, where the Laplacian operator suppresses image color information while preserving the most significant perceptual details.
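A minimal single-channel NumPy sketch of this reconstruction term follows. The 4-neighbor $3\times3$ Laplacian kernel and the naive ``valid'' convolution are illustrative choices, not the training implementation:

```python
import numpy as np

# 3x3 Laplacian kernel; the 4-neighbor variant is an assumption.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def conv2d_valid(img, kernel):
    """Naive single-channel 'valid' convolution, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def reconstruction_loss(img_in, img_out):
    """R(I_i, I_r) = || I_i * L - I_r * L ||_1 (single channel here)."""
    return np.abs(conv2d_valid(img_in, LAPLACIAN)
                  - conv2d_valid(img_out, LAPLACIAN)).sum()

img = np.random.default_rng(2).random((8, 8))
```

Because the Laplacian of a constant is zero, a pure global color shift incurs no reconstruction penalty, which is consistent with the global-cast behavior addressed later by the variance loss.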
Intuitively, ReHistoGAN is trained to consider the following aspects in the output image: (i) having a similar color distribution to the one represented in the target histogram, this is considered by $C\left(\cdot\right)$, (ii) being realistic, which is the goal of $D\left(\cdot\right)$, and (iii) having the same content of the input image, which is the goal of $R\left(\cdot\right)$.
Our model trained using the loss function described in Eq.\ \ref{eq.recoloring-loss} produces reasonable recoloring results.
However, we noticed that, in some cases, our model tends to only apply a global color cast (i.e., shifting the recolored image's histogram) to minimize $C\left(\cdot\right)$.
To mitigate this behavior, we added a variance loss term to Eq.\ \ref{eq.recoloring-loss}.
The variance loss can be described as:
\begin{equation}
\label{eq.variance-loss}
V(\mathbf{I}_i, \mathbf{I}_r) = - w \sum_{c\in\{\textrm{R},\textrm{G},\textrm{B}\}}{\left| \sigma\left(\mathbf{I}_{ic} \ast \mathbf{G}\right) - \sigma\left(\mathbf{I}_{rc} \ast \mathbf{G}\right) \right|},
\end{equation}
where $\sigma\left(\cdot\right)$ computes the standard deviation of its input (in this case the blurred versions of $\mathbf{I}_i$ and $\mathbf{I}_r$ using a Gaussian blur kernel, $\mathbf{G}$, with a scale parameter of $15$), and $w = \Vert \mathbf{H}_t - \mathbf{H}_i \Vert_1$ is a weighting factor that increases as the target histogram and the input image's histogram, $\mathbf{H}_t$ and $\mathbf{H}_i$, become dissimilar and the global shift solution becomes more problematic.
The variance loss encourages the network to avoid the global shifting solution by increasing the differences between the color variance in the input and recolored images. The reason behind using a blurred version of each image is to avoid having a contradiction between the variance loss and the reconstruction loss---the former aims to increase the differences between the variance of the \textit{smoothed} colors in each image, while the latter aims to retain the similarity between the fine details of the input and recolored images. Figure \ref{fig:variance_loss} shows recoloring results of our trained models with and without the variance loss term.
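For illustration, a direct NumPy transcription of Eq.\ \ref{eq.variance-loss} is given below; the inputs are assumed to be already Gaussian-blurred, and the toy two-bin histograms are placeholders:

```python
import numpy as np

def variance_loss(blur_in, blur_out, hist_t, hist_i):
    """V = -w * sum_c |sigma(G*I_i) - sigma(G*I_r)|, with
    w = ||H_t - H_i||_1 (Eq. above). Inputs are pre-blurred images."""
    w = np.abs(hist_t - hist_i).sum()
    diff = sum(abs(blur_in[..., c].std() - blur_out[..., c].std())
               for c in range(3))
    return -w * diff

rng = np.random.default_rng(3)
blur_in = rng.random((8, 8, 3))   # blurred input image (toy)
hist_i = np.array([0.5, 0.5])     # input image histogram (toy)
hist_t = np.array([0.2, 0.8])     # target histogram (toy)
```

The loss is zero when the per-channel variances match and becomes more negative (i.e., more rewarding) as they diverge, weighted by how dissimilar the target and input histograms are.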
We train ReHistoGAN with target histograms sampled from the target domain dataset, as described earlier in Sec.\ \ref{subsec.method-coloring-GAN-images} (Eq.\ \ref{eq.target_hist}).
We initially experimented with a simpler architecture that did not make use of the skip connections or end-to-end fine-tuning (i.e., the weights of the HistoGAN head were fixed).
However, this approach gave unsatisfactory results and generally failed to retain the fine details of the input image.
A comparison between this approach and the above ReHistoGAN architecture can be seen in Fig.\ \ref{fig:comparison_with_projection}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{comparison_w_MixNMatch.pdf}
\vspace{-6.5mm}
\caption{Comparison with the MixNMatch method \cite{li2020mixnmatch}. In the shown results, the target images are used as input shape and background images for the MixNMatch method \cite{li2020mixnmatch}.}\vspace{-2mm}
\label{fig:GAN_comparison_w_MixNMatch}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{face_recoloring.pdf}
\vspace{-6.5mm}
\caption{Results of our ReHistoGAN. The shown results are after recoloring input images (shown in the left column) using the target colors (shown in the top row).}\vspace{-1mm}
\label{fig:face_recoloring}
\end{figure*}
\section{Results and Discussion} \label{sec.results}
This section discusses our results and comparisons with alternative methods proposed in the literature for controlling color.
Due to hardware limitations, we used a lightweight version of the original StyleGAN \cite{karras2020analyzing} by setting $m$ to 16, shown in Fig.\ \ref{fig:GAN-design}.
We begin by presenting our image generation results, followed by our results on image recoloring.
Additional results, comparisons, and discussion are also available in the supp.\ materials.
\begin{table*}
\centering
\caption{Comparison with StyleGAN \cite{karras2020analyzing}. The term `w/ proj.' refers to projecting the target image colors into the latent space of StyleGAN. We computed the similarity between the target and generated histograms in RGB and projected RGB-$uv$ color spaces. For each dataset, we report the number of training images. Note that StyleGAN results shown here \textit{do not} represent the actual output of \cite{karras2020analyzing}, as the used model here has less capacity ($m=16$).\label{table:results}}
\vspace{0.5mm}
\scalebox{0.75}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Dataset} & \multicolumn{6}{c|}{StyleGAN \cite{karras2020analyzing}} & \multicolumn{5}{c|}{HistoGAN (ours)} \\ \cline{2-12}
& \multicolumn{2}{c|}{FID} & \multicolumn{2}{c|}{RGB hist. (w/ proj.)} & \multicolumn{2}{c|}{RGB-$uv$ hist. (w/ proj.)} & \multirow{2}{*}{FID} & \multicolumn{2}{c|}{RGB hist. (w/ proj.)} & \multicolumn{2}{c|}{RGB-$uv$ hist. (w/ proj.)} \\ \cline{2-7} \cline{9-12}
& w/o proj. & w/ proj. & KL Div. & H dis. & KL Div. & H dis. & & KL Div. & H dis. & KL Div. & H dis. \\ \hline
Faces (69,822) \cite{karras2019style} & 9.5018 & 14.194 & 1.3124 & 0.9710 & 1.2125 & 0.6724 & \cellcolor[HTML]{FFFFC7}{\textbf{8.9387}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.9810}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.7487}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.4470}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.3088}} \\ \hline
Flowers (8,189) \cite{nilsback2008automated} & 10.876 & 15.502 & 1.0304 & 0.9614 & 2.7110 & 0.7038 & \cellcolor[HTML]{FFFFC7}{\textbf{4.9572}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.8986}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.7353}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.3837}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.2957}} \\ \hline
Cats (9,992) \cite{catdataset} & \cellcolor[HTML]{FFFFC7}{\textbf{14.366}} & 21.826 & 1.6659 & 0.9740 & 1.4051 & 0.5303 & 17.068 & \cellcolor[HTML]{FFFFC7}{\textbf{1.0054}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.7278}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.3461}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.2639}}\\ \hline
Dogs (20,579) \cite{khosla2011novel} & \cellcolor[HTML]{FFFFC7}{\textbf{16.706}} & 30.403 & 1.9042 & 0.9703 & 1.4856 & 0.5658 & 20.336 & \cellcolor[HTML]{FFFFC7}{\textbf{1.3565}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.7405}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.4321}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.3058}} \\ \hline
Birds (9,053) \cite{wah2011caltech} & 3.5539 & 12.564 & 1.9035 & 0.9706 & 1.9134 & 0.6091 & \cellcolor[HTML]{FFFFC7}{\textbf{3.2251}} & \cellcolor[HTML]{FFFFC7}{\textbf{1.4976}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.7819}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.4261}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.3064}} \\ \hline
Anime (63,565) \cite{animedataset}& \cellcolor[HTML]{FFFFC7}{\textbf{2.5002}} & 9.8890 & 0.9747 & 0.9869 & 1.4323 & 0.5929 & 5.3757 & \cellcolor[HTML]{FFFFC7}{\textbf{0.8547}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.6211}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.1352}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.1798}} \\ \hline
Hands (11,076) \cite{afifi201911k} & 2.6853
& 2.7826 & 0.9387 & 0.9942 & 0.3654 & 0.3709 & \cellcolor[HTML]{FFFFC7}{\textbf{2.2438}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.3317}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.3655}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.0533}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.1085}} \\ \hline
Landscape (4,316) & 24.216 & 29.248 & 0.8811 & 0.9741 & 1.9492 & 0.6265 & \cellcolor[HTML]{FFFFC7}{\textbf{23.549}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.8315}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.8169}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.5445}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.3346}} \\ \hline
Bedrooms (303,116) \cite{yu2015lsun} & 10.599 & 14.673 & 1.5709 & 0.9703 & 1.2690 & 0.5363 & \cellcolor[HTML]{FFFFC7}{\textbf{4.5320}} & \cellcolor[HTML]{FFFFC7}{\textbf{1.3774}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.7278}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.2547}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.2464}} \\ \hline
Cars (16,185) \cite{krause20133d}& 21.485 & 25.496 & 1.6871 & 0.9749 & 0.7364 & 0.4231
& \cellcolor[HTML]{FFFFC7}{\textbf{14.408}}
& \cellcolor[HTML]{FFFFC7}{\textbf{1.0743}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.7028}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.2923}}
& \cellcolor[HTML]{FFFFC7}{\textbf{0.2431}}\\ \hline
Aerial Scenes (36,000) \cite{maggiori2017can} & \cellcolor[HTML]{FFFFC7}{\textbf{11.413}} & 14.498 & 2.1142 & 0.9798 & 1.1462 & 0.5158 & 12.602 & \cellcolor[HTML]{FFFFC7}{\textbf{0.9889}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.5887}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.1757}} & \cellcolor[HTML]{FFFFC7}{\textbf{0.1890}}\\ \hline
\end{tabular}}
\end{table*}
\vspace{-2mm}
\paragraph{Image Generation} \label{sec.results-generated-images}
Figure\ \ref{fig:GAN_results} shows examples of our HistoGAN-generated images.\
Each row shows samples generated from different domains using the corresponding input target colors.\
For each domain, we fixed the style vectors responsible for the coarse and middle styles to show our HistoGAN's response to changes in the target histograms.
Qualitative comparisons with the recent MixNMatch method \cite{li2020mixnmatch} are provided in Fig.\ \ref{fig:GAN_comparison_w_MixNMatch}.
To evaluate the potential improvement/degradation of the generated-image diversity and quality caused by our modification to StyleGAN, we trained StyleGAN~\cite{karras2020analyzing} with $m=16$ (i.e., the same as our model capacity) without our histogram modification.
We evaluated both models on different datasets, including our collected set of landscape images.
For each dataset, we generated 10,000 $256\!\times\!256$ images using the StyleGAN and our HistoGAN.
We evaluated the generated-image quality and diversity using the Frech\'et inception distance (FID) metric \cite{heusel2017gans}, computed from the second max-pooling features of the Inception model~\cite{szegedy2015going}.
We further evaluated the ability of StyleGAN to control colors of GAN-generated images by training a regression deep neural network (ResNet \cite{he2016deep}) to transform generated images back to the corresponding fine-style vectors.
These fine-style vectors are used by the last two blocks of StyleGAN and are responsible for controlling delicate styles, such as colors and lights \cite{karras2019style, karras2020analyzing}.
The training was performed for each domain separately using 100,000 training StyleGAN-generated images and their corresponding ``ground-truth'' fine-style vectors.
In the testing phase, we used the trained ResNet to predict the corresponding fine-style vectors of the target image---these target images were used to generate the target color histograms for HistoGAN's experiments. We then generated output images based on the predicted fine-style vectors of each target image.
In the evaluation of StyleGAN and HistoGAN, we used randomly selected target images from the same domain.
The Hellinger distance and KL divergence were used to measure the color errors between the histograms of the generated images and the target histogram; see Table \ref{table:results}.
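For reference, a histogram KL divergence of the kind used in this evaluation can be sketched as follows; the direction $\mathrm{KL}(\mathbf{H}_t \,\Vert\, \mathbf{H}_g)$ and the $\epsilon$ smoothing are illustrative assumptions, as the text does not specify them:

```python
import numpy as np

def kl_divergence(hist_t, hist_g, eps=1e-12):
    """KL divergence between two normalized histograms.
    eps avoids log(0) on empty bins."""
    p = (hist_t + eps) / (hist_t + eps).sum()
    q = (hist_g + eps) / (hist_g + eps).sum()
    return float(np.sum(p * np.log(p / q)))
```

As expected, the divergence vanishes for identical histograms and grows as the generated color distribution drifts from the target.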
\begin{figure}[b]
\centering
\vspace{-2mm}
\includegraphics[width=\linewidth]{comparison_w_HiDT.pdf}
\vspace{-6.5mm}
\caption{Comparison with the high-resolution daytime translation (HiDT) method \cite{anokhin2020high}.}\vspace{-2mm}
\label{fig:comparison_w_HiDT}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{compariosns_recoloring_class_based.pdf}
\vspace{-6mm}
\caption{Comparisons between our ReHistoGAN and other image color/style transfer methods, which are: Reinhard et al., \cite{reinhard2001color}, Xiao et al., \cite{xiao2006color}, Piti\'e and Kokaram \cite{pitie2007}, Nguyen et al., \cite{nguyen2014illuminant}, Gatys et al., \cite{gatys2016image}, and Sheng et al., \cite{sheng2018avatar}.\label{fig:compariosns_recoloring_class_based}}\vspace{-2mm}
\end{figure*}
\vspace{-2mm}
\paragraph{Image Recoloring}
Figure \ref{fig:face_recoloring} shows examples of image recoloring using our ReHistoGAN. A comparison with the recent high-resolution daytime translation (HiDT) method \cite{anokhin2020high} is shown in Fig.\ \ref{fig:comparison_w_HiDT}.
Additional comparisons with image recoloring and style transfer methods are shown in Fig.\ \ref{fig:compariosns_recoloring_class_based}.
Arguably, our ReHistoGAN produces image recoloring results that are visually more compelling than the results of other methods for image color/style transfer.
As shown in Fig.\ \ref{fig:compariosns_recoloring_class_based}, our ReHistoGAN produces realistic recoloring even when the target image is from a different domain than the input image, compared to other image style transfer methods (e.g., \cite{gatys2016image, sheng2018avatar}).
Lastly, we provide a qualitative comparison with the recent auto-recoloring method proposed by Afifi et al., \cite{afifi2019image} in Fig.\ \ref{fig:compariosns_auto_recoloring}.
In the shown example, our target histograms were dynamically generated by sampling from a pre-defined set of histograms and applying a linear interpolation between the sampled histograms (see Eq.\ \ref{eq.target_hist}).
\begin{figure}
\includegraphics[width=\linewidth]{compariosns_auto_recoloring.pdf}
\vspace{-6.5mm}
\caption{Automatic recoloring comparison with the recent method by Afifi et al., \cite{afifi2019image}.}
\label{fig:compariosns_auto_recoloring}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{colorization.pdf}
\vspace{-6.5mm}
\caption{Results of using our ReHistoGAN for a diverse image colorization.}
\label{fig:colorization}
\end{figure}
\vspace{-1mm}
\paragraph{What is Learned?}
Our method learns to map color information, represented by the target color histogram, to an output image's colors with a realism consideration in the recolored image. Maintaining realistic results is achieved by learning proper matching between the target colors and the input image's semantic objects (e.g., grass can be green, but not blue). To demonstrate this, we examine a trained ReHistoGAN model for an image colorization task, where the input image is grayscale. The input of a grayscale image means that our ReHistoGAN model has no information regarding objects' colors in the input image. Figure\ \ref{fig:colorization} shows outputs where the input has been ``colorized''. As can be seen, the output images have been colorized with good semantic-color matching based on the image's content.
\section{Conclusion} \label{sec.conclusion}
We have presented HistoGAN, a simple, yet effective, method for controlling colors of GAN-generated images.
Our HistoGAN framework learns how to transfer the color information encapsulated in a target histogram feature to the colors of a generated output image.\
To the best of our knowledge, this is the first work to control the color of GAN-generated images directly from color histograms.
Color histograms provide an abstract representation of image color that is decoupled from spatial information.
This allows the histogram representation to be less restrictive and suitable for GAN-generation across arbitrary domains.
We have shown that HistoGAN can be extended to control colors of real images in the form of the ReHistoGAN model.
Our recoloring results are visually more compelling than currently available solutions for image recoloring.
Our image recoloring also enables ``auto-recoloring'' by sampling from a pre-defined set of histograms.
This allows an image to be recolored to a wide range of visually plausible variations.
HistoGAN can serve as a step towards intuitive color control for GAN-based graphic design and artistic endeavors.
\newcommand{\beginsupplement}{%
\setcounter{table}{0}
\renewcommand{\thetable}{S\arabic{table}}%
\setcounter{figure}{0}
\renewcommand{\thefigure}{S\arabic{figure}}%
}
\section{Supplementary Material}
\beginsupplement
\subsection{Details of Our Networks} \label{sec:network}
Our discriminator network, used in all of our experiments, consists of a sequence of $\log_2(N) - 1$ residual blocks, where $N$ is the image width/height, and the last layer is a fully connected (fc) layer that produces a scalar feature. The first block accepts a three-channel input image and produces $m$ output channels. Then, each block $i$ produces $2m_{i-1}$ output channels (i.e., doubling the number of output channels of the previous block). The details of the residual blocks used to build our discriminator network are shown in Fig.\ \ref{fig:discriminator_block}.
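The resulting channel schedule can be sketched directly from this description (assuming the first block is block $0$):

```python
import math

def discriminator_channels(image_size, m=16):
    """Per-block output channels of the discriminator: log2(N) - 1
    residual blocks, the first producing m channels and each later
    block doubling the previous block's output."""
    n_blocks = int(math.log2(image_size)) - 1
    return [m * (2 ** i) for i in range(n_blocks)]

# For the 256 x 256 training images and m = 16 used in the experiments:
channels = discriminator_channels(256)
```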
Figure~\ref{fig:recoloring-design_} provides the details of our encoder, decoder and GAN blocks used in our ReHistoGAN (used for image recoloring). As shown, we modified the last two blocks of our HistoGAN's to accept the latent feature passed from the first two blocks of our encoder. This modification helps our HistoGAN's head to consider both information of the input image structure and the target histogram in the recoloring process.
\begin{figure}[b]
\centering
\includegraphics[width=0.72\linewidth]{discriminator_block.pdf}
\vspace{-2mm}
\caption{Details of the residual block used to build our discriminator network. The terms P and S refer to the padding and stride used in each layer.}\vspace{-2mm}
\label{fig:discriminator_block}
\end{figure}
\subsection{Training Details} \label{sec:training}
We train our networks using an NVIDIA TITAN X (Pascal) GPU. For HistoGAN training, we optimized both the generator and discriminator networks using the diffGrad optimizer \cite{8939562}. In all experiments, we set the number of histogram bins, $h$, to 64 and the fall-off parameter of our histogram's bins, $\tau$, to 0.02. We adopted the exponential moving average of the generator network's weights \cite{karras2019style, karras2020analyzing} and applied the path length penalty, introduced in StyleGAN \cite{karras2020analyzing}, every 32 iterations when training our generator network. Due to hardware limitations, we used a mini-batch size of 2 with accumulated gradients every 16 iteration steps, and we set the image dimension, $N$, to 256. We set the scale factor of the Hellinger distance loss, $\alpha$, to 2 (see Sec.\ \ref{sec:ablations} for an ablation study).
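Two of these training details can be sketched in a few lines; the EMA decay value below is an assumption, as it is not stated in the text:

```python
def ema_update(ema_weights, current_weights, decay=0.999):
    """Exponential moving average of the generator weights; the 0.999
    decay is an illustrative assumption."""
    return [decay * e + (1.0 - decay) * c
            for e, c in zip(ema_weights, current_weights)]

# Gradient accumulation: a mini-batch of 2 accumulated over 16 iteration
# steps yields an effective batch size of 32 per optimizer update.
EFFECTIVE_BATCH = 2 * 16
```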
As mentioned in the main paper, we trained our HistoGAN using several domain datasets, including: human faces \cite{karras2019style}, flowers \cite{nilsback2008automated}, cats \cite{catdataset}, dogs \cite{khosla2011novel}, birds \cite{wah2011caltech}, anime faces \cite{animedataset}, human hands \cite{afifi201911k}, bedrooms \cite{yu2015lsun}, cars \cite{krause20133d}, and aerial scenes \cite{maggiori2017can}. We further trained our HistoGAN using 4,316 landscape images collected from Flickr. The collected images have one of the following copyright licenses: no known copyright restrictions, Public Domain Dedication (CC0), or Public Domain Mark. See Fig.\ \ref{fig:landscape_dataset} for representative examples from the landscape set.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{model_recoloring_supp.pdf}
\vspace{-5.5mm}
\caption{Details of our ReHistoGAN network. We modified the last two blocks of our HistoGAN by adding a gate for the processed skipped features from the first two blocks of our encoder.}\vspace{-2mm}
\label{fig:recoloring-design_}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{landscape_dataset.png}
\vspace{-5.5mm}
\caption{Examples taken from our set of 4,316 landscape images collected from Flickr.}\vspace{-2mm}
\label{fig:landscape_dataset}
\end{figure}
To train our ReHistoGAN, we used the diffGrad optimizer \cite{8939562} with the same mini-batch size used to train our HistoGAN. We trained our network using the hyperparameters $\alpha=2$, $\beta=1.5$, $\gamma=32$ for 100,000 iterations. Then, we continued training using $\alpha=2$, $\beta=1$, $\gamma=8$ for an additional 30,000 iterations to reduce potential artifacts in recoloring (see Sec.\ \ref{sec:ablations} for an ablation study).
\subsection{Ablation Studies} \label{sec:ablations}
We carried out a set of ablation experiments to study the effect of different values of hyperparameters used in the main paper. Additionally, we show results obtained by variations in our loss terms.
We begin by studying the effect of the scale factor, $\alpha$, used in the loss function to train our HistoGAN. This scale factor controls the strength of the histogram loss term. In this set of experiments, we used the 11K Hands dataset \cite{afifi201911k} as our target domain and trained our HistoGAN with the following values of $\alpha$: 0.2, 2, 4, 8, and 16. Table \ref{table:ablation_results} shows the evaluation results using the Frech\'et inception distance (FID) metric \cite{heusel2017gans}, the KL divergence, and the Hellinger distance. The KL divergence and Hellinger distance were used to measure the similarity between the target histogram and the histogram of GAN-generated images. Qualitative comparisons are shown in Fig.\ \ref{fig:alpha_ablation}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{alpha_ablation.pdf}
\vspace{-5.5mm}
\caption{Results obtained by training our HistoGAN on hand images \cite{afifi201911k} using different values of $\alpha$.}\vspace{-2mm}
\label{fig:alpha_ablation}
\end{figure}
\begin{table}[]
\centering
\caption{Results of our HistoGAN using different values of $\alpha$. In this set of experiments, we used the Hands dataset \cite{afifi201911k} as our target domain. The term FID stands for the Frech\'et inception distance metric \cite{heusel2017gans}. The term KL Div. refers to the KL divergence between the histograms of the input image and generated image, while the term H. dis. refers to Hellinger distance.\label{table:ablation_results}}
\scalebox{0.87}{
\begin{tabular}{|c|c|c|c|}
\hline
& & \multicolumn{2}{c|}{RGB-$uv$ hist.} \\ \cline{3-4}
\multirow{-2}{*}{$\alpha$} & \multirow{-2}{*}{FID} & KL Div. & H dist. \\ \hline
0.2 & 1.9950 & 0.3935 & 0.3207 \\ \hline
\rowcolor[HTML]{FFFFC7}
2 & \textbf{2.2438} & \textbf{0.0533} & \textbf{0.1085} \\ \hline
4 & 6.8750 & 0.0408 & 0.0956 \\ \hline
8 & 9.4101 & 0.0296 & 0.0822 \\ \hline
16 & 15.747 & 0.0237 & 0.0743 \\ \hline
\end{tabular}}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{alpha_beta_gamma_ablation.pdf}
\vspace{-6.5mm}
\caption{Results of recoloring by training our recoloring network using different values of $\alpha$, $\beta$, and $\gamma$ hyperparameters. The highlighted results refer to the settings used to produce the reported results in the main paper and the supplementary materials.}\vspace{-2mm}
\label{fig:alpha_beta_gamma_ablation}
\end{figure*}
Figure \ref{fig:alpha_beta_gamma_ablation} shows examples of recoloring results obtained by training ReHistoGAN models with different combinations of the $\alpha$, $\beta$, and $\gamma$ values. As can be seen, a lower value of the histogram loss scale factor, $\alpha$, causes our network to ignore the target colors, while higher values of the discriminator loss scale factor, $\gamma$, make our method fixate on producing realistic output images regardless of the target colors (i.e., it tends to reproduce the input image as is).
In the recoloring loss, we used a reconstruction loss term to retain the input image's spatial details in the output recolored image. Our reconstruction loss is based on the derivative of the input image. We examined two different kernels: the vertical and horizontal $3\!\times\!3$ Sobel kernels (i.e., a first-order derivative approximation) and the $3\!\times\!3$ Laplacian kernel (i.e., the second-order derivative). We found that training with either kernel gives reasonably good results, while the Laplacian kernel produces more compelling results in most cases; see Fig.\ \ref{fig:sobel_vs_laplacian} for an example.
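For reference, the two kernel families compared here can be written down explicitly; the exact coefficients follow the standard conventions, which is an assumption:

```python
import numpy as np

# First-order derivative approximation: horizontal and vertical Sobel kernels.
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

# Second-order derivative: 3x3 Laplacian kernel (4-neighbor variant).
LAPLACIAN_3X3 = np.array([[0.0,  1.0, 0.0],
                          [1.0, -4.0, 1.0],
                          [0.0,  1.0, 0.0]])
```

All three kernels sum to zero, so each responds only to spatial structure and ignores constant (flat-color) regions.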
\begin{figure}
\includegraphics[width=\linewidth]{sobel_vs_laplacian.pdf}
\vspace{-5.5mm}
\caption{Results of two different kernels used to compute the reconstruction loss term.}\vspace{-2mm}
\label{fig:sobel_vs_laplacian}
\end{figure}
\begin{figure}[b]
\includegraphics[width=\linewidth]{variance_loss_supp.pdf}
\vspace{-5.5mm}
\caption{The impact of the variance loss term. The shown results were obtained by training our ReHistoGAN with and without the variance loss term.}\vspace{-2mm}
\label{fig:variance_loss_supp}
\end{figure}
In the main paper, we introduced a variance loss term to encourage our network to avoid the global-color-cast solution for image recoloring. Figure \ref{fig:variance_loss_supp} shows an example of the global color cast problem, where the network applies a global color shift to the input image to match the target histogram. As shown in Fig.\ \ref{fig:variance_loss_supp}, after training our network with the variance loss, this problem is reduced.
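The intuition is that a pure global color cast (adding a constant to every pixel) leaves the per-channel color variance unchanged, so a variance-based statistic can flag such degenerate solutions. The sketch below illustrates that statistic; the exact weighting of the variance loss in the paper may differ from this simple form:

```python
import numpy as np

def channel_variance(img):
    """Per-channel color variance of an H x W x 3 image in [0, 1]."""
    return img.reshape(-1, 3).var(axis=0)

def variance_penalty(out_img, target_var):
    """Illustrative variance-style penalty: compare the output's color
    variance against an expected (target) variance. A global color
    shift cannot reduce this penalty, since it does not change the
    variance at all.
    """
    return float(np.abs(channel_variance(out_img) - target_var).sum())
```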
\subsection{Universal ReHistoGAN Model}\label{sec:universal}
As is the case with most GAN methods, our ReHistoGAN targets a specific object domain to achieve the image recoloring task. This restriction may hinder the generalization of our method to images taken from arbitrary domains. To deal with that, we collected images from several domains, aiming to represent a ``universal'' object domain.
Specifically, our training set contains $\sim$2.4 million images collected from different image datasets. These datasets are: a collection from the Open Images dataset \cite{kuznetsova2020open}, the MIT-Adobe FiveK dataset \cite{fivek}, the Microsoft COCO dataset \cite{lin2014microsoft}, the CelebA dataset \cite{liu2015deep}, the Caltech-UCSD birds-200-2011 dataset \cite{wah2011caltech}, the Cats dataset \cite{catdataset}, the Dogs dataset \cite{khosla2011novel}, the Cars dataset \cite{krause20133d}, the Oxford Flowers dataset \cite{nilsback2008automated}, the LSUN dataset \cite{yu2015lsun}, the ADE20K dataset \cite{zhou2017scene, zhou2019semantic}, and the FFHQ dataset \cite{karras2019style}. We also added Flickr images collected using the following keywords: $\texttt{landscape}$, $\texttt{people}$, $\texttt{person}$, $\texttt{portrait}$, $\texttt{field}$, $\texttt{city}$, $\texttt{sunset}$, $\texttt{beach}$, $\texttt{animals}$, $\texttt{living room}$, $\texttt{home}$, $\texttt{house}$, $\texttt{night}$, $\texttt{street}$, $\texttt{desert}$, $\texttt{food}$. We excluded any grayscale image from the collected image set.
We trained our ``universal'' model using $m=18$ on this collected set of 2,402,006 images from several domains. The diffGrad optimizer \cite{8939562} was used to minimize the same generator loss described in the main paper with the following hyperparameters: $\alpha=2$, $\beta=1.5$, $\gamma=32$ for 150,000 iterations. Then, we used $\alpha=2$, $\beta=1$, $\gamma=8$ to train the model for an additional 350,000 iterations. We set the mini-batch size to 8, with an accumulated gradient every 24 iterations. Figure \ref{fig:object_specific_vs_universal_recoloring} shows results of our domain-specific and universal models for image recoloring. As can be seen, both models produce realistic recoloring, though the universal model tends to produce recolored images with less vivid colors compared to our domain-specific model. Additional examples of auto recoloring using our universal model are shown in Fig.\ \ref{fig:universal_model_auto_results}.
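The accumulated-gradient schedule above (mini-batch of 8, one optimizer step every 24 iterations, i.e., an effective batch of $8\times 24 = 192$) can be sketched in a framework-agnostic way. The snippet below is illustrative: `grad_fn` stands in for back-propagation through the network, and a plain SGD update replaces the diffGrad step for simplicity:

```python
ACCUM_STEPS = 24  # optimizer steps once every 24 mini-batches

def accumulated_update(param, micro_batches, grad_fn, lr=0.1):
    """Average gradients over ACCUM_STEPS micro-batches, then apply a
    single (SGD-style) parameter update. This is mathematically the
    same as one step with a 24x larger batch, at 1/24 of the memory."""
    grad = 0.0
    for batch in micro_batches[:ACCUM_STEPS]:
        grad += grad_fn(param, batch) / ACCUM_STEPS  # running average
    return param - lr * grad                          # single update
```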
\begin{figure}
\centering
\includegraphics[width=\linewidth]{object_specific_vs_universal_recoloring.pdf}
\vspace{-5.5mm}
\caption{Results of domain-specific and universal ReHistoGAN models. We show results of using a given target histogram for recoloring and two examples of the auto recoloring results of each model.}\vspace{-2mm}
\label{fig:object_specific_vs_universal_recoloring}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{universal_model_auto_results.pdf}
\vspace{-5.5mm}
\caption{Auto recoloring using our universal ReHistoGAN model.}\vspace{-2mm}
\label{fig:universal_model_auto_results}
\end{figure}
\subsection{Limitations} \label{sec:limitations}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{failure_cases.pdf}
\vspace{-5.5mm}
\caption{Failure cases of HistoGAN and ReHistoGAN. Our HistoGAN sometimes fails to reflect all the colors of the target histogram in the generated image. Color bleeding is another problem that can occur in ReHistoGAN's results, where our network cannot properly allocate the target (or sampled) histogram colors in the recolored image.}\vspace{-2mm}
\label{fig:failure_cases}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{fixing_failure_cases.pdf}
\vspace{-4mm}
\caption{To reduce potential color-bleeding artifacts, it is possible to apply a post-process color transfer that maps the input image's colors to those of our initial recolored image. The results of this strategy are better than applying the color transfer to the input image in the first place. Here, we use the color transfer method proposed by Piti\'e and Kokaram \cite{pitie2007} as our post-process color transfer method. We also show the results of directly applying Piti\'e and Kokaram's \cite{pitie2007} method to the input image.}\vspace{-2mm}
\label{fig:fixing_failure_cases}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{dealing_with_any_size.pdf}
\vspace{-5.5mm}
\caption{We apply the bilateral guided upsampling \cite{chen2016bilateral} as a post-processing step to reduce potential artifacts when dealing with high-resolution images in the inference phase. In the shown example, the input image has $2048\!\times\!2048$ pixels.}\vspace{-2mm}
\label{fig:dealing_with_any_size}
\end{figure}
Our method fails in some cases, where the trained HistoGAN could not properly extract the target color information represented in the histogram feature. This problem is due to the inherent limitation of the 2D projected representation of the original target color distribution, where different colors are mapped to the same chromaticity value in the projected space. This is shown in Fig.\ \ref{fig:failure_cases}-top, where the GAN-generated images do not have all the colors in the given target histogram. Another failure case can occur in image recoloring, where the recolored images could have some color-bleeding artifacts due to errors in allocating the target/sampled histogram colors in the recolored image. This is shown in Fig.\ \ref{fig:failure_cases}-bottom.
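The chromaticity-collision limitation can be demonstrated in a few lines. Below is a simplified 2-D log-chromaticity projection of the kind underlying histogram features (the specific $(\log(R/G), \log(B/G))$ form is chosen here for illustration): two distinct colors with the same hue but different intensities project to the same point, so the histogram cannot distinguish them:

```python
import math

def log_chroma(rgb):
    """Simplified 2-D log-chromaticity projection of an RGB color:
    (u, v) = (log(R/G), log(B/G)). Intensity information is discarded,
    so distinct colors can collide in the projected space."""
    r, g, b = rgb
    return (math.log(r / g), math.log(b / g))

dark = (50, 100, 50)
bright = (100, 200, 100)  # same hue, twice the intensity
# Both map to (log 0.5, log 0.5): the projection cannot tell them apart.
```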
\begin{figure}
\includegraphics[width=\linewidth]{compariosn_w_styleGAN.pdf}
\vspace{-5.5mm}
\caption{Comparison with generated images using StyleGAN \cite{karras2020analyzing} with latent space projection (see the main paper for more details) and our results.}\vspace{-2mm}
\label{fig:compariosn_w_styleGAN}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{comparison_w_MixNMatch_supp.pdf}
\vspace{-5.5mm}
\caption{Additional comparison with the MixNMatch method \cite{li2020mixnmatch}. In these examples, the target images were used as input shape and background images for the MixNMatch method.}
\label{fig:comparison_w_MixNMatch_supp}
\end{figure}
\subsection{Post-Processing} \label{sec:post-processing}
As discussed in Sec. \ref{sec:limitations}, our method sometimes produces results with color bleeding, especially when the target histogram feature has an unsuitable color distribution for the content of the input image. This color-bleeding problem can be mitigated using a post-process color transfer between the input image and our initial recoloring. Surprisingly, this post-processing mapping produces better results than adopting the mapping in the first place---namely, applying the color transfer mapping without our intermediate recoloring result.
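To convey the idea of such a post-process transfer, the sketch below uses a simple global statistics-matching transfer (a Reinhard-style stand-in; the paper actually uses Piti\'e and Kokaram's method, which is considerably more sophisticated). It maps the input image's colors toward those of our initial recolored result, so the final output keeps the input's spatial structure while adopting the recoloring's color statistics:

```python
import numpy as np

def mean_std_transfer(source, reference):
    """Global color transfer (simple stand-in for Pitie and Kokaram's
    method): match the per-channel mean and standard deviation of
    `source` to those of `reference`. Both are H x W x 3 in [0, 1]."""
    s = source.reshape(-1, 3).astype(float)
    r = reference.reshape(-1, 3).astype(float)
    out = (s - s.mean(0)) / (s.std(0) + 1e-8) * r.std(0) + r.mean(0)
    return out.reshape(source.shape).clip(0.0, 1.0)

# Post-process usage: source = original input image,
# reference = our initial ReHistoGAN recoloring.
```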
\begin{figure*}
\includegraphics[width=\linewidth]{histGAN_as_styleGAN_supp.pdf}
\vspace{-6.5mm}
\caption{Our HistoGAN can be used to generate an ``unlimited'' number of random samples, exactly like traditional StyleGANs \cite{karras2019style, karras2020analyzing}, by sampling from a pre-defined set of histograms to generate target histograms. The figure shows images generated by StyleGAN \cite{karras2019style, karras2020analyzing} and by our HistoGAN. In each row of the StyleGAN-generated images, we fixed the fine-style vector of the last two blocks of the StyleGAN, as these blocks are shown to control the fine style of the generated image \cite{karras2020analyzing}. We also fixed the generated histogram for each row of our HistoGAN-generated images.}\vspace{-2mm}
\label{fig:histGAN_as_styleGAN_supp}
\end{figure*}
Figure \ref{fig:fixing_failure_cases} shows an example of applying Piti\'e and Kokaram's method \cite{pitie2007} as a post-processing color transfer to map the colors of the input image to the colors of our recolored image. In the shown figure, we also show the result of using the same color transfer method---namely, Piti\'e and Kokaram's method \cite{pitie2007}---to transfer the colors of the input image directly to the colors of the target image. As shown, the result of using our post-process strategy has a better perceptual quality.
Note that except for this figure (i.e., Fig.\ \ref{fig:fixing_failure_cases}), we \textit{did not} adopt this post-processing strategy to produce the reported results in the main paper or the supplementary materials. We discuss it here, for completeness, as a solution to reduce the potential color-bleeding problem.
As our image-recoloring architecture is a fully convolutional network, we can process testing images of any arbitrary size. However, as we trained our models with a specific range of effective receptive fields (i.e., our input image size is 256), processing images with very high resolution may cause artifacts. To that end, we follow the post-processing approach used in \cite{afifi2020learning} to deal with high-resolution images (e.g., 16-megapixel) without affecting the quality of the recolored image.
\begin{figure*}
\includegraphics[width=\linewidth]{qualitative_GAN_results_supp.pdf}
\vspace{-5.5mm}
\caption{Additional examples of generated images using our HistoGAN. The target colors histograms were computed from the shown target images (left column).}\vspace{-2mm}
\label{fig:qualitative_GAN_results_supp}
\end{figure*}
Specifically, we resize the input image to $256\!\times\!256$ pixels before processing it with our network. Afterward, we apply the bilateral guided upsampling \cite{chen2016bilateral} to construct the mapping from the resized input image to our recoloring result. Then, we apply the constructed bilateral grid to the input image at its original dimensions. Figure \ref{fig:dealing_with_any_size} shows an example of our recoloring result for a high-resolution image ($2048\!\times\!2048$ pixels). As can be seen, our result has the same resolution as the input image with no artifacts.
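The overall pipeline can be sketched with a much simpler stand-in for the bilateral grid: run the recoloring at low resolution, then transfer the resulting per-pixel color ratios back onto the full-resolution input. This gain-map approximation (entirely ours, for illustration) lacks the edge-aware behavior of the bilateral guided upsampling, but shows the structure of the approach:

```python
import numpy as np

def downscale(img, f):
    """Box-filter downscale of an H x W x 3 image by integer factor f
    (H and W are assumed divisible by f)."""
    h, w, c = img.shape
    return img.reshape(h // f, f, w // f, f, c).mean(axis=(1, 3))

def recolor_full_res(img, recolor_fn, f=8):
    """Simplified stand-in for bilateral guided upsampling: the
    recoloring network only ever sees a small copy of the image."""
    lo = downscale(img, f)
    lo_out = recolor_fn(lo)                        # network on small image
    gain = lo_out / (lo + 1e-6)                    # low-res color change
    gain_full = gain.repeat(f, axis=0).repeat(f, axis=1)
    return img * gain_full                         # apply at full resolution
```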
\begin{figure*}
\includegraphics[width=\linewidth]{compariosns_recoloring_class_based_supp.pdf}
\vspace{-5.5mm}
\caption{Additional comparisons with image recoloring/style transfer methods. We compare our results with results of the following methods: Reinhard et al., \cite{reinhard2001color}, Xiao et al., \cite{xiao2006color}, Piti\'e and Kokaram \cite{pitie2007}, Nguyen et al., \cite{nguyen2014illuminant}, and Sheng et al., \cite{sheng2018avatar}.}\vspace{-2mm}
\label{fig:compariosns_recoloring_class_based_supp}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{qualitative_recoloring_class_based.pdf}
\vspace{-5.5mm}
\caption{Additional results for image recoloring. We recolor the input images, shown on the right, by feeding our network with the target histograms of the images shown at the top.}\vspace{-1mm}
\label{fig:qualitative_recoloring_class_based}
\end{figure*}
\subsection{Additional Results} \label{sec:additional-results}
This section provides additional results generated by our HistoGAN and ReHistoGAN. As discussed in the main paper, we trained a regression ResNet \cite{he2016deep} model to learn the back-projection from the generated images to the corresponding fine-style vectors of StyleGAN \cite{karras2020analyzing}. This regression model was used to compare HistoGAN's and StyleGAN's ability to control the generated images' colors given a target color distribution. Figure \ref{fig:compariosn_w_styleGAN} shows a qualitative comparison between the results of our HistoGAN and StyleGAN with this projection approach. We show additional qualitative comparisons with the recent MixNMatch method \cite{li2020mixnmatch} in Fig.\ \ref{fig:comparison_w_MixNMatch_supp}. In the shown figures, we report the KL divergence and the Hellinger distance between the histograms of the GAN-generated images and the target histogram.
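The two reported histogram-similarity metrics are standard and can be computed as follows (a self-contained sketch; the small epsilon added for numerical stability is our choice):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two histograms (normalized internally)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def hellinger(p, q):
    """Hellinger distance between two histograms; bounded in [0, 1],
    with 0 for identical and 1 for disjoint distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

Lower values of either metric indicate that the GAN-generated image's histogram is closer to the target histogram.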
Our HistoGAN, along with the sampling procedure used for auto recoloring in the main paper, can be turned into a traditional GAN method, where no user intervention is needed to input the target histograms. Figure \ref{fig:histGAN_as_styleGAN_supp} shows an example of using our sampling procedure to generate random histogram samples, which HistoGAN then uses to generate an ``unlimited'' number of samples. In the shown figure, we compare our HistoGAN results, using the generated histograms, with StyleGAN \cite{karras2020analyzing}. In Fig.\ \ref{fig:histGAN_as_styleGAN_supp}-(A), each row shows generated examples with fixed fine-style vectors, which are used by the last two blocks of the StyleGAN, as these blocks are shown to control the fine style (e.g., colors, lighting, etc.) of the generated image \cite{karras2019style, karras2020analyzing}. In Fig.\ \ref{fig:histGAN_as_styleGAN_supp}-(B), each row shows generated images using our HistoGAN with a fixed generated histogram. As shown in the figure, our HistoGAN generates samples with a higher color diversity than StyleGAN.
Figure \ref{fig:qualitative_GAN_results_supp} shows additional HistoGAN-generated images from different domains. In each row, we show example images generated using the corresponding input target colors. We fixed the coarse- and middle-style vectors, for each domain, to show the response of our HistoGAN to changes in the target histograms.
\begin{figure*}
\includegraphics[width=\linewidth]{comparions_universal_model.pdf}
\vspace{-5.5mm}
\caption{Comparisons between our universal ReHistoGAN and the methods proposed by Shih et al., \cite{shih2013data} and Laffont et al., \cite{laffont2014transient} for color transfer.}\vspace{-2mm}
\label{fig:comparions_universal_model}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{comparison_with_diverse_colorization.pdf}
\vspace{-5.5mm}
\caption{Comparisons between our ReHistoGAN and the diverse colorization method proposed by Deshpande et al., \cite{deshpande2017learning}. For our ReHistoGAN, we show the results of our domain-specific and universal models.}\vspace{-2mm}
\label{fig:comparison_with_diverse_colorization}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{compariosns_auto_recoloring_supp.pdf}
\vspace{-5.5mm}
\caption{Comparison with the auto image recoloring method proposed in \cite{afifi2019image}. Our recoloring results were produced by domain-specific networks, except for the last three rows, where our results were generated by the universal model.}\vspace{-2mm}
\label{fig:compariosns_auto_recoloring_supp}
\end{figure*}
In the main paper, we showed comparisons with different image recoloring and style transfer methods. Figure \ref{fig:compariosns_recoloring_class_based_supp} shows additional qualitative comparisons. Note that Gatys et al.'s optimization method \cite{gatys2015neural} takes $\sim$4 minutes to process a single image. In contrast, our ReHistoGAN processes a single image in $\sim$0.5 seconds without the guided upsampling procedure \cite{chen2016bilateral}, and in $\sim$22 seconds with an unoptimized implementation of the guided upsampling, using a single GTX 1080 GPU. Further qualitative examples are shown in Fig.\ \ref{fig:qualitative_recoloring_class_based}. As can be seen, our ReHistoGAN successfully and naturally transfers the target colors to the recolored images.
As mentioned in the main paper, there are a few attempts to achieve auto recoloring (e.g., \cite{laffont2014transient, deshpande2017learning, afifi2019image, anokhin2020high}). The high-resolution daytime translation (HiDT) method \cite{anokhin2020high}, for example, achieves auto style transfer by sampling from a pre-defined set of target styles. We compared our method with the HiDT method in the main paper, where we used one of the pre-defined target styles as our target histogram. This idea of having a pre-defined set of target styles was originally proposed in \cite{laffont2014transient}, where a set of transient attributes is used to search a dataset of different target styles. These methods, however, require the semantic content of the target styles to match that of the training/testing input images. Unlike these auto recoloring/style transfer methods, our ReHistoGAN can deal with histograms taken from any arbitrary domain, as shown in our results in the main paper and these supplementary materials. In Fig.\ \ref{fig:comparions_universal_model}, we show qualitative comparisons of the recoloring results using our universal ReHistoGAN and the method proposed in \cite{laffont2014transient}.
Another strategy for image recoloring is to learn a diverse colorization model. That is, the input image is converted to grayscale, and then a trained method for diverse colorization can generate different colorized versions of the input image. In Fig.\ \ref{fig:comparison_with_diverse_colorization}, we show a qualitative comparison with the diverse colorization method proposed by Deshpande et al., \cite{deshpande2017learning}.
Lastly, we show additional qualitative comparisons with the recent auto-recoloring method proposed by Afifi et al., \cite{afifi2019image} in Fig.\ \ref{fig:compariosns_auto_recoloring_supp}. The figure shows the results of domain-specific ReHistoGAN models (the first four rows) and the universal ReHistoGAN model (the last three rows). As can be seen from the shown figures, our ReHistoGAN arguably produces more realistic recoloring compared to the recoloring results produced by other auto recoloring methods.
\section*{Abstract}
{\bf
The first Workshop on Tau Lepton Physics took place at Orsay in 1990.
The evolution of the field and some physics highlights are briefly described, following the presentations discussed at the fifteen $\boldsymbol{\tau}$ workshops
that have been held since then.
}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Orsay90b.jpg}
\caption{Bernard Jean-Marie (left), Martin Perl (center) and Michel Davier (right) at the first workshop on Tau Lepton Physics (Orsay, 1990) \cite{Davier:1991jq}.}
\label{fig:orsay90}
\end{figure}
In 2020 we were supposed to celebrate the $30^{\mathrm{th}}$ anniversary of the
Workshop on Tau Lepton Physics, a very successful series of scientific meetings \cite{Davier:1991jq,Gan:1992fum,Rolandi:1995yx,Smith:1997iq,PichZardoya:1999yv,Sobie:2001it,Schalk:2002zs,Ohshima:2005iv,Cei:2007zza,Bondar:2009zz,Lafferty:2011zz,Hayasaka:2014xya,Stahl:2015oci,Yuan:2017tgx,Proceedings:2019ihn}
that was initiated in 1990, at Orsay, by Michel Davier and Bernard Jean-Marie \cite{Davier:1991jq}.
Owing to the covid pandemic, the $16^{\mathrm{th}}$ Tau Workshop and this celebration have finally taken place on-line, with a one-year delay.
Meanwhile, the $\tau$ community was shocked by the sad losses of our friends Simon Eidelman and Olga Igonkina, summary speaker and main organizer, respectively, of the TAU 2018 Workshop \cite{Proceedings:2019ihn}, who are deeply missed by all of us.
In the following sections, I try to describe the 1990 status and the posterior evolution of the field, through a selection of physics highlights.
Obviously, I cannot cover the large number of excellent contributions discussed in the fifteen workshops held so far. I apologize in advance for the many omissions in this incomplete and very personal (subjective) overview. A long list of relevant references can be found in my 2014 review on $\tau$ physics \cite{Pich:2013lsa}.
\section{Orsay 1990: a Successful and Inspiring Meeting}
Since its discovery in 1975 \cite{Perl:1975bf} at the SPEAR $e^+e^-$ storage ring,
the $\tau$ properties were investigated by many experiments, finding a quite reasonable agreement with the pioneering predictions of Yung-Su Tsai \cite{Tsai:1971vv}. The status before the Orsay meeting was nicely summarized in 1988 by Barry C. Barish and Ryszard Stroynowski in an extensive Physics Report \cite{Barish:1987nj}, and it was far from being satisfactory. Tau physics was not yet a field on its own, since it had just developed as a ``by-product of general-purpose $e^+e^-$ detectors'' (B. Barish, in \cite{Davier:1991jq}). The $\tau$ data had relatively large uncertainties and
exhibited internal inconsistencies. In particular, many efforts were being devoted to understand two long-standing anomalies:
\begin{enumerate}
\item
The so-called missing one-prong problem, a $3.3\,\sigma$ deficit of the sum of exclusive $\tau$-decay branching ratios into channels with one single charged particle, $\mathrm{B}_1^{\mathrm{excl}} = (81.0\pm 1.5)\%$, with respect to the inclusive measurement
$\mathrm{B}_1^{\mathrm{incl}} = (86.1\pm 0.3)\%$.
\item
A sizeable $2.7\,\sigma$ discrepancy between the
$\tau$ lifetime, $\tau_\tau^{\mathrm{exp}} = (3.04\pm 0.06)\cdot 10^{-13}$~s, and its Standard Model (SM) prediction,
$\tau_\tau^{\mathrm{SM}} = (2.81\pm 0.06)\cdot 10^{-13}$~s, obtained from the measured branching ratio of the decay $\tau\to\nu_\tau e\bar\nu_e$, $\mathrm{B}_e^{\mathrm{exp}} = (17.78\pm 0.32)\%$.
\end{enumerate}
A much larger anomaly, the 1987 HRS claim \cite{Derrick:1987sp} of a huge 5\% $\tau^+$ branching ratio into the G-parity-violating $\eta\pi^+\bar\nu_\tau$ final state, had just been dismissed on purely experimental grounds.
A workshop devoted to studying the physics potential of a low-energy $\tau$-charm factory ($\tau$cF) was organised in 1989 at SLAC \cite{Proceedings:1989lya}, triggering a renewed interest in this type of physics. It was followed by several other $\tau$cF meetings at different sites (Spain, USA, China\ldots) that would later culminate in the BEPCII project in Beijing. However, with the available luminosity, charm and charmonium were the clear physics priorities of this $\tau$cF.
The start of the LEP operation in August 1989 was the main motivation to organise a topical workshop fully devoted to the $\tau$ lepton. Although the initially planned scientific programme of LEP had not paid much attention to $\tau$ physics, it was soon realised that this $e^+e^-$ collider was an excellent environment to perform precise measurements of the $\tau$ properties.
An increasing interest in this third-generation particle was clearly manifested by the large number (117) of participants attending the Orsay meeting, which triggered many ideas, suggestions and lively discussions. The first, still very preliminary, analyses of the LEP data were already presented (H.-S. Chen, D.E. Klem, G.G.G. Massaro, S. Orteu, S. Snow, A. Stahl, P. Vaz, M. Winter, F.~Zomer), showing the high physics potential of the new collider. In the Orsay proceedings \cite{Davier:1991jq}, one can already find first studies on topics that would later become common ingredients of $\tau$ research: isospin relations with $e^+e^-$ data (S. Eidelman and V. Ivanchenko), polarization analysers (A.~Rougé), resonance studies (J. K\"uhn), etc. I was invited to
discuss a possible determination of the strong coupling ($\Lambda_{\overline{\mathrm{MS}}}$) from the inclusive $\tau$ decay width, suggested by Stephan Narison and myself \cite{Narison:1988ni}.
It was a quite bold and heterodox proposal at the time, and my talk was finally scheduled in the new-physics section.
\section{The Golden Age}
The next workshops on Tau Lepton Physics (Columbus 1992 \cite{Gan:1992fum}, Montreux 1994 \cite{Rolandi:1995yx}, Estes Park 1996~\cite{Smith:1997iq}, Santander 1998 \cite{PichZardoya:1999yv}, Victoria 2000 \cite{Sobie:2001it}) witnessed a fast and drastic qualitative change on the status of $\tau$ physics, with lots of good data coming from CLEO, LEP and SLD, together with the last ARGUS analyses.
In 1992, the disturbing $\tau$ anomalies were already solved (M.~Davier, W.J.~Marciano in \cite{Gan:1992fum}). A tight (95\% CL) limit on unmeasured decay modes was set with the LEP data, $\mathrm{B}_{\mathrm{unseen}}< 0.11\%$ (M.~Davier in \cite{Gan:1992fum}), and BES released a very precise measurement of the $\tau$ mass (H.~Marsiske in \cite{Gan:1992fum}), slightly below the previous world average. It was followed by a precise ALEPH measurement of the $\tau$ lifetime, subsequently confirmed by the other LEP detectors and SLD, which shifted $\tau_\tau$ to smaller values (M. Davier in \cite{Rolandi:1995yx}). Those measurements are compared with their 2014 values in Figs.~\ref{fig:mass} and \ref{fig:lifetime}.
The current PDG averages are not much different:
$m_\tau = (1776.86 \pm 0.12)~\mathrm{MeV}$ and
$\tau_\tau = (290.3\pm 0.5)~\mathrm{fs}$ \cite{ParticleDataGroup:2020ssz}. The combination of these two experimental inputs eliminated the previous discrepancy with the electronic branching ratio, as shown in Fig.~\ref{fig:BeLifetime} (A. Pich in \cite{Sobie:2001it}).
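The linear correlation displayed by the blue band in Fig.~\ref{fig:BeLifetime} follows from the SM expression for the electronic decay width; schematically (quoting only the leading term, and omitting the small electron-mass, electroweak and QED corrections),

```latex
\begin{equation}
\mathrm{B}_e \, =\, \tau_\tau\, \Gamma(\tau\to\nu_\tau e\bar\nu_e)
\,\approx\, \tau_\tau\,\frac{G_F^2\, m_\tau^5}{192\,\pi^3}\, ,
\end{equation}
```

so that precise measurements of $\tau_\tau$ and $m_\tau$ directly fix the SM prediction for $\mathrm{B}_e$.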
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/TauMass_BES92.pdf}
\hskip .5cm
\includegraphics[width=0.5\textwidth]{Figures/TauMass_BES14.pdf}
\caption{$m_\tau$ status in 1992 (H. Marsiske in \cite{Gan:1992fum}) and 2014 (T. Luo in \cite{Stahl:2015oci}).}
\label{fig:mass}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/TauLifetime_Hayes96.pdf}
\hskip 1cm
\includegraphics[width=0.45\textwidth]{Figures/Tau2014_Shapkin_Lifetime.pdf}
\caption{$\tau_\tau$ status
in 1996 (K.G. Hayes in \cite{Smith:1997iq}) and 2014 (M.~Shapkin in \cite{Stahl:2015oci}).}
\label{fig:lifetime}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.55\textwidth]{Figures/BeTauPlot00.pdf}
\caption{2000 world averages of $\tau_\tau$ and $\mathrm{B}_e$. The blue lines show the SM prediction for the measured range of $m_\tau$ (A. Pich in \cite{Sobie:2001it}).}
\label{fig:BeLifetime}
\end{figure}
The excellent quality of the LEP data brought a new era of precision physics, which was complemented with many theory contributions, allowing us to perform accurate tests of the SM, in both the electroweak and QCD sectors. Around 2000, lepton universality was tested with a 0.2\% precision for charged currents (A. Pich in \cite{Sobie:2001it}),
and the measured leptonic $Z$ couplings already indicated that low values of the Higgs mass were favoured (D.W. Reid in \cite{Sobie:2001it}).
Thanks to the $\tau$ polarization emerging from the decay $Z\to\tau^+\tau^-$, it was possible to analyse the Lorentz structure of the leptonic $\tau$ decays and put relevant bounds on hypothetical right-handed charged-current couplings (A. Stahl in \cite{PichZardoya:1999yv}).
Detailed experimental analyses of the inclusive hadronic $\tau$ decay width and its invariant-mass distribution, performed by ALEPH (L.~Duflot in \cite{Gan:1992fum}), CLEO and OPAL (S. Menke in \cite{PichZardoya:1999yv}), put on very firm grounds the determination of the strong coupling at the $\tau$ mass (E. Braaten in \cite{Smith:1997iq}),
providing a beautiful test of its QCD running, shown in Fig.~\ref{fig:StrongCoupling}, and a very accurate measurement of $\alpha_s(M_Z)_\tau = 0.1202\pm 0.0027$ (M.~Davier in \cite{Sobie:2001it}), in excellent agreement with the direct determination at the $Z$ peak, $\alpha_s(M_Z)_Z = 0.1183\pm 0.0027$. To put the importance of these analyses in perspective, one should recall that the advocated pre-LEP value was $\alpha_s(M_Z) = 0.11\pm 0.01$ \cite{Altarelli:1990gv}
and the first (1992) precise lattice determination, $\alpha_s(M_Z) = 0.105\pm 0.004$ \cite{El-Khadra:1992ytg}, was substantially lower than the currently accepted value.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figures/Davier_TAU00_alphasPlot.pdf}
\caption{Strong coupling extracted from $\tau$ decays, at $\mu=m_\tau$, compared with the $Z$-peak measurement. The band shows the predicted QCD running (M. Davier in \cite{Sobie:2001it}).}
\label{fig:StrongCoupling}
\end{figure}
\subsection{The 1995 Nobel Prize in Physics}
In 1995, Martin L. Perl (New York 1927 - Palo Alto 2014) was finally awarded the Nobel Prize in Physics for the discovery of the $\tau$ lepton, sharing the prize with Frederick Reines (Paterson 1918 - Orange 1998) for the first neutrino detection. This high recognition was warmly celebrated by the whole $\tau$ community. Martin participated very actively in the Tau Physics workshops, being an honorary member of the IAC until the end of his life. He attended all the workshops from 1990 (Orsay) to 2002 (Santa Cruz) in person, always providing his very strong support and wise advice.
\section{B-Factory Era}
The advent of the B factories opened a new era (Victoria 2000 \cite{Sobie:2001it}, Santa Cruz 2002 \cite{Schalk:2002zs}, Nara 2004 \cite{Ohshima:2005iv}, Pisa 2006 \cite{Cei:2007zza}, Novosibirsk 2008 \cite{Bondar:2009zz}, Manchester 2010 \cite{Lafferty:2011zz}), characterised by very large data samples. This allowed for detailed studies of exclusive invariant-mass distributions, resonance structures and high-multiplicity modes (J. Portolés in \cite{Ohshima:2005iv}; H.~Hayashii, M. Fujikawa in \cite{Bondar:2009zz}; A. Adametz, M. Davier, M.J. Lee in \cite{Lafferty:2011zz}; R.J. Sobie in \cite{Hayasaka:2014xya}; H.~Hayashii, E. Tanaka, P.~Roig in \cite{Stahl:2015oci}).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figures/PionFF_Belle_TAU08.pdf}
\caption{Pion form factor in $\tau^-\to\nu_\tau\pi^-\pi^0$ (H. Hayashii, M. Fujikawa in \cite{Bondar:2009zz}).}
\label{fig:PionFF}
\end{figure}
Searches for processes violating lepton flavour or lepton number obviously benefited greatly from the huge available statistics, reaching sensitivities of a few $10^{-8}$ in many $\tau$ decay modes (A. Cervelli, M. Lewczuk, K. Inami in \cite{Lafferty:2011zz};
Y. Jin, D.A. Epifanov, H. Aihara, S. Eidelman in \cite{Proceedings:2019ihn}).
This has been complemented by the strong constraints coming from $\mu$ decays
(B.~Golden in \cite{Lafferty:2011zz}; H. Natori in \cite{Hayasaka:2014xya})
and $\mu N \to e N$ conversion (T.~Iwamoto in \cite{Proceedings:2019ihn}).
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Figures/LFVdecays.png}
\caption{Current (90\% C.L.) upper limits on $\tau$ decays violating
lepton flavour or lepton number, and future prospects at Belle-II (M. Hernández Villanueva in \cite{Proceedings:2019ihn}).}
\label{fig:LFV}
\end{figure}
Many analyses of LEP data kept going during this period. Worth mentioning are the complete list of ALEPH branching ratios (M. Davier in \cite{Ohshima:2005iv}) and the inclusive strange spectral functions reported by ALEPH (M. Davier et al in \cite{Sobie:2001it}) and OPAL (W. Mader in \cite{Ohshima:2005iv}), which made it possible to extract values of the strange quark mass and the Cabibbo mixing (J. Prades in \cite{Ohshima:2005iv}). I would like to stress here the comprehensive works on SU(3) breaking (and light-by-light contributions to $g-2$) developed by my collaborator and friend Ximo Prades (Castellón 1963 - Granada 2010), who unfortunately is no longer with us.
A very important theoretical development was the impressive calculation of the $O(\alpha_s^4)$ correction to the inclusive $\tau$ hadronic width (Baikov et al in
\cite{Bondar:2009zz}), which would later be complemented with a 5-loop computation of the QCD $\beta$ function (Baikov et al in \cite{Yuan:2017tgx}) and an updated version of the Cabibbo-allowed ALEPH spectral functions (Z. Zhang in \cite{Stahl:2015oci}). This has made it possible to determine the strong coupling at N${}^3$LO, triggering a huge theoretical activity and many lively discussions at different $\tau$ meetings (moment analyses, OPE contributions, renormalons, duality violations, etc). The current status has been recently reviewed in \cite{Pich:2020gzz}, which contains
an extensive list of relevant references.
The muon anomalous magnetic moment is another timely topic that has been discussed in full detail at different meetings. The BNL E821 data was presented by B.L. Roberts in \cite{Schalk:2002zs} and D. Hertzog in \cite{Ohshima:2005iv}, and the relevant SM contributions have been reviewed: QED (T.~Kinoshita in \cite{Ohshima:2005iv}; M. Hayakawa in \cite{Hayasaka:2014xya}), electroweak (A. Czarnecki in \cite{Schalk:2002zs}), hadronic vacuum polarization (A.~Hoecker in \cite{Schalk:2002zs})
and light-by-light (A. Vainshtein in \cite{Cei:2007zza}; E. de Rafael in \cite{Hayasaka:2014xya}; H.~Meyer in \cite{Proceedings:2019ihn}). In particular, there have been many experimental contributions from BaBar, Belle, BES, CLEO, CMD, KEDR, KLOE, SND, etc, providing the necessary input to the dispersive evaluation of the hadronic vacuum polarization contribution
(M. Davier, H. Hagiwara et al in \cite{Yuan:2017tgx}; B.~Shwartz in \cite{Proceedings:2019ihn}). The radiative return method (J. K\"uhn in \cite{Ohshima:2005iv}; G. Rodrigo in \cite{Cei:2007zza})
and the complementary information from $\tau$ decay data were also discussed (S. Eidelman, M.~Davier in \cite{Sobie:2001it}; A. Hoecker in \cite{Schalk:2002zs}; Z. Zhang in \cite{Hayasaka:2014xya}). In the 2021 workshop, we have of course seen the new measurement of the Muon $g-2$ experiment at Fermilab (J. Stapleton) and an overview of the theory status (G. Colangelo).
The prospects to improve the poorly known electromagnetic dipole moments of the $\tau$ lepton have also been analysed (M. Fael et al in \cite{Hayasaka:2014xya}; M. Hernández-Ruiz et al in \cite{Proceedings:2019ihn}).
The most important achievements in neutrino physics were also presented at the $\tau$ workshops, where a dedicated session has been always scheduled. Worth mentioning for their direct connection with the $\tau$ lepton are the first direct observation of the $\tau$ neutrino by the DONUT experiment (B. Baller in \cite{Sobie:2001it}), and the first $\nu_\mu\to\nu_\tau$ events registered ten years later with the OPERA detector (Y. Gornushkin in \cite{Lafferty:2011zz}).
The SNO measurement of solar neutrino fluxes (E.W.~Beier in \cite{Schalk:2002zs}) and the SuperKamiokande atmospheric $\nu_\mu\to\nu_\tau$ signal (R. Svoboda in \cite{Sobie:2001it}; J. Shirai in \cite{Ohshima:2005iv}) were, of course, major milestones in neutrino oscillations. Regular updates of the oscillation data from solar, atmospheric, reactor and accelerator experiments have been discussed since then (J. Shirai in \cite{Ohshima:2005iv}; C. Howcroft in \cite{Cei:2007zza}; R.A. Johnson in \cite{Hayasaka:2014xya}; G.J. Barker in \cite{Stahl:2015oci}).
\section{LHC Times}
With the start of operation of the LHC the main focus has obviously moved to the energy frontier (Manchester 2010 \cite{Lafferty:2011zz}, Nagoya 2012 \cite{Hayasaka:2014xya}, Aachen 2014 \cite{Stahl:2015oci}, Beijing 2016 \cite{Yuan:2017tgx}, Amsterdam 2018 \cite{Proceedings:2019ihn}, Indiana 2021).
The high-momenta $\tau$'s produced at the LHC turn out to be an excellent signature to probe new physics. They have low multiplicity and good tagging efficiency. Moreover, their decay products are tightly collimated (mini-jet like) and momentum reconstruction is possible. Being a third-generation particle, the $\tau$ is also the lepton that couples most strongly to the Higgs; $H\to\tau^+\tau^-$ is in fact the $4^{\mathrm{th}}$ largest branching ratio of the Higgs boson.
Thus, in the more recent $\tau$ workshops, we have seen a proliferation of Higgs-related measurements and exclusion plots from clever search analyses (D. Chakraborty, S. Knutzen in \cite{Stahl:2015oci}; A. Lusiani, Z. Mao, R. Reece in \cite{Yuan:2017tgx}; C. Caputo, F. Lyu in \cite{Proceedings:2019ihn}). The detection of $H\to\tau^+\tau^-$ events and the corresponding measurement of the $\tau$ Yukawa coupling (T. Müller in \cite{Stahl:2015oci}; D.~Zanzi in \cite{Yuan:2017tgx}; L.~Schildgen in \cite{Proceedings:2019ihn}) have been major milestones, together with the limits set on lepton-flavour-violating couplings of the Higgs (A. Nehrkorn in \cite{Yuan:2017tgx}; B. Le in \cite{Proceedings:2019ihn}) and the $Z$ boson (K. De Bruyn, A. Nehrkorn in \cite{Yuan:2017tgx}; W.S. Chan in \cite{Proceedings:2019ihn}). It is remarkable that the LHC bounds on $Z\to\ell\ell'$ with $\ell\not=\ell'$ are already better than the LEP ones. The LHC experiments have also provided relevant bounds on the
$\tau\to 3\mu$ decay mode (K. De Bruyn in \cite{Yuan:2017tgx}).
\begin{figure}[h]
\centering
\includegraphics[width=0.72\textwidth]{Figures/ATLAS_H2tau_TAU2018.jpg}
\caption{ATLAS measurement of $\sigma(H\to\tau^+\tau^-)$ (L.~Schildgen in \cite{Proceedings:2019ihn}).}
\label{fig:H2tau}
\end{figure}
The strong improvement achieved in $\tau$ detection techniques has made it possible to perform relevant tests of the SM itself. Worth mentioning are the first hadron-collider measurement of the $\tau$ polarization in $W\to\tau\nu$ decays (Z. Czyczula in \cite{Hayasaka:2014xya}) and the
$\tau$ polarization asymmetry in $Z\to\tau^+\tau^-$ (V. Cherepanov in \cite{Yuan:2017tgx}). More recently, the ATLAS and CMS experiments have been able to test lepton universality in $W\to\ell\nu_\ell$ decays, in good agreement with the SM, clarifying the puzzling $2.5\,\sigma$ excess of $\tau$ events observed a long time ago in the LEP data.
The flavour anomalies identified in $B$ decays have been one of the more recent highlights (E. Manoni in \cite{Hayasaka:2014xya}; A. Celis, T. Kuhr in \cite{Stahl:2015oci}; K. De Bruyn, S. Hirose, X.-Q. Li in \cite{Yuan:2017tgx}; S.~Benson, S. Fajfer in \cite{Proceedings:2019ihn}), since they indicate unexpectedly large violations of lepton universality in $b\to c\tau\nu$ and $b\to s\mu^+\mu^-$. The available high-$p_T$ data on di-tau production at the LHC provides complementary information, constraining many suggested new-physics scenarios (D.A.~Faroughy in \cite{Proceedings:2019ihn}).
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{Figures/RD_HFLAV2018.pdf}
\caption{Measurements of the ratios $R_{D^{(*)}} = \Gamma[B\to D^{(*)}\tau\nu]/\Gamma[B\to D^{(*)}\ell\nu]$ ($\ell = e,\mu$),
compared with the SM predictions \cite{HFLAV:2019otj} (S.~Benson in \cite{Proceedings:2019ihn}).}
\label{fig:RD}
\end{figure}
Another surprising result was reported by the BaBar collaboration (R. Sobie in \cite{Hayasaka:2014xya}), which observed a CP-violating rate asymmetry in the decay $\tau^-\to\nu_\tau\pi^-K_S^0\, (\ge 0 \pi^0)$ that deviates by $2.8\,\sigma$ from the SM expectation based on $K^0$--$\bar{K}^{0}$ mixing. The BaBar signal has not been confirmed by Belle, which did not reach the required sensitivity (M. Hernández Villanueva in \cite{Proceedings:2019ihn}), and seems incompatible with other sets of flavour data (V. Cirigliano et al
in \cite{Proceedings:2019ihn}).
While future measurements should clarify this situation, the search for signatures of CP violation in $\tau$ decays remains an interesting goal (I. Bigi in \cite{Proceedings:2019ihn}).
The most important achievements in Astroparticle physics have also been reviewed at the $\tau$ workshops. Two recent IceCube highlights are the discovery of an astrophysical neutrino flux in 2013, which marked the birth of neutrino astronomy (D. Xu in \cite{Yuan:2017tgx}), and the evidence for the identification of a blazar as an astrophysical neutrino source reported in 2018 (D. van Eijk in \cite{Proceedings:2019ihn}).
\section{Future Prospects}
The forthcoming high-statistics data samples that will soon be accumulated by the Belle-II detector will give a new boost to precision $\tau$ physics \cite{Belle-II:2018jsg}. In addition to much larger sensitivities to decays violating the lepton flavour or the lepton number, one expects significant improvements on the $\tau$ lifetime and branching ratios, decay distributions, CP asymmetries, Michel parameters, etc (M. Hernández Villanueva in \cite{Proceedings:2019ihn}). This superb physics potential will be complemented with more precise measurements of the $\tau$ mass at BES-III (J. Zhang in \cite{Proceedings:2019ihn}), and a new generation of muon experiments (C. Wu in \cite{Yuan:2017tgx}; R. Bonventre, A. Bravar, A. Driutti, T.~Iwamoto, A.-K. Perrevoort, N. Teshima in \cite{Proceedings:2019ihn}), neutrinoless double-beta-decay searches (L. Cardani in \cite{Proceedings:2019ihn})
and neutrino detectors (M. Komatsu, Z. Li, H. Lu in \cite{Yuan:2017tgx}; D.~van Eijk, I.~Esteban, P.~Fernández, A. Pocar et al, H. Seo, C. Timmermans, A. Tonazzo, M. Trini in \cite{Proceedings:2019ihn}) at different laboratories.
The LHC is also going to start its new Run 3, aiming at a sizeable increase of the integrated luminosity. This will be followed later by a much more significant improvement of the instantaneous luminosity at the HL-LHC, which will increase the potential for new discoveries \cite{Cerri:2018ypt}. In the long term, several linear and circular high-energy colliders are being discussed. Huge and clean $\tau^+\tau^-$ data samples could be provided by an electron-positron TeraZ facility, running at the $Z$ peak (M. Dam in \cite{Proceedings:2019ihn}). The projects to build a high-luminosity super-$\tau$cF (S. Eidelman in \cite{Stahl:2015oci}) are also at a quite advanced stage.
Thus, there is a bright future ahead of us with lots of interesting physics to be explored. We can look forward to many relevant experimental discoveries to be celebrated at the $40^{\mathrm{th}}$ Tau Lepton Physics anniversary in 2030.
\section*{Acknowledgements}
I would like to thank Michel Davier for his continuous support to the $\tau$ workshops,
and the local organizers for making possible this $16^{\mathrm{th}}$ meeting, in spite of the difficult circumstances.
This work has been supported by
MCIN/AEI/10.13039/501100011033, Grant No. PID2020-114473GB-I00,
and by the Generalitat Valenciana, Grant No. Prometeo/2021/071.
\setcounter{equation}{0}
The law of the past supremum $\overline{X}_t=\sup_{s\le t}X_s$ of L\'evy processes before a deterministic time $t>0$
presents some major interest in stochastic modeling such as queuing and risk theories as it is related to the law of
the first passage time $T_x$ above any positive level $x$ through the relation $\p(\overline{X}_t\ge x)=\p(T_x\le t)$. The
importance of knowing features of this law for some domains of application mainly explains the abundance of the literature on
this topic. From the works of P.~L\'evy
on Brownian motion \cite{le} to the recent developments of A.~Kuznetsov \cite{ku2} for a very large class of
stable L\'evy processes, an important number of papers have appeared. Most of them concern explicit computations for
stable processes and basic features, such as tail behavior of this law, are still unknown in the general case.
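The relation $\p(\overline{X}_t\ge x)=\p(T_x\le t)$ is easily visualised in the Brownian case, where the reflection principle gives the closed form $\p(\overline{X}_t\ge x)=2\,\p(X_t\ge x)$. The following Monte Carlo sketch (a numerical illustration only, with arbitrarily chosen parameters; it plays no role in the arguments below) compares the empirical tail of a discretised supremum with this formula:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Illustration: for standard Brownian motion the reflection principle gives
#   P(sup_{s<=t} X_s >= x) = 2 P(X_t >= x),
# an instance of the identity P(sup_{s<=t} X_s >= x) = P(T_x <= t).
t, n_steps, n_paths = 1.0, 1000, 10000
dt = t / n_steps
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
sup = np.maximum(paths.max(axis=1), 0.0)   # include the value X_0 = 0

def gaussian_tail(x):
    """P(X_t >= x) for standard Brownian motion at time t."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0 * t)))

for x in (0.5, 1.0, 1.5):
    # the maximum over a grid slightly underestimates the continuous supremum
    print(f"x={x}: empirical {(sup >= x).mean():.3f}  exact {2 * gaussian_tail(x):.3f}")
```

The small residual discrepancy comes from the time discretisation, since the maximum over a finite grid slightly underestimates the continuous supremum.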
The present work is mainly concerned with the study of the nature of the law of the overall supremum $\overline{X}_t$
and more specifically, with the existence of a density for this distribution. In a recent paper,
N.~Bouleau and L.~Denis \cite{bd} have proved that the law of $\overline{X}_t$ is absolutely continuous
whenever the L\'evy measure of $X$ is itself absolutely continuous and satisfies some additional conditions, see Proposition 3
in \cite{bd}. This result raised our interest in the subject, and we set out to determine `exploitable' necessary
and sufficient conditions under which the law of $\overline{X}_t$ is absolutely continuous. In doing so, we also obtained conditions
for the absolute continuity of the random vectors $(g_t,\overline{X}_t)$, $(\overline{X}_t,X_t)$ and $(g_t,\overline{X}_t,X_t)$,
where $g_t$ is the time at which the maximum of $X$ occurs on $[0,t]$. The proofs are based on two main ingredients. The first
one is the equivalence between the law of $X_t$ in $\mathbb{R}_+$ and the entrance law of the excursions of the reflected process
at its minimum, see Lemma \ref{equivalence1}. The second argument is an expression of the law of $(g_t,\overline{X}_t,X_t)$ in
terms of the entrance laws of the excursions of both reflected processes. From this expression, we may in addition recover
the law of $(g_t,\overline{X}_t,X_t)$ for Brownian motion with drift and derive an explicit form of this law for the symmetric
Cauchy process. The law of $(g_t,\overline{X}_t)$, may also be computed in some instances of spectrally negative L\'evy processes.
The remainder of this paper is organized as follows. In Section \ref{prelim}, we give some definitions and we recall some basic
elements of excursion theory and fluctuation theory for L\'evy processes, which are necessary for the proofs. The main results
of the paper are stated in Sections \ref{main} and \ref{expressions}. In Section \ref{main}, we state continuity properties
of the triples $(g_t,\overline{X}_t,X_t)$ and $(g_t^*,\underline{X}_t,X_t)$, whereas Section \ref{expressions} is devoted
to some representations and explicit expressions for the law of $(g_t,\overline{X}_t,X_t)$.
Then except for corollaries, proofs of the results are postponed to Section \ref{proofs}.
\section{Preliminaries}\label{prelim}
\setcounter{equation}{0}
We denote by $\mathcal{D}$ the space of c\`{a}dl\`{a}g paths $\omega:[0,\infty )
\rightarrow \mathbb{R\cup \{\infty\}}$ with lifetime $\zeta
(\omega )=\inf \{t\ge0:\omega _{t}=\omega_s,\,\forall s\ge t\}$, with the usual convention that $\inf \{\emptyset \}=+\infty$.
The space $\mathcal{D}$ is equipped with the Skorokhod topology, its
Borel $\sigma $-algebra $\mathcal{F}$, and the usual completed filtration $(\mathcal{F}_{s},s\geq 0)$ generated by the
coordinate process $X=(X_{t},t\geq 0)$ on
the space $\mathcal{D}$. We write $\overline{X}$ and $\underline{X}$ for the
supremum and infimum processes:
\[\overline{X}_{t}=\sup \{X_{s}:0\leq s\leq t\}\;\;\;\mbox{and}\;\;\;
\underline{X}_{t}=\inf \{X_{s}:0\leq s\leq t\}\,.\]
For $t>0$, the last passage times by $X$ at its supremum and at its infimum before $t$ are respectively defined by:
\[g_t=\sup\{s\le t:X_s=\overline{X}_t\;\mbox{or}\;X_{s-}=\overline{X}_t\}\;\;\mbox{and}\;\;g_t^*=\sup\{s\le t:X_s=\underline{X}_t\;\mbox{or}\;X_{s-}=\underline{X}_t\}\,.\]
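For standard Brownian motion, the time $g_t$ at which the supremum is attained follows L\'evy's arcsine law, $\p(g_t\le s)=\frac{2}{\pi}\arcsin\sqrt{s/t}$. This classical fact can be checked by simulation (again a numerical illustration only, independent of the arguments of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration: for standard Brownian motion, the time g_t of the maximum on
# [0, t] follows Levy's arcsine law  P(g_t <= s) = (2/pi) arcsin(sqrt(s/t)).
t, n_steps, n_paths = 1.0, 1000, 10000
dt = t / n_steps
steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.hstack([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)])
g = paths.argmax(axis=1) * dt              # time of the (discrete) maximum

for s in (0.25, 0.5, 0.75):
    exact = (2.0 / np.pi) * np.arcsin(np.sqrt(s / t))
    print(f"s={s}: empirical {(g <= s).mean():.3f}  arcsine {exact:.3f}")
```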
We also define the first passage time by $X$ in the open halfline $(0,\infty)$ by:
\[\tau_0^+=\inf\{t\ge0:X_t>0\}\,.\]
For each $x\in \mathbb{R}$, we denote by $\mathbb{P}_{x}$ the law on $\mathcal{D}$ of a
L\'{e}vy process starting from $x,$ and we write $\mathbb{P}_{0}=\mathbb{P}$. We
assume throughout the sequel that $(X,\mathbb{P})$ is not a compound Poisson
process and that $|X|$ is not a subordinator.
Note that under our assumptions, 0 is always regular for $(-\infty,0)$ or/and $(0,\infty)$.
It is well known that the reflected processes $\overline{X}-X$ and $X-\underline{X}$ are strong
Markov processes. Under $\mathbb{P}$, the state 0 is regular for $(0,\infty)$ (resp. for $(-\infty,0)$)
if and only if it is regular for $\{0\}$, for the reflected process $\overline{X}-X$ (resp. for $X-\underline{X}$).
If 0 is regular for $(0,\infty)$, then the local time at 0 of the reflected process $\overline{X}-X$ is the unique
continuous, increasing, additive functional $L$ with $L_0=0$, a.s., such that the support of the measure $dL_t$ is
the set $\overline{\{t:\overline{X}_t=X_t\}}$ and which is normalized by
\begin{equation}\label{norm1}
\e\left(\int_0^\infty e^{-t}\,dL_t\right)=1\,.
\end{equation}
Let $G$ be the set of left end points of the excursions away from 0 of $\overline{X}-X$ and for each $s\in G$,
call $\epsilon^s$ the excursion which starts at $s$. Denote by $E$ the set of excursions, i.e.
$E=\{\omega\in{\cal D}:\omega_t>0,\,\mbox{for all $0<t<\zeta(\omega)$}\}$ and let ${\cal E}$ be the Borel $\sigma$-algebra which
is the trace of ${\cal F}$ on the subset $E$ of ${\cal D}$.
The It\^o measure $n$ of the excursions away from 0 of the process $\overline{X}-X$ is characterized by the
so-called {\it compensation formula}:
\begin{equation}\label{compensation}\e\left(\sum_{s\in G}F(s,\omega,\epsilon^s)\right)=
\e\left(\int_0^\infty dL_s\left(\int F(s,\omega,\epsilon)n(d\epsilon)\right)\right)\,,
\end{equation}
which is valid whenever $F$ is a positive and predictable process, i.e. ${\cal P}({\cal F}_s)\otimes{\cal E}$-measurable,
where ${\cal P}({\cal F}_s)$ is the predictable $\sigma$-algebra associated to the filtration $({\cal F}_s)$.
We refer to \cite{be}, Chap.~IV, \cite {ky}, Chap.~6 and \cite{do} for more detailed definitions and some
constructions of $L$ and $n$.
If 0 is not regular for $(0,\infty)$, then the set $\{t:(\overline{X}-X)_t=0\}$ is discrete and following
\cite{be} and \cite{ky}, we define the local time $L$ of $\overline{X}-X$ at 0 by
\begin{equation}\label{norm2}
L_t=\sum_{k=0}^{N_t}{\rm\bf e}^{(k)}\,,
\end{equation}
where $N_t=\mbox{Card}\{s\in(0,t]:\overline{X}_s=X_s\}$, and ${\rm\bf e}^{(k)}$, $k=0,1,\dots$ is a sequence of independent and exponentially
distributed random variables with parameter
\begin{equation}\label{alpha}
\gamma=\left(1-\e(e^{-\tau^+_0})\right)^{-1}\,.
\end{equation}
In this case, the measure $n$ of the excursions away from 0 is proportional to
the distribution of the process $X$ under the law $\mathbb{P}$, returned at its first passage
time in the positive halfline. More formally, let us define
$\epsilon^{\tau_0^+}=(X_{\tau_0^+}-X_{(\tau_0^+-s)-},0\le s<\tau_0^+)$,
then for any bounded Borel functional $K$ on ${\cal E}$,
\begin{equation}\label{excdisc}
\int_{\cal E}K(\epsilon)n(d\epsilon)=\gamma\,\e[K(\epsilon^{\tau_0^+})]\,.
\end{equation}
Define $G$ and $\epsilon^s$ as in the regular case, then from definitions (\ref{norm2}), (\ref{excdisc}) and an application
of the strong Markov property, we may check that the normalization (\ref{norm1}) and the compensation formula (\ref{compensation})
are still valid in this case.
The local time at 0 of the reflected process at its infimum $X-\underline{X}$ and the measure of its excursions away from 0
are defined in the same way as for $\overline{X}-X$. They are respectively denoted by $L^*$ and $n^*$.
Then the ladder time processes $\tau$ and $\tau^*$, and the ladder height processes $H$ and $H^*$ are the following subordinators:
\[\tau_t=\inf\{s:L_s>t\}\,,\;\;\tau^*_t=\inf\{s:L_s^*>t\}\,,\;\;H_t=X_{\tau_t}\,,\;\;H^*_t=X_{\tau_t^*}\,,\;\;t\ge0\,,\]
where $\tau_t=H_t=+\infty$, for $t\ge\zeta(\tau)=\zeta(H)$ and $\tau_t^*=H_t^*=+\infty$, for $t\ge\zeta(\tau^*)=\zeta(H^*)$.
The ladder processes $(\tau,H)$ and $(\tau^*,H^*)$ are (possibly killed) L\'evy processes whose
characteristic exponents $\kappa$ and $\kappa^*$ are given by
\begin{equation}\label{kappa}\e\left(e^{-\alpha\tau_1-\beta H_1}\right)=
e^{-\kappa(\alpha,\beta)}\;\;\;\mbox{and}\;\;\; \e\left(e^{-\alpha\tau_1^*-\beta H_1^*}\right)=e^{-\kappa^*(\alpha,\beta)}\,.
\end{equation}
From (\ref{norm1}), we derive that $\kappa(1,0)=\kappa^*(1,0)=1$, so that the Wiener--Hopf factorization in time, which is stated in \cite{be},
p.~166 and in \cite{ky}, p.~166, is normalized as follows
\begin{equation}\label{wh}
\kappa(\alpha,0)\kappa^*(\alpha,0)=\alpha\,,\;\;\;\mbox{for all $\alpha\ge0$.}
\end{equation}
Recall also that the drifts ${\tt d}$ and ${\tt d}^*$ of the subordinators $\tau$ and $\tau^*$ satisfy ${\tt d}=0$
(resp. ${\tt d}^*=0$) if and only if 0 is regular for $(-\infty, 0)$, (resp. for $(0,\infty)$) and that:
\begin{equation}\label{delta}
\int_0^t\ind_{\{X_s=\overline{X}_s\}}\,ds={\tt d} L_t\;\;\;\mbox{and}\;\;\;\int_0^t\ind_{\{X_s=\underline{X}_s\}}\,ds={\tt d}^*L_t^*\,.
\end{equation}
Suppose that 0 is not regular for $(0,\infty)$ and let ${\rm\bf e}$ be an independent exponential time with mean 1,
then from (\ref{norm1}) and (\ref{delta}), $\p((X-\underline{X})_{\rm\bf e}=0)={\tt d}^*$. From the time reversal property of L\'evy processes,
$\p((X-\underline{X})_{\rm\bf e}=0)=\p(\overline{X}_{\rm\bf e}=0)=\p(\tau_0^+\ge {\rm\bf e})=\gamma^{-1}$, so that
${\tt d}^*=\gamma^{-1}$.
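In more detail, since ${\rm\bf e}$ is independent of $X$ and exponentially distributed with mean $1$,
\[\p(\tau_0^+\ge {\rm\bf e})=\e\left(\p({\rm\bf e}\le \tau_0^+\,|\,\tau_0^+)\right)=\e\left(1-e^{-\tau_0^+}\right)=1-\e\left(e^{-\tau_0^+}\right)=\gamma^{-1}\,,\]
where the last equality follows from definition $(\ref{alpha})$.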
We will denote by $q_t^*$ and $q_t$ the entrance laws of the reflected excursions at the maximum and at the minimum, i.e. for $t>0$,
\[q_t(dx)=n(X_t\in dx,t<\zeta)\;\;\;\mbox{and}\;\;\;q_t^*(dx)=n^*(X_t\in dx,t<\zeta)\,.\]
They will be considered as measures on $\mathbb{R}_+=[0,\infty)$.
Recall that the law of the lifetime of the reflected excursions is related to the L\'evy measure
of the ladder time processes, through the equalities:
\begin{equation}\label{pi}
q_t(\mathbb{R}_+)=n(t<\zeta)=\overline{\pi}(t)+a\;\;\;\mbox{and}\;\;\; q_t^*(\mathbb{R}_+)=
n^*(t<\zeta)=\overline{\pi}^*(t)+a^*\,,
\end{equation}
where $\overline{\pi}(t)=\pi(t,\infty)$ and $\overline{\pi}^*(t)=\pi^*(t,\infty)$ and $a$, $a^*$ are the killing rates
of the subordinators $\tau$ and $\tau^*$.
In this paper, we will sometimes write $\mu\ll\nu$, when $\mu$ is absolutely continuous with respect to $\nu$. We will say that
$\mu$ and $\nu$ are {\it equivalent} if $\mu\ll \nu$ and $\nu\ll\mu$.
We will denote by $\lambda$ the Lebesgue measure on $\mathbb{R}$. A measure which is absolutely continuous
with respect to the Lebesgue measure will sometimes be called {\it absolutely continuous}. A measure which has no
atoms will be called {\it continuous}.
\section{Continuity properties of the law of $(g_t,\overline{X}_t,X_t)$}\label{main}
\setcounter{equation}{0}
For $t>0$ and $q>0$, we will denote respectively by $p_t(dx)$ and $U_q(dx)$ the semigroup and the resolvent measure
of $X$, i.e. for any positive Borel function $f$,
\[\e(f(X_t))=\int_{\mathbb{R}} f(x)p_t(dx)\;\;\;\mbox{and}\;\;\;\int_{\mathbb{R}} f(x) \,U_q(dx)=\e\left(\int_0^\infty
e^{-qt}f(X_t)\,dt\right)\,.\]
It is clear that for all $q$ and $q'$ the resolvent measures $U_q(dx)$ and $U_{q'}(dx)$ are equivalent.
Moreover, each measure $U_q$ is equivalent to the potential measure $U_0(dx)=\int_0^\infty\p(X_t\in dx)\,dt$.
In what follows, when comparing the law of $\overline{X}_t$ to the measures $U_q$, $q\ge0$,
we will take $U(dx)\eqdef U_1(dx)$ as a reference measure.
We will say that a L\'evy process $X$ is of \\
$\cdot$ type 1 if 0 is regular for both $(-\infty,0)$ and $(0,\infty)$,\\
$\cdot$ type 2 if 0 is not regular for $(-\infty,0)$,\\
$\cdot$ type 3 if 0 is not regular for $(0,\infty)$.\\
\noindent Note that since $X$ is not a compound Poisson process, types 1, 2 and 3
define three exhaustive cases.
Recall that $\mathbb{R}_+=[0,\infty)$ and let ${\cal B}_{\mathbb{R}_+}$ be the Borel $\sigma$-field on $\mathbb{R}_+$.
For $t>0$, let $\mu_t^+$ be the restriction to
$(\mathbb{R}_+,{\cal B}_{\mathbb{R}_+})$ of the average occupation measure of $X$, on the time interval $[0,t)$, i.e.
\[\int_{[0,\infty)}f(x)\,\mu_t^+(dx)=\e\left(\int_0^tf(X_s)\,ds\right)\,,\]
for every positive Borel function $f$ on $(\mathbb{R}_+,{\cal B}_{\mathbb{R}_+})$. Moreover, we will denote by $p_t^+(dx)$ the restriction
of the semigroup $p_t(dx)$ to $(\mathbb{R}_+,{\cal B}_{\mathbb{R}_+})$. In particular, we have $\mu_t^+=\int_0^t p_s^+\,ds$.
The law of $\overline{X}_t$ will be considered as a measure on $(\mathbb{R}_+,{\cal B}_{\mathbb{R}_+})$. In all the remainder of
this article, we assume that the time $t$ is deterministic and finite.
\begin{theorem}\label{type} For $t>0$, the law of the past supremum $\overline{X}_t$ can be compared
to the occupation measure $\mu_t^+$ as follows.
\begin{itemize}
\item[$1.$] If $X$ is of type $1$, then for all $t>0$, the law of
$\overline{X}_t$ is equivalent to $\mu_t^+$.
\item[$2.$] If $X$ is of type $2$, then for all $t>0$, the law of $\overline{X}_t$ is equivalent
to $p_t^+(dx)+\mu_t^+(dx)$.
\item[$3.$] If $X$ is of type $3$, then for all $t>0$, the law of $\overline{X}_t$ has an atom at $0$ and its restriction
to the open halfline $(0,\infty)$ is equivalent to the restriction of the measure $\mu_t^+(dx)$ to $(0,\infty)$.
\end{itemize}
\end{theorem}
\noindent
It appears clearly from this theorem that the law of $\overline{X}_t$ is absolutely continuous for all $t>0$, whenever 0 is regular
for $(0,\infty)$ and $p_t$ is absolutely continuous, for all $t>0$. We will see in Theorem \ref{coro2} that a stronger result actually
holds.
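The atom at $0$ in part 3 can be seen concretely in a toy example (chosen here purely for illustration): if $X_t=-t+C_t$, where $C$ is a compound Poisson process with unit rate and $\mathrm{Exp}(1)$ jumps, then $0$ is not regular for $(0,\infty)$ and $\p(\overline{X}_t=0)=\p(\tau_0^+>t)\ge e^{-t}>0$. A short simulation exhibits this atom:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy type-3 example: X_t = -t + (compound Poisson, rate 1, Exp(1) jumps).
# Between jumps X decreases, so sup_{s<=t} X_s = max(0, max_i (S_i - T_i)),
# where T_i are the jump times and S_i the cumulative jump sizes.  The law
# of the supremum has an atom at 0 of mass P(tau_0^+ > t) >= e^{-t}.
t, n_paths = 1.0, 50000
atom = 0
for _ in range(n_paths):
    n = rng.poisson(t)                          # number of jumps on [0, t]
    if n == 0:
        atom += 1                               # no jump: X stays below 0
        continue
    T = np.sort(rng.uniform(0.0, t, n))         # jump times
    S = np.cumsum(rng.exponential(1.0, n))      # cumulative jump sizes
    if (S - T).max() <= 0:                      # X never exceeds 0
        atom += 1
print("P(sup = 0) approx", atom / n_paths, " lower bound e^{-t} =", np.exp(-t))
```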
Let $U^+(dx)$ be the restriction to $(\mathbb{R}_+,{\cal B}_{\mathbb{R}_+})$ of the resolvent measure $U(dx)$. Since $\mu_t$ is absolutely
continuous with respect to $U^+$ for all $t>0$, the law of the past supremum before $t$ can be compared to $U^+$ as follows.
\begin{corollary}\label{lebesgue} Under the same assumptions as above:
\begin{itemize}
\item[$1.$] If $X$ is of type $1$, then for any $t>0$, the law of $\overline{X}_t$ is absolutely
continuous with respect to the resolvent measure $U^+(dx)$.
\item[$2.$] If $X$ is of type $2$, then for any $t>0$, the law of $\overline{X}_t$ is absolutely
continuous with respect to the measure $p_t^+(dx)+U^+(dx)$.
\item[$3.$] If $X$ is of type $3$, then the same conclusions as in $1.$ hold for the measures
restricted to $(0,\infty)$.
\end{itemize}
\end{corollary}
\noindent Under our assumption, the resolvent measure $U^+(dx)$ is always continuous, see Proposition I.$15$
in {\rm \cite{be}}.
Moreover, the measure $p_t^+(dx)$ is also continuous for all $t>0$, see Theorem 27.4 in Sato {\rm \cite{sa}}.
Hence from Corollary \ref{lebesgue}, for all $t>0$, when $X$ is of type $1$ or $2$, the law of $\overline{X}_t$ is continuous
and when it is of type 3, this law has only one atom at $0$. This fact has already been observed in \cite{pr}, Lemma 1.\\
It is known that for a L\'evy process $X$, the law of $X_t$ may be absolutely continuous for all $t>t_0$, whereas it is continuous
singular for $t\in(0,t_0)$, see Theorem 27.23 and Remark 27.24 in \cite{sa}. The following theorem shows that when $X$ is of type 1,
this phenomenon cannot happen for the law of the supremum, i.e. either absolute continuity of the law of $\overline{X}_t$ holds at any
time $t$ or it never holds. We denote by $V(dt,dx)$ the potential measure of the ladder process $(\tau,H)$ and by $V(dx)$ the potential
measure of the ladder height process $H$, i.e.
\[V(dt,dx)=\int_0^\infty\p(\tau_s\in dt,H_s\in dx)\,ds\;\;\;\mbox{and}\;\;\;V(dx)=\int_0^\infty\p(H_s\in dx)\,ds\,.\]
Then let $\lambda^+$ be the Lebesgue measure on $\mathbb{R}_+$.
\begin{theorem}\label{th2} Suppose that $X$ is of type $1$. The following assertions are equivalent:
\begin{itemize}
\item[$1.$] The law of $\overline{X}_t$ is absolutely continuous with respect to $\lambda^+$, for some $t>0$.
\item[$2.$] The resolvent measure $U^+(dx)$ is absolutely continuous with respect to $\lambda^+$.
\item[$3.$] The resolvent measure $U(dx)$ is absolutely continuous with respect to $\lambda$.
\item[$4.$] The potential measure $V(dx)$ is absolutely continuous with respect to $\lambda^+$.
\end{itemize}
As a consequence, if $1.$ holds for some $t>0$, then it holds for all $t>0$. Moreover assertions $1$ -- $4$ are equivalent
to the same assertions formulated for the dual process $-X$. In particular, $1$ -- $4$ hold if and only if the law of
$-\underline{X}_t$ is absolutely continuous with respect to $\lambda^+$, for all $t>0$.
\end{theorem}
\noindent Condition 4 of the above theorem is satisfied whenever the drift coefficient
of the subordinator $H$ is positive, see Theorem II.16 and Corollary II.20 in \cite{be}.
Let us also mention that necessary and sufficient conditions for $U(dx)$ to be absolutely continuous may be found in Theorem 41.15 of
\cite{sa}, and in Proposition 10, Chap. I
of \cite{be}. Formally, $U\ll \lambda$ if and only if for some $q>0$ and every bounded Borel function $f$, the function
$x\mapsto\e_x\left(\int_0^\infty f(X_t)e^{-qt}\,dt\right)$ is continuous.
However, we do not know any necessary and sufficient conditions bearing directly on the characteristic exponent $\psi$ of $X$.
Let us simply recall the following sufficient condition. From Theorem II.16 in \cite{be}, if
\begin{equation}\label{intest}
\int_{-\infty}^\infty\Re\left(\frac1{1+\psi(x)}\right)\,dx<\infty\,,
\end{equation}
then $U(dx)\ll\lambda$, with a bounded density. Therefore, if $X$ is of type 1, then from Theorem \ref{th2}, condition $(\ref{intest})$
implies that both the laws of $\underline{X}_t$ and $\overline{X}_t$
are absolutely continuous for all $t>0$.
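As a simple application of this criterion, consider a symmetric $\alpha$-stable process, for which $\psi(x)=|x|^{\alpha}$. Then
\[\Re\left(\frac1{1+\psi(x)}\right)=\frac1{1+|x|^{\alpha}}\sim |x|^{-\alpha}\,,\;\;\;\mbox{as $|x|\rightarrow\infty$,}\]
so that condition $(\ref{intest})$ holds if and only if $\alpha>1$. In this range the process is of type 1, and the laws of $\underline{X}_t$ and $\overline{X}_t$ are absolutely continuous for all $t>0$.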
A famous result from \cite{fu} asserts that when $X$ is a symmetric process, condition $U\ll \lambda$ implies that
$p_t\ll \lambda$, for all $t>0$.
Then it follows from Theorem \ref{th2} that in this particular case, absolute continuity of the law of $\overline{X}_t$, for all $t>0$
is equivalent to the absolute continuity of the semigroup $p_t$, for all $t>0$.
\begin{theorem}\label{coro2} If $0$ is regular for $(0,\infty)$, then the following assertions are equivalent:
\begin{itemize}
\item[$1.$] The measures $p_t^+$ are absolutely continuous with respect to $\lambda^+$, for all $t>0$.
\item[$2.$] The measures $p_t$ are absolutely continuous with respect to $\lambda$, for all $t>0$.
\item[$3.$] The potential measure $V(dt,dx)$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}_+^2$.
\end{itemize}
If moreover $X$ is of type $1$, then each of the following assertions is equivalent to $1$ -- $3$:
\begin{itemize}
\item[$4.$] The law of $(g_t,\overline{X}_t)$ is absolutely continuous with respect to the Lebesgue measure on
$[0,t]\times\mathbb{R}_+$, for all $t>0$.
\item[$5.$] The law of $(g_t,\overline{X}_t,X_t)$ is absolutely continuous with respect to the Lebesgue measure on
$[0,t]\times\mathbb{R}_+\times\mathbb{R}$, for all $t>0$.
\end{itemize}
\end{theorem}
\noindent When $X$ is of type 1, it is plain that for all $t>0$, $g_t>0$ and $\overline{X}_t>0$, a.s., hence absolute continuity
in assertion 1 of Theorem \ref{th2} and in assertion 4 of Theorem \ref{coro2} is actually an equivalence with respect to the
Lebesgue measure.\\
We may wonder whether the equivalence between assertions 1 and 2 of Theorem \ref{coro2} still holds when $t$ is fixed: when 0
is regular for $(0,\infty)$, does the condition $p_t^+\ll\lambda^+$ imply that $p_t\ll\lambda$?
A counterexample in the case where 0 is not regular for $(-\infty,0)$ may easily be found. Take for instance, $X_t=Y_t-S_t$, where $Y$ is a compound
Poisson process with absolutely continuous L\'evy measure and $S$ is a subordinator independent of $Y$, whose law at time
$t>0$ is continuous singular. Then clearly $p_t^+\ll\lambda^+$, and there exists a Borel set $A\subset(-\infty,0)$ such that $\lambda(A)=0$
and $\p(-S_t\in A)>0$, so that $p_t(A)>\p(Y_t=0)\p(S_t\in A)>0$.\\
Let $Y$ be a c\`adl\`ag stochastic process such that $Y_0=0$, a.s. We say that $Y$ is an {\it elementary process} if there is
an increasing sequence $(T_n)$ of nonnegative random variables such that $T_0=0$ and $\lim_{n\rightarrow+\infty}T_n=+\infty$, a.s.
and two sequences of finite real-valued random variables $(a_n,\,n\ge0)$ and $(b_n,\,n\ge0)$ such that $b_0=0$ and
\begin{equation}\label{elem}
Y_t=a_nt+b_n\;\;\;\mbox{if}\;\;\;t\in[T_n,T_{n+1})\,.
\end{equation}
We say that $Y$ is a {\it step process} if it is an elementary process with $a_n=0$, for all $n$ in the above definition.
\begin{proposition}\label{coro3} Suppose that $0$ is regular for $(0,\infty)$.
\begin{itemize}
\item[$1.$] If $0$ is regular for $(-\infty,0)$ and if the law of $\overline{X}_t$ is absolutely continuous for some $t>0$,
then for any step process $Y$ which is
independent of $X$, the law of $\sup_{s\le t}(X+Y)_s$ is absolutely continuous for all $t>0$.
\item[$2.$] If $p_t^+\ll\lambda^+$, for all $t>0$, or if $X$ has unbounded variation and if at least one of the ladder height processes
$H$ and $H^*$ has a positive drift, then for any elementary stochastic process $Y$ which is independent of $X$,
the law of $\sup_{s\le t}(X+Y)_s$ is absolutely continuous for all $t>0$.
\end{itemize}
\end{proposition}
\noindent Sufficient conditions for the absolute continuity of the semigroup may be found in Chapter 5 of \cite{sa} and in
Section 5 of \cite{ka}. In particular if $\Pi(\mathbb{R})=\infty$ and $\Pi\ll\lambda$, then $p_t\ll\lambda$ for all $t>0$.
Proposition 20 in Bouleau and Denis \cite{bd} asserts that, under a slight reinforcement of this condition, for any independent
c\`adl\`ag process $Y$, the law of $\sup_{s\le t}(X+Y)_s$ is absolutely continuous provided this law has no atom at 0. In the
particular case where $Y$ is an elementary process, this result is a consequence of part 2 of Proposition \ref{coro3}.\\
In view of Theorems \ref{th2} and \ref{coro2}, it is natural to look for instances of L\'evy processes of type 1 such that
the law of $\overline{X}_t$ is absolutely continuous whereas $p_t(dx)$ is not, as well as instances of L\'evy processes
of type 1 such that the law of $\overline{X}_t$ is not absolutely continuous.
The following corollary is inspired by Orey's example \cite{or}, see also \cite{sa}, Exercise 29.12 and Example 41.23.
\begin{corollary} Let $X$ be a L\'evy process whose characteristic exponent $\psi$, defined by $\e(e^{i\lambda X_t})=e^{-t\psi(\lambda)}$,
is given by:
\[\psi(\lambda)=\int_{\mathbb{R}}(1-e^{i\lambda x}+i\lambda x\ind_{\{|x|<1\}})\Pi(dx)\,.\]
Let $\alpha\in(1,2)$ and $c$ be an integer such that $c>2/(2-\alpha)$ and set $a_n=2^{-c^n}$.
\begin{itemize}
\item[$1.$] If $\Pi(dx)=\sum_{n=1}^\infty a_n^{-\alpha}\delta_{-a_n}(dx)$, then $X$ is of type $1$ and
for all $t>0$, the law of
$\overline{X}_t$ is absolutely continuous whereas $p_t(dx)$ is continuous singular.
\item[$2.$] If $\Pi(dx)=\sum_{n=1}^\infty a_n^{-\alpha}(\delta_{-a_n}(dx)+\delta_{a_n}(dx))$, then $X$ is of type $1$ and
for all $t>0$, the law of $\overline{X}_t$ is not absolutely continuous.
\end{itemize}
\end{corollary}
\begin{proof} We may check that $\int_{(-1,1)}|x|\Pi(dx)=\infty$ in both cases 1 and 2, so that $X$ has unbounded variation and
it is of type 1, from Rogozin's criterion, see \cite{be} p.~167.
On the one hand, in part 1, since $X$ has no positive jumps, the ladder height process $H$ is a pure drift, so it follows from Theorem
\ref{th2} and the remark thereafter that the law of $\overline{X}_t$ is absolutely continuous for all $t>0$.
On the other hand, following \cite{or}, we see that $\mbox{Re}\,\psi(\lambda)$ does not tend to $+\infty$ as $|\lambda|\rightarrow\infty$,
so that the characteristic function $e^{-t\psi(\lambda)}$ does not vanish at infinity and, from the Riemann-Lebesgue Theorem, $p_t(dx)$ is not absolutely continuous. But since $\Pi(dx)$ is discrete with infinite mass, it
follows from the Hartman-Wintner Theorem, see Theorem 27.16 in \cite{sa}, that $p_t(dx)$ is continuous singular.
Then it is proved in Example 41.23 of \cite{sa} that the resolvent measure $U(dx)$ of $X$ is not absolutely continuous, so part 2
follows from Theorem \ref{th2}.
\end{proof}
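\noindent Let us also record why the measures $\Pi$ above are admissible. Since $a_n=2^{-c^n}$ and $1<\alpha<2$, we have
\[\int_{\mathbb{R}}(1\wedge x^2)\,\Pi(dx)=\sum_{n=1}^\infty a_n^{2-\alpha}=\sum_{n=1}^\infty 2^{-c^n(2-\alpha)}<\infty\,,\]
so that $\Pi$ is indeed a L\'evy measure, whereas
\[\int_{\{|x|<1\}}|x|\,\Pi(dx)=\sum_{n=1}^\infty a_n^{1-\alpha}=\sum_{n=1}^\infty 2^{c^n(\alpha-1)}=\infty\,,\]
which confirms the unbounded variation claimed at the beginning of the proof. In case 2 the sums are doubled, which changes nothing.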
\section{An expression for the joint law of $(g_t,\overline{X}_t,X_t)$}\label{expressions}
\setcounter{equation}{0}
The following theorem presents a path decomposition of the L\'evy process $X$, over the interval $[0,t]$, at time $g_t$.
Actually, we will essentially focus on its corollaries which provide some representations of the joint law of $g_t$,
$\overline{X}_t$ and $X_t$ at a fixed time $t$, in terms of the entrance laws $(q_s)$ and $(q^*_s)$. Besides
they will be applied in Section \ref{proofs} for the proofs of the results of Section \ref{main}.
For $\omega\in{\cal D}$ and $s\ge0$, we set $\Delta^\pm_s(\omega)=(\omega_s-\omega_{s-})^\pm$,
where $\omega_{0-}=\omega_0$. Then we define the (special) {\it shift} operator by
\[\theta_s(\omega)=(\omega_{s-}-\omega_{s+u}+\Delta_s^+(\omega),\,u\ge0)\,.\]
The {\it killing} operator and the {\it return} operator are respectively defined as
follows:
\[k_s(\omega)=\left\{\begin{array}{ll}\omega_u,&0\le u<s\\\omega_s,&u\ge s\end{array}\right.,
\;\;\;r_s(\omega)=\left\{\begin{array}{ll}\omega_s-\omega_{(s-u)-}-\Delta_s^-(\omega),&
0\le u<s\\\omega_s-\omega_{0}-\Delta_s^-(\omega),&u\ge s\end{array}\right..\]
We also denote by $\omega^0$ the path which is identically equal to 0.
\begin{theorem}\label{mainth}
Fix $t>0$, let $f$ be any bounded Borel function and let $F$ and $K$ be any bounded Borel functionals
which are defined on the space ${\cal D}$.
\begin{itemize}
\item[$1.$] If $X$ is of type $1$, then
\begin{eqnarray*}
&&\e(f(g_t)\cdot F\circ r_{g_t}\cdot K\circ k_{t-g_t}\circ\theta_{g_t})=\\
&&\qquad\qquad\int_0^tf(s)n^*(F\circ k_s,s<\zeta)n(K\circ k_{t-s},t-s<\zeta)\,ds\,.
\end{eqnarray*}
\item[$2.$] If $X$ is of type $2$, then
\begin{eqnarray*}
&&\e(f(g_t)\cdot F\circ r_{g_t}\cdot K\circ k_{t-g_t}\circ\theta_{g_t})=\\
&&\qquad\qquad\int_0^tf(s)n^*(F\circ k_s,s<\zeta)n(K\circ k_{t-s},t-s<\zeta)\,ds\\
&&\qquad\qquad\qquad\qquad+{\tt d}\,f(t)n^*(F\circ k_t,t<\zeta)K(\omega^0)\,.\nonumber
\end{eqnarray*}
\item[$3.$] If $X$ is of type $3$, then
\begin{eqnarray*}
&&\e(f(g_t)\cdot F\circ r_{g_t}\cdot K\circ k_{t-g_t}\circ\theta_{g_t})=\\
&&\qquad\qquad\int_0^tf(s)n^*(F\circ k_s,s<\zeta)n(K\circ k_{t-s},t-s<\zeta)\,ds\\
&&\qquad\qquad\qquad\qquad+{\tt d}^*f(0)F(\omega^0)n(K\circ k_t,t<\zeta)\,.\nonumber
\end{eqnarray*}
\end{itemize}
\end{theorem}
\noindent Simultaneously with our work, a similar path decomposition has been obtained in \cite{ya}, when $X$ is of type 1.
In the latter work, the post-$g_t$ part of $(X_s,\,0\le s\le t)$ is expressed in terms of the meander $\mathbb{M}^{(t)}=n(\,\cdot\,|\,t<\zeta)=n(\,\cdot\,,\,t<\zeta)/n(t<\zeta)$, see Theorem 5.1.
By applying Theorem \ref{mainth} to the joint law of $g_t$ together with the terminal values of the pre-$g_t$
and the post-$g_t$ parts of $(X_s,\,0\le s\le t)$, we obtain the following representation for the law of the triple
$(g_t,\overline{X}_t,X_t)$. Moreover, when $\lim_{t\rightarrow\infty}X_t=-\infty$, a.s., we define $\overline{X}_\infty=\sup_tX_t$,
the overall supremum of $X$ and $g_\infty=\sup\{t:X_t=\overline{X}_\infty\}$, the location of this supremum. Then we obtain
the same kind of representation for $(g_\infty,\overline{X}_\infty)$. We emphasize that in the next result, as well as in Corollaries
\ref{law1} and \ref{semigroup}, at least one of the drift coefficients ${\tt d}$ and ${\tt d}^*$ is zero.
\begin{corollary}\label{law}
The law of $(g_t,\overline{X}_t,X_t)$ fulfills the following representation:
\begin{eqnarray}\label{both}
\p(g_t\in ds\,,\overline{X}_t\in dx,\overline{X}_t-X_t\in dy)&=&q_s^*(dx)q_{t-s}(dy)\ind_{[0,t]}(s)\,ds\\
&&\qquad\quad+{\tt d}\,\delta_{\{t\}}(ds)q_t^*(dx)\delta_{\{0\}}(dy)\nonumber\\
&&\qquad\qquad\qquad +{\tt d}^*\delta_{\{0\}}(ds)\delta_{\{0\}}(dx)q_t(dy)\,.\nonumber
\end{eqnarray}
If moreover $\lim_{t\rightarrow\infty}X_t=-\infty$, a.s., then
\begin{eqnarray}\label{infty}
\p(g_\infty\in ds\,,\overline{X}_\infty\in dx)=a\,q_s^*(dx)\,ds+{\tt d}^*a\delta_{\{(0,0)\}}(ds,dx)\,,
\end{eqnarray}
where $a$ is the killing rate of the ladder time process $\tau$.
\end{corollary}
\begin{proof}
Let $g$ and $h$ be two bounded Borel functions on $\mathbb{R_+}$ and define the functionals $K$ and $F$ on ${\cal D}$ by
$F(\omega)=g(\omega_\zeta)$ and $K(\omega)=h(\omega_\zeta)$. Then we may check that for $\epsilon\in{\cal E}$ and $t<\zeta(\epsilon)$,
$F\circ k_t(\epsilon)=g(\epsilon_t)$ and $K\circ k_t(\epsilon)=h(\epsilon_t)$. We also have $F\circ r_{g_t}\circ X=g(\overline{X}_t)$ and $K\circ k_{t-g_t}\circ\theta_{g_t}\circ X=h(\overline{X}_t-X_t)$, so that by applying Theorem \ref{mainth} to the functionals $F$ and $K$, we obtain
(\ref{both}).
To prove (\ref{infty}), we first note that $\lim_{t\rightarrow\infty}(g_t,\overline{X}_t)=(g_\infty,\overline{X}_\infty)$, a.s.
Then let $f$ be a bounded and
continuous function which is defined on $\mathbb{R}_+^2$. We have from (\ref{both}):
\[\e(f(g_t,\overline{X}_t))=\int_0^t\!\int_{[0,\infty)}f(s,x)n(t-s<\zeta)\,q_s^*(dx)\,ds+{\tt d}\int_0^\infty f(t,x)q_t^*(dx)+{\tt d}^*n(t<\zeta)f(0,0)\,.\]
On the one hand, we see from (\ref{pi}) that $\lim_{t\rightarrow\infty} n(t<\zeta)=n(\zeta=\infty)=a>0$.
On the other hand, $\lim_{t\rightarrow\infty} n^*(t<\zeta)=0$, and since the term ${\tt d}\int_0^\infty f(t,x)q_t^*(dx)$ is bounded by ${\tt d}\,Cn^*(t<\zeta)$, where $|f(s,x)|\le C$, for all $s,x$, it converges to 0 as $t$ tends to $\infty$. This allows
us to conclude.
\end{proof}
\noindent We derive from Corollary \ref{law} that when $X$ is of type 1, the law of the time $g_t$ is equivalent to
the Lebesgue measure on $[0,t]$, with density $s\mapsto n^*(s<\zeta)n(t-s<\zeta)\ind_{[0,t]}(s)$.
This corollary illustrates the importance of the entrance laws $q_t$ and $q_t^*$ for the
computation of some distributions involved in fluctuation theory. We give below a couple of examples where some explicit forms
can be obtained for $q_t$, $q_t^*$ and the law of $(g_t,\overline{X}_t,X_t)$. When $q_t(dx)\ll\lambda^+$ (resp. $q^*_t(dx)\ll\lambda^+$),
we will denote by $q_t(x)$
(resp. $q_t^*(x)$) the density of $q_t(dx)$ (resp. $q_t^*(dx)$).\\
\noindent {\it Example $1$}: Suppose that $X$ is a Brownian motion with
drift, i.e. $X_t=B_t+ct$, where $B$ is the standard Brownian motion and $c\in\mathbb{R}$. We derive for instance from
Lemma \ref{equivalence1} in Section \ref{proofs} that
\[q_t(dx)=\frac x{\sqrt{\pi t^3}} e^{-(x-c)^2/2t}\,dx\;\;\;\mbox{and}\;\;\;q_t^*(dx)=\frac x{\sqrt{\pi t^3}} e^{-(x+c)^2/2t}\,dx\,.\]
Then expression (\ref{both}) in Corollary \ref{law} allows us to compute the law of the triple $(g_t,\overline{X}_t,X_t)$.\\
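\noindent In the driftless case $c=0$, these expressions provide a direct check of Corollary \ref{law}: integrating $q_s(dx)=q_s^*(dx)$ over $x$ gives
\[n(s<\zeta)=n^*(s<\zeta)=\int_0^\infty\frac x{\sqrt{\pi s^3}}\,e^{-x^2/2s}\,dx=\frac1{\sqrt{\pi s}}\,,\]
and since the ladder time processes of Brownian motion are stable subordinators of index $1/2$, we have ${\tt d}={\tt d}^*=0$, so that (\ref{both}) yields
\[\p(g_t\in ds)=n^*(s<\zeta)n(t-s<\zeta)\ind_{[0,t]}(s)\,ds=\frac{\ind_{[0,t]}(s)}{\pi\sqrt{s(t-s)}}\,ds\,,\]
which is L\'evy's arcsine law for the location of the supremum of Brownian motion.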
\noindent {\it Example $2$}: Recently, the density of the measure $q_t(dx)$ for the symmetric Cauchy process
has been computed in \cite{ma}:
\begin{eqnarray*}
&&q_t(x)=q^*_t(x)=\sqrt{2}\frac{\sin\left(\frac\pi8+\frac32\arctan\left(\frac xt\right)\right)}{(t^2+x^2)^{3/4}}\\
&&\qquad\qquad\qquad-\frac1{2\pi}\int_0^\infty\frac{y}{(1+y^2)(xy+t)^{3/2}}
\exp\left(-\frac1\pi\int_0^\infty\frac{\log(y+s)}{1+s^2}\,ds\right)\,dy\,.
\end{eqnarray*}
As far as we know, this example and the case of Brownian motion with drift (Example 1) are the only instances of L\'evy processes where
the measures $q_t(dx)$, $q^*_t(dx)$ and the law of the triplet $(g_t,\overline{X}_t,X_t)$ can be computed explicitly.\\
\noindent {\it Example $3$}: Recall from (\ref{pi}) that $\int_0^\infty q_t(dx)=n(t<\zeta)$ and $\int_0^\infty q_t^*(dx)=n^*(t<\zeta)$, so that
we can derive from Corollary \ref{law} all possible marginal laws in the triplet $(g_t,\overline{X}_t,X_t)$. In particular, when $X$ is stable,
the ladder time process $\tau$ also satisfies the scaling property with index $\rho=\p(X_1\ge0)$, so we derive from the normalization
$\kappa(1,0)=1$ and (\ref{pi}) that $n(t<\zeta)=t^{-\rho}/\Gamma(1-\rho)$. Moreover $q_t^*$ and $q_t$ are absolutely
continuous in this case (it can be derived for instance from part 4 of Lemma \ref{equivalence1} in the next section).
Then a consequence of (\ref{both}) is the following form of the joint law of $(g_t,\overline{X}_t)$:
\begin{equation}\label{lawstable}
\p(g_t\in ds\,,\overline{X}_t\in dx)=\frac{(t-s)^{-\rho}}{\Gamma(1-\rho)}\ind_{[0,t]}(s)\,q_s^*(x)\,ds\,dx\,.
\end{equation}
Note that this computation is implicit in \cite{ac}, see Corollary 3 and Theorem 5. A more explicit form is given in (\ref{lawsn}),
after Proposition \ref{plsn} in the case where the process has no positive jumps. Note also that when $X$ is stable,
the densities $q_t$ and $q_t^*$ satisfy the scaling properties
\[q_t(y)=t^{-\rho-1/\alpha}q_1(t^{-1/\alpha}y)\;\;\; \mbox{and}\;\;\; q_t^*(x)=t^{\rho-1-1/\alpha}q_1^*(t^{-1/\alpha}x)\,.\]
These properties together with Corollary \ref{law} imply that the three r.v.'s $g_t$,
$\overline{X}_t/g_t^{1/\alpha}$ and $(\overline{X}_t-X_t)/(t-g_t)^{1/\alpha}$ are independent, with respective densities
\[\frac{\sin(\pi\rho)}{\pi}s^{\rho-1}(t-s)^{-\rho}\ind_{[0,t]}(s)\,,\;\;\;\Gamma(\rho)q^*_1(x)\;\;\mbox{and}\;\;\Gamma(1-\rho)q_1(y)\,.\]
The independence of $g_t$, $\overline{X}_t/g_t^{1/\alpha}$ and $(\overline{X}_t-X_t)/(t-g_t)^{1/\alpha}$
has recently been proved in Proposition 2.39 of \cite{co}.\\
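\noindent For instance, when $\alpha=2$ and $X$ is the standard Brownian motion, we have $\rho=1/2$ and, from Example 1, $q_1(x)=q_1^*(x)=\frac x{\sqrt\pi}\,e^{-x^2/2}$. Since $\Gamma(1/2)=\sqrt\pi$, the three densities above become
\[\frac{\ind_{[0,t]}(s)}{\pi\sqrt{s(t-s)}}\,,\;\;\;xe^{-x^2/2}\;\;\mbox{and}\;\;ye^{-y^2/2}\,,\]
that is the arcsine law for $g_t$ and the Rayleigh law for $\overline{X}_t/\sqrt{g_t}$ and $(\overline{X}_t-X_t)/\sqrt{t-g_t}$, in agreement with classical results on Brownian motion and its meander.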
\noindent It is clear that an expression for the law of $\overline{X}_t$ follows directly from Corollary \ref{law} by integrating (\ref{both})
over $s$ and $y$. However, for convenience in the proofs of Section~\ref{proofs}, we state it here
separately. An equivalent version of Corollary \ref{law1} may also be found in \cite{ds}, Lemma 6.
\begin{corollary}\label{law1} The law of $\overline{X}_t$ fulfills the following representation:
\begin{eqnarray}\label{both1}
\p(\overline{X}_t\in dx)=\int_0^tn(t-s<\zeta)q_s^*(dx)\,ds+{\tt d}\,q_t^*(dx)+{\tt d}^*n(t<\zeta)\delta_{\{0\}}(dx)\,.
\end{eqnarray}
\end{corollary}
\noindent Another remarkable direct consequence of Corollary \ref{law}, which will be useful later on, is the following
representation of the semigroup of $X$ in terms of the entrance laws $(q_s)$ and $(q_s^*)$.
\begin{corollary}\label{semigroup} Let us denote the measure $q_t(-dx)$ by $\overline{q}_t(dx)$. We extend the measures
$\overline{q}_t(dx)$ and $q_t^*(dx)$ to $\mathbb{R}$ by setting $\overline{q}_t(A)=\overline{q}_t(A\cap \mathbb{R}_-)$ and
$q_t^*(A)=q_t^*(A\cap \mathbb{R}_+)$, for any Borel set $A\subset \mathbb{R}$. Then we have the following identity between measures
on $\mathbb{R}$:
\begin{eqnarray}\label{sgent}
p_t=\int_0^t\overline{q}_s*q_{t-s}^*\,ds+{\tt d}\, q_t^*+{\tt d}^*\overline{q}_t\,.
\end{eqnarray}
\end{corollary}
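\noindent An elementary consistency check of (\ref{sgent}) may be done for the standard Brownian motion, for which ${\tt d}={\tt d}^*=0$ and, from Example 1, the measures $\overline{q}_s$ and $q^*_{t-s}$ have respective total masses $(\pi s)^{-1/2}$ and $(\pi(t-s))^{-1/2}$. The total mass of the right hand side of (\ref{sgent}) is then
\[\int_0^t\frac{ds}{\pi\sqrt{s(t-s)}}=\frac1\pi\,B\left(\frac12,\frac12\right)=1\,,\]
as required for the probability measure $p_t$.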
Now we turn to the particular case where $X$ has no positive jumps. Then, 0 is always regular for $(0,\infty)$.
When moreover 0 is regular for $(-\infty,0)$, since $H_t\equiv t$, it follows from Theorem~\ref{th2} and the remark thereafter that
the law of $(g_t,\overline{X}_t)$ is absolutely continuous. In the next result, we present
an explicit form of this law. We set $c=\Phi(1)$, where $\Phi$ is the Laplace exponent of
the first passage process $T_x=\inf\{t:X_t>x\}$, which in this case is related to the ladder time process by $T_x=\tau_{cx}$.
\begin{proposition}\label{plsn} Suppose that the L\'evy process $X$ has no positive jumps.
\begin{itemize}
\item[$1.$] If $0$ is regular for $(-\infty,0)$, then for $t>0$, the couple $(g_t,\overline{X}_t)$ has law:
\begin{eqnarray}
\p(g_t\in ds,\overline{X}_t\in dx)&=&cxp_s^+(dx)n(t-s<\zeta)s^{-1}\ind_{(0,t]}(s)\,ds\label{0528}\\
&=&cn(t-s<\zeta)\ind_{(0,t]}(s)\p(\tau_{cx}\in ds)\,dx\,.\label{7520}
\end{eqnarray}
In particular, the density of the law of $\overline{X}_t$ is given by the function:
\[x\mapsto\int_0^tcn(t-s<\zeta)\p(\tau_{cx}\in ds)\,.\]
\item[$2.$] If $0$ is not regular for $(-\infty,0)$, then for all $t>0$,
\begin{eqnarray}\label{1528}
\p(g_t\in ds,\overline{X}_t\in dx)&=&cxn(t-s<\zeta)s^{-1}\ind_{(0,t]}(s)p^+_s(dx)\,ds+\nonumber\\
&&{\tt d}cxt^{-1}p_t^+(dx)\delta_{\{t\}}(ds)\,.
\end{eqnarray}
Moreover, we have the following identity between measures on $[0,\infty)^3$$:$
\begin{eqnarray}
\p(g_t\in ds,\overline{X}_t\in dx)\,dt&=&cn(t-s<\zeta)\ind_{(0,t]}(s)\p(\tau_{cx}\in ds)\,dx\,dt+\nonumber\\
&&{\tt d}c\p(\tau_{cx}\in dt)\delta_{\{t\}}(ds)\,dx\,.\label{7620}
\end{eqnarray}
\end{itemize}
\end{proposition}
\noindent {\it Example $4$}: Using the series expansion (14.30), p.~88 in \cite{sa} for $p_s^+(dx)$,
we derive from (\ref{0528}) in Proposition \ref{plsn}, the following reinforcement of expression (\ref{lawstable}). When $X$
is stable and spectrally negative, the density of $(g_t,\overline{X}_t)$ is given by:
\begin{equation}\label{lawsn}
\frac{c}{\pi\Gamma\left(\frac{\alpha-1}\alpha\right)(t-s)^{1/\alpha}}\sum_{n=1}^\infty(-1)^{n-1}
\frac{\Gamma(1+n/\alpha)}{n!}\sin\left(\pi n\frac{\alpha-1}\alpha\right)s^{-\frac{n+\alpha}\alpha}x^n\,,\;\;s\in[0,t]\,,\;\;x\ge0\,,
\end{equation}
which completes Proposition 1, p.282 in \cite{bi}.\\
\noindent We end this section with a remark on the existence of a density with respect to the Lebesgue measure, for the law of the local time of general Markov processes. From (\ref{1528}), we derive that $\p(\tau_x\ge t)\,dt=\int_0^x\int_{(0,t]}n^*(t-s<\zeta)\p(\tau_y\in ds)\,dy\,dt+
{\tt d}\p(\tau_x\in dt)\,dt$.
Actually, this identity may be generalized to any subordinator $S$ with drift $b$, killing rate $k$ and
L\'evy measure $\nu$. Set $\bar{\nu}(t)=\nu(t,\infty)+k$, then the Laplace exponent $\Phi$ of $S$
is given by
\[\Phi(\alpha)=\alpha b+\alpha\int_0^\infty e^{-\alpha t}\bar{\nu}(t)\,dt\,,\]
from which, together with Fubini's theorem, we derive that for all $x\ge0$ and $\alpha>0$:
\begin{eqnarray*}
\frac1\alpha\e(1-e^{-\alpha S_x})&=&\left(b+\int_0^\infty e^{-\alpha t}\bar{\nu}(t)\,dt\right)\frac{\e(1-e^{-\alpha S_x})}{\Phi(\alpha)}\,,\\
\int_0^\infty e^{-\alpha t}\p(S_x>t)\,dt&=&\left(b+\int_0^\infty e^{-\alpha t}\bar{\nu}(t)\,dt\right)\int_0^\infty e^{-\alpha t}
\int_0^x\p(S_y\in dt)\,dy\,.
\end{eqnarray*}
Inverting the Laplace transforms on both sides of this identity gives for all $x\ge0$, the following identity between measures,
\[\p(S_x>t)\,dt=\int_0^x\int_{(0,t]}\bar{\nu}(t-s)\,\p(S_y\in ds)\,dy\,dt+b\int_0^x\p(S_y\in dt)\,dy\,.\]
In particular, if $S$ has no drift coefficient, then the law of $L_t\eqdef\inf\{u:S_u>t\}$ has density:
\[\frac{\p(L_t\in dx)}{dx}=\int_{(0,t]}\bar{\nu}(t-s)\,\p(S_x\in ds)\,.\]
This shows that if $a\in\mathbb{R}$ is a regular state for a real Markov process $M$ such that $\int_0^t\ind_{\{M_s=a\}}\,ds=0$, a.s. for all $t$,
then the law of the local time of $M$ at level $a$ is absolutely continuous, for any time $t>0$.
This last result is actually a particular case of \cite{dr}, where it is proved that for any non-creeping L\'evy process,
the law of the first passage time over $x>0$ is always absolutely continuous.\\
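\noindent To illustrate the last density formula, take for $S$ the stable subordinator of index $\beta\in(0,1)$, normalized so that $\Phi(\alpha)=\alpha^\beta$. Then $b=k=0$ and
\[\bar\nu(t)=\frac{t^{-\beta}}{\Gamma(1-\beta)}\,,\;\;\;\mbox{since}\;\;\;\alpha\int_0^\infty e^{-\alpha t}\,\frac{t^{-\beta}}{\Gamma(1-\beta)}\,dt=\alpha^\beta\,,\]
so that the law of the inverse $L_t$ has density
\[\frac{\p(L_t\in dx)}{dx}=\frac1{\Gamma(1-\beta)}\int_{(0,t]}(t-s)^{-\beta}\,\p(S_x\in ds)\,,\]
recovering the absolute continuity of the law of the inverse stable subordinator, whose marginals are of Mittag--Leffler type.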
\section{Proofs and further results}\label{proofs}
\setcounter{equation}{0}
\begin{proof} {\it of Theorem $\ref{mainth}$}. Let $\mathbf{e}$ be an exponential time with parameter $\varepsilon>0$ which is
independent of $(X,\mathbb{P})$. Recall the notations of Section \ref{prelim} and for $\omega\in{\cal D}$, define
$d_s=\inf\{u>s:\omega_u=0\}$. From the independence of $\mathbf{e}$ and Fubini's theorem, we have for every bounded function $f$
on $\mathbb{R}_+$ and all bounded Borel functionals $F$ and $K$ on~${\cal D}$,
\begin{eqnarray*}
&&\e(f(g_{\mathbf{e}})F\circ r_{g_{\mathbf{e}}}K\circ k_{{\mathbf{e}}-g_{\mathbf{e}}}\circ\theta_{g_{\mathbf{e}}})=
\e\left(\int_0^\infty\varepsilon e^{-\varepsilon t} f(g_{t})F\circ r_{g_{t}}K\circ k_{t-g_{t}}\circ\theta_{g_{t}} \,dt\right)\nonumber\\
&=&\e\left(\sum_{s\in G}\varepsilon e^{-\varepsilon s}f(s)F\circ r_s\int_s^{d_s}e^{-\varepsilon (u-s)} K\circ k_{u-s}\circ\theta_{s}
\,du\right)\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad+\e\left(\int_0^\infty \varepsilon e^{-\varepsilon t}f(t)F\circ r_t\ind_{\{g_t=t\}}\,dt\right)K(\omega^0)\,.
\end{eqnarray*}
Recall from Section \ref{prelim} that $\epsilon^s$ denotes the excursion starting at $s$. Then
\begin{eqnarray}
\e(f(g_{\mathbf{e}})F\circ r_{g_{\mathbf{e}}}K\circ k_{{\mathbf{e}}-g_{\mathbf{e}}}\circ\theta_{g_{\mathbf{e}}})&=&
\e\left(\sum_{s\in G}\varepsilon e^{-\varepsilon s}f(s)F\circ r_s\int_0^{d_s-s}e^{-\varepsilon u} K(\epsilon^s\circ k_u)
\,du\right)\nonumber\\
&+&\e\left(\int_0^\infty \varepsilon e^{-\varepsilon t}f(t)F\circ r_t\ind_{\{X_t=\overline{X}_t\}}\,dt\right)K(\omega^0)\,.\label{3618}
\end{eqnarray}
The process
\[(s,\omega,\epsilon)\mapsto e^{-\varepsilon s}f(s)F\circ r_s(\omega)\int_0^{\zeta(\epsilon)} e^{-\varepsilon u}K\circ k_u(\epsilon)\,du\]
is ${\cal P}({\cal F}_s)\otimes{\cal E}$-measurable, so that by applying (\ref{compensation}) and (\ref{delta}) to equality
(\ref{3618}), we obtain
\begin{eqnarray}
\frac1\varepsilon\e(f(g_{\mathbf{e}})F\circ r_{g_{\mathbf{e}}}K\circ k_{{\mathbf{e}}-g_{\mathbf{e}}}\circ\theta_{g_{\mathbf{e}}})
&=&\e\left(\int_0^\infty dL_s e^{-\varepsilon s}f(s)F\circ r_s\right)n\left(\int_0^\zeta e^{-\varepsilon u}K\circ k_u\,du\right)\nonumber\\
&&\qquad\quad+{\tt d}\,\e\left(\int_0^\infty dL_s e^{-\varepsilon s}f(s)F\circ r_s\right)K(\omega^0)\,.\label{4264}
\end{eqnarray}
From the time reversal property of L\'evy processes, see Lemma 2, p.~45 in \cite{be},
under $\p$ we have $X\circ k_{\mathbf{e}}\ed X\circ r_{\mathbf{e}}$, so that
\begin{eqnarray}\label{4265}
\e(f(g_{\mathbf{e}})F\circ r_{g_{\mathbf{e}}}K\circ k_{{\mathbf{e}}-g_{\mathbf{e}}}\circ\theta_{g_{\mathbf{e}}})=
\e(f(\mathbf{e}-g^*_{\mathbf{e}})K\circ r_{g^*_{\mathbf{e}}}F\circ k_{{\mathbf{e}}-g^*_{\mathbf{e}}}
\circ\theta_{g^*_{\mathbf{e}}})\,.
\end{eqnarray}
Doing the same calculation as in (\ref{4264}) for the reflected process at its minimum $X-\underline{X}$, we get
\begin{eqnarray}
&&\frac1\varepsilon\e(f(\mathbf{e}-g^*_{\mathbf{e}})K\circ r_{g^*_{\mathbf{e}}}F\circ k_{{\mathbf{e}}-g^*_{\mathbf{e}}}\circ\theta_{g^*_{\mathbf{e}}})\nonumber\\
&=&\e\left(\int_0^\infty dL_s^* e^{-\varepsilon s}K\circ r_s\right)n^*\left(\int_0^\zeta e^{-\varepsilon u}f(u)F\circ k_u\,du\right)\label{4268}\\&&\qquad\qquad\qquad+{\tt d}^*\,
\e\left(\int_0^\infty dL_s^* e^{-\varepsilon s}K\circ r_s\right)f(0)F(\omega^0)\nonumber\,.
\end{eqnarray}
Then we derive from (\ref{4264}), (\ref{4265}) and (\ref{4268}), the following equality
\begin{eqnarray}
&&\e\left(\int_0^\infty dL_s e^{-\varepsilon s}f(s)F\circ r_s\right)n\left(\int_0^\zeta e^{-\varepsilon u}K\circ k_u\,du\right)\nonumber\\
&&\qquad\qquad\qquad+{\tt d}\,\e\left(\int_0^\infty dL_s e^{-\varepsilon s}f(s)F\circ r_s\right)K(\omega^0)\nonumber\\
&=&\e\left(\int_0^\infty dL_s^* e^{-\varepsilon s}K\circ r_s\right)n^*\left(\int_0^\zeta e^{-\varepsilon u}f(u)F\circ k_u\,du\right)\nonumber\\
&&\qquad\qquad\qquad+{\tt d}^*\,\e\left(\int_0^\infty dL_s^* e^{-\varepsilon s}K\circ r_s\right)f(0)F(\omega^0)\,.\label{4369}
\end{eqnarray}
Then by taking $f\equiv1$, $F\equiv1$ and $K\equiv1$, we derive from (\ref{4264}) that
\begin{equation}\label{3247}
\kappa(\varepsilon,0)=n(1-e^{-\varepsilon\zeta})+\varepsilon{\tt d}\,.
\end{equation}
Now suppose that $X$ is of type 1 or 2, so that ${\tt d}^*=0$, from what has been recalled in Section \ref{prelim}.
Hence with $K\equiv1$ in (\ref{4369}) and using (\ref{3247}), we have
\begin{eqnarray}
\e\left(\int_0^\infty dL_s e^{-\varepsilon s}f(s)F\circ r_s\right)\kappa(\varepsilon,0)\kappa^*(\varepsilon,0)
=\varepsilon n^*\left(\int_0^\zeta e^{-\varepsilon u}f(u)F\circ k_u\,du\right)\,.\label{4036}
\end{eqnarray}
But using (\ref{wh}) and plugging (\ref{4036}) into (\ref{4264}) gives
\begin{eqnarray*}
&&\e\left(\int_0^\infty e^{-\varepsilon t} f(g_{t})F\circ r_{g_{t}}K\circ k_{t-g_{t}}\circ\theta_{g_{t}} \,dt\right)
=n^*\left(\int_0^\zeta e^{-\varepsilon u}f(u)F\circ k_u\,du\right)\times\\
&&n\left(\int_0^\zeta e^{-\varepsilon u}K\circ k_u\,du\right)
+{\tt d}\,n^*\left(\int_0^\zeta e^{-\varepsilon u}f(u)F\circ k_u\,du\right)K(\omega^0)\,,
\end{eqnarray*}
so that part 1 and part 2 of the theorem follow for $\lambda$-almost every $t>0$ by inverting the Laplace transforms in this equality.
We easily check that, for each fixed $t>0$, the process $s\mapsto g_s$ is almost surely continuous at $s=t$. Hence for any bounded and continuous functions $f$, $K$ and $F$,
both sides of these identities are continuous in $t$,
hence they coincide for all $t>0$. Then we extend this result to any bounded Borel functions $f$, $K$ and $F$ through a classical
density argument. Finally, part 3 is obtained in the same way as parts 1 and 2.\end{proof}
Recall that the definition of the ladder height process $(H_t)$ has been given in Section~\ref{prelim}.
Then define $(\ell_x,\,x\ge0)$ as the right continuous inverse of $H$, i.e.
\[\ell_x=\inf\{t:H_t> x\}\,.\]
Note that for types 1 and 2, since $H$ is a strictly increasing subordinator,
the process $(\ell_x,\,x\ge0)$ is continuous, whereas in type 3, $H$ is a compound Poisson process and $\ell$
is a c\`adl\`ag jump process.
Parts 1 and 2 of the following lemma are reinforcements of Theorems 3 and 5 in~\cite{ac}. Recall that $V(dt,dx)$ denotes the potential
measure of the ladder process $(\tau,H)$.
\begin{lemma}\label{equivalence1} Let $X$ be a L\'evy process which is not a compound Poisson process
and such that $|X|$ is not a subordinator.
\begin{itemize}
\item[$1.$] The following identity between measures holds on $\mathbb{R}_+^3$:
\begin{equation}\label{4594}
u\p(X_t\in dx,\,\ell_x\in du)\,dt=t\p(\tau_u\in dt,H_u\in dx)\,du\,.
\end{equation}
\item[$2.$] The following identity between measures holds on $\mathbb{R}_+^2$:
\begin{equation}\label{26811}
{\tt d}^*\delta_{\{(0,0)\}}(dt,dx)+q_t^*(dx)\,dt=V(dt,dx)\,,
\end{equation}
moreover for all $t>0$, and for all Borel sets $B\in{\cal B}_{\mathbb{R}_+}$, we have,
\begin{equation}\label{2681}
q_t^*(B)=t^{-1}\e\left(\ell(X_t)\ind_{\{X_t\in B\}}\right)\,.
\end{equation}
\item[$3.$] For all $t>0$, the measures $q_t^*(dx)$ and $p_t^+(dx)$ are equivalent on $\mathbb{R}_+$.
\end{itemize}
\end{lemma}
\begin{proof} When 0 is regular for $(-\infty,0)$, part 1 is proved in Theorem 3 of \cite{ac} and when 0 is regular for both $(-\infty,0)$
and $(0,\infty)$, part 2 is proved in Theorem 5 of \cite{ac}.
Although the proofs of parts 1 and 2 follow essentially the same scheme as in \cite{ac}, it is necessary to check some details.
First recall the so-called Fristedt identity, which is established, in all the cases covered by this
lemma, in \cite{ky}, see Theorem 6.16. For all $\alpha\ge0$ and $\beta\ge0$,
\begin{equation}\label{fris}
\kappa(\alpha,\beta)=\exp\left(\int_0^\infty dt\int_{[0,\infty)}(e^{-t}-e^{-\alpha t-\beta x})t^{-1}\,\p(X_t\in dx)\right)\,.
\end{equation}
Note that the constant $k$ which appears in this theorem is equal to 1, according to our normalization,
see Section 1. Then recall (\ref{kappa}): $\e\left(e^{-\alpha\tau_u-\beta H_u}\right)=e^{-u\kappa(\alpha,\beta)}$. This expression
is differentiable in $\alpha>0$ and in $u>0$, and using (\ref{fris}), we obtain:
\begin{eqnarray*}
\e(\tau_ue^{-\alpha\tau_u-\beta H_u})&=&u\,\e(e^{-\alpha\tau_u-\beta H_u})\frac{\partial}{\partial \alpha}\kappa(\alpha,\beta)\\
&=&-u\int_0^\infty e^{-\alpha t}\e\left(e^{-\beta X_t}\ind_{\{X_t\ge0\}}\right)dt\frac{\partial}{\partial u}\e(e^{-\alpha\tau_u-\beta H_u})\\
&=&-u\frac{\partial}{\partial u}\e\left(\int_0^\infty \exp(-\alpha(t+\tau_u)-\beta(\tilde{X}_t+H_u))\ind_{\{\tilde{X}_t\ge0\}}\,dt\right)\,,
\end{eqnarray*}
where $\tilde{X}$ is a copy of $X$ which is independent of $(\tau_u,H_u)$. We may take for instance
$\tilde{X}=(X_{\tau_u+t}-X_{\tau_u},\,t\ge0)$, so that it follows from a change of variables and
the definition of $(\ell_x,\,x\ge0)$,
\begin{eqnarray*}
\e(\tau_ue^{-\alpha\tau_u-\beta H_u})&=&-u\frac{\partial}{\partial u}\e\left(\int_0^\infty \exp(-\alpha(t+\tau_u)-\beta X_{\tau_u+t})\ind_{\{X_{\tau_u+t}\ge H_u\}}\,dt\right)\\
&=&-u\frac{\partial}{\partial u}\e\left(\int_0^\infty \exp(-\alpha t-\beta X_{t})\ind_{\{X_{t}\ge H_u,\,\tau_u\le t\}}\,dt\right)\\
&=&-u\frac{\partial}{\partial u}\int_0^\infty dt e^{-\alpha t}\int_{[0,\infty)} e^{-\beta x}\p(X_t\in dx,\ell_x> u)\,,
\end{eqnarray*}
from which we deduce that
\[ \int_{[0,\infty)^2}e^{-\alpha t-\beta x} t\,\p(\tau_u\in dt,H_u\in dx)\,du=
\int_{[0,\infty)^2} e^{-\alpha t-\beta x}u\,\p(X_t\in dx,\ell_x\in du)\,dt\,,\]
and (\ref{4594}) follows by inverting the Laplace transforms.
Let ${\bf e}$ be an exponentially distributed random variable with parameter $\varepsilon$, which is independent of $X$.
From identity (6.18), p.~159 in \cite{ky}, we have
\begin{equation}\label{fris1}
\e\left(\exp(-\beta\overline{X}_{\bf e})\right)=\kappa(\varepsilon,0)\int_{[0,\infty)^2}e^{-\varepsilon t-\beta x}
\int_0^\infty\p(\tau_s\in dt,H_s\in dx)\,ds\,.
\end{equation}
Suppose that $X$ is of type 1 or 2. By taking the Laplace transforms in $x$ and $t$ of identity (\ref{both1}) in Corollary
\ref{law1}, we obtain
\begin{equation}\label{3137}
\e\left(\exp(-\beta\overline{X}_{\bf e})\right)=\left(\varepsilon{\tt d}+n(1-e^{-\varepsilon\zeta})\right)
n^*\left(\int_0^\zeta e^{-\varepsilon s}e^{-\beta \epsilon_s}\,ds\right)\,,
\end{equation}
and by comparing (\ref{3247}), (\ref{fris1}) and (\ref{3137}), it follows
\begin{equation}\label{1247}
n^*\left(\int_0^\zeta e^{-\varepsilon s}e^{-\beta \epsilon_s}\,ds\right)=\int_{[0,\infty)^2}e^{-\varepsilon t-\beta x}
\int_0^\infty\p(\tau_s\in dt,H_s\in dx)\,ds\,.
\end{equation}
Then we derive part 2 from (\ref{1247}) and (\ref{4594}). If $X$ is of type 3, then taking
the Laplace transforms in $x$ and $t$ of identity (\ref{both1}) gives
\begin{equation}\label{3138}
\e\left(\exp(-\beta\overline{X}_{\bf e})\right)= n(1-e^{-\varepsilon\zeta})
\left({\tt d}^*+n^*\left(\int_0^\zeta e^{-\varepsilon t}e^{-\beta \epsilon_t}\,dt\right)\right)\,,
\end{equation}
so that by comparing (\ref{3247}), (\ref{fris1}) and (\ref{3138}), we obtain
\begin{equation}\label{1250}
{\tt d}^*+n^*\left(\int_0^\zeta e^{-\varepsilon t}e^{-\beta \epsilon_t}\,dt\right)=\int_{[0,\infty)^2}e^{-\varepsilon t-\beta x}
\int_0^\infty\p(\tau_s\in dt,H_s\in dx)\,ds\,,
\end{equation}
and part 2 follows from (\ref{1250}) and (\ref{4594}) in this case.
Then we show the third assertion. First note that $q_t^*$ is absolutely continuous with respect to $p_t^+$ for all $t>0$,
since from (\ref{2681}) we have for any Borel set $B\subset\mathbb{R}_+$ such that $\p(X_t\in B)=0$,
\[q_t^*(B)=t^{-1}\e(\ell(X_t)\ind_{\{X_t\in B\}})=0\,.\]
Conversely, take a Borel set $B\subset\mathbb{R}_+$, such that $\p(X_t\in B)>0$. Then since $\p(X_t=0)=0$, there exists $y>0$ such that
$\p(X_t\in B,\,X_t>y)>0$. As the right continuous inverse of a subordinator,
$(\ell_x)$ is nondecreasing and we have for all $x>0$, $\p(\ell_x>0)=1$.
Therefore the result follows from the inequality:
\[0<\e(\ell_{y}\ind_{\{X_t\in B,\,X_t>y\}})\le\e(\ell(X_t)\ind_{\{X_t\in B\}})\,,\]
together with identity (\ref{2681}).
\end{proof}
\noindent Recall from Section \ref{prelim} that $\pi$ is the L\'evy measure of the ladder time process $\tau$ and that
$\overline{\pi}(t)=\pi(t,\infty)$.
\begin{lemma}\label{equivalence2} Under the assumption of Lemma $\ref{equivalence1}$, for all $t>0$,
the following measures on $\mathbb{R}_+$$:$
\[\int_0^t\overline{\pi}(t-s)q_s^*(dx)\,ds\;\;\mbox{and}\;\;\int_0^tq_s^*(dx)\,ds\]
are equivalent.
\end{lemma}
\begin{proof} For every Borel set $B\subset\mathbb{R}_+$, we have
\[\overline{\pi}(t)\int_0^tq_s^*(B)\,ds\le
\int_0^t\overline{\pi}(t-s)q_s^*(B)\,ds\,,\]
hence $\int_0^tq_s^*(dx)\,ds$ is absolutely continuous with respect to
$\int_0^t\overline{\pi}(t-s)q_s^*(dx)\,ds$. Moreover,
for all $\varepsilon\in(0,t)$ and every Borel set $B\subset\mathbb{R}_+$, we may write
\begin{eqnarray*}
&&\int_0^t\overline{\pi}(t-s)q_s^*(B)\,ds\le\\
&&\overline{\pi}(\varepsilon)\int_0^tq_s^*(B)\,ds+
\int_{t-\varepsilon}^t\overline{\pi}(t-s)q_s^*(B)\,ds<\infty\,.
\end{eqnarray*}
Hence if $\int_0^tq_s^*(B)\,ds=0$, then for all $\varepsilon\in(0,t)$,
\[\int_0^t\overline{\pi}(t-s)q_s^*(B)\,ds\le
\int_{t-\varepsilon}^t\overline{\pi}(t-s)q_s^*(B)\,ds<\infty\,.\]
The finiteness of the right hand side of the above inequality can be derived from
Corollary \ref{law1}. Hence, by dominated convergence, this term tends to 0 as $\varepsilon$ tends to 0,
so that the equivalence between the measures $\int_0^t\overline{\pi}(t-s)q_s^*(dx)\,ds$
and $\int_0^tq_s^*(dx)\,ds$ is proved.
\end{proof}
\noindent Now we are ready to prove all the results of Section \ref{main}.\\
\noindent {\it Proof of Theorem} \ref{type}. When $X$ is of type 1 or 2, the result follows from Corollary \ref{law1},
part 3 of Lemma \ref{equivalence1}, relation (\ref{pi}) and Lemma \ref{equivalence2}. When $X$ is of type 3, the arguments
are the same, except that one has to take into account the fact that the law of $\overline{X}_t$ has an atom at 0, as
specified in Corollary \ref{law1}. $\;\;\Box$\\
\noindent {\it Proof of Theorem $\ref{th2}$}. We first prove that part 1 implies that for all $t>0$, the law of $\overline{X}_t$ is absolutely continuous. To that aim, observe that
\[\overline{X}_{2t}=\max\left\{\overline{X}_t,X_t+\sup_{0\le s\le t}(X_{t+s}-X_t)\right\}=\max\left\{\overline{X}_t,X_t+\sup_{0\le s\le t}X^{(1)}_s\right\}\,,\]
where $X^{(1)}$ is an independent copy of $X$. From this independence and the above expression, we easily deduce that if the law of
$\overline{X}_t$ is absolutely continuous, then so is that of $\overline{X}_{2t}$. Therefore, from Theorem \ref{type}, the measure
$\mu^+_{2t}$ is absolutely continuous. This clearly implies that for all $s\in(0,2t]$, the measure $\mu_s^+$ is absolutely continuous.
Applying Theorem \ref{type} again, it follows that the law of $\overline{X}_s$ is absolutely continuous, for all $s\in(0,2t]$. Then
we show the desired result by reiterating this argument.
Let us assume that part 1 holds. Then for all $t>0$, the law of $\overline{X}_t$ is absolutely continuous. Therefore the resolvent measure
$U(dx)$ is absolutely continuous. Indeed,
let ${\bf e}$ be an independent exponentially distributed random time with parameter 1,
then the law of $\overline{X}_{\bf e}$ admits a density, hence the law of $X_{\bf e}=X_{\bf e}-\overline{X}_{\bf e}+\overline{X}_{\bf e}$
also admits a density, since the random variables $X_{\bf e}-\overline{X}_{\bf e}$ and $\overline{X}_{\bf e}$ are independent, see Chap. VI in \cite{be}. Since the law of $X_{\bf e}$ is precisely the measure $U(dx)$, we have proved that part 1 implies part 3.
Then part 3 clearly implies part 2 and from Corollary \ref{lebesgue}, part 2 implies part 1.
First observe that $V(dx)$ is absolutely continuous if and only if $\int_0^tq_s^*(dx)\,ds$ is absolutely continuous, for all $t>0$. Indeed,
from part 2 of Lemma \ref{equivalence1}, we have $V(dx)=\int_0^\infty q_s^*(dx)\,ds$, hence if $V(dx)$ is absolutely continuous, then so
are the measures $\int_0^t q_s^*(dx)\,ds$, for all $t>0$. Conversely assume that the measures $\int_0^t q_s^*(dx)\,ds$ are absolutely continuous
for all $t>0$. Let $A$ be a Borel set of $\mathbb{R}_+$ such that $\lambda_+(A)=0$. From the assumption, $q_s^*(A)=0$, for $\lambda$-almost every
$s>0$, hence $V(A)=\int_0^\infty q_s^*(A)\,ds=0$, so that $V(dx)$ is absolutely continuous. Then from Lemma \ref{equivalence2} and Corollary \ref{law1}, for each $t$, the law of $\overline{X}_t$ is equivalent to the measure $\int_0^t q_s^*(dx)\,ds$. Therefore part 4 is equivalent
to part 1, from the argument of the beginning of this proof.$\;\;\Box$\\
\noindent {\it Proof of Theorem $\ref{coro2}$}. If $p_t^+\ll\lambda^+$ for all $t>0$, then from part 3 of Lemma \ref{equivalence1}, $q_t^*\ll\lambda^+$, for all $t>0$. Suppose moreover that 0 is regular for $(0,\infty)$ and let $A$ be a Borel subset of $\mathbb{R}$,
such that $\lambda(A)=0$. Then from Corollary \ref{semigroup} and Fubini Theorem, we have
\[p_t(A)=\int_0^t ds\,q_s^**\bar{q}_{t-s}(A)+{\tt d}\,q_t^*(A)\,,\]
where $\bar{q}_s$ and $q_s^*$ are extended on $\mathbb{R}$, as in this corollary.
But from the assumptions, for all $0<s<t$, $q_s^**\bar{q}_{t-s}(A)=0$ and $q_t^*(A)=0$, hence $p_t(A)=0$,
for all $t>0$ and $p_t$ is absolutely continuous, for all $t>0$. So part 1 implies part 2 and the converse is obvious.
Then it readily follows from part 3 of Lemma \ref{equivalence1} and identity (\ref{26811}) that part 1 implies part 3 (recall that ${\tt d}^*=0$ in
the present case). Now suppose that $V(dt,dx)$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}_+^2$. Then we derive from identity (\ref{26811}) that the measures $q_t^*(dx)$ are absolutely continuous for $\lambda$-almost every $t>0$. From Corollary \ref{semigroup}, it means that $p_t$ is absolutely continuous for $\lambda$-almost every $t>0$. But if the semigroup $p_t$ is absolutely continuous for some $t$, then $p_s$ is absolutely continuous for all $s\ge t$. Hence $p_t$ is actually absolutely continuous, for all $t>0$ and part 3 implies part 2.
Then suppose that $X$ is of type 1 and recall that ${\tt d}={\tt d}^*=0$ in this case. From Corollary \ref{law} and part 2 of
Lemma \ref{equivalence1}, we have:
\begin{equation}\label{potential}
\p(g_t\in ds,\overline{X}_t\in dx)= n(t-s<\zeta)V(ds,dx)\,.
\end{equation}
Since $n(t-s<\zeta)>0$, for all $s\in[0,t]$, we easily derive from identity (\ref{potential}) that part 3 and part 4 are equivalent.
Let us denote by $p_t^-$ the restriction of $p_t$ to $\mathbb{R}_-$. If part 2 is satisfied, then $p_t^+$ and $p_t^-$ are absolutely continuous for all $t>0$. Then from part 3 of Lemma \ref{equivalence1} applied to $X$ and its dual process $-X$, it follows that $q_t$ and $q_t^*$ are absolutely continuous for all $t>0$, so that from Corollary \ref{law}, the triple $(g_t,\overline{X}_t,X_t)$ is absolutely continuous for all $t>0$, hence part 2 implies part 5.
Then part 5 clearly implies part 4.$\;\;\Box$\\
\noindent {\it Proof of Proposition $\ref{coro3}$.} In this proof, it suffices to assume that $Y$ is
a deterministic process, i.e. $(T_n)$, $(a_n)$ and $(b_n)$ in (\ref{elem}) are deterministic sequences.
In order to prove part 1, let us first assume that $a_n=0$, for all $n$. Then recall that from Theorem \ref{th2}, the law
of $\overline{X}_t$ is absolutely continuous, for all $t>0$.
Fix $t>0$ and let $n$ be such that $t\in[T_n,T_{n+1})$. Set $Z_k=Y_{T_k}+\sup_{T_k\le s<T_{k+1}}X_s$ and
$Z=Y_{T_n}+\sup_{T_n\le s<t}X_s$, then we have
\begin{equation}\label{4580}\sup_{s\le t}X_s+Y_s=\max\{Z_1,Z_2,\dots,Z_{n-1},Z\}\,.
\end{equation}
But we can write
\begin{equation}\label{4380}
Z_k=Y_{T_k}+X_{T_k}+\sup_{s\le T_{k+1}-T_k}X^{(k)}_s\;\;\;\mbox{and}\;\;\;Z=Y_{T_n}+
X_{T_n}+\sup_{s\le t-T_n}X^{(n)}_s\,,
\end{equation}
where $X^{(k)}$, $k=1,\dots,n$ are copies of $X$ such that $X, X^{(k)}$, $k=1,\dots,n$ are independent.
From Theorem \ref{th2}, the laws of $\sup_{s\le T_{k+1}-T_k}X^{(k)}_s$ and $\sup_{s\le t-T_n}X^{(n)}_s$ are absolutely continuous.
From the representation (\ref{4380}) and the independence hypothesis, we derive that the laws of $Z_1,Z_2,\dots,Z_{n-1}$ and $Z$ are
absolutely continuous. Since the maximum of any finite sequence of absolutely continuous random variables is itself
absolutely continuous, we conclude that the law of $\sup_{s\le t}X_s+Y_s$ is absolutely continuous and the first part is proved.
Now we assume that $(a_n)$ is any deterministic sequence. Then we have (\ref{4580}) with
\begin{equation}\label{4381}
Z_k=b_k+X_{T_k}+\sup_{s\le T_{k+1}-T_k}X^{(k)}_s+a_ks\;\;\;\mbox{and}\;\;\;Z=b_n+
X_{T_n}+\sup_{s\le t-T_n}X^{(n)}_s+a_ns\,,
\end{equation}
where $X^{(k)}$, $k=1,\dots,n$ are as above. If $p^+_t\ll \lambda^+$ for all $t$, then this property also holds
for the process $X$ with any drift $a$, i.e. $X_t+at$, so from Theorem \ref{type} the laws of $\sup_{s\le T_{k+1}-T_k}X^{(k)}_s+a_ks$
and $\sup_{s\le t-T_n}X^{(n)}_s+a_ns$ are absolutely continuous and we conclude that the law of $\sup_{s\le t}X_s+Y_s$ is
absolutely continuous, in the same way as for the first part.
Finally, if $X$ has unbounded variation, then it is of type 1. If moreover, for instance, the ladder
height process at the supremum $H$ has a positive drift, then from Theorem \ref{th2} and the remark thereafter, the law of $\overline{X}_t$ is absolutely continuous for all $t>0$. Since $X$ has unbounded variation, it follows from (iv) p.~64 in \cite{do} that
for any $a\in\mathbb{R}$, the ladder height process at the supremum of the drifted L\'evy process $X_t+at$ also has a positive drift,
and since $X_t+at$ is also of type 1, the law of $\sup_{s\le t}X_s+as$ is absolutely continuous. Then from Theorem \ref{th2}, the laws of $\sup_{s\le T_{k+1}-T_k}X^{(k)}_s+a_ks$ and $\sup_{s\le t-T_n}X^{(n)}_s+a_ns$ are absolutely continuous and again we conclude that the law of $\sup_{s\le t}X_s+Y_s$ is absolutely continuous, in the same way as for the first part. $\;\;\Box$\\
\noindent {\it Proof of Proposition $\ref{plsn}$}: Recall that under the assumption of this proposition, we have ${\tt d}^*=0$.
So, we derive from Corollary \ref{law}, by integrating identity (\ref{both}) over $y$ and from part 2 of Lemma \ref{equivalence1},
that
\begin{eqnarray*}
&&\p(g_t\in ds,\overline{X}_t\in dx)=\\
&&\qquad\qquad s^{-1}n(t-s<\zeta)\e(\ell(x)\ind_{\{X_s\in dx\}})\ind_{(0,t]}(s)\,ds+
{\tt d}\delta_{\{t\}}(ds)t^{-1}\e(\ell(x)\ind_{\{X_s\in dx\}})\,.
\end{eqnarray*}
Since $X$ has no positive jumps, $\overline{X}_t$ is continuous. Moreover, it is an increasing additive functional
of the reflected process $\overline{X}_t-X_t$, such that
\[\e\left(\int_0^\infty e^{-t}\,d\overline{X}_t\right)=\Phi(1)^{-1}\,,\]
where $\Phi$ is the Laplace exponent of the subordinator $T_x=\inf\{t:X_t>x\}$. Hence we have
$L_t=c\overline{X}_t$, with $c=\Phi(1)$. Then it follows from the definition of $H$ and $\ell$, that
\[H_u=c^{-1}u,\;\;\mbox{on $H_u<\infty$},\;\;\mbox{and}\;\;\ell_x=cx\,\;\;\mbox{on $\ell_x<\infty$.}\]
Besides, from part 1 of Lemma \ref{equivalence1}, we have by integrating (\ref{4594}) over $u\in[0,\infty)$,
\begin{equation}\label{end}
cxp_t^+(dx)\,dt=ct\p(\tau_{cx}\in dt)\,dx\,,
\end{equation}
as measures on $[0,\infty)^2$. This ends the proof of the proposition.$\;\;\Box$\\
\noindent Finally note that identity (\ref{end}) may also be
derived from Corollary VII.3, p.190 in \cite{be} or from Theorem 3 in \cite{ac}. The constant $c$ appearing in our expression is due
to the choice of the normalization of the local time in (\ref{norm1}).
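As an aside (not part of the argument above), identity (\ref{end}) can be checked numerically in the simplest spectrally negative case, standard Brownian motion, where $p_t$ is the Gaussian density and the first passage time at level $x$ has the classical density $x(2\pi t^3)^{-1/2}e^{-x^2/(2t)}$. The following purely illustrative sketch verifies the resulting pointwise identity $x\,p_t(x)=t\,f_{\tau_x}(t)$:

```python
import math

# Sanity check of identity (end) for standard Brownian motion:
# x * p_t(x) = t * f_{tau_x}(t), where p_t is the Gaussian transition
# density and f_{tau_x} the first passage time density at level x.

def p(x, t):
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def first_passage(t, x):
    return x * math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t ** 3)

for t in (0.5, 1.0, 2.0):
    for x in (0.3, 1.0, 2.5):
        assert abs(x * p(x, t) - t * first_passage(t, x)) < 1e-12
```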
\vspace*{.5in}
\noindent{\bf Acknowledgement}: I would like to thank Laurent Denis who has brought the problem of the absolute continuity
of the supremum of L\'evy processes to my attention and for fruitful discussions on this subject. I am also very grateful
to Victor Rivero for some valuable comments.
\vspace*{.8in}
\newpage
\section{Introduction}
Gravity has been stringently tested on terrestrial, solar system, and
astrophysical scales but cosmic scales represent a further $10^6$ to
$10^{14}$ extrapolation in length scale (from galaxy scales or solar
system scales respectively). Given that cosmic expansion is, surprisingly,
accelerating, opposite to the expectation from gravity acting on matter,
it is natural to desire tests of gravity on cosmic scales.
The cosmic expansion by itself cannot distinguish between a change in
the laws of gravity and in the material contents, i.e.\ a dark energy
component, but the combination with the cosmic growth of large scale
structure can. Therefore considerable effort has gone into understanding
the effect of modified gravity on cosmic structure growth; for reviews,
especially model independent work, see
\cite{review1,review2,1604.01059,1703.01271}.
Numerous alternatives to general relativity exist, with many of them
falling within the Horndeski class of gravity, or described by an effective
field theory approach (see \cite{review3} and references therein). These
approaches involve four or more free functions of time, in addition to
the expansion history, with no prescription for how they should behave.
Even next generation data will not be able to constrain four functions,
or more than a few parameters. Simple functional forms tend to be highly
restrictive and possibly poor approximations \cite{1512.06180,1607.03113}.
Thus we must either work one at a time with one particular
model of gravity, one particular functional form within that model, and
one particular parameter set within that functional form (e.g.\ $f(R)$
gravity, of the Hu-Sawicki \cite{husawicki} form, with $n=1$), or seek a
phenomenological low dimensional model independent approach.
If we follow the data, then in the subhorizon, quasistatic limit (applicable
to where precision data will lie) the linear growth of structure is determined
by a generalized Poisson equation, as clearly shown by \cite{bz}. Here the
gravitational strength determining matter density perturbation growth is
$\gm(k,a)$ rather than Newton's constant $G_N$, where the scale factor $a$
represents
the time dependence and the wavenumber $k$ the scale dependence. This is a
robust treatment for modified gravity under these circumstances \cite{bz}.
Thus the issue, if one is concerned with using cosmic growth data to
test gravity, is how to parametrize $\gm$.
One advance in this direction appears in \cite{paper1} (hereafter Paper 1).
The authors derived, and demonstrated numerically, that modifications to the
gravitational strength at early times, $a\lesssim0.25$ in the matter dominated
era, could be modeled with high accuracy in their effects on the growth
observables by a single parameter, $G_{\rm hi}$, related to the effective
area under the $\dgm(a)$ curve. This is accurate to 0.3\% or better. The
treatment of later time modifications to gravity, however, was left as an
unresolved question. The aim here is to address it.
In Sec.~\ref{sec:model} we describe the variety of gravity models that we
seek to fit, and the model independent method used. The specific approach
and observational data used is described in Sec.~\ref{sec:method}, and
the results for the accuracy of the parametrization are presented in
Sec.~\ref{sec:results}. We propagate the fitting residuals to cosmological
parameter bias in Sec.~\ref{sec:bias}. Section~\ref{sec:discuss} discusses
how to use the parametrization with data in a practical sense, and how to
extract key properties of the gravity theory from the results. We conclude
in Sec.~\ref{sec:concl}.
\section{Gravity Models and Fits} \label{sec:model}
In the quest for a low dimensional parametrization of the effect of
modified gravity on linear growth observables, we want not only an
accurate parametrization but a broadly model independent one. Functional
forms such as power laws tend to be limited, and often bias the results
by weighting unfairly parts of the cosmic history,
as well as the results being sensitive to the power law, or prior on the
power law, assumed, while being unable to constrain the power law well
\cite{1109.1846,1612.00812}. Assuming a close relation with the effective
dark energy density also yields misleading conclusions
\cite{1512.06180,1607.03113}, with the simplest counterargument being that
$f(R)$ gravity often shows a gravitational strength that only deviates from
general relativity at quite late times, e.g.\ reaching 1\% only at
$z\approx1.5$, when the dark energy density fraction is already greater than 15\%.
Conversely, assuming that gravitational modifications only occur at late
times can miss important aspects of many theories such as the Horndeski
class of gravity.
Therefore we turn toward bins in scale factor or redshift as a model
independent approach. These have
been successfully used in projecting future constraints on modified
gravity, e.g.\ \cite{1212.0009,1612.00812}. We will lay out a methodology
for deciding on the number of bins, and interpreting the meaning of the
results in the remainder of the article. We emphasize that our goal is
to fit to the observables, specifically the redshift space distortion (RSD)
function $f\sigma_8(a)$, not the theory function $\gm(a)$.
To test the efficacy of binned parameters, we have to compare them
to some underlying ``true'' theory.
To robustly explore the comparison of the results of the binned model with
the exact theory, we need to ``stress test'' the binned approximation by
comparing it to a wide variety of theoretical behavior. Since our focus
is on growth observables and looking for signatures of modified gravity,
we use identical expansion histories for the model and the theory case
it is attempting to fit.
The theory behaviors should be fairly
realistic, with enough complexity and features to provide an adequate
test of the binned parametrization. We adopt six different forms of
scale factor dependence to test:
\begin{enumerate}
\item a nonmonotonic function, taken to be a Gaussian of variable width and
location, as in Paper 1, but at late times;
\item a rising function;
\item a falling function (it is obvious that the
constant function considered in Paper 1 can be fit by a binned
parametrization);
\item a multipeaked function such as seen in some Galileon gravity cases
(e.g.\ see Fig.~3 of \cite{1607.03113}), taken to be a sum of Gaussians;
\item braneworld theory given by DGP gravity \cite{dgp1,dgp2};
\item $f(R)$ gravity.
\end{enumerate}
The Gaussian deviation, normalizing $\gm$ by Newton's constant so
that general relativity has $\gm=1$, is
\begin{equation}
\dgm=\dg\,e^{-(\ln a-\ln a_t)^2/(2\sigma^2)} \ ,
\end{equation}
where we will study the results for various central values $a_t$ and widths
$\sigma$.
The rising parametrization is
\begin{equation}
\dgm=\dg\,a^s\quad {\rm for}\ a>a_\star\ ,
\end{equation}
and otherwise zero, where $a_\star$ is a cutoff scale factor.
We might choose $a_\star=0.25$ ($z=3$) since from Paper 1 we know how
to treat the deviations for $z>3$.
The falling parametrization is
\begin{equation}
\dgm=\dg\,a^{-s}\quad {\rm for}\ a>a_\star\ ,
\end{equation}
and otherwise zero.
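For concreteness, the three scale factor dependences above can be coded as follows (a minimal sketch; the default amplitudes and exponents are the illustrative values used in this article, chosen so that the rising case peaks at $a=1$ and the falling case at $a_\star$, and are not theory predictions):

```python
import math

# Illustrative test shapes for the gravity modification dG(a).

def dG_gaussian(a, dG=0.05, a_t=0.5, sigma=0.25):
    """Gaussian deviation in ln a, centered at a_t with width sigma."""
    return dG * math.exp(-(math.log(a) - math.log(a_t)) ** 2 / (2.0 * sigma ** 2))

def dG_rising(a, dG=0.21, s=3, a_star=0.25):
    """Rising power law, cut off below a_star; maximum dG at a = 1."""
    return dG * a ** s if a > a_star else 0.0

def dG_falling(a, dG=0.21 / 64.0, s=3, a_star=0.25):
    """Falling power law, cut off below a_star; maximum 0.21 near a_star."""
    return dG * a ** (-s) if a > a_star else 0.0

assert abs(dG_gaussian(0.5) - 0.05) < 1e-15   # peak value at a_t
assert dG_rising(0.2) == 0.0                  # cut off below a_star
```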
The sum of two Gaussians gives either a multipeaked function or a broader
deviation, depending on the separation of the Gaussians and their width.
We take $a_t=0.3$ and $a_t=0.7$, with either $\sigma=0.25$ (giving multiple
peaks) or $\sigma=0.5$ (giving a broad deviation).
For DGP gravity, the expansion history is given by the modified
Friedmann equation
\begin{equation}
\frac{H(a)}{H_0}=\frac{1-\Omega_m}{2}+\sqrt{\frac{(1-\Omega_m)^2}{4}+\Omega_m\,a^{-3}} \ ,
\end{equation}
and the modified gravity strength is
\begin{equation}
\dgm=-\frac{1}{3}\,\frac{1-\Omega_m^2(a)}{1+\Omega_m^2(a)} \ ,
\end{equation}
where $\Omega_m(a)=\Omega_m\,a^{-3}/[H(a)/H_0]^2$. At early times, $\Omega_m(a)\to1$ and
the strength restores to the Newtonian value, i.e.\ $\dgm=0$. In
the asymptotic future, the Hubble parameter freezes to a de Sitter state,
$H/H_0\to 1-\Omega_m$ and gravity freezes to $\dgm=-1/3$, i.e.\ $\gm=2/3$,
weaker than Newtonian due to the extra dimensional leakage.
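A short numerical sketch of the DGP expressions above (illustrative; $\Omega_m=0.3$ is the fiducial value used later) confirms the two limits just described, $\dgm\to0$ at early times and $\dgm\to-1/3$ in the de Sitter future:

```python
import math

Om = 0.3  # present matter density fraction (fiducial value)

def E(a):
    """H(a)/H_0 from the modified DGP Friedmann equation."""
    return (1 - Om) / 2 + math.sqrt((1 - Om) ** 2 / 4 + Om * a ** -3)

def dG_dgp(a):
    """DGP deviation of the gravitational strength, dG(a)."""
    om_a = Om * a ** -3 / E(a) ** 2
    return -(1.0 / 3.0) * (1 - om_a ** 2) / (1 + om_a ** 2)

assert abs(dG_dgp(1e-3)) < 1e-4               # early times: Newtonian
assert abs(dG_dgp(1e3) + 1.0 / 3.0) < 1e-6    # de Sitter future: dG -> -1/3
```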
For the $f(R)$ scalar-tensor gravity case, we adopt exponential gravity.
See \cite{0905.2962} for the relevant equations; the basic features are
that the expansion history is close to $\Lambda$CDM but with the dark
energy equation of state varying slowly around $w=-1$, on both the phantom
and normal sides. The gravitational strength is greater than Newtonian,
rising from the general relativity value at high redshift (and indeed
for $z\gtrsim1.5$) to 4/3 times the value; i.e.\ $\dgm$ goes from 0
to 1/3.
These are all compared to the results from the binned parametrization.
This is simply $\gm$ piecewise constant in
two bins of $a$. These span $a=[0.25,0.5]$ and $a=[0.5,1]$, since these
are the main observational windows. As discussed in the next section,
if the data show the need then we include an early time parameter
corresponding to the area parameter of Paper 1, implemented as a constant
value in a window $a=[0.1,0.25]$. We smooth the bin edges with a tanh
function; results are insensitive to a smoothing width below
$\Delta\ln a=0.01$.
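One possible implementation of this smoothed two bin form is sketched below (the particular tanh convention and the choice to leave the second bin open at $a=1$ are our own; the text only fixes the bin edges and the smoothing width):

```python
import math

def binned_dG(a, dG1, dG2, width=0.01):
    """Piecewise-constant dG(a) in bins a=[0.25,0.5] and a=[0.5,1],
    with tanh-smoothed edges of width `width` in ln a. The top edge
    at a=1 is left open since growth is only evaluated up to today."""
    step = lambda a0: 0.5 * (1.0 + math.tanh((math.log(a) - math.log(a0)) / width))
    return dG1 * (step(0.25) - step(0.5)) + dG2 * step(0.5)

assert abs(binned_dG(0.35, 0.1, 0.2) - 0.1) < 1e-6   # first bin value
assert abs(binned_dG(0.70, 0.1, 0.2) - 0.2) < 1e-6   # second bin value
assert abs(binned_dG(0.10, 0.1, 0.2)) < 1e-6         # no early modification
```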
\section{Method and Data} \label{sec:method}
For the theory and the binned model we solve the growth evolution
equation using a fourth order Runge-Kutta method. The background expansion
is taken to be a flat, $\Lambda$CDM cosmology with $\Omega_m=0.3$, except for
the DGP and $f(R)$ gravity cases where we simultaneously solve the background
evolution equations.
We then compare the observable RSD quantity $f\sigma_8(a)$ between
the input theory and the binned parametrization results and determine the
maximum and rms deviation.
The bin values are optimized by minimizing
one of these quantities. We find that substantially similar values result
from either optimization. For values used below, we nominally minimize the
maximum deviation over the range $z=0.15$--1.9, corresponding to the
data used, as discussed below.
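The growth integration can be sketched as follows. This assumes the standard subhorizon linear growth equation written in $N=\ln a$ for a flat $\Lambda$CDM background, with matter era initial conditions $\delta\propto a$; the step count and the tolerances in the checks are illustrative:

```python
import math

Om = 0.3  # flat LambdaCDM background

def Om_a(a):
    return Om / (Om + (1.0 - Om) * a ** 3)

def grow(dG=lambda a: 0.0, a_i=1e-3, a_f=1.0, n=2000):
    """RK4 integration of the linear growth equation in N = ln a:
    delta'' + (2 - 1.5 Om(a)) delta' = 1.5 Om(a) [1 + dG(a)] delta."""
    N = math.log(a_i)
    h = (math.log(a_f) - math.log(a_i)) / n
    d, v = a_i, a_i  # matter-era initial conditions: delta ~ a
    def f(N, d, v):
        a = math.exp(N)
        om = Om_a(a)
        return v, -(2.0 - 1.5 * om) * v + 1.5 * om * (1.0 + dG(a)) * d
    for _ in range(n):
        k1d, k1v = f(N, d, v)
        k2d, k2v = f(N + h / 2, d + h / 2 * k1d, v + h / 2 * k1v)
        k3d, k3v = f(N + h / 2, d + h / 2 * k2d, v + h / 2 * k2v)
        k4d, k4v = f(N + h, d + h * k3d, v + h * k3v)
        d += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        N += h
    return d, v / d  # delta(a_f) and growth rate f = dln(delta)/dln(a)

delta1, f1 = grow()
assert 0.75 < delta1 < 0.81   # growth suppression relative to pure matter
assert 0.49 < f1 < 0.54       # f(z=0) ~ Om(a=1)^0.55 ~ 0.52
```

The observable is then built as $f\sigma_8(a)\propto f(a)\,\delta(a)$, evaluated at the data redshifts.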
Note that a point deviation value, or rms, is not really the key quantity.
Neither will pick up particular trends, such as the RSD observable being
high for several redshift bins, then low for several, as opposed to random
scatter. One possibility is to use some statistic such as the crossing
statistic \cite{crossing1,crossing2} that does identify such patterns.
However, what
we are really interested in is the propagation of the residual between the
theory prediction and the binning approximation to the cosmological
parameters. For example, even a moderately large amplitude high frequency
oscillation will not affect the cosmological determination since it does
not look like a shift in cosmology. Therefore we use the maximum deviation
in Sec.~\ref{sec:results} to determine the bin values, but then propagate
the residuals to cosmology in Sec.~\ref{sec:bias} with the Fisher bias
formalism.
As our mock data we take RSD measurements as projected for the Dark Energy
Spectroscopic Instrument (DESI \cite{desi}), with the uncertainties on
$f\sigma_8(a)$ given in Tables~2.3 and 2.5 of \cite{desitable}, for
$k_{\rm max}=0.1\,h/$Mpc.
\section{Results} \label{sec:results}
For theory Case 1, with nonmonotonic time dependence, we adopt Gaussian
modifications $\dgm$ with amplitude $0.05$ and width $\sigma=0.25$.
Figure~\ref{fig:gaus1} shows the deviations in $f\sigma_8(a)$ for the theory models
with $a_t=0.3$, 0.5, 0.7 vs the binned approximations.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fsbingaus1.ps}
\caption{
The accuracy of fitting the observational RSD factor $f\sigma_8$ with two late
time bins for modified gravity $\dg(a)$ is compared to that for the exact
theory case. The theory model has a Gaussian $\dg(a)$ with parameters
$\dg=0.05$, $\sigma=0.25$, and $a_t=0.3$ (dotted red), 0.5 (solid black),
0.7 (dashed blue). The dot dashed green curve shows the $a_t=0.3$ case
fit when allowing for a third, early bin due to the early modification.
}
\label{fig:gaus1}
\end{figure}
We see that two bin parameters achieve subpercent level residuals relative
to the exact results over the full redshift range. For the $a_t=0.3$ case,
the modification extends earlier than the bin start at $a=0.25$. If we
wanted to add the early time modification area parameter, or equivalently
a third bin at $a=[0.1,0.25]$, we reduce the maximum deviation from 0.9\%
to 0.6\% (though nearly 0 for $a>0.5$). There is no particular need for
a third parameter even in this case, and this conclusion is verified by
the cosmology bias analysis in Sec.~\ref{sec:bias}.
Note that for modifications close to the present, e.g.\ the $a_t=0.7$ case,
even just one parameter, from the bin $a=[0.5,1]$, gives excellent results. Even
if we double the amplitude, to $\dg=0.1$, the maximum deviation in $f\sigma_8$
stays under 0.5\% as seen in Fig.~\ref{fig:hiamp}. This also illustrates
the possibility of trading off a residual curve that stays closer to 0
for much of its run, but has an overall larger max--min range, with one that
is further from 0 but rather flat. We might expect that the latter, though with
greater rms deviation, has less cosmological consequence, and indeed this
holds true. One could further improve on the fit by allowing the second
bin to enter, and then the high amplitude case has only 0.2\% maximum
deviation.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fsbing10x07s025.ps}
\caption{
The Gaussian modification case with larger amplitude $\dg=0.1$,
$a_t=0.7$, $\sigma=0.25$ can still be accurately fit with two bins.
Three different possibilities are shown, with different rms residuals,
but all give high accuracy fits.
}
\label{fig:hiamp}
\end{figure}
Turning to theory Cases 2 and 3, we consider power law rising and falling
time dependences, with $\dg\propto a^3$ and $a^{-3}$.
(Note that the parametrizations used by
\cite{1703.10538,1705.04714} are basically within the rising class.)
The
normalization we use gives a maximum $\dg=0.21$, a considerable amplitude,
with the rising case reaching this at $a=1$ and the falling case at its
starting point $a=0.25$. As seen in Fig.~\ref{fig:fallrise}, the rising case
can be easily fit with two bin parameters, and the maximum deviation is
less than 0.5\% for $a<0.85$.
This is more than satisfactory as the
DESI data projects an uncertainty of greater than 12\% for $a>0.85$ due
to the small cosmic volume available. (Better measurements may be possible
by using peculiar velocities \cite{howlett}.) In any case, one could
achieve 0.9\% accuracy over all $a$ using $\dg_3=0.075$ instead of 0.06.
The falling case achieves 1.4\% accuracy with two bins, due to its large
amplitude and rapid variation in the bin $a=[0.25,0.5]$. This deviation
pattern would
be noticeable in the data fits, and would spur an analysis where this bin
would be split in two, e.g.\ $a=[0.25,0.4]$ and $a=[0.4,0.5]$. With this
three parameter fit, the residuals obtain subpercent accuracy. Either way,
this sort of oscillation in residuals does not tend to give a cosmology
bias, and thus is mostly harmless. Finally, note that
in any case such a falling model is not generally seen in gravity theories
commonly investigated.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fsbinfallrise.ps}
\caption{
Modifications that rise or fall monotonically over the range $a=[0.25,1]$
can also be fit well with just two parameters, though the less realistic
falling case benefits from splitting the $a=[0.25,0.5]$ bin.
}
\label{fig:fallrise}
\end{figure}
For the multipeak case, reminiscent of modifications seen in theories
with many terms such as Horndeski gravity, we model this by the sum
of two Gaussians, at $a_t=0.3$ and 0.7. We adopt $\sigma=0.25$ to obtain
a multipeak $\dg(a)$, and also investigate $\sigma=0.5$
to give a broad, non-Gaussian $\dg(a)$. Figure~\ref{fig:gaus2} shows the
accuracy of the binned approximation.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{fsbingaus2.ps}
\caption{
The multipeak (two Gaussians) model can have subpercent residuals when using two
bin parameters. When the early Gaussian has substantial support at $a<0.25$
(the $\sigma=0.5$ case) then adding a third, early bin substantially improves
the accuracy.
}
\label{fig:gaus2}
\end{figure}
For broad early modification, one needs an early bin for subpercent accuracy,
i.e.\ the area parameter of Paper 1. As discussed in Sec.~\ref{sec:discuss},
the need for an early bin makes itself known from the trend of data points
with redshift. However, if the precision data extends only to $z\approx1.9$
($a\approx0.34$) then two bins give 0.8\% precision.
Finally, we consider actual gravity theories.
Braneworld gravity, specifically DGP gravity, exhibits a significant change
in the strength of gravity, with $\dg\approx -1/3$. Since its deviation from
general relativity starts relatively early, i.e.\ once $\Omega_m(a)$ starts to
deviate from 1, we expect to need to include the third, area parameter or
early bin. The results appear in Fig.~\ref{fig:bwsimul}.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{bwsimul.ps}
\caption{
DGP gravity is well fit by two bin parameters plus the early time
modification, or area, parameter.
}
\label{fig:bwsimul}
\end{figure}
Fitting to binned $\dg$ gives an oscillating residual, reflecting that
$\dg_{\rm DGP}$ is quite smooth and monotonic so each bin fits the average
value within its redshift range, under- and overestimating the function in
the different halves of the bin. The amplitude of the residuals is
0.6\% except at very early or late times. (Again, the DESI measurement
precision at $a<0.2$ or $a>0.9$ is such that even a 1\% residual there
needs no improvement.) Since this oscillatory pattern
does not look like cosmological parameter variation, we expect little bias in
the three bin case.
Finally, consider $f(R)$ theory. We adopt the exponential gravity form,
with $c=3$, which is consistent with observations \cite{0905.2962}. Recall
that $f(R)$ gravity also exhibits a significant change
in the strength of gravity, with $\dg\approx 1/3$. It generally has a steep
time dependence, with $\dg(a)=0$ until quite recent times and then rapidly
rising. For example, it reaches 1\% deviation from general relativity at
$z\approx 1.5$ and has 33\% deviation at $z=0$. In addition, the gravitational
strength, and hence growth, is scale dependent. Figure~\ref{fig:frsimul}
shows the binned gravity values for growth at three separate wavenumbers $k$.
\begin{figure}[htbp!]
\includegraphics[width=\columnwidth]{frsimul.ps}
\caption{
Exponential $f(R)$ gravity is fit by 2-3 bin values, with different
gravitational strengths at different wavenumbers $k$ (i.e.\ scale
dependent). Due to the steepness of the time evolution of the modification,
the fit is greatly improved when using a third bin made by splitting the
$a=[0.25,0.5]$ bin (long dashed green and dot dashed orange curves, relative
to dashed blue and dotted red curves).
}
\label{fig:frsimul}
\end{figure}
If we knew the true theory was $f(R)$ gravity then we could scale the
bin values according to the predicted scale dependence of $\gm$ in $f(R)$
theory, i.e.\ $[3+4k^2/(aM)^2]/[3+3k^2/(aM)^2]$ where $M(a)$ is the
scalaron mass \cite{bz,0511218,0709.0296}.
However, we do not know this a priori. (See \cite{1707.08964} for a more
model independent approach.) Indeed, we discover the scale
dependence empirically, when we compare the data to the result from the
binned gravity fit and the residuals indicate a discrepancy that can be removed
by introducing different binned values for different wavenumbers. Note,
however, that the binned values are fairly similar for $k\gtrsim 0.1\,h$/Mpc.
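The quoted scale dependence can be encoded directly, as in the sketch below; the scalaron mass $M(a)$ is theory dependent and is simply treated as an input here:

```python
def geff_ratio(k, a, M):
    """f(R) gravitational strength ratio Geff/G_N = (3+4x)/(3+3x),
    with x = [k/(a M)]^2 and M(a) the scalaron mass (an input here)."""
    x = (k / (a * M)) ** 2
    return (3.0 + 4.0 * x) / (3.0 + 3.0 * x)

assert abs(geff_ratio(1e-6, 1.0, 1.0) - 1.0) < 1e-9        # large scales: GR
assert abs(geff_ratio(1e6, 1.0, 1.0) - 4.0 / 3.0) < 1e-9   # small scales: 4/3
```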
The steepness of the evolution of the gravitational strength shows up
not only in the two-bin values, but in the strong improvement made when
splitting the $a=[0.25,0.5]$ bin into two parts, $a=[0.25,0.4]$ and
$[0.4,0.5]$. As mentioned above, there is almost no deviation from
general relativity for $a<0.4$, and the finer early bin value is consistent
with zero, while the larger, later split bin value greatly reduces the
residuals.
As discussed in Sec.~\ref{sec:discuss}, the steepness of the increase in
the bin values for any $k$, the late time value near $1/3$, and the scale
dependence would together allow us to deduce -- from a model independent
analysis method! -- that the true gravity theory is likely of the $f(R)$
class.
\section{Impact on Cosmology} \label{sec:bias}
We established in the previous section that the residuals from fitting
the RSD observable $f\sigma_8(a)$ with the two or three gravity parameters are
at subpercent accuracy. Since next generation DESI precision for $f\sigma_8(a)$
will be at the $\gtrsim 2\%$ level ($\gtrsim 1\%$ if we used data out to
$k_{\rm max}=0.2\,h$/Mpc), this seems sufficient. However, if the residuals
coherently combine in their effect, due to a time dependence mimicking a
shift in a cosmological parameter, they have the potential to bias the
cosmological conclusions.
Therefore we now propagate the residuals in $f\sigma_8(a)$ to the cosmological
model parameters. We use the Fisher bias formalism to carry this out
\cite{fisbias}. The set of cosmological parameters considered is
the present matter density in units of
the critical density, $\Omega_m$, the dark energy equation of state parameter
today $w_0$, and a measure of its time variation $w_a$,
where $w(a)=w_0+w_a(1-a)$. We use the DESI $f\sigma_8(a)$ data as described in
Sec.~\ref{sec:method} and to break background degeneracies
we apply a simple Gaussian prior on the matter density $\sigma(\Omega_m)=0.01$.
Our fiducial model is flat $\Lambda$CDM with $\Omega_m=0.3$.
The bias on a parameter $p_i$ due to a misestimation of the observable
$\Delta O(a)$ is (see, e.g., \cite{0604280}, including for the case where
the error matrix is not diagonal)
\begin{equation}
\delta p_i=\left(F^{-1}\right)_{ij}\sum_k \frac{\partial O_k}{\partial p_j}
\frac{1}{\sigma_k^2}\,\Delta O_k \ ,
\end{equation}
where $O_k$ is the $k$th observable (i.e.\ $f\sigma_8(z_k)$) and $F$ is the
Fisher matrix.
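A compact numerical sketch of this bias propagation follows; the observable derivatives and residuals would in practice come from the growth calculation, and the toy check below simply verifies that a residual proportional to one parameter derivative is attributed entirely to that parameter:

```python
import numpy as np

def fisher_bias(dO_dp, sigma, dO):
    """delta p_i = (F^{-1})_{ij} sum_k (dO_k/dp_j) dO_k / sigma_k^2,
    with F_{ij} = sum_k (dO_k/dp_i)(dO_k/dp_j) / sigma_k^2."""
    W = dO_dp / sigma[:, None] ** 2          # (n_obs, n_par)
    F = dO_dp.T @ W
    return np.linalg.solve(F, W.T @ dO)

# Toy check: a residual eps * dO/dp_0 biases p_0 by exactly eps.
rng = np.random.default_rng(0)
dO_dp = rng.normal(size=(10, 2))
sigma = np.ones(10)
eps = 0.01
dp = fisher_bias(dO_dp, sigma, eps * dO_dp[:, 0])
assert np.allclose(dp, [eps, 0.0])
```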
Once we have the set of $\{\delta p_i\}$ we can quantify the bias statistically.
One way is of course simply looking at $\delta p_i/\sigma(p_i)$, the bias
relative to the statistical uncertainty. A common statistical quantity that
employs this is the risk, which takes the square root of the quadrature sum
of the bias and dispersion. We can take the ratio of the risk to the
statistical uncertainty to find the bloat, or effective increase in
the uncertainty on a parameter:
\begin{equation}
B_i=\frac{\sqrt{\delta p_i^2+\sigma^2(p_i)}}{\sigma(p_i)}=
\sqrt{1+[\delta p_i/\sigma(p_i)]^2} \ .
\end{equation}
This quantity appears for example in the Rao-Cram{\'e}r-Fr{\'e}chet bound \cite{rao}.
Finally, perhaps most useful is the shift induced in the joint parameter
fitting, e.g.\ in the offset of the derived values from the true best fit
in the dark energy equation of state plane $w_0$--$w_a$. The shift relative
to the likelihood contours at some confidence level presents an informative,
quantitative assessment of the bias that takes into account parameter
degeneracies. This is given by \cite{0508296,0812.0769}
\begin{equation}
\Delta \chi^2=\sum_{ij} \delta p_i\,F^{({\rm red})}_{ij}\,\delta p_j \ ,
\end{equation}
where the reduced Fisher matrix $F^{({\rm red})}$ runs over only those
parameters $p_i$, $p_j$ whose bias we are interested in, e.g.\ $w_0$ and
$w_a$ for the 2D joint likelihood contour plot in the $w_0$--$w_a$ plane,
and is marginalized over all others.
This quantity automatically takes into account the {\it direction\/} of the
shift, i.e.\ that a bias perpendicular to the degeneracy direction is more
damaging than one along the degeneracy direction.
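A minimal sketch of this computation, with a purely hypothetical three-parameter Fisher matrix ordered as ($\Omega_m$, $w_0$, $w_a$): the reduced matrix is obtained by inverting the full matrix, keeping the ($w_0$, $w_a$) block of the covariance, and inverting back.

```python
import numpy as np

# Hypothetical Fisher matrix for (Omega_m, w0, wa); values illustrative only.
F = np.array([[4.0e4, 1.0e3, 2.0e2],
              [1.0e3, 1.2e2, 3.0e1],
              [2.0e2, 3.0e1, 2.0e1]])

# Marginalize over Omega_m: invert, keep the (w0, wa) block, invert back.
cov = np.linalg.inv(F)
F_red = np.linalg.inv(cov[1:, 1:])

# Hypothetical biases in (w0, wa) and the resulting contour shift
delta_p = np.array([0.02, -0.05])
delta_chi2 = float(delta_p @ F_red @ delta_p)
```

A value of `delta_chi2` above 2.3 would push the true model outside the 68\% joint contour, as described above.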
Table~\ref{tab:chi} presents the values of $\Delta\chi^2$ for the joint
$w_0$--$w_a$ likelihood, the maximum $\delta p/\sigma(p)$ for any of the
cosmological parameters, and the maximum bloat in any of the cosmological
parameters. Note that a shift of $\Delta\chi^2=2.3$ moves the true values
out to the 68\% confidence contour, i.e.\ a joint $1\sigma$ bias. A shift
smaller than this lies within the contour.
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{\columnwidth}
{@{\extracolsep{\fill}} l c c c }
\hline
Model & $\Delta\chi^2$ & [$\delta p/\sigma(p)$]$_{max}$ & Risk$_{\rm max}$ \\
\hline
Gaussian ($a_t=0.7$) & 0.02 & 0.02 & 1.00 \\
Gaussian ($a_t=0.5$) & 0.13 & 0.33 & 1.05 \\
Gaussian ($a_t=0.3$) & 0.16 & 0.22 & 1.02 \\
Gaussian ($a_t=0.7$; $\delta G=0.1$) & 0.09 & 0.04 & 1.00 \\
Gaussian$_3$ ($a_t=0.3$) & 0.09 & 0.22 & 1.02 \\
Gaussian$^2$ ($\sigma=0.25$) & 0.03 & 0.09 & 1.00 \\
Gaussian$^2$ ($\sigma=0.5$) & 0.04 & 0.04 & 1.00 \\
Gaussian$_3^2$ ($\sigma=0.5$) & 0.03 & 0.07 & 1.00 \\
Rising $a^3$ & 0.01 & 0.09 & 1.00 \\
Falling $a^{-3}$ & 0.36 & 0.25 & 1.03 \\
Falling$_{3s}$ $a^{-3}$ & 0.10 & 0.23 & 1.03 \\
DGP & 2.28 & 0.45 & 1.10 \\
DGP$_3$ & 0.00 & 0.02 & 1.00 \\
$f(R)$ ($k_0=0.02$) & 0.07 & 0.06 & 1.00 \\
$f(R)$ ($k_0=0.10$) & 1.81 & 1.34 & 1.67 \\
$f(R)_{3s}$ ($k_0=0.10$) & 0.18 & 0.40 & 1.08 \\
$f(R)$ ($k_0=0.14$) & 2.57 & 1.52 & 1.82 \\
$f(R)_{3s}$ ($k_0=0.14$) & 0.12 & 0.31 & 1.05 \\
\end{tabular*}
\caption{Parameter bias levels corresponding to the binned approximation
of $\dg(a)$. $\Delta\chi^2$ is the shift in the dark energy equation of state
parameter $w_0$--$w_a$ plane due to the bias; recall that $\Delta\chi^2=2.3$
corresponds to a $1\sigma$ shift in the joint parameter fit. The maximum
bias of a parameter relative to its statistical uncertainty is shown
in the $\delta p/\sigma(p)$ column. The Risk column shows the maximum
``bloat'' of the Risk, i.e.\ the increase in the uncertainty due to the bias.
The subscript 3 denotes the three bin fit with an early bin, and $3s$
denotes a three bin fit splitting the mid $z$ bin. The superscript 2
denotes a convolution of two Gaussians, with $a_t=0.3$ and $a_t=0.7$.
Note the approximate form is good to $\Delta\chi^2<0.18$ for all models.
}
\label{tab:chi}
\end{center}
\end{table}
We see that, across the whole range of gravity models, using
two parameters to represent the bin values, or in rare cases three,
keeps $\Delta\chi^2<0.18$, i.e.\ less than a tenth of the distance
to the $1\sigma$ joint likelihood contour. Alternately, the risk bloat factor
is less than 1.08, i.e.\ the binned approximation only blows up the error
bars, taking into account the systematic offset, by at most 8\%. Thus the two, or
if needed three, parameter description of gravitational strength
modification is statistically extremely robust.
\section{Observational Signatures} \label{sec:discuss}
For any parametrization it is important that it be clear how it can
be used to understand the data. That is, it should be of practical use
to the observers and data analysts, as well as offering guidance to
theorists.
The binned parametrization is simple to apply, readily able to calculate
$f\sigma_8(a)$ or other growth quantities with excellent accuracy. The steps in
using it are straightforward:
\begin{enumerate}
\item Fit the predictions from two bins in
$a=[0.25,0.5]$ and $a=[0.5,1]$ to the data. If all values are consistent
with zero then general relativity is a viable gravity theory. If some
values differ from zero with statistical significance, this is an alert
that a potential signature of modified gravity has been found.
\item If there are any residuals
that show a pattern of exceeding the data error bars in some redshift
range, then
\begin{enumerate}
\item Add the area parameter or equivalently a third bin at
$a=[0.1,0.25]$ if the deviation shows up from early times (note the
kink deviation and then slope in the residuals shown in
the early Gaussian and DGP figures), or
\item Split
the $a=[0.25,0.5]$ bin into two bins over $a=[0.25,0.4]$ and $a=[0.4,0.5]$
if the deviation peaks in that range (see the falling and $f(R)$ figures).
\end{enumerate}
\item If the residuals indicate an overall poor fit, and in particular
if the time evolution also looks steep (as in the $f(R)$ case), try
separating the data into low
and high wavenumbers to look for scale dependence.
\end{enumerate}
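The branching logic above can be condensed into a short routine; the $1\sigma$ threshold and the residual data here are illustrative placeholders rather than survey prescriptions.

```python
import numpy as np

# Illustrative decision flow for the steps above; threshold and data are
# placeholders, not prescriptions from the analysis.
def refine_bins(residuals, errors, a_vals):
    """Suggest a bin refinement from the pattern of fit residuals."""
    z = residuals / errors
    if np.all(np.abs(z) < 1.0):
        return "two bins suffice"
    if np.any(np.abs(z[a_vals < 0.25]) >= 1.0):
        return "add early bin a=[0.1,0.25]"
    mid = (a_vals >= 0.25) & (a_vals < 0.5)
    if np.any(np.abs(z[mid]) >= 1.0):
        return "split mid bin a=[0.25,0.5]"
    return "check scale dependence"

a = np.linspace(0.2, 1.0, 9)
verdict = refine_bins(np.full(9, 0.001), np.full(9, 0.02), a)
```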
One could carry the bin refinement
to a further level but none of the varied models we have considered require
more than three bins, with two bins always sufficient here if data were at
the 2\% precision level.
The next question is how to interpret the results in terms of gravity
theory. Note that the bin values are not a map of $\dgm(a)$ per se; they
are a combination of the gravitation strength, the redshift weighting
of the data and its precision, and a delay due to the convolution windowing
of $\dgm(a)$ in the integral for $f\sigma_8(a)$. That said, they do provide a
coarse guide to $\dgm$.
A late bin value near $1/3$ inspires closer examination in terms of
scalar-tensor gravity, while $-1/3$ would recall DGP gravity. The
need for an early bin might lead to theories with early modifications
such as the many members of the Horndeski class. Steepness of evolution
in the binned value, reflecting steep time evolution of $\gm$, could point
to $f(R)$ gravity, especially if splitting the $a=[0.25,0.5]$ bin led
to a significant improvement in the residuals. And of course scale dependence
gives theoretically important information. Thus, even though the analysis
method is model independent, not assuming any functional form or even
that gravitational modification is only a late time phenomenon, we can
obtain substantial information about the theory characteristics from the
signatures in cosmic growth data.
\section{Conclusions} \label{sec:concl}
Comparing cosmic growth vs cosmic expansion is one of the premier
methods for probing the nature of dark energy. Moreover, the details
of cosmic growth can test the laws of gravity in the universe, on
scales much greater than solar system or astrophysical tests. Given
a well defined theory, such tests are straightforward. However, without
a compelling theory -- not just a class but with a particular functional
form and hyperparameters -- the comparison with data is more difficult,
or at best model dependent.
Allowing the data to play a central role, we demonstrate a model
independent approach. We find that only two (or in specific physical
cases three) parameters in the form of binned values of $\gm(a)$
deliver subpercent accuracy in fitting the predominant redshift
space distortion observable $f\sigma_8(a)$. This extends to all redshifts the
previous high redshift parameter method of Paper 1.
We stress tested the approach against a set of six diverse modified
gravity classes with a variety of time dependences, including DGP
gravity and $f(R)$ gravity. Residuals against exact behaviors of
the observable successfully achieved subpercent accuracy. Minimizing
the residual determines the bin values, while any remaining pattern
offers concrete guidance to the need for a third bin or not.
We propagated the residuals from the parametrization to cosmological
parameter bias and showed they are negligible, at below the effective
$0.1\sigma$ level in joint confidence contours, for next generation
data of the characteristics of the DESI galaxy redshift survey.
As importantly, the method lays out a clear path for interpreting the
bin parametrization results in terms of the physical signature of the
cosmological gravity theory. Based on the trend of values, their
steepness, magnitude, and any need for an early bin or scale dependence,
this approach can guide the search for the laws of cosmic gravity in
the appropriate direction.
Future work includes whether such a method can be fruitful for weak
gravitational lensing: like cosmic growth it relies on a modified Poisson
equation, with $G_{\rm light}(a)$ instead of $\gm(a)$, but with a
different kernel. If it too can be parametrized for the observables
in a low dimensional manner, then next generation surveys will -- even
in a model independent manner -- have
excellent capabilities to explore cosmic gravity.
\acknowledgments
This work benefitted from discussions during the Energetic Cosmos
Laboratory conference
``Exploring the Energetic Universe 2017'' at Nazarbayev University.
EL is also grateful to the Yukawa Institute for Theoretical Physics at Kyoto
University for hospitality during YITP workshop YITP-T-17-03, and useful
discussions, especially with Kazuya Koyama.
This work is supported in part by the Energetic Cosmos Laboratory and by
the U.S.\ Department of Energy, Office of Science, Office of High Energy
Physics, under Award DE-SC-0007867 and contract no.\ DE-AC02-05CH11231.
\section{Introduction}
Synchronisation of the dynamical variables of coupled systems is an important
nonlinear phenomenon on which intense research has recently been
concentrated \cite{sesy-ref1}. This is probably because of its engineering
applications like spread spectrum and secure data transmission using chaotic
signals \cite{sesy-ref1,sesy-ref2,sesy-ref3}, control of microwave electronic
devices \cite{sesy-ref4}, graph colouring etc. Also communication between
different regions of the brain depends heavily on the synchronised
behaviour of neuronal networks \cite{sesy-ref5,sesy-ref6}. Moreover patterns
of synchrony and phase shifts in animal locomotion is gaining importance as a
field of active study \cite{sesy-ref7,sesy-ref8,sesy-ref9,sesy-ref10,%
sesy-ref11,sesy-ref12}. In general, the synchronised networks for analysing or
modelling all these physical or biological situations are constructed by
coupling basic dynamic units with a well defined connection topology that can
be nearest neighbour, small world, random or hierarchical architectures. In
addition, in specific applications like communication or neural networks, a
realistic modelling may require the introduction of connection delays due to
finite information transmission or processing speeds among the units
\cite{sesy-ref13}. In any case, it is found that the collective dynamics
depends crucially on the connection topology \cite{sesy-ref14}.
The simplest yet the most widely used topology in this context is the linear
array and its combinations. The study of synchronisation in arrays of systems
was first applied to laser systems \cite{sesy-ref15,sesy-ref16} which has
relevance in optical communication systems. Since then the occurrence of
synchronisation in coupled map lattices has been extensively studied with many
consequent applications \cite{sesy-ref17}.
Such systems, with synchronisation in temporally
chaotic but spatially ordered units forming an array, are applied in many
situations like data driven parallel computing \cite{sesy-ref19}.
However most of these cases studied so far involve continuous systems of
chaotic oscillators.
In this paper, we consider two such regular arrays, one vertical and the other
horizontal, that work under the drive-response mechanism, where the
connection is unidirectional. We find that the former setup leads to
simultaneous synchronisation while the latter results in sequential
synchronisation. Here we would like to comment that in most of the connected
networks, the synchronisation is found to occur simultaneously. However the
topology in the linear horizontal array introduced here develops
synchronisation sequentially and the delay time from one unit to the next can
be adjusted by external control. This mechanism therefore would be useful for
many technological applications. These two types of synchronisation are
characterised using the response time (which is the time for synchronisation to
stabilise), size effect, bunching effect etc. These two arrays can further
be combined to produce square lattice networks with desirable or
useful interconnections.
The array is realised here with a two dimensional discrete system or map as the
local unit and a connection that involves a nonlinear function forming part
of the map function. The stability of the simultaneously synchronised state for
the vertical array is studied by computing the Maximum Conditional
Lyapunov Exponent (MCLE) \cite{sesy-ref20}, so that the minimum coupling
coefficient required for onset of synchronisation can be deduced. The dependence
of the characteristic response time $\tau_s$ on the coupling coefficient
$\epsilon$ is analysed numerically. A horizontal array with the same dynamics is
constructed with each unit driven by the previous one, modelling an open flow
system and leading to sequential synchronisation. In this case the time taken
for the last unit to synchronise is taken as the total response time
$\tau_s$. The behaviour of its average for different initial conditions and
size $N$ of the system is studied. The additional time or delay time $\tau_l$
required for the last unit to synchronise after its previous one has
synchronised is found to saturate with system size. Moreover we note an
interesting bunching effect where the total $\tau_s$ can be controlled by
varying the value of $\epsilon$ in bunches of $m$ units.
In Section 2, we introduce the basic unit which serves as the driving as well
as the driven systems with identical individual dynamics. The concept of
generalised synchronisation and its stability in the context of
unidirectionally coupled systems is also discussed. The construction and the
collective dynamics of the vertical array and the characterisation of
simultaneous synchronisation is given in section 3. In section 4, we
introduce sequential synchronisation and its control due to the bunching effect
of the unidirectionally coupled units. Our concluding remarks are given in
section 5.
\section{Basic Dynamical unit and Generalised Synchronisation}
The basic unit used here for the present analysis of synchronisation in arrays
is a two dimensional discrete system, which serves both as the driving and
driven systems defined in the phase space $\overline{X}(n)=
\left(X(n), Y(n)\right)$. The specific system chosen for this work is the
Gumowski-Mira recurrence relation \cite{sesy-ref21} given as
\begin{align}
X(n+1) &= Y(n)+a\left(1-b\,Y^2(n)\right) Y(n)+ f(X(n)),\notag\\
Y(n+1) &=-X(n)+f(X(n+1)).\label{sesy-eq2.1}
\end{align}
where $f(X(n))=\mu X(n)+\dfrac{2(1-\mu) X^2(n)}{1+X^2(n)}$ and\\
$n$ refers to the discrete time index.
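Should the reader wish to reproduce the unit dynamics, a direct iteration of \eqref{sesy-eq2.1} takes only a few lines; the parameter values $a=0.008$, $b=0.05$ follow the text, and $\mu=-0.23$ is one of the chaotic cases considered below (the bound checked on the trajectory is only a loose sanity check).

```python
# Direct iteration of the Gumowski-Mira map; a=0.008, b=0.05 as in the text,
# mu=-0.23 is one of the chaotic cases considered later.
A, B = 0.008, 0.05

def f(x, mu):
    return mu * x + 2.0 * (1.0 - mu) * x ** 2 / (1.0 + x ** 2)

def gm_step(x, y, mu):
    x_new = y + A * (1.0 - B * y ** 2) * y + f(x, mu)
    y_new = -x + f(x_new, mu)
    return x_new, y_new

x, y = 0.1, 0.1
traj = []
for _ in range(5000):
    x, y = gm_step(x, y, mu=-0.23)
    traj.append((x, y))
```

Plotting `traj` in the $(X,Y)$ plane should display the pattern structure that depends sensitively on $\mu$, as described above.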
Our earlier investigations in this system reveal that \eqref{sesy-eq2.1}
is capable of giving rise to many interesting two dimensional patterns in
$(X,Y)$ plane that depend very sensitively on the control parameter $\mu$
\cite{sesy-ref22}. This can be exploited in decision making algorithms and
control techniques for computing and communications. We have tried three
different coupling schemes in two such systems \cite{sesy-ref23} and found
that they are capable of total or lag synchronisation in periodic,
quasi periodic or chaotic states. When $N$ such systems are geometrically set
to form a vertical or horizontal array and driven unidirectionally, they are
capable of synchronising to the same chaotic state.
In the context of unidirectionally coupled systems, the type of synchronised
behaviour called generalised synchronisation has been attracting much attention
recently \cite{sesy-ref24,sesy-ref25}. Here the states of the driving system
$\overline{X}_d$ and the driven system $\overline{X}_{dr}$ are dynamically
related by a function $F$ such that the relation $\overline{X}_{dr}(t)=
F(\overline{X}_d(t))$ is true once the transients are over. The form of
$F$ can be smooth or fractal and in either case, the procedure for finding the
same can be complicated. Hence often an auxiliary system identical to the
driven system is introduced as $X_a(t)$. The initial conditions of
$X_{dr}$ and $X_a$ are taken to be different (both being individually chaotic in
dynamics) but lying in the basin of the same attractor. Once the
transients have settled, the dynamical equivalence of ${X}_{dr}(t)$ and
$X_a(t)$ is taken as an indication of generalised synchronisation between
$X_d(t)$ and $X_{dr}(t)$.
\section{Simultaneous Synchronisation in a Vertical Array}
We extend the above concept to construct a vertical array of $N$ identical
systems,
$\left[ X_{dr}^1(n), X_{dr}^2(n) \cdots X_{dr}^N(n)\right]$; each driven
independently by $X_d(n)$. All the systems are identical and individually
evolve according to \eqref{sesy-eq2.1}. Fig.~\ref{sesy-fig1} shows the above
scheme of construction of vertical arrays.
\begin{figure}[h]
\centerline{\epsfig{file=Fig1.eps, width=.35\linewidth}}
\caption{Schematic view of the construction of a vertical
array}\label{sesy-fig1}
\end{figure}
Here the driving system follows the dynamics
\begin{align}
X_d(n+1) &= Y_d(n)+a\left(1-b\,Y_d^2(n)\right) Y_d(n)+ f(X_d(n)),\notag\\
Y_d(n+1) &=-X_d(n)+f(X_d(n+1)).\label{sesy-eq3.1}
\end{align}
with $f(X_d(n))=\mu_d X_d(n)+\dfrac{2(1-\mu_d) X_d^2(n)}{1+X_d^2(n)}$.\\
The $i^{\text{th}}$ driven unit in the vertical array has the dynamics
\begin{align}
X_{dr}^i(n+1) &=
Y_{dr}^i(n)+a\left(1-b Y_{dr}^{i^2}{(n)}\right) Y_{dr}^i{(n)}\notag\\
&\quad+ f(X_{dr}^i(n))+\epsilon \left(f(X_d(n))
-f(X_{dr}^i(n))\right)\notag\\
Y_{dr}^i(n+1) &=-X_{dr}^i(n)+f(X_{dr}^i(n+1)).\label{sesy-eq3.2}
\end{align}
with $f(X_{dr}^i(n))=\mu_{dr} X_{dr}^i(n)+\dfrac{2(1-\mu_{dr})
X_{dr}^{i^2}(n)}{1+X_{dr}^{i^2}(n)}$ where $\epsilon$ is the coupling coefficient of the
unidirectional coupling applied to the $X$ variable through the function
$f(x)$. The parameters $a$ and $b$ are set as $a=0.008$ and $b=0.05$. The
total number of units considered is $N=51$. The value of $\mu_{dr}$ is chosen
to be the same for all the $50$ driven units. We can realise synchronisation
for different combinations of values of $\mu_{dr}$ and $\mu_d$ with $\mu_d$, in
general different from $\mu_{dr}$. For the special case of $\mu_d=\mu_{dr}$
all the $51$ units synchronise including the driving system, when started with
different initial conditions.
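A reduced-size numerical sketch of this vertical array (five driven units rather than fifty, purely for speed) implements \eqref{sesy-eq3.1} and \eqref{sesy-eq3.2} directly for the case $\mu_d=-0.2$, $\mu_{dr}=-0.39$, $\epsilon=0.9$ discussed below, where the driven units are expected to collapse onto a common chaotic state.

```python
import numpy as np

A, B = 0.008, 0.05

def f(x, mu):
    return mu * x + 2.0 * (1.0 - mu) * x ** 2 / (1.0 + x ** 2)

def step(x, y, mu, drive=None, eps=0.0):
    # drive, if given, is the common signal f(X_d(n)) of Eq. (3)
    cpl = eps * (drive - f(x, mu)) if drive is not None else 0.0
    x_new = y + A * (1.0 - B * y ** 2) * y + f(x, mu) + cpl
    y_new = -x + f(x_new, mu)
    return x_new, y_new

mu_d, mu_dr, eps = -0.2, -0.39, 0.9   # case discussed in the text
rng = np.random.default_rng(1)
xd, yd = 0.1, 0.1
xs = rng.uniform(-0.5, 0.5, 5)        # 5 driven units (50 in the text)
ys = rng.uniform(-0.5, 0.5, 5)

for _ in range(30000):
    fd = f(xd, mu_d)                  # drive signal computed before updating
    xd, yd = step(xd, yd, mu_d)
    for i in range(5):
        xs[i], ys[i] = step(xs[i], ys[i], mu_dr, drive=fd, eps=eps)

spread = float(np.max(np.abs(xs - xs[0])))
```

After the transients the spread among the driven units should fall to numerical precision, mirroring the synchronised chaotic state of Fig.~\ref{sesy-fig4}.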
Fixing the value of coupling coefficient $\epsilon=0.9$, the values of $\mu_d$,
$\mu_{dr}$ for which synchronisation is feasible in the $50$ driven systems
are plotted in fig.~\ref{sesy-fig2}.
\begin{figure}[h]
\centerline{\epsfig{file=Fig2.eps, width=\linewidth}}
\caption{Points in the $\mu_d-\mu_{dr}$ plane for which synchronisation
of the driven systems is possible with $\epsilon=0.9$. Points marked
$\divideontimes$
indicate $\mu_d$, $\mu_{dr}$ values leading to synchronised states with
periodicity less than 15. Points marked $\boxdot$ correspond to
synchronisation in higher periodic states or mostly chaotic
states.}\label{sesy-fig2}
\end{figure}
In the parameter plane considered here,
in the range $-0.5<\mu_d<-0.2$, $-0.5<\mu_{dr}<-0.2$, the points marked $\divideontimes$
indicate $(\mu_d,\mu_{dr})$ values leading to synchronised periodic states
with periodicity less than 15. Points marked $\boxdot$ indicate
synchronisation in higher periodic states or mostly chaotic states.
For specific cases like $\mu_d=\mu_{dr}=-0.39$ both the driving system and
driven systems are in chaotic state individually. With $\epsilon=1.56$ all the
$50$ driven systems are synchronised in the chaotic state while the
driving system is asynchronous with them. However when $\epsilon$ is slightly
increased to $1.6$ all the $51$ units are found to synchronise in the
chaotic state. Fig.~\ref{sesy-fig3}a gives this chaotic synchronisation
between two participating driven systems for $\epsilon=1.6$, where the
iterates of the $X$ variable of the $6^{\text{th}}$ and $49^{\text{th}}$
units are plotted, after the transients have died out.
\begin{figure}[h]
\subfigure[]{\epsfig{file=Fig3a.eps, width=\linewidth}}\\
\subfigure[]{\epsfig{file=Fig3b.eps, width=\linewidth}}
\caption{(a) Chaotic synchronisation between two participating driven
systems with $\epsilon=1.6$, $\mu_d=\mu_{dr}=-0.39$. Here the iterates of
the $X$ variable of the $6^{\text{th}}$ and $49^{\text{th}}$ units are
plotted. (b) Synchronised periodic 15 cycle for
$\mu_d=\mu_{dr}=-0.23$ with $\epsilon=1.6$. The iterates of the $X$ variable
of the driving system and $49^{\text{th}}$ driven unit is
plotted.}\label{sesy-fig3}
\end{figure}
For $\mu_d=\mu_{dr}=-0.23$, individually the systems are chaotic. For
$\epsilon=0.9$, all the $N$ driven systems are synchronised to the same
periodic state of periodicity 15. But the driving system is also synchronised
only when $\epsilon$ is increased to 1.6. Fig.~\ref{sesy-fig3}b gives this
synchronised periodic 15 cycle for $\epsilon=1.6$, where the iterates
of the driving system and the $49^{\text{th}}$ unit are plotted. Thus the
driving system and the driven system can be simultaneously synchronised only
when $\mu_d=\mu_{dr}$ and when $\epsilon$ is very large, i.e.\
$\epsilon\sim 1.6$.
For $\mu_d=-0.2$, the driving system is in periodic 8 cycle. For
$\mu_{dr}=-0.39$ the driven systems are individually chaotic. For
$\epsilon=0.9$ all the $N$ driven systems are synchronised in the chaotic
state. Fig.~\ref{sesy-fig4} shows the synchronised chaotic state for the above
case. Here the iterates of the $X$ variable of the $48^{\text{th}}$ unit and
$10^{\text{th}}$ unit are plotted after the transients have died out.
\begin{figure}[h]
\centerline{\epsfig{file=Fig4.eps, width=\linewidth}}
\caption{Synchronised chaotic states between two driven systems for
$\mu_d=-0.2$, $\mu_{dr}=-0.39$ with $\epsilon=0.9$. Here the iterates of the
$X$ variable of the $48^{\text{th}}$ unit and the $10^{\text{th}}$ units are
plotted.}\label{sesy-fig4}
\end{figure}
The condition for the stability of generalised synchronisation is discussed
using the Maximal Conditional Lyapunov Exponent $\lambda_{MCLE}$
\cite{sesy-ref26,sesy-ref27}. Here the Lyapunov Exponent of the driven system
is calculated and it is different from the uncoupled system, since it
depends on the dynamics of the driving system also. The condition for the
stability of generalised synchronisation is that $\lambda_{MCLE}$ should
be negative \cite{sesy-ref28}. From equations \eqref{sesy-eq3.1} and
\eqref{sesy-eq3.2} the Jacobian matrix for the $i^{\text{th}}$ unit can be
written as
\begin{equation}\label{sesy-eq3.3}
M=\begin{bmatrix}
(1-\epsilon)\frac{\partial F^i}{\partial X^i} &
\frac{\partial F^i}{\partial Y^i}\\
\frac{\partial G^i}{\partial X^i} & \frac{\partial G^i}{\partial Y^i}
\end{bmatrix}
\end{equation}
where $X^i_{dr}(n+1)=F^i(X,Y)$ and $Y^i_{dr}(n+1)=G^i(X,Y)$.\\
If $\sigma^1$ and $\sigma^2$ are the eigenvalues of the product of the
Jacobian matrices at every iteration such that $\sigma^1>\sigma^2$,
then \cite{sesy-ref29}
\begin{equation}\label{sesy-eq3.4}
\lambda_{MCLE}=\lim_{m\rightarrow \infty}\frac{1}{m} \ln |\sigma^1|
\end{equation}
$\lambda_{MCLE}$ can be calculated numerically for different $\epsilon$ values
using \eqref{sesy-eq3.4}.
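The computation of $\lambda_{MCLE}$ from \eqref{sesy-eq3.3} and \eqref{sesy-eq3.4} can be sketched as follows; for numerical stability the product of Jacobians is accumulated with QR re-orthonormalisation, a standard equivalent of the direct eigenvalue product. According to Fig.~\ref{sesy-fig5}, for $\mu_d=-0.2$, $\mu_{dr}=-0.39$ and $\epsilon=0.9$ the result should come out negative.

```python
import numpy as np

A, B = 0.008, 0.05

def f(x, mu):
    return mu * x + 2.0 * (1.0 - mu) * x ** 2 / (1.0 + x ** 2)

def fp(x, mu):
    # derivative of f: mu + 4(1-mu) x / (1+x^2)^2
    return mu + 4.0 * (1.0 - mu) * x / (1.0 + x ** 2) ** 2

def mcle(mu_d, mu_dr, eps, n_trans=5000, n_iter=20000):
    xd, yd = 0.1, 0.1          # driving system
    x, y = 0.3, -0.2           # driven system
    Q = np.eye(2)
    log_r = 0.0
    for n in range(n_trans + n_iter):
        fd = f(xd, mu_d)
        # advance the drive
        xd_new = yd + A * (1.0 - B * yd ** 2) * yd + f(xd, mu_d)
        yd = -xd + f(xd_new, mu_d)
        xd = xd_new
        # advance the driven system
        x_new = (y + A * (1.0 - B * y ** 2) * y + f(x, mu_dr)
                 + eps * (fd - f(x, mu_dr)))
        # Jacobian of the driven update, as in the matrix M above
        dFdx = (1.0 - eps) * fp(x, mu_dr)
        dFdy = 1.0 + A * (1.0 - 3.0 * B * y ** 2)
        dGdx = -1.0 + fp(x_new, mu_dr) * dFdx
        dGdy = fp(x_new, mu_dr) * dFdy
        J = np.array([[dFdx, dFdy], [dGdx, dGdy]])
        y = -x + f(x_new, mu_dr)
        x = x_new
        Q, R = np.linalg.qr(J @ Q)
        if n >= n_trans:
            log_r += np.log(abs(R[0, 0]))
    return log_r / n_iter

lam = float(mcle(-0.2, -0.39, 0.9))
```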
We consider the case $\mu_d=-0.2$ and $\mu_{dr}=-0.39$ which we have
discussed above and calculate $\lambda_{MCLE}$ for different $\epsilon$
values. Calculations are done for 10000 iterates after leaving initial
70000 iterates as transients. In fig.~\ref{sesy-fig5}
\begin{figure}[h]
\centerline{\epsfig{file=Fig5.eps, width=\linewidth}}
\caption{The variation of Maximum Conditional Lyapunov Exponent
($\lambda_{\text{MCLE}}$)
of the driven system with the coupling coefficient $\epsilon$.
Here $\mu_d=-0.2$,
$\mu_{dr}=-0.39$. The minimum coupling coefficient for synchronisation
$\epsilon_{\min}=0.829$.}\label{sesy-fig5}
\end{figure}
the values of $\lambda_{MCLE}$ for different
$\epsilon$ values are plotted. It is found that $\lambda_{MCLE}$
crosses zero at $\epsilon=0.829$, which is the minimum value of $\epsilon$
\cite{sesy-ref20}, viz.\ $\epsilon_{\min}$, such that for
$\epsilon>\epsilon_{\min}$ the synchronised state is stable.
For the above case the coupling coefficient is varied in steps from $0.84$ to
$0.97$ and the time taken for reaching synchronisation in the driven systems
is noted. Fig.~\ref{sesy-fig6} gives the variation of the average response
\begin{figure}[h]
\centerline{\epsfig{file=Fig6.eps, width=\linewidth}}
\caption{The variation of the response time $\langle \tau_s\rangle$ which is
the total time for synchronisation with coupling coefficient $\epsilon$ for
the $50^{\text{th}}$ unit. $\langle \tau_s\rangle$ is almost constant for
values of $\epsilon > 0.87$.}\label{sesy-fig6}
\end{figure}
time $\tau_s$ (averaged over
10 different initial conditions) with $\epsilon$ for the
$50^{\text{th}}$ unit. It is interesting to note that $\langle \tau_s\rangle$
is almost constant for values of $\epsilon>0.87$. In this case since
the synchronisation
is simultaneous and coupling is unidirectional and similar, the average
$\langle \tau_s \rangle$ is independent of the size of the array $N$.
\section{Sequential synchronisation in a Horizontal Array}
In this section a horizontal array of $N$ identical systems with open ends,
where each unit is driven by the previous one is introduced. The coupling is
through the nonlinear function $f(X,Y)$ as in the previous case.
Fig.~\ref{sesy-fig7} gives the schematic view of unidirectional coupling in a
flow which consists of $N$ units.
\begin{figure}[h]
\centerline{\epsfig{file=Fig7.eps, width=.8\linewidth}}
\caption{Schematic view of unidirectional coupling in a flow
system.}\label{sesy-fig7}
\end{figure}
The $i^{\text{th}}$ unit in the horizontal array follows the dynamics
\noindent\begin{align}
X^i(n+1) &=
Y^i(n)+a\left(1-b Y^{i^2}{(n)}\right) Y^i{(n)}\notag\\
&\quad+f(X^i(n))+\epsilon \left(f(X^{i-1}(n))
-f(X^i(n))\right)\notag\\
Y^i(n+1) &=-X^i(n)+f(X^i(n+1)).\label{sesy-eq4.1}
\end{align}
with $f(X^i(n))=\mu X^i(n)+\dfrac{2(1-\mu) X^{i^2}{(n)}}{1+X^{i^2}{(n)}}$.
The control parameter $\mu$ is the same for all the units such that the units
are chaotic individually. This setup is found to give rise to sequential
synchronisation in the array. In our calculations we consider an array of 51
units. This can be extended to any number of units $N$.
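A minimal realisation of the open-flow chain of \eqref{sesy-eq4.1}, with a short array of six units and $\mu=-0.23$, $\epsilon=1.9$ as in the case considered below; the first unit runs free and each subsequent unit is driven by its predecessor's state at the previous time step. Once sequential synchronisation has propagated down the chain, the spread between the last and first units should shrink toward zero.

```python
import numpy as np

A, B = 0.008, 0.05
MU, EPS = -0.23, 1.9   # case considered in the text

def f(x, mu=MU):
    return mu * x + 2.0 * (1.0 - mu) * x ** 2 / (1.0 + x ** 2)

def chain_step(xs, ys, eps=EPS):
    """One time step of the open-flow chain; unit 0 runs free."""
    xs_new = np.empty_like(xs)
    ys_new = np.empty_like(ys)
    for i in range(len(xs)):
        # each unit is driven by its predecessor's state at time n
        cpl = eps * (f(xs[i - 1]) - f(xs[i])) if i > 0 else 0.0
        x_new = ys[i] + A * (1.0 - B * ys[i] ** 2) * ys[i] + f(xs[i]) + cpl
        xs_new[i] = x_new
        ys_new[i] = -xs[i] + f(x_new)
    return xs_new, ys_new

rng = np.random.default_rng(7)
xs_init = rng.uniform(-0.3, 0.3, 6)
ys_init = rng.uniform(-0.3, 0.3, 6)

xs, ys = xs_init.copy(), ys_init.copy()
for _ in range(60000):
    xs, ys = chain_step(xs, ys)

spread = float(np.max(np.abs(xs - xs[0])))
```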
For $\mu=-0.23$, where the individual systems are chaotic, and coupling coefficient
$\epsilon=1.9$, we find that synchronisation sets in sequentially, with the
second unit synchronising after the first, the third after the second and
so on. The time taken by the last unit to synchronise is taken as
$\langle \tau_s \rangle$
which is the average total response time for the whole array.
Fig.~\ref{sesy-fig8}
\begin{figure}[h]
\centerline{\epsfig{file=Fig8.eps, width=\linewidth}}
\caption{Synchronised chaotic state for $\mu=-0.23$ with $\epsilon=1.9$.
The iterates of the $X$ variable of the first unit and the last unit
($51^{\text{st}}$ unit) are plotted.}\label{sesy-fig8}
\end{figure}
shows this synchronised chaotic state after the last unit has synchronised. The
$\langle \tau_s\rangle$ is found to vary with coupling coefficient $\epsilon$
as shown in Fig.~\ref{sesy-fig9}.
\begin{figure}[h]
\centerline{\epsfig{file=Fig9.eps, width=\linewidth}}
\caption{The variation of total response time $\langle \tau_s\rangle$ of the
$51^{\text{st}}$ unit with coupling coefficient $\epsilon$.
$\langle \tau_s\rangle$ has a minimum value for $\epsilon=2$.}\label{sesy-fig9}
\end{figure}
It is found that $\langle\tau_s\rangle$ has a minimum value
for a particular $\epsilon$ which in this case is $\epsilon=2$.
The delay time $\tau_l$, i.e.\ the additional time taken for the $N^{\text{th}}$ unit
to synchronise after its previous one has synchronised, is defined as
$\tau_l=\tau^N_s-\tau^{N-1}_s$. This $\tau_l$ is found to saturate with the
system size \cite{sesy-ref30} as shown in fig.~\ref{sesy-fig10}. Beyond
$N=35$, $\tau_l$ is almost constant.
\begin{figure}[h]
\centerline{\epsfig{file=Fig10.eps, width=\linewidth}}
\caption{The delay time $\tau_l$ which is the additional time taken for the
$N^{\text{th}}$ unit to synchronise after its previous one has
synchronised is found to saturate with system size $N$. Beyond $N=35$,
$\tau_l$ is almost a constant.}\label{sesy-fig10}
\end{figure}
An interesting observation in this horizontal array of units is a bunching
effect that reflects in the total response time $\langle\tau_s\rangle$.
For this,
instead of fixing the same value for the coupling coefficient $\epsilon$ for all
the units, we fix its value for a particular number of units and increase
it in steps for the next bunch and so on. Then the total $\langle\tau_s\rangle$
is found to
be smaller compared to the previous case of the same $\epsilon$ for all the
units. Moreover this time depends on the size of the bunch and is minimum for
a certain number of units in each bunch.
We report a few specific cases. With $\mu=-0.23$ the value of $\epsilon$
is increased in steps of $0.001$ for each bunch so that $\epsilon$ for the
last bunch is $\epsilon_{\max}=2.01$ for different bunch sizes. In each case
the total response time $\langle \tau_s\rangle$ is found.
Fig. \ref{sesy-fig11} shows how the response time
$\langle \tau_s\rangle$ changes
\begin{figure}[h]
\begin{center}
\epsfig{file=Fig13.eps, width=\linewidth}
\end{center}
\caption{Change in the response time $\langle \tau_s\rangle$ of the
last unit with bunch size $m$. Here $\mu=-0.23$. \textit{Curve a} is for
$\epsilon_{\max}=1.91$; the response time
$\langle \tau_s\rangle$ is minimum when the bunch size $m=7$. \textit{Curve b} is for $\epsilon_{\max}=2.01$;
the response time $\langle \tau_s\rangle$ is minimum for $m=8$. In both cases
$\langle \tau_s\rangle$ is found to be less than the case when we apply
$\epsilon_{\max}$ to all the units.}\label{sesy-fig11}
\end{figure}
with the variation in the bunch size $m$, ie., number of units in each bunch.
The response time $\langle \tau_s\rangle$ is minimum when the bunch size
$m=8$. For $m=8$, $\langle \tau_s\rangle=138612$ iterations, whereas when
$\epsilon=\epsilon_{\max}=2.01$ for all the units, the response time
$\langle \tau_s\rangle=144404$ iterations.
As a second case, for the same $\mu=-0.23$, $\epsilon_{\max}$ is taken as $1.91$ and
the calculations repeated as above. In this case $\langle \tau_s\rangle$ is found
to be minimum and is $152150$ iterations when the size of the bunch is
$m=7$ as shown in Fig. \ref{sesy-fig11}. If $\epsilon=1.91$ for all the units,
$\langle \tau_s\rangle$ is $161953$ iterations. We observe that the decrease in
$\langle \tau_s\rangle$ for the whole array due to bunching must be
reflected in the response time of each bunch. So for the minimum case, the
response times for the last unit of the first bunch
(ie., 8$^{\text{th}}$ unit), last unit of the second bunch
(ie., 16$^{\text{th}}$ unit) etc. are noted with bunching. The same quantity
with $\epsilon$ same for all the units ie., without bunching are also
noted. $\langle \tau_s\rangle$ thus obtained are plotted against the respective
units in Fig. \ref{sesy-fig12}.
It is found that except for the 8$^{\text{th}}$ unit in the first bunch
the response time is less in the case of bunching.
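The stepped coupling schedule used in these bunching experiments can be generated as follows; the helper is a hypothetical reconstruction (the text gives no pseudocode), assigning each bunch of $m$ units a coupling value that increases in steps of $0.001$ up to $\epsilon_{\max}$ for the last bunch. With 48 units in bunches of 8 it reproduces the range $2.005\leq\epsilon\leq 2.01$ quoted in the caption of Fig.~\ref{sesy-fig12}.

```python
import math

# Hypothetical helper reconstructing the stepped coupling schedule:
# bunches of m units, coupling increasing by `step` per bunch, ending
# at eps_max for the last bunch.
def eps_schedule(n_units, m, eps_max, step=0.001):
    n_bunches = math.ceil(n_units / m)
    eps_first = eps_max - step * (n_bunches - 1)
    return [eps_first + step * (i // m) for i in range(n_units)]

sched = eps_schedule(48, 8, 2.01)
```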
\begin{figure}[h]
\begin{center}
\centerline{\epsfig{file=Fig11.eps, width=\linewidth}}
\caption{The response time $\langle \tau_s\rangle$ is plotted for different
units for $\mu=-0.23$, $\epsilon_{\max}=2.01$.
The dotted line gives the total response time $\langle \tau_s\rangle$ of the
8$^{\text{th}}$ unit, 16$^{\text{th}}$ unit, 24$^{\text{th}}$ unit, etc.,
when $\epsilon_{\max}=2.01$ is applied to all the units.
The full line gives the total response time $\langle \tau_s\rangle$
of the 8$^{\text{th}}$ unit
(last unit of 1$^{\text{st}}$ bunch), 16$^{\text{th}}$ unit (last unit of
2$^{\text{nd}}$ bunch), 24$^{\text{th}}$ unit (last unit of 3$^{\text{rd}}$
bunch) etc. when $\epsilon$ is increased in steps for each bunch so that
$\epsilon_{\max}=2.01$ for the last bunch. Here $2.005\leq \epsilon\leq
2.01$ with step size 0.001. Thus bunching can control the total
response time $\langle \tau_s\rangle$ of the array.}\label{sesy-fig12}
\end{center}
\end{figure}
\section{Conclusion}
In this work we report how synchronisation in an array of systems can be made
more efficient and flexible to suit specific applications. We consider
two such arrays, vertical and horizontal, working under the drive-response
mechanism with a two-dimensional discrete system as the unit dynamics.
It is observed that synchronisation sets in for all the systems
simultaneously in the vertical setup. The minimum value of the coupling
coefficient $\epsilon$ required for stability of synchronisation is computed
numerically from the Maximum Conditional Lyapunov Exponent. The possible
choices of parameters for stable states of synchrony are isolated.
The specific cases of chaotic single units
synchronising to periodic and chaotic synchronised states and periodic single
units stabilising to chaotic states of synchronisation are considered
in detail. The average response time required to overcome the transients is
found to saturate beyond a certain value of $\epsilon$. The horizontal array
exhibits many interesting features useful for technological applications.
In this case the synchronisation sets in sequentially from unit to unit along
the array since the coupling is unidirectional. The total response time for
the whole array has a minimum as $\epsilon$ is varied. The additional time
required for the last unit to synchronise after the previous one is found to
saturate with system size.
We further note that the total response time for the whole array can be
reduced by introducing bunching with a stepwise increase of $\epsilon$ from
bunch to bunch. There exists a specific bunch size giving minimum time which
depends on the choice of the parameter and the maximum $\epsilon$ given to
the last bunch. This makes the sequential synchronisation flexible and
controllable to suit specific applications.
At present we do not find any specific reasons for the above findings to
depend on the unit chosen. For different choices of the unit dynamics the
behaviour should be qualitatively similar. To establish this generality and
applicability of the present technique, we are trying it out for a number
of systems.
The results will be published elsewhere.
\medskip\noindent\textbf{Acknowledgement}\\
K. A. thanks University Grants Commission, New Delhi for deputation
under Faculty Improvement Programme. G. A. thanks IUCAA, Pune for hospitality
and computer facility.
\section{Introduction}
\IEEEPARstart{R}{\lowercase{egression}} is one of the fundamental problems of statistics, system identification, signal processing and machine learning { \cite{cucker2007learning}}. Given a finite sample of input-output pairs, the typical aim is to estimate
the so-called {\em regression function}{ , which, given an input, encodes the conditional expectation of the corresponding output} \cite{ljung2010perspectives}. There are several well-known (parametric and nonparametric) approaches for regression, from linear regression to neural networks and kernel methods, which provide {\em point-estimates} from a { given} model class
\cite{gyorfi2002distribution}.
However, point-estimates alone are often not sufficient and {\em region-estimates} are also needed, for example,
to support {\em robust} approaches. These region-estimates have several variants, such as {\em confidence regions} for the ``true'' function generating the observations \cite{Algo2018}; for the {\em expected}
output at a given input \cite{quinonero2005unifying}; and {\em prediction regions} for the next (noisy) observation \cite{vovk2005algorithmic}.
In this paper, we focus on building {\em confidence bands} for the regression function. These bands have natural connections to filtering and smoothing methods.
While in a {\em parametric} setting such region-estimates are typically induced by confidence sets
in the parameter space, in a {\em nonparametric} setting this indirect approach is not feasible.
Therefore, nonparametric confidence bands for the expected outputs should be constructed directly.
Regarding prediction intervals for the {\em next observation}, promising distribution-free approaches are {\em interval predictor models} (IPMs) based on the scenario approach \cite{campi2009interval, garatti2019class}, and the {\em conformal prediction} framework also offers { several nonparametric methods for regression and classification} \cite{vovk2005algorithmic}.
{ If} the data is jointly Gaussian, a powerful methodology is offered by {\em Gaussian process regression} \cite{quinonero2005unifying} that can provide
prediction regions for the outputs, and { credible regions for the expected outputs}. However, the Gaussianity assumption is sometimes unrealistic, which calls for alternative
approaches.
In this paper, we suggest a {\em nonparametric} approach using Paley-Wiener kernels,
to build data-driven {\em simultaneous} confidence bands for an unknown bounded, {\em band-limited} function, based on an independent and identically distributed (i.i.d.) sample of input-output pairs. The method is {\em distribution-free} in the sense that only very mild assumptions are needed about the observation noises, such as they are distributed {\em symmetrically} about zero. On the other hand, we assume that the {\em distribution of the inputs} is known, particularly, we
assume uniformly distributed inputs, as more general cases can often be traced back to this assumption. First,
the case without observation noises is studied,
then the ideas are extended to the general, noisy case. The results are supported by both {\em non-asymptotic} theoretical guarantees and numerical experiments.
\section{Kernels and Band-Limited Functions}
{ Kernel methods have an immense range of applications in machine learning and related fields \cite{pillonetto2014kernel}.
In this section, we review some of their fundamental theoretical concepts.}
\subsection{Reproducing Kernel Hilbert Spaces}
A Hilbert space $\mathcal{H}$ of $f: \mathbb{X} \to \mathbb{R}$ functions with an inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ is called a {\em Reproducing Kernel Hilbert Space} (RKHS), if each Dirac functional, which evaluates functions at a point,
$\delta_z: f \to f(z)$, is
bounded for all $z \in \mathbb{X}$, that is $\forall z \in \mathbb{X}: \exists \, \kappa_z > 0$ with $|\hspace{0.3mm}\delta_z(f)\hspace{0.3mm}| \leq \kappa_z\, \| f \|_{\mathcal{H}}$ for all $f \in \mathcal{H}$.
Then, by building on the Riesz representation theorem, a unique {\em kernel}, $k: \mathbb{X} \times \mathbb{X} \to \mathbb{R}$, can be constructed %
encoding the Dirac functionals satisfying $\langle k(\cdot,z),f \rangle_{\mathcal{H}} = f(z),$
for all
$z \in \mathbb{X}$ and $f \in \mathcal{H}$; this formula is called the {\em reproducing property}.
As a special case of this property, we also have for
all $z,s \in \mathbb{X}$ { that} $k(z,s)=\langle k(\cdot,z),k(\cdot,s) \rangle_{\mathcal{H}}.$
Therefore,
the kernel of an RKHS is a symmetric and positive-definite function.
Furthermore, the Moore-Aronszajn theorem asserts that the converse statement
holds true, as well:
for every symmetric and positive-definite function $k: \mathbb{X} \times \mathbb{X} \to \mathbb{R}$, there exists a unique RKHS for which $k$ is its reproducing kernel \cite{berlinet2004reproducing}.
The {\em Gram} or kernel matrix of a given kernel $k$ w.r.t.\ (input) points $x_1, \dots, x_n$ is
$K_{i,j} \doteq k(x_i,x_j)$, for all { $i, j \in [n] \doteq \{1,\dots, n\}$}. Observe that $K \in \mathbb{R}^{n \times n}$ is always positive semi-definite. A kernel is called {\em strictly} positive-definite, if its Gram matrix is
positive-definite for all {\em distinct} inputs $\{x_i\}$.
{ Archetypal} kernels include the Gaussian kernel $k(z,s)=\exp (-||z-s||^2 / (2 \sigma^2)),$ where $\sigma >0$; the polynomial kernel $k(z,s)=(\langle z,s \rangle +c)^p,$ where
$c \geq 0$, $p \in \mathbb{N}$;
and the sigmoidal kernel $k(z,s)=\tanh (a \langle z,s \rangle +b),$
for some $a,b \geq 0$.
\subsection{Paley-Wiener Spaces}
Let $\mathcal{H}$ be the space of $f \in \mathcal{L}^2 (\mathbb{R}, \lambda)$ functions, where $\lambda$ is the Lebesgue measure, such that the support of the {\em Fourier transform} of $f$ is included in $[\hspace{0.3mm}-\eta,\, \eta\hspace{0.5mm}]$, where $\eta > 0$. It is a subspace of $\mathcal{L}^2$ and thus we use the $\mathcal{L}^2$ inner product:\vspace{-0.5mm}
$$\langle f, g \rangle_\mathcal{H} \, \doteq \int_{\mathbb{R}} f(x)\,g(x) \: \mathrm{d} \lambda(x).$$
This space of {\em band-limited} functions, called the {\em Paley-Wiener space} \cite{berlinet2004reproducing}, is an RKHS.
Its reproducing kernel is
$$k(z,s) \, \doteq \, \frac{\sin (\eta(z-s))}{\pi(z-s)},$$
for $z \neq s$, where $z, s \in \mathbb{R}$; and $k(z, z) \doteq \eta/\pi$.
Henceforth, we will work with the above defined {\em Paley-Wiener kernel}.
\begin{remark}
Paley-Wiener spaces can also be defined on $\mathbb{R}^d$ \cite{iosevich2015exponential}, but
for simplicity we focus on the scalar input case.
\end{remark}
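As an illustration, the Paley-Wiener kernel and its Gram matrix are straightforward to evaluate numerically. The following minimal Python sketch (the band limit $\eta$, the data and the helper names are ours, purely illustrative) uses the identity $k(z,s) = (\eta/\pi)\,\mathrm{sinc}(\eta(z-s)/\pi)$, where $\mathrm{sinc}$ is the normalized sinc, so the removable singularity at $z=s$ is handled automatically:

```python
import numpy as np

def pw_kernel(z, s, eta=10.0):
    """Paley-Wiener kernel k(z, s) = sin(eta (z - s)) / (pi (z - s)),
    with k(z, z) = eta / pi; np.sinc handles the removable singularity."""
    d = np.subtract(z, s, dtype=float)
    return (eta / np.pi) * np.sinc(eta * d / np.pi)

def gram_matrix(xs, eta=10.0):
    """Gram matrix K[i, j] = k(x_i, x_j) for the given input points."""
    xs = np.asarray(xs, dtype=float)
    return pw_kernel(xs[:, None], xs[None, :], eta)
```

For distinct inputs the resulting matrix is symmetric and positive-definite (up to numerical precision), with diagonal entries equal to $\eta/\pi$.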
\section{Nonparametric { Confidence Bands}}
Let $(x_1, y_1), \dots, (x_n, y_n)$ be a finite sample of i.i.d. pairs of
random variables with unknown joint distribution $\mathbb{P}_{\! \scriptscriptstyle X,Y}$, where
$x_k$ and $y_k$ are $\mathbb{R}$-valued, and
$\mathbb{E}[\hspace{0.3mm}y^2_k\hspace{0.3mm}] < \infty$. We assume that\vspace{-0.5mm}
$$
y_k \, = \, f_*(x_k) + \varepsilon_k,
$$
for $k \in [n]$, where $\mathbb{E}[\hspace{0.3mm}\varepsilon_k\hspace{0.3mm}] = 0$.
Variables $\{\varepsilon_k\}$
represent
the measurement or observation {\em noises} { on the ``true'' $f_*$.}
We call $f_*$ the {\em regression function}\hspace*{-0.5mm} { \cite{cucker2007learning}}, as on the support of $\{x_k\}$ it can also be written as
$
f_*(x) \,= \, \mathbb{E} \left[\hspace{0.5mm} Y\hspace{0.5mm} |\hspace{0.5mm} X = x \hspace{0.5mm}\right]
$,
where $(X,Y)$ is a
random vector with distribution $\mathbb{P}_{\! \scriptscriptstyle X,Y}$.
\subsection{Objectives and Reliability}
\label{sec:objectives}
Our aim
is to { build a (simultaneous) {\em confidence band}} for $f_*$, i.e., a function $I:\mathcal{D} \to { \mathbb{R} \times \mathbb{R}}$, where $\mathcal{D}$ is the {\em support} of the input distribution,
such that { $I(x) = (\hspace{0.3mm}I_1(x), I_2(x)\hspace{0.3mm})$ specifies the {\em endpoints} of an interval estimate for $f_*(x)$, for all $x \in \mathcal{D}$}.
More precisely,
we would like to construct $I$ with\vspace{-0.5mm}
%
$$
\nu(I)\,\doteq \, \mathbb{P} \big(\, \forall x \in \mathcal{D}: { I_1(x) \leq f_*(x) \leq I_2(x)} \,\big) \, \geq \, 1- \alpha,
$$
where $\alpha \in (0,1)$ is a user-chosen {\em risk} probability, and $\nu(I)$ is
{ the {\em reliability} of the confidence band.
Let us introduce}
\vspace{-0.2mm}
$$
\mathcal{I} \, \doteq \, \big\{\hspace{0.5mm} (x,y) \in \mathcal{D} \times \mathbb{R} : y \in [ \hspace{0.3mm} I_1(x), I_2(x) \hspace{0.3mm} ] \hspace{0.5mm} \big\}.
$$
{ Based} on this, the reliability is $\nu(I) = \mathbb{P}(\, \mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}\,)$, where we define $\mathrm{graph}_{\mathcal{D}}(f_*) \doteq \{\, (x, f_*(x)) : x\in \mathcal{D} \,\}$.
{ For notational simplicity, we will use $I(x) = \emptyset$ to denote $I(x) = (\hspace{0.3mm}1,-1\hspace{0.3mm})$, i.e., the endpoints of an empty interval.}
Hence, we aim at building a {confidence band} that contains the graph (w.r.t.\ domain $\mathcal{D}$) of the ``true'' $f_*$
with a {\em user-chosen} probability { level}. Moreover, we would like to have a {\em distribution-free} method (w.r.t.\ the noises) and the region should have {\em finite-sample} guarantees without a parametric model of $f_*$, namely, we take a {\em nonparametric} approach.
\begin{remark}
We
note here, as well, that in the
IPMs \cite{campi2009interval, garatti2019class} and in the conformal prediction framework \cite{vovk2005algorithmic}, the aim is to build a guaranteed prediction region for the {\em next observation}, while here we aim at predicting the value of the {\em regression function} instead. In this sense, { our objective is similar to that of the region estimates of Gaussian process regression} \cite{quinonero2005unifying}, however, without the assumption { of joint Gaussianity}.
\end{remark}
\subsection{Main Assumptions}
Our core assumptions can be summarized as follows:
\smallskip
\setcounter{assumption}{-1}
\begin{assumption}
\label{A0} %
{\em The dataset, $(x_1, y_1), \dots, (x_n, y_n) \in \mathbb{R} \times \mathbb{R}$, is an i.i.d.\ sample of input-output pairs; and $\mathbb{E}[\hspace{0.3mm}y^2_k\hspace{0.3mm}] < \infty$, for $k \in [n]$}.
\end{assumption}
\smallskip
\begin{assumption}
\label{A1} {\em Each (measurement) noise, $\varepsilon_k \doteq y_k - f_*(x_k)$, for $k \in [n]$, has a {symmetric} probability distribution about zero.}
\end{assumption}
\smallskip
\begin{assumption}
\label{A2} {\em The inputs, $\{x_k\}$, are distributed uniformly on $[\hspace{0.4mm}0, 1\hspace{0.2mm}]$.}
\end{assumption}
\smallskip
\begin{assumption}
\label{A3} {\em Function
$f_*$ is from a Paley-Wiener space $\mathcal{H}$;
$\forall\, x\in[\hspace{0.4mm}0, 1\hspace{0.2mm}]: { |f_*(x)|} \leq 1$; and
$f_*$ is almost time-limited to $[\hspace{0.4mm}0, 1\hspace{0.3mm}]:$
$$
\int_{\mathbb{R}} f^2_*(x)\,\mathbb{I}(x \notin [\hspace{0.4mm}0, 1\hspace{0.2mm}]) \: \mathrm{d}\lambda(x) \, \leq \, \delta_0,
$$
where $\mathbb{I}(\cdot)$ is an indicator and $\delta_0 > 0$ is a universal constant.}
\end{assumption}
\smallskip
Now, let us briefly discuss these assumptions. The i.i.d.\ requirement of A\ref{A0} is standard in mathematical statistics and supervised learning \cite{Vapnik1998}.
The square-integrability of the outputs is needed to estimate the $\mathcal{L}^2$ norm of $f_*$ based on the sample and to have a well-defined regression function.
The assumption on the
noises, A\ref{A1}, is very mild, as most standard distributions (e.g., Gauss, Laplace and uniform) satisfy this.
Our strongest assumption is certainly A\ref{A2},
which basically { amounts} to the assumption that {\em we know the distribution of the inputs} and it is absolutely continuous. The more general case when the inputs, $\{x'_k\}$, have a {\em known}, strictly monotone { increasing} and continuous cumulative distribution function $F$, could be traced back to assumption A\ref{A2}, { since} it is well-known that $x_k \doteq F(x'_k)$ is distributed uniformly on $[\hspace{0.4mm}0, 1\hspace{0.2mm}]$.
Assumption A\ref{A3}, especially limiting the frequency domain of $f_*$,
is needed to restrict the model class and to ensure that we can effectively generalize to unknown data points. We allow
the ``true'' function to be defined outside the support of the inputs, cf.\ the Fourier uncertainty principle{ \cite{pinsky2008introduction}}, but the part of $f_*$ outside of $\mathcal{D} = [\hspace{0.4mm}0, 1\hspace{0.2mm}]$ should be ``negligible'', i.e., its norm cannot exceed a { (known)} small constant, $\delta_0$.
{ A crucial property of Paley-Wiener spaces is that their norms coincide with the standard $\mathcal{L}^2$ norm, which will allow us to efficiently upper bound $\|f_*\|_{\mathcal{H}}^2$ based on the sample.}
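The reduction of a known, absolutely continuous input distribution to the uniform case mentioned above is the classical probability integral transform. A small numerical illustration (the exponential raw-input distribution is just an example of a known $F$; after transforming, the band is built for $f_* \circ F^{-1}$ on $[\hspace{0.4mm}0,1\hspace{0.2mm}]$):

```python
import numpy as np

rng = np.random.default_rng(0)
# raw inputs with a known CDF F(x) = 1 - exp(-x / 2)  (exponential, scale 2)
x_raw = rng.exponential(scale=2.0, size=100_000)
# the transformed inputs x_k = F(x'_k) are uniform on [0, 1]
x_unif = 1.0 - np.exp(-x_raw / 2.0)
```

The transformed sample has the empirical mean and variance of the uniform distribution on $[0,1]$ (namely $1/2$ and $1/12$), up to Monte Carlo error.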
\section{{ Confidence Bands}: Noise-Free Case}
In order to motivate our solution, we start with a simplified problem, in which we observe the regression function perfectly at
random inputs. In this noise-free case, we can recall the celebrated Nyquist–Shannon sampling theorem, which states that a band-limited function can be fully reconstructed from the samples, assuming the sampling rate exceeds twice the maximum frequency. On the other hand, if we only have a small number of observations, we cannot apply this result. Nevertheless, we still would like to have at least a region estimate. In this section we provide such an algorithm.
Recall that for a dataset $\{(x_k, y_k)\}$, where inputs $\{x_k\}$ are {\em distinct} (which has probability one under A\ref{A2}), the element from $\mathcal{H}$ that has the {\em minimum norm} and {\em interpolates} each output $y_k$ at the corresponding input $x_k$, that is\vspace{-0.5mm}
$$
\bar{f} \, \doteq \, \operatornamewithlimits{arg\,min} \big\{\,\|\hspace{0.3mm}f\hspace{0.4mm}\|_{\mathcal{H}} : f \in \mathcal{H}\hspace{1.5mm} \&\hspace{1.5mm} \forall\hspace{0.3mm} k \in [n]: f(x_k) =\, y_k \, \big\},\vspace{-0.5mm}
$$
takes the following form \cite{berlinet2004reproducing} for every input $x \in \mathbb{X}:$\vspace{-0.5mm}
$$\bar{f}(x)\,=\, \sum_{k=1}^n \bar{\alpha}_k k(x, x_k),\vspace{-0.5mm}$$
where the weights are $\bar{\alpha} = K^{-1} y$ with $y\doteq (y_1, \dots, y_n)\tr$ and $\bar{\alpha} \doteq (\bar{\alpha}_1, \dots, \bar{\alpha}_n)\tr$; we also used that the Paley-Wiener kernel is strictly positive-definite, thus
matrix $K$ is invertible.
We will exploit, as well, that the norm square of $\bar{f}$ is\vspace{-0.5mm}
$$\|\hspace{0.3mm}\bar{f}\hspace{0.4mm}\|_{\mathcal{H}}^2 = \bar{\alpha}\tr \hspace{-0.3mm}K \bar{\alpha},\vspace{-0.5mm}$$
which is a direct consequence of the reproducing property.
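The minimum-norm interpolant and its norm square above amount to a few lines of linear algebra; here is a sketch (the kernel helper, band limit and data are illustrative, not prescribed by the paper), using that $\bar{\alpha}\tr K \bar{\alpha} = y\tr K^{-1} y = y\tr \bar{\alpha}$:

```python
import numpy as np

def pw_kernel(z, s, eta=10.0):
    d = np.subtract(z, s, dtype=float)
    return (eta / np.pi) * np.sinc(eta * d / np.pi)

def min_norm_interpolant(xs, ys, eta=10.0):
    """Return the minimum-norm interpolant f_bar, its weights alpha,
    and its squared RKHS norm y^T K^{-1} y."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    K = pw_kernel(xs[:, None], xs[None, :], eta)
    alpha = np.linalg.solve(K, ys)           # alpha = K^{-1} y
    norm_sq = float(ys @ alpha)              # alpha^T K alpha = y^T K^{-1} y
    f_bar = lambda x: pw_kernel(np.asarray(x, float)[..., None], xs, eta) @ alpha
    return f_bar, alpha, norm_sq
```

By construction, `f_bar` reproduces each output exactly at the corresponding input.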
Assuming we have a stochastic upper bound for the norm square of the regression function, denoted by $\kappa$, the idea of our construction is as follows. We include those $(x_0,y_0)$ pairs in the { confidence band}, for which the minimum norm interpolation of $\{(x_k, y_k)\} \,\cup\, \{(x_0,y_0)\}$, namely, which simultaneously interpolates the original dataset and $(x_0,y_0)$, has a norm square which is less than or equal to $\kappa$. In order to make this approach practical, we need (1) a guaranteed upper bound for the norm square of the { ``true''} data-generating function; and (2) an efficient method to decide the endpoints of the { confidence} interval for each potential input $x_0 \in \mathcal{D}$.
\subsection{Bounding the Norm: Noise-Free Case}
It is easy to see that in the noise-free case, if $y_k = f_*(x_k)$, for $k \in [n]$, the norm square of $f_*$ can be estimated by \vspace{-0.5mm}
$$\frac{1}{n} \sum_{k=1}^n y_k^2 = \frac{1}{n} \sum_{k=1}^n f_*^2(x_k) \approx \mathbb{E}\big[ f^2_*(X)\big] \approx \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{2}^2 = \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{\mathcal{H}}^2,$$
since in the Paley-Wiener space the norm is the $\mathcal{L}^2$ norm, and we also used that $\{x_k\}$ are uniform on domain
$\mathcal{D} = [\hspace{0.4mm}0, 1\hspace{0.2mm}]$.
As the next lemma demonstrates, we can construct such a guaranteed upper bound using the Hoeffding inequality:
\medskip
\begin{lemma}
\label{lemma:Hoeffding.noiseless}
{\em Assuming A\ref{A0}, A\ref{A2}, A\ref{A3} and that $y_k = f_*(x_k)$, for $k \in [n]$,
{ we have for any risk probability $\alpha\in (0,1)$,\vspace{-0.5mm}
$$
\mathbb{P}\big(\norm{f_*}_{\mathcal{H}}^2 \leq \kappa \hspace{0.3mm}\big) \, \geq \, 1-\alpha,
$$
with the following choice of the upper bound $\kappa$:\vspace{-0.5mm}
$$
\kappa \, \doteq\, \frac{1}{n} \sum_{k=1}^n y_k^2 + \sqrt{\frac{\ln(\alpha)}{-2n}} +
\delta_0.$$}
}
\end{lemma}
\vspace{-1.5mm}
\hspace*{-8mm}
\begin{proof}
By using the notation ${ R} \doteq \nicefrac{1}{n}\sum_{k=1}^n y_k^2$, we have
$$\mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm} ]\, =\, \|\hspace{0.3mm} f_* \cdot \mathbb{I}_{\mathcal{D}} \hspace{0.3mm} \|_2^2\, \geq\, \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{\mathcal{H}}^2 - \delta_0,
$$
where $\mathbb{I}_{\mathcal{D}}$ is the indicator function of $\mathcal{D} = [\hspace{0.4mm}0, 1\hspace{0.2mm}]$. That is, ${ R}$ is a Monte Carlo estimate of the integral of this $\mathcal{L}^2$ norm.
Then, from the Hoeffding inequality, for all $t>0$:
$$\mathbb{P}({ R} - \mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm} ] \leq -t) \leq \mbox{exp} (-2n t^2).$$
According to the complement rule, we also have
$$\mathbb{P} ( \mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm}] < { R} + t) \geq 1-\mbox{exp}(-2nt^2).$$
We would like to choose a threshold $t > 0$ such that
$$1-\alpha \, \leq\, \mathbb{P} ( \mathbb{E}[\hspace{0.3mm} { R}\hspace{0.3mm}] < { R}+t).$$
{ This} inequality is satisfied if we choose a $t>0$ with
$$1-\alpha \leq 1-\mbox{exp}(-2nt^2)\; \Longrightarrow \;\mbox{exp}(-2nt^2) \leq \alpha.$$
After taking the natural logarithm, we get
$-2nt^2 \leq \ln(\alpha)$,
hence, the choice of
$t^* = \sqrt{\ln(\alpha)/(-2n)}$
guarantees
$$\mathbb{P}\big( \hspace{0.3mm} \|\hspace{0.3mm}f_*\hspace{0.4mm}\|_{\mathcal{H}}^2 \geq { R} +t^*+\delta_0 \hspace{0.3mm} \big) \leq \alpha,$$
which completes the proof of the lemma.
\end{proof}
\smallskip
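The bound of Lemma \ref{lemma:Hoeffding.noiseless} is directly computable from the sample; a minimal sketch (the values of $\alpha$ and $\delta_0$ are illustrative):

```python
import numpy as np

def norm_sq_bound(ys, risk=0.05, delta0=1e-3):
    """Hoeffding-based upper bound kappa on ||f_*||_H^2 for noise-free
    outputs: mean(y^2) + sqrt(ln(risk) / (-2 n)) + delta0."""
    ys = np.asarray(ys, dtype=float)
    n = ys.size
    return float(np.mean(ys ** 2) + np.sqrt(np.log(risk) / (-2.0 * n)) + delta0)
```

Note that the Hoeffding margin shrinks at rate $O(1/\sqrt{n})$, so the bound tightens as the sample grows.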
\subsection{Interval Endpoints: Noise-Free Case}
Now, we construct a { confidence} interval for a given input {\em query point} $x_0 \in \mathcal{D}$, for which $x_0 \neq x_k$, for $k \in [n]$. That is, we build an { interval $[I_1(x_0),I_2(x_0)]$} that contains $f_*(x_0)$ with probability at least $1-\alpha$, where $\alpha \in (0,1)$ is given.
First, we extend the Gram matrix with query point $x_0$,
$$
K_{0}({i+1},{j+1})\, \doteq \, k(x_i,x_j),
$$
for $i, j = 0,1, \dots ,n$. As $\{x_k\}_{k=0}^n$ are distinct (a.s.), this Gramian can
be inverted. Hence, for any $y_0$, the minimum norm interpolation of $(x_0, y_0), (x_1, y_1), \dots, (x_n, y_n)$ is \vspace{-0.5mm}
$$\tilde{f}(x)\,=\, \sum_{k=0}^n \tilde{\alpha}_k k(x, x_k),$$
where the weights are $\tilde{\alpha} = K_{0}^{-1} \tilde{y}$ with $\tilde{y}\doteq (y_0, y_1, \dots, y_n)\tr$ and $\tilde{\alpha} \doteq (\tilde{\alpha}_0, \dots, \tilde{\alpha}_n)\tr.$
The norm square of $\tilde{f}$ is
$$
\|\hspace{0.3mm}\tilde{f}\hspace{0.4mm}\|_{\mathcal{H}}^2 \,=\, \tilde{\alpha}\tr\hspace{-0.3mm} K_{0} \tilde{\alpha}\, =\, \tilde{y}\tr\hspace{-0.3mm} K_{0}^{-1} K_{0} K_{0}^{-1} \tilde{y}\,=\, \tilde{y}\tr\hspace{-0.3mm} K_{0}^{-1} \tilde{y}.
$$
Since the output query point $y_0$ in $\tilde{y} = (y_0, y\tr)\tr$ is arbitrary, we can compute the minimum norm needed to interpolate the original dataset extended by $(x_0, y_0)$ for any
candidate $y_0$.
Therefore, having a bound $\kappa$ on the norm square (which is guaranteed with probability $\geq 1-\alpha$), we can compute the highest and the lowest $y_0$ values which can be interpolated with a function from $\mathcal{H}$ having at most norm square $\kappa$.
This leads to the following {\em two} optimization problems:
\begin{equation}
\label{noiseless-opt-min-max}
\begin{split}
\mbox{min\,/\,max} &\quad y_{0} \\[0.5mm]
\mbox{subject to} &\quad (y_0, y\tr) K_{0}^{-1} (y_0, y\tr)\tr \leq\, \kappa\\[1mm]
\end{split}
\end{equation}
where ``min\,/\,max'' means that we have to solve the problem as a minimization and also as a maximization (separately).
The optimal values of these problems, denoted by $y_{\mathrm{min}}$ and $y_{\mathrm{max}}$, respectively, determine the {\em endpoints} of the { confidence} interval for $f_*(x_0)$, that is
$I_1(x_0) \doteq y_{\mathrm{min}}$ and $I_2(x_0) \doteq y_{\mathrm{max}}$.
Problems \eqref{noiseless-opt-min-max} are convex, moreover, as we will show,
their optimal values
can be calculated {\em analytically}. First, note that the only decision variable of these problems is $y_0$, everything else is constant (including the input
$x_0$, which is also given).
Let us partition the inverse Gramian, $K_{0}^{-1}$, as\vspace{-0.2mm}
$$
\begin{bmatrix}
\; c & b\tr\\
\; b & A
\,\end{bmatrix} \doteq\, K_{0}^{-1}\!\!,
$$
where $c \in \mathbb{R}$, $b\in \mathbb{R}^n$ and $A \in \mathbb{R}^{n\times n}$; after which
$$
\quad (y_0, y\tr) K_{0}^{-1} (y_0, y\tr)\tr =\, c\, y_0^2 + 2\, b\tr y\, y_0 + y\tr\hspace{-0.3mm} A y.
$$
Then, introducing $a_0 \doteq c$, $b_0 \doteq 2b\tr y$ and $c_0 = y\tr\hspace{-0.3mm} A y - \kappa$, the two optimization problems \eqref{noiseless-opt-min-max} can be written as
\begin{equation}
\label{noiseless-opt-proof}
\begin{split}
\mbox{min\,/\,max} &\quad y_{0} \\[0.5mm]
\mbox{subject to} &\quad a_0 y_0^2 + b_0 y_0 + c_0 \, \leq \, 0
\end{split}
\end{equation}
in which $a_0$, $b_0$ and $c_0$ are constants (w.r.t.\ the optimization).
Since these are (convex) quadratic programming problems (with linear objectives), their optimal solutions must be on the boundary of the constraint. This can be easily verified directly, for example, by the technique of Lagrange multipliers.
There are at most two solutions of the quadratic equation $a_0 y_0^2 + b_0 y_0 + c_0 = 0.$
The smaller one will be denoted by $y_{\mathrm{min}}$ and the larger one by $y_{\mathrm{max}}$ (they are allowed to be the same, if there is only one solution).
Then, we set $I_1(x_0) \doteq y_{\mathrm{min}}$, and $I_2(x_0) \doteq y_{\mathrm{max}}$; or $I(x_0) \doteq \emptyset$, in case there is no solution. Finally, we define $I_1(x_k) = I_2(x_k) = y_k$, for all $k \in [n]$, as
the outputs are noise-free, that is $y_k = f_*(x_k)$, for $k \in [n]$.
{\renewcommand{\arraystretch}{1.3}
\begin{table}[!t]
\centering
\caption{\vspace*{-4mm}}
\begin{tabular}{|cl|}
\hline
\multicolumn{2}{|c|}{\textsc{Pseudocode: { Confidence} interval for the noise-free case}} \\ \hline\hline
{\em Input:} & Data sample $\{(x_k, y_k)\}_{k=1}^{n}$, input query point $x_0 \in \mathcal{D}$,\\
& and risk probability $\alpha \in (0,1)$.\\
{\em Output:} & { The endpoints of the confidence interval $[\hspace{0.3mm}I_1(x_0), I_2(x_0)\hspace{0.3mm}]$}\\
& { which has confidence probability at least $1-\alpha$.}\\[0.5mm]
\hline \hline
1. & If $x_0 = x_k$ for any $k \in [n]$, return
$I_1(x_0) = I_2(x_0) = y_k$.\\
2.& Calculate $\kappa \doteq \frac{1}{n} \sum_{k=1}^n y_k^2 + \sqrt{\frac{\ln(\alpha)}{-2n}} + \delta_0$. \\
3. & Create the extended Gram matrix\\
& $K_{0}(i+1, j+1)\doteq k(x_i,x_j),$ for $i,j=0,1,...,n$. \\
4.& Calculate $K_{0}^{-1}$ and partition it as:\\
&
$
\begin{bmatrix}
\; c & b\tr\\
\; b & A
\,\end{bmatrix} \doteq\, K_{0}^{-1}
$\\
5. & Solve the quadratic equation $a_0 y_0^2 + b_0 y_0 + c_0 = 0$, \\
& where $a_0 \doteq c$, $b_0 \doteq 2b\tr y$ and $c_0 = y\tr\hspace{-0.3mm} A y - \kappa$.\\
6. & If there is no solution, return $I(x_0) \doteq \emptyset$; otherwise return\\
& $I_1(x_0) \doteq y_{\mathrm{min}}$, and $I_2(x_0) \doteq y_{\mathrm{max}}$, where $y_{\mathrm{min}} \leq y_{\mathrm{max}}$\\
& are the solutions (which are allowed to coincide).\\[0.5mm]
\hline
\end{tabular}
\label{table:pseudo-noise-free}
\vspace*{-4mm}
\end{table}}
Table \ref{table:pseudo-noise-free} summarizes the proposed algorithm for the case without measurement noise. Observe that if $\kappa$ satisfies $\norm{f_*}_{\mathcal{H}}^2 \leq \kappa$, which holds with probability at least $1-\alpha$, then the construction guarantees that $\mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}$, as the region contains all outputs that can be interpolated with a function from $\mathcal{H}$ which also interpolates the original dataset and
has norm square at most $\kappa$. Hence, we can conclude that
\medskip
\begin{theorem}{\em Assume that A\ref{A0}, A\ref{A2}, A\ref{A3} and $y_k = f_*(x_k)$, for $k \in [n]$, are satisfied. Let $\alpha \in (0,1)$ be a
risk probability.
Then, the { confidence} band of Algorithm \ref{table:pseudo-noise-free} guarantees
$$\mathbb{P}(\, \mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}\,) \, \geq \, 1-\alpha.\vspace*{0.8mm}$$}
\end{theorem}
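The noise-free algorithm of Table \ref{table:pseudo-noise-free} can be condensed into a short routine; a sketch under the stated assumptions (the band limit $\eta$, $\delta_0$ and the helper names are illustrative):

```python
import numpy as np

def pw_kernel(z, s, eta=10.0):
    d = np.subtract(z, s, dtype=float)
    return (eta / np.pi) * np.sinc(eta * d / np.pi)

def interval_noise_free(x0, xs, ys, risk=0.05, delta0=1e-3, eta=10.0):
    """Endpoints (I_1(x0), I_2(x0)) of the noise-free interval,
    or None for an empty interval."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    hit = np.isclose(xs, x0)
    if hit.any():                                   # step 1: observed input
        y = float(ys[hit][0])
        return y, y
    n = ys.size                                     # step 2: norm bound kappa
    kappa = np.mean(ys ** 2) + np.sqrt(np.log(risk) / (-2.0 * n)) + delta0
    pts = np.concatenate(([x0], xs))                # steps 3-4: extended Gramian
    K0_inv = np.linalg.inv(pw_kernel(pts[:, None], pts[None, :], eta))
    c, b, A = K0_inv[0, 0], K0_inv[1:, 0], K0_inv[1:, 1:]
    a0, b0, c0 = c, 2.0 * b @ ys, ys @ A @ ys - kappa
    disc = b0 ** 2 - 4.0 * a0 * c0                  # steps 5-6: quadratic roots
    if disc < 0.0:
        return None
    root = np.sqrt(disc)
    return (-b0 - root) / (2.0 * a0), (-b0 + root) / (2.0 * a0)
```

If the data are generated by a function from the Paley-Wiener space whose norm square does not exceed $\kappa$, the returned interval covers its value at the query point, since that function interpolates the extended dataset.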
\section{{ Confidence Bands} with Measurement Noise}
Now, we turn to the general case, when the observations of $f_*$ are affected by {\em noises}: $y_k = f_*(x_k) + \varepsilon_k$, for $k \in [n]$.
Since now we do not have exact knowledge of the function values at the sample inputs, we cannot directly apply our previous approach. The main idea in this case is that first we need to construct {\em interval estimates} of $f_*$ at some {\em { observed} inputs}, $\{x_k\}$, which then can be used to bound the norm and to build { confidence} intervals for the {\em unobserved} inputs.
\subsection{{ Confidence} Intervals at the { Observed} Inputs}
\label{sec:SPS}
We employ the {\em kernel gradient perturbation} (KGP) method, proposed in \cite{csaji2019distribution}, to build {\em non-asymptotically} guaranteed, {\em distribution-free} { confidence} intervals for $f_*$ at some of the {\em observed} inputs. The KGP algorithm is based on ideas
from {\em finite-sample system identification} \cite{Algo2018}, particularly, it is an extension of the {\em Sign-Perturbed Sums} (SPS) method \cite{csaji2014sign}.
The KGP method can build non-asymptotically guaranteed distribution-free confidence regions for the RKHS coefficients of the {\em ideal} representation (w.r.t.\ given input points)
of $f_*$.
A representation $f \in \mathcal{H}$ is called ideal w.r.t.\ $\{x_k\}_{k=1}^{d}$, if it has the property that $f(x_k) = f_*(x_k)$, for all $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$.
{
The KGP construction guarantees \cite[Theorem 2]{csaji2019distribution} that the confidence set contains the coefficients of an ideal representation w.r.t.\ $\{x_k\}_{k=1}^{d}$ {\em exactly} with a user-chosen confidence probability, assuming the noises satisfy regularity conditions, e.g., they are symmetric and independent (cf.\ A\ref{A0} and A\ref{A1}).
Note that KGP regions are only guaranteed at the {\em observed} inputs. KGP cannot provide confidence bands directly.}
The KGP approach can be used together with a number of kernel methods, such { as} support vector regression and kernelized LASSO. Here, we use it with {\em kernel ridge regression} (KRR) { which} is the kernelized version of Tikhonov regularized least squares (LS). It solves the following problem:
\begin{equation}
\label{krr:objective}
\hat{f}_{\scriptscriptstyle\text{KRR}} \; \doteq \; \operatornamewithlimits{arg\,min}_{f \in \mathcal{H}}\, \frac{1}{n}\,\sum_{k=1}^n w_k (y_k - f(x_k))^2 \,+\, \lambda\, \| f \|^2_{\mathcal{H}},
\vspace{1mm}
\end{equation}
where $\lambda > 0$ and $w_k > 0$, $k \in [n]$, are given (constant) weights.
Using the { representer theorem} \cite{hofmann2008kernel} and the reproducing property, the objective of \eqref{krr:objective} can be rewritten as \cite{csaji2019distribution}\vspace{-0.5mm}
\begin{equation}
\label{krr:obj2}
\frac{1}{n}\,(y - K\hspace{0.2mm} \theta)\tr W (y - K\hspace{0.2mm} \theta) \,+\, \lambda\, \theta\tr \hspace{-0.3mm}K\hspace{0.2mm} \theta,
\end{equation}
where
$W \doteq \mbox{diag}(w_1,\dots, w_n)$, $K$ is the Gramian matrix,
and $\theta = (\theta_1, \dots, \theta_n)\tr$ is the vector of
coefficients of the solution.
Minimizing \eqref{krr:obj2} can be further reformulated as a canonical {\em ordinary least squares} (OLS) problem, $\|\hspace{0.3mm}{ v} \,-\, \Phi\hspace{0.2mm} \theta\hspace{0.3mm}\|^2$, by using\vspace{-0.5mm}
\begin{equation*}
\Phi\, =\, \left[
\begin{array}{c}
\,(\nicefrac{1}{\sqrt{n}})\,W^{\frac{1}{2}} K\, \\[1mm]
\sqrt{\lambda}\, K^{\frac{1}{2}}
\end{array}
\right]\!,\quad
{ v} \,=\, \left[
\begin{array}{c}\,
(\nicefrac{1}{\sqrt{n}})\, W^{\frac{1}{2}} y\, \\[1mm]
\;0_n\;
\end{array}
\right]\!,
\end{equation*}
where $W^{\frac{1}{2}}$ and $K^{\frac{1}{2}}$ denote the principal, non-negative square roots of matrices $W$ and $K$, respectively. Note that the square roots exist as these matrices are positive semi-definite.
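The OLS reformulation can be verified numerically; a sketch building $\Phi$ and $v$ from a given Gram matrix, weight vector and $\lambda$ (all values illustrative; the principal square root of $K$ is obtained by eigendecomposition):

```python
import numpy as np

def krr_as_ols(K, y, w, lam):
    """Embed weighted KRR into the OLS form ||v - Phi theta||^2."""
    n = y.size
    W_half = np.diag(np.sqrt(w))
    vals, vecs = np.linalg.eigh(K)                  # K is symmetric PSD
    K_half = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    Phi = np.vstack([W_half @ K / np.sqrt(n), np.sqrt(lam) * K_half])
    v = np.concatenate([W_half @ y / np.sqrt(n), np.zeros(n)])
    return Phi, v
```

By construction, $\|v - \Phi\theta\|^2$ equals the weighted KRR objective for every coefficient vector $\theta$.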
For convex quadratic problems (such as KRR) and {\em symmetric} noises (cf.\ A\ref{A1}), the KGP confidence regions coincide with SPS regions.
They are {\em star convex} with the LS estimate, $\hat{\theta}$, as a star center. Furthermore, they have {\em ellipsoidal outer approximations}, that is, there are regions of the form
\vspace{-0.5mm}
\begin{equation}
\widehat{\Theta}_{\beta} \; \doteq \; \Big\{\, \theta \in \mathbb{R}^n\, :\, (\theta-\hat{\theta})^\mathrm{T}\frac{1}{n}\Phi\tr\Phi\hspace{0.3mm}(\theta-\hat{\theta})\,\leq\, r \, \Big\},
\end{equation}
where $1-\beta \in (0,1)$ is a given confidence probability \cite{csaji2014sign}.
The radius of this confidence ellipsoid, $r$, can be computed
by
{\em semi-definite programming}:
see \cite[{ Section VI.B}]{csaji2014sign}.
Hence, the construction guarantees $\mathbb{P}(\hspace{0.3mm}\tilde{\theta} \in \widehat{\Theta}_\beta\hspace{0.3mm}) \geq 1-\beta$, where $\tilde{\theta}$ is the coefficient vector of an {\em ideal} representation:
\vspace{-1mm}
$$
\sum_{i=1}^n \tilde{\theta}_i k(x_i, x_k) \,=\, f_*(x_k),
$$
for $k \in [n]$. By defining $\varphi_k \doteq (k(x_1,x_k), \dots, k(x_n,x_k))\tr$, we know that $f_*(x_k) = \varphi_k\tr\tilde{\theta}$, but of course $\tilde{\theta}$ is unknown.
Since $\tilde{\theta}$ is inside the ellipsoid $\widehat{\Theta}_{\beta}$ with probability $\geq 1-\beta$, we could construct (probabilistic) upper and lower bounds of $f_*(x_k)$ by maximizing and minimizing $\varphi_k\tr\theta$, for $\theta \in \widehat{\Theta}_{\beta}$.
These problems (linear objective and ellipsoid constraint) have known solutions: the minimum and the maximum are
$$
\nu_k = \varphi_k\tr\hat{\theta} - (\varphi_k\tr P\varphi_k)^{\frac{1}{2}}, \qquad \mu_k = \varphi_k\tr\hat{\theta} + (\varphi_k\tr P\varphi_k)^{\frac{1}{2}},
$$
where $P = nr\,(\Phi\tr\Phi)^{-1}$, and $\hat{\theta}$ is the center of the ellipsoid, i.e.,
the solution of the OLS formulation $\|\hspace{0.3mm}v \,-\, \Phi\hspace{0.2mm} \theta\hspace{0.3mm}\|^2$.
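These endpoints follow from the standard closed form for extremizing a linear function over an ellipsoid: with shape matrix $A = \frac{1}{n}\Phi\tr\Phi$, one has $\varphi_k\tr\hat{\theta} \pm (r\,\varphi_k\tr A^{-1}\varphi_k)^{1/2}$. A short numerical sketch (our illustration, with a randomly generated $\Phi$, center $\hat{\theta}$, and query vector) confirms that the closed-form maximum is attained on the boundary and dominates every other boundary point.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 0.3
Phi = rng.standard_normal((2 * n, n))
theta_hat = rng.standard_normal(n)          # ellipsoid center (stand-in for the OLS estimate)
A = Phi.T @ Phi / n                         # shape matrix of the confidence ellipsoid
P = n * r * np.linalg.inv(Phi.T @ Phi)      # = r * A^{-1}

phi = rng.standard_normal(n)                # stand-in for the feature vector phi_k
half = np.sqrt(phi @ P @ phi)
nu, mu = phi @ theta_hat - half, phi @ theta_hat + half

# Analytic maximizer: theta* = theta_hat + sqrt(r) A^{-1} phi / ||phi||_{A^{-1}}
Ainv_phi = np.linalg.solve(A, phi)
theta_star = theta_hat + np.sqrt(r) * Ainv_phi / np.sqrt(phi @ Ainv_phi)
assert np.isclose((theta_star - theta_hat) @ A @ (theta_star - theta_hat), r)
assert np.isclose(phi @ theta_star, mu)     # the closed-form maximum is attained

for _ in range(200):                        # random boundary points stay inside [nu, mu]
    u = rng.standard_normal(n)
    u = np.sqrt(r) * u / np.sqrt(u @ A @ u)
    assert nu - 1e-9 <= phi @ (theta_hat + u) <= mu + 1e-9
```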
Due to the construction of KGP confidence regions, there is an (extremely small, but nonzero) probability of getting an empty region. In this case, we define $\nu_k = 1$ and $\mu_k = -1$, for all $k \in [n]$. That is, we give an {\em empty interval} for each $f(x_k)$, using a similar representation as in Section \ref{sec:objectives}.
Finally, we introduce a slight modification to this construction. We can also construct confidence intervals just for the first $d \leq n$ observations by redefining objective \eqref{krr:obj2} as
\begin{equation*}
\frac{1}{n}\,(y - K_1\hspace{0.2mm} \theta)\tr W (y - K_1\hspace{0.2mm} \theta) \,+\, \lambda\, \theta\tr \hspace{-0.3mm}K_2\hspace{0.2mm} \theta,
\end{equation*}
where $K_1 \in \mathbb{R}^{n \times d}$ is $K$ with the last $n-d$ columns removed, and $K_2\in \mathbb{R}^{d \times d}$ is $K_1$ with the last $n-d$ rows removed. Hence, we search for an ideal vector $\tilde{\theta} \in \mathbb{R}^d$ such that
$(K_1 \tilde{\theta})(k)= f_*(x_k)$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$.
For the error computation we still use {\em all} measurements ($K_1$ still has $n$ rows). It is important that, in this case, only the first $d$ residuals are perturbed in the construction of the KGP ellipsoid. This
usually considerably reduces the sizes of the intervals, but then we only have guarantees at the $d\leq n$ observed inputs.
\subsection{Bounding the Norm with Measurement Noise}
In the previous section, we built {\em simultaneous} confidence intervals at the sample inputs for the first $d\leq n$ observations, $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$; that is, they have the property
\vspace{-0.2mm}
\begin{equation}
\label{eq:sym.conf.int}
\mathbb{P}\big(\hspace{0.3mm} \forall \hspace{0.3mm}k \in [\hspace{0.3mm}d\hspace{0.5mm}]: f_*(x_k) \in [\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]\hspace{0.3mm}\big)\, \geq\, 1 - \beta,
\end{equation}
for some (user-chosen) risk probability $\beta \in (0,1)$.
Recall that by Lemma \ref{lemma:Hoeffding.noiseless}, for any $n$, the variable
\begin{equation}
\label{eq:Hoeffdieng}
\kappa \, \doteq \frac{1}{n} \sum_{k=1}^n f^2_*(x_k) + \sqrt{\frac{\ln(\alpha)}{-2n}} +
\delta_0,
\end{equation}
is an upper bound of $\norm{f_*}_{\mathcal{H}}^2$ with probability at least $1-\alpha$.
Using property \eqref{eq:sym.conf.int}, we also know that\vspace{-1mm}
\begin{equation}
\label{eq:sum.max.nu.mu.square}
\sum_{k=1}^{d} f_*^2(x_k) \,\leq\, \sum_{k=1}^{d} \max\{\nu_k^2, \mu_k^2\},
\end{equation}
with probability at least $1-\beta$. By combining property \eqref{eq:sym.conf.int}, formulas \eqref{eq:Hoeffdieng} and \eqref{eq:sum.max.nu.mu.square},
the results of Lemma \ref{lemma:Hoeffding.noiseless}, as well as Boole's inequality (the union bound), we obtain
\medskip
\begin{lemma}
\label{lemma:Hoeffding.noisy}
{\em Assume that A\ref{A0}, A\ref{A2}, A\ref{A3} hold and that the confidence intervals
$[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, satisfy \eqref{eq:sym.conf.int}.
Then,
$$
\mathbb{P}\big(\norm{f_*}_{\mathcal{H}}^2 \leq \tau \hspace{0.3mm}\big)\, \geq \,1-\alpha-\beta,
$$
with the following choice of the upper bound $\tau$:
$$
\tau \, \doteq\, \frac{1}{d} \sum_{k=1}^{d} \max\{\nu^2_k,\mu^2_k \} + \sqrt{\frac{\ln(\alpha)}{-2d}} +
\delta_0.$$}
\vspace{0mm}
\end{lemma}
\begin{remark}
Although we only used the first $d$ observations for estimating the norm (square), the intervals $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, incorporate information about the {\em whole} sample.
The ``optimal'' choice of $d$ leading to small intervals is an open question;
in practice $d = \mathcal{O}(\sqrt{n})$ often works well.
\end{remark}
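As a quick numerical illustration of the bound $\tau$ in Lemma \ref{lemma:Hoeffding.noisy} (the intervals, $\alpha$, and $\delta_0$ below are made up for illustration), the bound can be computed directly from the interval endpoints:

```python
import numpy as np

def tau_bound(nu, mu, alpha, delta0):
    """Upper bound on ||f*||_H^2 from simultaneous intervals (Lemma, noisy case)."""
    nu, mu = np.asarray(nu), np.asarray(mu)
    d = len(nu)
    return np.mean(np.maximum(nu**2, mu**2)) + np.sqrt(np.log(alpha) / (-2 * d)) + delta0

# toy intervals for d = 4 function values, risk alpha = 0.05, slack delta0 = 0.05
tau = tau_bound([-0.2, 0.1, -0.5, 0.3], [0.4, 0.6, 0.1, 0.9], alpha=0.05, delta0=0.05)
```

Note that the term $\max\{\nu_k^2, \mu_k^2\}$ upper bounds $f_*^2(x_k)$ whenever $f_*(x_k)\in[\hspace{0.3mm}\nu_k,\mu_k\hspace{0.3mm}]$, which is what \eqref{eq:sum.max.nu.mu.square} exploits.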
\subsection{Interval Endpoints with Measurement Noise}
The final step is to construct a { confidence} interval for a given input {\em query point} $x_0 \in \mathcal{D}$ with $x_0 \neq x_k$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$.
We extend the Gram matrix with query point $x_0$,
$$
\widetilde{K}_{0}({i+1},{j+1})\, \doteq \, k(x_i,x_j),
$$
for $i, j = 0,1, \dots ,d$; but we only use the first $d$ data points.
{\renewcommand{\arraystretch}{1.3}
\begin{table}[!t]
\centering
\caption{\vspace*{-4mm}}
\begin{tabular}{|cl|}
\hline
\multicolumn{2}{|c|}{\textsc{Pseudocode: { Confidence} interval with measurement noise}} \\ \hline\hline
{\em Input:} & Data sample $\{(x_k, y_k)\}_{k=1}^{n}$, input query point $x_0 \in \mathcal{D}$,\\
& risk probabilities $\alpha \in (0,1)$ and $\beta \in (0,1)$.\\
{\em Output:} & { The endpoints of the confidence interval $[\hspace{0.3mm}I_1(x_0), I_2(x_0)\hspace{0.3mm}]$}\\
& { which has confidence probability at least $1-\alpha-\beta$.}\\[0.5mm]
\hline \hline
1. & Select $d \in [n]$, the number of confidence intervals built for\\
& a subset of { observed} inputs. Default choice: $d = \ceil{\sqrt{n}\hspace{0.3mm}}$. \\
2.& Construct $1-\beta$ level simultaneous confidence intervals for\\
& $\{f_*(x_k)\}_{k=1}^{d}$, that is $[\hspace{0.3mm}\nu_k, \mu_k\hspace{0.3mm}]$, for $k \in [\hspace{0.3mm}d\hspace{0.5mm}]$, with \eqref{eq:sym.conf.int}.\\
& (e.g., apply the KGP method discussed in Section \ref{sec:SPS})\\
3.& Set $\tau \, \doteq\, \frac{1}{d} \sum_{k=1}^{d} \max\{\nu_k^2, \mu_k^2 \} + \sqrt{\frac{\ln(\alpha)}{-2d}} +
\delta_0$. \\
4. & Solve both convex optimization problems given by \eqref{noisy-opt-min-max}.\\
5. & If there is no solution, return $I(x_0) \doteq \emptyset$; otherwise return\\
& $I_1(x_0) \doteq z_{\mathrm{min}}$ and $I_2(x_0) \doteq z_{\mathrm{max}}$, where $z_{\mathrm{min}} \leq z_{\mathrm{max}}$\\
& are the solutions (which are allowed to coincide).\\[0.5mm]
\hline
\end{tabular}
\label{table:pseudo-noisy}
\vspace*{-4mm}
\end{table}}
We have to be careful with the optimization problems, as now we do not know the exact function values; we only have confidence intervals for them. Therefore, all function values are treated as decision variables that can take values from the given confidence intervals. Hence, we have to solve
\begin{equation}
\label{noisy-opt-min-max}
\begin{split}
\mbox{min\,/\,max} &\quad z_{0} \\[0.5mm]
\mbox{subject to} &\quad (z_0, \dots, z_d) \widetilde{K}_{0}^{-1} (z_0, \dots, z_d)\tr \leq\, \tau\\
&\quad \nu_1 \leq z_1 \leq \mu_1,\; \dots,\; \nu_d \leq z_d \leq \mu_d\\[0.5mm]
\end{split}
\end{equation}
where ``min\,/\,max'' again means that the problem has to be solved both as a minimization and as a maximization (separately).
These problems are {\em convex}, therefore, they can be solved efficiently.
The optimal values, denoted by $z_{\mathrm{min}}$ and $z_{\mathrm{max}}$, are the {\em endpoints} of the { confidence} interval:
$I_1(x_0) \doteq z_{\mathrm{min}}$, and $I_2(x_0) \doteq z_{\mathrm{max}}$.
If \eqref{noisy-opt-min-max} is infeasible, e.g., we get an empty KGP ellipsoid, we set $I(x_0) = \emptyset$, i.e., we use $I(x_0) =(\hspace{0.3mm}1, -1\hspace{0.3mm})$.
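A minimal sketch of solving \eqref{noisy-opt-min-max} with an off-the-shelf solver follows. This is our illustration only: we use SciPy's SLSQP for convenience (a dedicated conic solver could equally be used), and the kernel, $\tau$, and intervals below are made up. The origin is feasible here, so both problems are well posed.

```python
import numpy as np
from scipy.optimize import minimize

d, tau = 4, 2.0
xs = np.concatenate([[0.5], np.linspace(0.1, 0.9, d)])        # x_0 followed by x_1..x_d
K0 = np.exp(-0.5 * (xs[:, None] - xs[None, :])**2 / 0.1**2)   # extended Gramian K~_0
K0inv = np.linalg.inv(K0)

nu = np.array([-0.3, -0.1, -0.05, -0.2])    # toy interval endpoints for z_1..z_d
mu = np.array([0.4, 0.5, 0.6, 0.3])

cons = {"type": "ineq", "fun": lambda z: tau - z @ K0inv @ z}  # quadratic constraint
bounds = [(None, None)] + list(zip(nu, mu))                    # z_0 free, z_k boxed
z_init = np.zeros(d + 1)                                       # feasible starting point

res_min = minimize(lambda z: z[0], z_init, bounds=bounds,
                   constraints=[cons], method="SLSQP")
res_max = minimize(lambda z: -z[0], z_init, bounds=bounds,
                   constraints=[cons], method="SLSQP")
z_min, z_max = res_min.x[0], -res_max.fun

# Both endpoints satisfy the quadratic constraint (up to solver tolerance),
# and since K~_0(1,1) = 1, the ellipsoid alone forces |z_0| <= sqrt(tau).
assert res_min.x @ K0inv @ res_min.x <= tau + 1e-6
assert -np.sqrt(tau) - 1e-6 <= z_min <= z_max <= np.sqrt(tau) + 1e-6
```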
Table \ref{table:pseudo-noisy} summarizes the algorithm to construct the endpoints of a confidence interval at a given query point, in case of having measurement noises. Its theoretical guarantee is:
\medskip
\begin{theorem}{\em Assume that A\ref{A0}, A\ref{A1}, A\ref{A2}, A\ref{A3} are satisfied. Let $\alpha, \beta \in (0,1)$ be given risk probabilities.
Then, the confidence band built by the algorithm of Table \ref{table:pseudo-noisy} guarantees
$$\mathbb{P}(\, \mathrm{graph}_{\mathcal{D}}(f_*) \subseteq \mathcal{I}\,) \, \geq \, 1-\alpha - \beta.$$}
\end{theorem}
\vspace{4mm}
\begin{remark}
Applying the KGP approach in the algorithm of Table \ref{table:pseudo-noisy} is optional. One could use any other construction that provides simultaneous confidence intervals for a subset of $\{f_*(x_k)\}$, cf.\ \eqref{eq:sym.conf.int}. Another approach could be to assume sub-Gaussian or sub-exponential noises and use their tail bounds to ensure \eqref{eq:sym.conf.int}.
\end{remark}
\begin{figure}[!t]
\centering
\hspace*{-2mm}
%
\includegraphics[width = 1.02\columnwidth]{zajmentesabra_b_30.pdf}
%
%
\caption{Nonparametric { confidence bands} for the noise-free setting.}
\label{fig:experiment1}
\end{figure}
\begin{figure}[!t]
\centering
\hspace*{-2mm}
%
\includegraphics[width = 1.02\columnwidth]{zajosabra6_Lap_04.pdf}
%
%
\caption{Nonparametric { confidence bands} with measurement noise.}
\label{fig:experiment2}
\vspace*{-2mm}
\end{figure}
\section{Numerical Experiments}
The algorithms were also tested numerically.
We used a Paley-Wiener RKHS with $\eta = 30$. The ``true''
function was constructed as follows: first, $20$ random input points $\{\bar{x}_k\}_{k=1}^{20}$ were generated, with uniform distribution on $[\hspace{0.3mm}0,1]$. Then $f_*(x) = \sum_{k=1}^{20} w_k k(x, \bar{x}_k)$ was created, where each $w_k$ had a uniform distribution on $[-1,1]$. The function was normalized, in case its maximum exceeded $1$. Then, $n$ random observations were generated about $f_*$. In the noisy case,
$\{\varepsilon_k\}$ had a Laplace distribution with location $\mu = 0$ and scale $b = 0.4$.
In the noise-free case, we used $n=10$ observations, and created confidence bands with risk $\alpha = 0.1$ and $0.5$. Figure \ref{fig:experiment1} demonstrates that
in the noise-free setting a very small sample size can lead to informative nonparametric confidence bands.
In the case of measurement noise, a sample size of $n=100$ was used with $d=20$ (orange points). Confidence bands with risk $\alpha + \beta = 0.1$ and $0.5$ are illustrated in Figure \ref{fig:experiment2}. We simply used $\alpha = \beta$ in these cases. The results indicated that even with limited information,
adequate regions can be created.
\section{Conclusions}
In this paper a nonparametric and distribution-free { method was introduced to build simultaneous confidence bands for bounded, band-limited functions}. The construction was first presented for the case when there are no measurement noises, then it was extended allowing symmetric noises. Besides having non-asymptotic theoretical guarantees, the approach was also demonstrated numerically, supporting its feasibility.
\bibliographystyle{ieeetr}
\section{Introduction}
\begin{defn}
\label{nilpJord}
A group $G$ is called Jordan, solvably Jordan or nilpotently Jordan of class at most $c$ ($c\in\mathbb{N}$) if there exists a constant $J=J(G)\in\mathbb{Z}^+$, only depending on $G$,
such that every finite subgroup $H\leqq G$ has a subgroup $K\leqq H$ such that
$|H:K|\leqq J$ and $K$ is Abelian, solvable or nilpotent of class at most $c$, respectively.
\end{defn}
The notion of Jordan groups and solvably Jordan groups was introduced by V. L. Popov (Definition 2.1 in \cite{Po11}) and Yu. Prokhorov and C. Shramov (Definition 8.1 in \cite{PS14}), respectively.
\begin{thm}
\label{main}
The birational automorphism group of a $d$-dimensional variety over a field of characteristic zero is
nilpotently Jordan of class at most $d$.
\end{thm}
\begin{rem}
\label{C}
It is enough to prove the theorem over the field of the complex numbers. Indeed, let $K$ be a field of characteristic zero and $X$ be a variety over $K$.
We can fix a finitely generated field extension $L_0|\mathbb{Q}$ and an $L_0$-variety $X_0$ such that $X\cong X_0\times_{L_0}\Spec K$.
Fix a field embedding $L_0\hookrightarrow\mathbb{C}$ and let $X^*\cong X_0\times_{L_0}\Spec\mathbb{C}$.
For an arbitrary finite subgroup $G\leqq\Bir(X)$ we can find a finitely generated field extension $L_1|L_0$ such that the elements of $G$ can be defined as birational transformations over the field $L_1$. Hence
$G\leqq \Bir(X_1)$, where $X_1\cong X_0\times_{L_0}\Spec L_1$.
We can extend the fixed field embedding $L_0\hookrightarrow\mathbb{C}$ to a field embedding $L_1\hookrightarrow\mathbb{C}$.
Therefore $X^*\cong X_0\times_{L_0}\Spec\mathbb{C}\cong X_1\times_{L_1}\Spec\mathbb{C}$, and we can embed $G$ to the birational automorphism group of the complex variety $X^*$.
As the birational class of the complex variety $X^*$ only depends on the birational class of the variety $X$, it is enough to examine complex varieties.
\end{rem}
In the following discussion we shortly sketch the history of Jordan type properties in birational geometry over fields of \textit{characteristic zero}.
Research about investigating the Jordan property of the birational automorphism group of a variety was initiated by J.-P. Serre (\cite{Se09}) and V. L. Popov (\cite{Po11}).
In \cite{Se09} J.-P. Serre settled the problem for the Cremona group of rank two (by showing that it enjoys the Jordan property),
while in the articles \cite{Po11}, \cite{Za15} V. L. Popov and Yu. G. Zarhin solved the question for one and two dimensional varieties.
They found that the birational automorphism group of a curve or a surface is Jordan, save when the variety is birational to a direct product of an elliptic curve and the projective line.
This latter case was examined in \cite{Za15}, where, based on calculations of D. Mumford, the author was able to conclude that the birational automorphism group contains Heisenberg $p$-groups for arbitrarily large prime numbers $p$.
Hence it does not enjoy the Jordan property.\\
In \cite{PS14} and \cite{PS16} Yu. Prokhorov and C. Shramov made important contributions to the subject using the arsenal of the Minimal Model Program and assuming the Borisov-Alexeev-Borisov (BAB) conjecture
(which was later verified in the celebrated article \cite{Bi16} of C. Birkar; for a survey on the work of C. Birkar and its connection to the Jordan property, the interested reader may consult \cite{Ke19}).
Amongst many highly interesting results, Yu. Prokhorov and C. Shramov proved that the birational automorphism group of a rationally connected variety and
the birational automorphism group of a non-uniruled variety is Jordan. To answer a question of D. Allcock, they also introduced the concept of solvably Jordan groups,
and showed that the birational automorphism group of an arbitrary variety is solvably Jordan.\\
The landscape is strikingly similar in differential geometry. The techniques are fairly different, yet the results point in similar directions.
In the following we briefly review the history of the question of Jordan type properties of diffeomorphism groups of smooth compact real manifolds. (We note that there are many other interesting setups which were considered by differential geometers;
for a very detailed account see the Introduction of \cite{MR18}.) As mentioned in \cite{MR18}, during the mid-nineties \'E. Ghys conjectured that the diffeomorphism group of a smooth compact real manifold is Jordan,
and he proposed this problem in many of his talks (\cite{Gh97}). The case of surfaces follows from the Riemann-Hurwitz formula (see \cite{MR10}), while the case of 3-folds is more involved.
In \cite{Zi14} B. P. Zimmermann proved the conjecture for them using the geometrization of compact 3-folds (which follows from the work of W. P. Thurston and G. Perelman).
I. Mundet i Riera also verified the conjecture for several interesting cases, like tori, projective spaces, homology spheres and manifolds with non-zero Euler characteristic (\cite{MR10},\cite{MR16}, \cite{MR18}).\\
However, in 2014, B. Csik\'os, L. Pyber and E. Szab\'o found a counterexample (\cite{CPS14}).
Their construction was remarkably analogous to the one of Yu. G. Zarhin. They showed that if the manifold $M$ is diffeomorphic to the direct product of the two-sphere and the two-torus
or to the total space of any other smooth orientable two-sphere bundle over the two-torus, then the diffeomorphism group contains Heisenberg $p$-groups for arbitrarily large prime numbers $p$. Hence $\Diff(M)$ cannot be Jordan.
As a consequence, \'E. Ghys improved on his previous conjecture, and proposed the problem of showing that the diffeomorphism group of a compact real manifold is nilpotently Jordan (\cite{Gh15}).
As a first piece of evidence, I. Mundet i Riera and C. Sa\'ez-Calvo showed that the diffeomorphism group of a 4-fold is nilpotently Jordan of class at most 2 (\cite{MRSC19}). Their proof uses results from the classification of finite simple groups.\\
Motivated by these antecedents, in this article we investigate the nilpotently Jordan property for birational automorphism groups of varieties.\\
The idea of the proof stems from the following picture. Let $X$ be a $d$ dimensional complex variety. We can assume that $X$ is smooth and projective. Let $G\leqq\Bir(X)$ be an arbitrary finite subgroup.
Consider the MRC (maximally rationally connected) fibration $\phi:X\dashrightarrow Z$ (Theorem \ref{MRC}). Because of the functoriality of the MRC fibration, a birational $G$-action is induced on $Z$, making $\phi$ $G$-equivariant.
After a smooth regularization (Lemma \ref{reg}) we can assume that both $X$ and $Z$ are smooth and projective, $G$ acts on them by regular automorphisms and $\phi$ is a $G$-equivariant morphism.
Since the general fibres of $\phi$ are rationally connected, we can run a $G$-equivariant relative Minimal Model Program over $Z$ on $X$ (Theorem \ref{MMP}). It results in a $G$-equivariant Mori fibre space $\varrho:W\to Y$ over $Z$.
\[
\xymatrix{
X\ar@ {-->} [r]^{\cong} \ar[rd]_{\phi} & W \ar[r]^\varrho \ar[d] & Y \ar[ld]^\psi\\
& Z
}
\]
We can understand the $G$-action on $X$ by analyzing the $G$-actions on $\psi:Y\to Z$ and on $\varrho:W\to Y$.
We will apply induction on the relative dimension $e=\dim X-\dim Z$ to achieve this (Theorem \ref{AlmostMain}).
Actually, we will prove a slightly stronger theorem than Theorem \ref{main} and will show that $\Bir(X)$ is nilpotently Jordan of class at most $(e+1)$.
The base of the induction is when $e=0$. Then $X$ is non-uniruled and a theorem of Yu. Prokhorov and C. Shramov (Theorem 1.8 in \cite{PS14}) shows us that the birational automorphism group of $X$ is Jordan.\\
Otherwise, the inductive hypothesis will show us that $H=\Imag(G\to\Aut_{\mathbb{C}}(Y))$ has a bounded index nilpotent subgroup of class at most $e$.
To perform the inductive step, we will take a closer look at the $G$-action on the generic fibre $W_\eta\to\Spec K(Y)$. We will use two key ingredients.
The first one is based on the boundedness of Fano varieties, and will allow us to embed $G$ into the semilinear group $\GL(n, K(Y))\rtimes\Aut_{\mathbb{C}}(K(Y))$, where $n$ is bounded in terms of $e$ (Proposition \ref{Fano}).
The second one is a Jordan type theorem on certain finite subgroups of a semilinear group (Theorem \ref{groupmain}).
Putting these together will finish the proof.\\
The article is organized in the following way. In Section \ref{P} we recall the definition and some basic facts about nilpotent groups, we also recall the concept of the MRC fibration.
In Section \ref{FGV} we collect results about finite birational group actions on varieties.
In particular, it contains the theorem of Yu. Prokhorov and C. Shramov about the Jordan property of the birational automorphism group of non-uniruled and rationally connected varieties (Theorem \ref{nu}),
the regularization lemma (Lemma \ref{reg}), the theorem on the $G$-equivariant MMP (Theorem \ref{MMP}) and
the proposition about certain finite group actions on Fano varieties (Proposition \ref{Fano}). At the end of the section we investigate some questions about bounds on the number of generators of finite subgroups of the birational automorphism group.
The boundedness of the generating set helps us to give a more accurate bound on the nilpotency class (Remark \ref{NoB}).
Section \ref{gp} deals with the proof of the Jordan type theorem on semilinear groups (Theorem \ref{groupmain}).
Finally, in Section \ref{PMT} we prove our main theorem.
\subsection*{Acknowledgements}
The author is very grateful to E. Szab\'o for many helpful discussions.
\section{Preliminaries}
\label{P}
\subsection{Nilpotent groups}
We recall the definition of nilpotent groups and some of their basic properties.
\begin{defn}
Let $G$ be a group.
Let $\Z_0(G)=1$ and define $\Z_{i+1}(G)$ as the preimage of $\Z(G/\Z_i(G))$ under the natural quotient group homomorphism $G\to G/\Z_i(G)$ $(i\in\mathbb{N})$. The series of groups
$1=\Z_0(G)\leqq\Z_1(G)\leqq\Z_2(G)\leqq...$ is called the upper central series of $G$.\\
Let $\gamma_0(G)=G$ and let $\gamma_{i+1}(G)=[\gamma_i(G),G]$ ($i\in\mathbb{N}$, and $[,]$ denotes the commutator operation). The series of groups
$G=\gamma_0(G)\geqq\gamma_1(G)\geqq\gamma_2(G)\geqq...$ is called the lower central series of $G$.\\
$G$ is called nilpotent if one (hence both) of the following equivalent conditions hold:
\begin{itemize}
\item
There exists $n\in\mathbb{N}$ such that $\Z_n(G)=G$.
\item
There exists $n\in\mathbb{N}$ such that $\gamma_n(G)=1$.
\end{itemize}
If $G$ is a nontrivial nilpotent group, then there exists a natural number $c$ for which $\Z_c(G)=G$, $\Z_{c-1}(G)\neq G$ and $\gamma_c(G)=1$, $\gamma_{c-1}(G)\neq 1$ hold. $c$ is called the nilpotency class of $G$.
(If $G$ is trivial, then its nilpotency class is zero.)
\end{defn}
\begin{rem}
Note that $\Z_1(G)$ is the centre of the group $G$, while $\gamma_1(G)$ is the commutator subgroup. A non-trivial group $G$ is nilpotent of class one if and only if it is Abelian.\\
Nilpotency sits between the Abelian and the solvable properties: being Abelian implies nilpotency, while nilpotency implies solvability.
\end{rem}
The following proposition describes one of the key features of nilpotent groups. They can be built up by successive central extensions.
\begin{prop}
\label{CE}
Let $G$ be a group and $A\leqq\Z(G)$ be a central subgroup of $G$. If $G/A$ is nilpotent of class at most $c$, then $G$ is nilpotent of class at most $(c+1)$.
\end{prop}
We will also use the two properties below of nilpotent groups.
\begin{prop}
\label{ICmap}
Let $G$ be a nilpotent group of class at most $n$.
Fix $n-1$ arbitrary elements in $G$, denote them by $g_1,g_2,...g_{n-1}$, and let $1\leqq j \leqq n$ be an arbitrary integer.
The map $\varphi_j$ defined by the help of iterated commutators of length $(n-1)$
\begin{gather*}
\varphi_j:G\to\gamma_{n-1}(G)\\
g\mapsto [[...[[[[...[[g_1,g_2],g_3]...],g_{j-1}],g],g_j]...],g_{n-1}]
\end{gather*}
gives a group homomorphism.
\end{prop}
\begin{prop}
\label{IC}
Let $G$ be a group. $G$ is nilpotent of class at most $n$ if and only if $\forall g_1,g_2,...,g_{n+1}\in G$: $[[...[[g_1,g_2],g_3]...],g_{n+1}]=1$.
\end{prop}
\begin{rem}
Typical examples of nilpotent groups are finite $p$-groups (where $p$ is a prime number). If we restrict our attention to finite nilpotent groups, even more can be said.
(Recall that a $p$-Sylow subgroup of a finite group is the largest $p$-group contained in the group.)
A finite group is nilpotent if and only if it is the direct product of its Sylow subgroups (Theorem 6.12 in \cite{CR62}).
\end{rem}
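These examples can be checked by direct computation. The sketch below (our illustration, not part of the paper) verifies that the Heisenberg group over $\mathbb{F}_3$, the type of group appearing in the constructions of Yu. G. Zarhin and of B. Csik\'os, L. Pyber and E. Szab\'o, is nilpotent of class exactly $2$, using the iterated-commutator criterion of Proposition \ref{IC}.

```python
from itertools import product

p = 3  # Heisenberg group over Z/p: upper unitriangular 3x3 matrices, coded as (a, b, c)

def mul(g, h):
    """(a1,b1,c1)*(a2,b2,c2) for [[1,a,c],[0,1,b],[0,0,1]] matrices over Z/p."""
    (a1, b1, c1), (a2, b2, c2) = g, h
    return ((a1 + a2) % p, (b1 + b2) % p, (c1 + c2 + a1 * b2) % p)

def inv(g):
    a, b, c = g
    return (-a % p, -b % p, (a * b - c) % p)

def comm(g, h):                     # [g, h] = g^{-1} h^{-1} g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

e = (0, 0, 0)
G = list(product(range(p), repeat=3))          # all p^3 = 27 elements

# Not Abelian: some commutator is nontrivial ...
assert any(comm(g, h) != e for g in G for h in G)
# ... but every iterated commutator [[g1, g2], g3] is trivial,
# so the nilpotency class is exactly 2.
assert all(comm(comm(g, h), k) == e for g in G for h in G for k in G)
```

The nontrivial commutators all lie in the center $\{(0,0,c)\}$, so the group is a central extension of $\mathbb{F}_3^2$ by $\mathbb{F}_3$, in line with Proposition \ref{CE}.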
\subsection{The maximally rationally connected fibration}
We recall the concept of the maximally rationally connected fibration. For a detailed treatment see Chapter $4$ of \cite{Ko96}; for the non-uniruledness of the base see Corollary 1.4 in \cite{GHS03}.
\begin{thm}
\label{MRC}
Let $X$ be a smooth proper complex variety. The pair $(Z,\phi)$ is called the maximally rationally connected (MRC) fibration if
\begin{itemize}
\item $Z$ is a complex variety,
\item $\phi:X\dashrightarrow Z$ is a dominant rational map,
\item there exist open subvarieties $X_0$ of $X$ and $Z_0$ of $Z$ such that $\phi$ descends to a proper morphism between them $\phi_0:X_0\to Z_0$ with rationally connected fibres,
\item if $(W,\psi)$ is another pair satisfying the three properties above, then $\phi$ can be factorized through $\psi$. More precisely, there exists a rational map $\tau: W\dashrightarrow Z$ such that $\phi=\tau\circ\psi$.
\end{itemize}
The MRC fibration exists and is unique up to birational equivalence. Moreover, the base $Z$ is non-uniruled.
\end{thm}
\section{Finite group actions on varieties}
\label{FGV}
In this section we introduce techniques which help us solve special cases of our problem and build up the full solution from these cases.\\
\subsection{Jordan property}
Yu. Prokhorov and C. Shramov proved the following theorem (Theorem 1.8 in \cite{PS14} and Theorem 1.8 in \cite{PS16}).
It will serve us as a starting point of an inductive argument in the proof of our main theorem and
will be an important ingredient when we look for bounds on the number of generators of finite subgroups of the birational automorphism group (Theorem \ref{bfsg}).
\begin{thm}
\label{nu}
Let $X$ be a variety over a field of characteristic zero. Assume that $X$ is either non-uniruled or rationally connected. Then the birational automorphism group of $X$ is Jordan (in other words, it is nilpotently Jordan of class at most 1).
\end{thm}
\subsection{Smooth regularization}
The next lemma is a slight extension of the well-known (smooth) regularization of finite group actions on varieties (Lemma-Definition 3.1 in \cite{PS14}).
\begin{lem}
\label{reg}
Let $X$ and $Z$ be complex varieties and $\phi:X\dashrightarrow Z$ be a dominant rational map between them. Let $G$ be a finite group which acts by birational automorphisms on $X$ and $Z$ in such a way that $\phi$ is $G$-equivariant.
There exist smooth projective varieties
$X^*$ and $Z^*$ with regular $G$-actions on them and a $G$-equivariant projective morphism $\phi^*: X^*\to Z^*$ such that
$X^*$ is $G$-equivariantly birational to $X$, $Z^*$ is $G$-equivariantly birational to $Z$ and $\phi^*$ is $G$-equivariantly birational to $\phi$. In other words, we have a $G$-equivariant commutative diagram.
\[
\xymatrix{
X \ar@{-->}[r]^\cong \ar@{-->}[d]^\phi & X^* \ar[d]^{\phi^*}\\
Z \ar@{-->}[r]^\cong & Z^*
}
\]
\end{lem}
\begin{proof}
Let $K(Z)\leqq K(X)$ be the field extension corresponding to the function fields of $Z$ and $X$, induced by $\phi$.
Take the induced $G$-action on this field extension and let $K(Z)^G\leqq K(X)^G$ be the field extension of the $G$-invariant elements.
Consider a projective model of it, i.e. let $\varrho_1: X_1\to Z_1$ be a (projective) morphism, where $X_1$ and $Z_1$ are projective varieties such that $K(X_1)\cong K(X)^G$ and $K(Z_1)\cong K(Z)^G$,
and $\varrho_1: X_1\to Z_1$ induces the field extension $K(Z_1)\cong K(Z)^G\leqq K(X)^G\cong K(X_1)$.
By normalizing $X_1$ in the function field $K(X)$ and $Z_1$ in the function field $K(Z)$ we get projective varieties $X_2$ and $Z_2$, moreover $\varrho_1$ induces a $G$-equivariant morphism $\varrho_2:X_2\to Z_2$ between them.\\
As the next step, we can take a $G$-equivariant resolution of singularities $\widetilde{Z_2}\to Z_2$. After replacing $Z_2$ by $\widetilde{Z_2}$
and $X_2$ by the irreducible component of $X_2\times_{Z_2}\widetilde{Z_2}$ which dominates $\widetilde{Z_2}$, we can assume that $Z_2$ is smooth. Hence $G$-equivariantly resolving the singularities of $X_2$ finishes the proof.
\end{proof}
\subsection{Minimal Model Program and boundedness of Fano varieties}
Applying the results of the famous article by C. Birkar, P. Cascini, C. D. Hacon and J. McKernan (\cite{BCHM10}) enables us to use the arsenal of the Minimal Model Program.
As a consequence, we can examine rationally connected varieties (fibres) with the help of Fano varieties (fibres).
For the later we can use boundedness results because of yet another famous theorem by C. Birkar (\cite{Bi16}). (This theorem was previously known as the BAB Conjecture).
\begin{thm}
\label{MMP}
Let $X$ and $Z$ be smooth projective complex varieties such that $\dim Z<\dim X$. Let $\phi:X\to Z$ be a dominant morphism between them with rationally connected general fibres.
Let $G$ be a finite group which acts by regular automorphisms on $X$ and $Z$ in such a way that $\phi$ is $G$-equivariant.
We can run a $G$-equivariant Minimal Model Program (MMP)
on $X$ relative to $Z$ which results in a Mori fibre space. In particular, the Minimal Model Program gives a $G$-equivariant commutative diagram
\[
\xymatrix{
X\ar@ {-->} [r]^{\cong} \ar[rd]^{\phi} & W \ar[r] \ar[d] & Y \ar[ld]\\
& Z
}
\]
where $W$ is $G$-equivariantly birational to $X$, $\dim Y< \dim X$ and the generic fibre of the morphism between $W$ and $Y$ is a Fano variety with (at worst) terminal singularities.
\end{thm}
\begin{proof}
By Corollary 1.3.3 of \cite{BCHM10}, we can run a relative MMP on $\phi:X\to Z$ (which results in a Mori fibre space) if the canonical divisor of $X$ is not $\phi$-pseudo-effective. It can be done equivariantly if we have
finite group actions. (See Section 2.2 in \cite{KM98} and Section 4 of \cite{PS14} for further discussions on the topic.) So, it remains to show that the canonical divisor of $X$ is not $\phi$-pseudo-effective.\\
By generic smoothness, a general fibre of $\phi$ is a smooth rationally connected projective complex variety.
Therefore if $x$ is a general closed point of a general fibre $F$, then there exists a free rational curve $C_x$ running through $x$, lying entirely in the fibre $F$ (Theorem 1.9 of Chapter 4 in \cite{Ko96}).
Since $C_x$ is a free rational curve, $C_x.K_X\leqq-2$. Since the inequality holds for every general closed point of every general fibre, $K_X$ cannot be $\phi$-pseudo-effective.
\end{proof}
The lemmas and the theorems above open the door for us to use induction on the relative dimension of the MRC fibration while proving Theorem \ref{main}. So we only need to deal with Fano varieties of bounded dimensions.
\begin{prop}
\label{Fano}
Let $e$ be a natural number. There exists a constant $n=n(e)\in\mathbb{N}$, only depending on $e$, with the following property. If
\begin{itemize}
\item $K$ is a field of characteristic zero,
\item $F$ is a Fano variety over $K$ of dimension at most $e$, with terminal singularities,
\item $G$ is a finite group which acts faithfully on $F$ by regular automorphisms of the $\mathbb{Q}$-scheme $F$,
and acts on $\Spec K$ by regular automorphisms of the $\mathbb{Q}$-scheme $\Spec K$,
in such a way that the structure morphism $F\to\Spec K$ is $G$-equivariant,
\end{itemize}
then $G$ can be embedded into the semilinear group $\KL (n, K)\cong \GL(n, K)\rtimes \Aut K$
in such a way that $G\hookrightarrow \KL (n,K)\twoheadrightarrow\Aut K$ corresponds to the $G$-action on $\Spec K$.
\end{prop}
\begin{proof}
Fix $K$, $F$ and $G$ with the properties described by the proposition. There exists a finitely generated field extension $L_0|\mathbb{Q}$ and a Fano variety $F_0$ over $L_0$ such that $F\cong F_0\times_{L_0}\Spec K$.
Consider an embedding of fields $L_0\hookrightarrow\mathbb{C}$, and let $F_1\cong F_0\times_{L_0}\Spec \mathbb{C}$.
Since complex Fano varieties with terminal singularities of bounded dimension form a bounded family (Theorem 1.1 in \cite{Bi16}), there exist constants $P=P(e),M=M(e)\in\mathbb{N}$, only depending on $e$,
such that the $P$-th power of the anticanonical divisor embeds $F_1$ into the $M_1$-dimensional complex projective space, where $M_1\leqq M$.
Since the $P$-th power of the anticanonical divisor is defined over any field, this embedding is defined over any field, in particular over $K$.
So we have a closed embedding of the form $F\hookrightarrow \mathbb{P}_K^{M_1}\cong\mathbb{P}(\H0 (F,-K_F^P)^*)$.\\
By the functorial property of a (fixed) power of the anticanonical divisor, an equivariant $G$-action is induced on the commutative diagram below.
\[
\xymatrix{
F\ar@{^{(}->}[r] \ar[d] & \mathbb{P}(\H0 (F,-K_F^P)^*) \ar[ld] \\
\Spec K
}
\]
Since $F\hookrightarrow\mathbb{P}(\H0 (F,-K_F^P)^*)$ is a closed embedding, the semilinear action of $G$ on the vector space $\H0 (F,-K_F^P)$ is faithful.
Hence $G$ embeds into $\KL(\H0 (F,-K_F^P))$. Clearly $G\to \Aut K$ corresponds to the $G$-action on $\Spec K$.
As $\dim\H0 (F,-K_F^P)\leqq M(e)+1$, we have finished the proof.
\end{proof}
\subsection{Bound on the number of generating elements of finite subgroups of the birational automorphism groups}
Now we turn our attention on finding bounds on the number of generating elements of finite subgroups of the birational automorphism group of varieties.
It will be important for us when we investigate commutator relations (Lemma \ref{DN}), where it will be crucial to have a bound on the number of elements of a generating set of the group.\\
The next theorem and its proof are essentially due to Y. Prokhorov and C. Shramov. (We use the word essentially as they only considered the case of finite Abelian subgroups (Remark 6.9 of \cite{PS14}).)
It is also important to note that the proof of Remark 6.9 of \cite{PS14} uses the result of C. Birkar about the boundedness of Fano varieties (Theorem 1.1 in \cite{Bi16}).
\begin{thm}
\label{bfsg}
Let $X$ be a variety over a field of characteristic zero. There exists a constant $m=m(X)\in\mathbb{Z}^+$, only depending on the birational class of $X$, such that
if $G\leqq\Bir(X)$ is an arbitrary finite subgroup of the birational automorphism group, then $G$ can be generated by $m$ elements.
\end{thm}
\begin{proof}
First we show the theorem in the special cases when $X$ is either non-uniruled or rationally connected.
By Remark 6.9 of \cite{PS14} and Theorem 1.1 of \cite{Bi16}, there exists a constant $m=m(X)\in\mathbb{Z}^+$, only depending on the birational class of $X$, such that
if $A\leqq\Bir(X)$ is an arbitrary finite Abelian subgroup of the birational automorphism group, then $A$ can be generated by $m$ elements. Since $\Bir(X)$ is Jordan when $X$ is non-uniruled or rationally connected (Theorem \ref{nu}),
the result on the finite Abelian groups implies the claim of the theorem in both of these special cases.\\
Now let $X$ be arbitrary. Arguing as in Remark \ref{C} we can assume that $X$ is a complex variety.
Consider the MRC fibration $\phi:X\dashrightarrow Z$.
By Lemma \ref{reg} we can assume that both $X$ and $Z$ are smooth projective varieties, and $G$ acts on them by regular automorphisms.
Let $\rho$ be the generic point of $Z$, and let $X_\rho$ be the generic fibre of $\phi$. $X_\rho$ is a rationally connected variety over the function field $K(Z)$.\\
Let $G_\rho\leqq G$ be the maximal subgroup of $G$ acting fibrewise. $G_\rho$ has a natural faithful action on $X_\rho$, while $G/G_\rho=G_Z$ has a natural faithful action on $Z$.
This gives a short exact sequence of groups
\[1\to G_\rho\to G\to G_Z\to 1.\]
By the rationally connected case there exists a constant $m_1(X_{\rho})$, only depending on the birational class of $X_{\rho}$, such that $G_\rho$ can be generated by $m_1(X_{\rho})$ elements.
By the non-uniruled case there exists a constant $m_2(Z)$, only depending on the birational class of $Z$, such that $G_Z$ can be generated by $m_2(Z)$ elements.
So $G$ can be generated by $m(X_{\rho}, Z)=m_1(X_{\rho})+m_2(Z)$ elements. Since $m(X_{\rho},Z)$ only depends on the birational classes of $X_{\rho}$ and $Z$,
and both of the birational classes of $X_{\rho}$ and $Z$ only depend on the birational class of $X$, this finishes the proof.
\end{proof}
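The additivity of the generator bounds used in the last step is the following elementary observation about short exact sequences: if $G_\rho=\langle x_1,\dots,x_{m_1}\rangle$ and $G_Z=\langle \bar{y}_1,\dots,\bar{y}_{m_2}\rangle$, and $y_i\in G$ are arbitrary lifts of the $\bar{y}_i$, then
\[
G=\langle x_1,\dots,x_{m_1},y_1,\dots,y_{m_2}\rangle,
\]
since for any $g\in G$ there is a word $w$ in the $y_i$ with the same image in $G_Z$, whence $gw^{-1}\in G_\rho$ is a word in the $x_j$.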
In case of rationally connected varieties we will use a slightly stronger version of the theorem. To prove it, we need a theorem about fixed points of rationally connected varieties.
It is due to Yu. Prokhorov and C. Shramov (Theorem 4.2 of \cite{PS14}).
\begin{thm}
\label{afp}
Let $e$ be a natural number. There exists a constant $R=R(e)\in\mathbb{Z}^+$, only depending on $e$, with the following property.
If $X$ is a rationally connected complex projective variety of dimension at most $e$,
and $G\leqq\Aut(X)$ is an arbitrary finite subgroup of its automorphism group,
then there exists a subgroup $H\leqq G\leqq \Aut(X)$ such that $H$ has a fixed point in $X$, and the index of $H$ in $G$ is bounded by $R$.
\end{thm}
\begin{thm}
\label{bgrc}
Let $e$ be a natural number. There exists a constant $m=m(e)\in\mathbb{Z}^+$, only depending on $e$, with the following property.
If $K$ is an arbitrary field of characteristic zero, $X$ is a rationally connected variety over $K$ of dimension at most $e$,
and $G\leqq\Bir(X)$ is an arbitrary finite subgroup of the birational automorphism group,
then $G$ can be generated by $m$ elements.
\end{thm}
\begin{proof}
Fix $K$, $X$ and $G$ with the properties described by the theorem.
Arguing as in the case of Remark \ref{C}, we can assume that $K$ is the field of the complex numbers.\\
Using Lemma \ref{reg}, we can assume that $X$ is smooth and projective and $G$ is a finite subgroup of the biregular automorphism group $\Aut(X)$.\\
By Theorem \ref {afp}, we can assume that $G$ has a fixed point in $X$. Denote it by $P$.\\
By Lemma 4 of \cite{Po14}, $G$ acts faithfully on the tangent space of the fixed point $P$. So $G$ can be embedded into $\GL(\T_PX)$, whence $G$ can be embedded into $\GL(e,\mathbb{C})$.
Therefore the claim of the theorem follows from Lemma \ref{bg}. This finishes the proof.
\end{proof}
\section{Calculations in the general semilinear group}
\label{gp}
This section contains the group theoretic ingredient of the proof of the main theorem.
\begin{thm}
\label{groupmain}
Let $c,n$ and $m$ be positive integers. Let $F$ be the family of those finite groups $G$ which have the following properties.
\begin{itemize}
\item
There exists a field $K$ of characteristic zero containing all roots of unity such that $G$ is a subgroup of the semilinear group $\KL(n,K)\cong \GL(n,K)\rtimes \Aut K$.
\item
Every subgroup of $G$ can be generated by $m$ elements.
\item
The image of the composite group homomorphism $G\hookrightarrow \KL(n,K)\twoheadrightarrow\Aut K$, denoted by $\Gamma$, is nilpotent of class at most $c$ and fixes all roots of unity.
\end{itemize}
There exists a constant $C=C(c,n,m)\in\mathbb{Z}^+$, only depending on $c,n$ and $m$, such that every finite group $G$ belonging to $F$ contains a nilpotent subgroup $H\leqq G$ with nilpotency class at most $(c+1)$ and with index at most $C$.
\end{thm}
First, we recall a slightly strengthened version of Jordan's theorem.
\begin{thm}
\label{Jor}
Let $n$ be a positive integer. There exists a constant $J=J(n)\in\mathbb{Z}^+$, only depending on $n$,
such that if a finite group $G$ is a subgroup of a general linear group $\GL(n, K)$, where $K$ is a field of characteristic zero, then $G$ contains a characteristic Abelian subgroup $A\leqq G$ of index at most $J$.
\end{thm}
\begin{rem}
The only claim of the above theorem which does not follow immediately from Theorem 2.3 in \cite{Br11} is that we require the Abelian subgroup of bounded index $A\leqq G$ to be characteristic (i.e. invariant under all automorphisms of $G$)
instead of being normal (i.e. invariant under the inner automorphisms of $G$).
In the following we will prove some lemmas which help us to deduce the above variant of the theorem from the one which can be found in \cite{Br11}.
\end{rem}
\begin{lem}
\label{bg}
Let $n$ be a positive integer. There exists a constant $r=r(n)\in\mathbb{Z}^+$, only depending on $n$,
such that if a finite group $G$ is a subgroup of a general linear group $\GL(n, K)$, where $K$ is a field of characteristic zero, then $G$ can be generated by $r$ elements.
\end{lem}
\begin{proof}
It is enough to prove the lemma when $K$ is algebraically closed, so we can assume it.
By Theorem 2.3 in \cite{Br11}, $G$ contains a diagonalizable subgroup of bounded index.
Since finite diagonal groups of $\GL(n,K)$ can be generated by $n$ elements, the lemma follows.
\end{proof}
\begin{lem}
\label{ind}
Let $J$ and $r$ be positive integers. There exists a constant $L=L(J,r)\in\mathbb{N}$, only depending on $r$ and $J$, such that
if $G$ is a finite group which can be generated by $r$ elements, then $G$ has at most $L$ many subgroups of index $J$.
\end{lem}
\begin{proof}
Fix an arbitrary finite group $G$ which can be generated by $r$ elements. We can construct an injective map of sets
from the set of index $J$ subgroups of $G$ to the set of group homomorphisms from $G$ to the symmetric group of degree $J$.
Since $G$ can be generated by $r$ elements, the latter set has boundedly many elements, hence the former set has boundedly many elements as well. So we are only left with the task of constructing such an injective map.\\
Let $S$ be a set with $J$ elements. We can identify the symmetric group of degree $J$, denoted by $\Sym_J$, with the symmetry group of the set $S$. Fix an arbitrary element $x\in S$.
For every index $J$ subgroup $K\leqq G$, fix a bijection $\mu_K$ between the set of the left cosets of $K$ and the set $S$, subject to the following condition: $K$ is mapped to the fixed element $x$, i.e. $\mu_K(K)=x$.
Let $H\leqq G$ be an arbitrary subgroup of index $J$. $G$ acts on the set of the left cosets of $H$ by left multiplication. Using the bijection $\mu_H$, this induces a group homomorphism $\phi_H: G\to \Sym_J$ .
The constructed assignment $H\mapsto\phi_H$ is injective, as the stabilizer subgroup of $x$ in the image group $\Imag\phi_H$ uniquely determines $H$.
\end{proof}
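The counting argument can be illustrated by a brute-force computation on a tiny example (this is only an illustration, not part of the proof): for $G=S_3$, which is generated by $r=2$ elements, the number of index-$2$ subgroups is indeed bounded by the number $(J!)^r=4$ of maps from a generating set to $\Sym_2$.

```python
from itertools import combinations, permutations

# Model S_3 as the permutations of {0,1,2}; compose(p, q) = p after q.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))
identity = (0, 1, 2)

# A finite subset containing the identity and closed under composition
# is a subgroup.
def is_subgroup(subset):
    s = set(subset)
    return identity in s and all(compose(a, b) in s for a in s for b in s)

subgroups = [set(c) for size in range(1, 7)
             for c in combinations(G, size) if is_subgroup(c)]

# S_3 has 6 subgroups; exactly one (A_3) has index J = 2.
index_2 = [H for H in subgroups if len(G) // len(H) == 2]
print(len(subgroups), len(index_2))  # 6 1

# Bound from the lemma: at most (J!)^r = (2!)^2 = 4 index-2 subgroups.
assert len(index_2) <= 4
```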
\begin{proof}[Proof of Theorem \ref{Jor}]
Let $K$ be an arbitrary field of characteristic zero, and let $G$ be an arbitrary finite subgroup of $\GL(n,K)$.
By Theorem 2.3 in \cite{Br11} $G$ contains an Abelian subgroup $A\leqq G$ of index bounded by $J_0=J_0(n)$. Consider the set $S$ of the smallest index Abelian subgroups of $G$.
By Lemma \ref{bg} and Lemma \ref{ind} there exists a constant $L=L(n)$, only depending on $n$, such that $S$ has at most $L$ many elements.
Taking the intersection of the subgroups contained in $S$ gives a characteristic Abelian subgroup of index at most $J_0^L$.
\end{proof}
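The index bound in the last step rests on the elementary inequality that an intersection of finitely many subgroups has index at most the product of their indices:
\[
[G:H_1\cap\dots\cap H_L]\;\leqq\;\prod_{i=1}^{L}[G:H_i],
\]
since the coset $g(H_1\cap\dots\cap H_L)$ is determined by the tuple $(gH_1,\dots,gH_L)$. The intersection is characteristic because every automorphism of $G$ maps smallest index Abelian subgroups to smallest index Abelian subgroups, hence permutes the set $S$ and fixes the intersection of its elements.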
Next we prove a lemma about nilpotent groups.
\begin{lem}
\label{DN}
Let $c,J$ and $m$ be positive integers. There exists a constant $C=C(c,J,m)\in \mathbb{N}$, only depending on $c,J$ and $m$, such that if
\begin{itemize}
\item $G$ is a nilpotent group of class at most $(c+1)$,
\item $G$ can be generated by $m$ elements,
\item the cardinality of $\gamma_{c}(G)$ is at most $J$,
\end{itemize}
then $G$ has a nilpotent subgroup $H\leqq G$ of class at most $c$ whose index is bounded by $C$.
\end{lem}
\begin{proof}
Fix a generating system $g_1,...,g_m\in G$. Consider the group homomorphisms (Proposition \ref{ICmap})
\begin{gather*}
\varphi_{i_1,i_2,...,i_{c}}:G\to\gamma_{c}(G)\\
g\mapsto [[[...[[g_{i_1},g_{i_2}],g_{i_3}]...],g_{i_{c}}],g],
\end{gather*}
where $1\leqq i_1,i_2,...,i_{c}\leqq m$, i.e. for every ordered length $c$ sequence of the generators we assign a group homomorphism using the iterated commutators.
Let $H$ be the intersection of the kernels.
$$H=\bigcap\limits_{1\leqq i_1,i_2,...,i_{c} \leqq m} \Ker\varphi_{i_1,i_2,...,i_{c}}$$
Using the fact that the length $c$ iterated commutators give group homomorphisms in every variable if we fix the other variables (Proposition \ref{ICmap}),
one can show that all the length $c$ iterated commutators of $H$ vanish. Hence $H$ is nilpotent of class at most $c$ (Proposition \ref{IC}).\\
On the other hand $H$ is the intersection of $m^c$ many subgroups of index at most $|\gamma_{c}(G)|\leqq J$. Hence the index of $H$ is bounded in terms of $c,J$ and $m$. This finishes the proof.
\end{proof}
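Tracing the constants, one admissible (though far from optimal) choice is
\[
C(c,J,m)=J^{\,m^{c}},
\]
since $H$ is an intersection of $m^{c}$ subgroups of index at most $J$, and the index of an intersection of subgroups is at most the product of the individual indices.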
Now we are ready to prove the main theorem of the section.
\begin{proof}[Proof of Theorem \ref{groupmain}]
Let $K$ be an arbitrary field of characteristic zero containing all roots of unity, and let $G$ be an arbitrary finite subgroup of $\KL(n,K)$ belonging to $F$. Consider the short exact sequence of groups given by
$$1\to N\to G\to\Gamma\to 1$$
where $N=\GL(n,K)\cap G$ and $\Gamma=\Imag(G\to\Aut K)$. By Theorem \ref{Jor}, $N$ contains a characteristic Abelian subgroup $A\leqq N$ of index bounded by $J=J(n)\in\mathbb{Z}^+$.
Since $A$ is characteristic in $N$ and $N$ is normal in $G$, $A$ is a normal subgroup of $G$.\\
Consider the natural action of $G$ on the vector space $V=K^n$. Since $A$ is a finite Abelian subgroup of $\GL(V)$ and the ground field $K$ contains all roots of unity,
$A$ decomposes $V$ into common eigenspaces of its elements: $V=V_1\oplus V_2\oplus...\oplus V_r$ $(r\leqq n)$.
As $A$ is normal in $G$, $G$ respects this decomposition, i.e. $G$ acts on the set of linear subspaces $\{V_1,V_2,...,V_r\}$ by permutations.
The kernel of this group action, denoted by $G_1$, is a bounded index subgroup of $G$ (indeed $|G:G_1|\leqq r!\leqq n!$). Furthermore, $A$ is central in $G_1$, i.e. $A\leqq \Z(G_1)$.
To see this, notice that on an arbitrary fixed eigenspace $V_i$ $(1\leqq i\leqq r)$ $A$ acts by scalar matrices in such a way that all scalars are drawn from the set of the roots of unity. Since $G_1$ leaves $V_i$ invariant by definition and
$\Imag(G_1\to\Aut K)$ fixes all roots of unity, our claim follows. After replacing $G$ with the bounded index subgroup $G_1$, we can assume that $A\leqq \Z(G)$.\\
As $A$ is a central subgroup of $G$, we can consider the quotient group $\overline{G}=G/A$. By Proposition \ref{CE}, we only need to prove that $\overline{G}$ has a bounded index nilpotent subgroup of class at most $c$.
Our strategy is as follows: first we prove that $\overline{G}$ has a bounded index nilpotent subgroup of class at most $(c+1)$, then we apply Lemma \ref{DN}.\\
Let $\overline{N}=N/A$, and consider the short exact sequence of groups
$$1\to \overline{N}\to\overline{G}\to\Gamma\to 1.$$
The number of elements of $\overline{N}$ is bounded by $J(n)$, by the definition of $A$, and $\Gamma$ is nilpotent of class at most $c$, by the definition of $G$.\\
$\overline{G}$ acts on $\overline{N}$ by conjugation, and the kernel of this action is the centralizer group $\Cent_{\overline{G}}(\overline{N})=\{g\in\overline{G}|\; ng=gn\;\forall n\in\overline{N}\}$.
Therefore $\overline{G}/\Cent_{\overline{G}}(\overline{N})$ embeds into the automorphism group of $\overline{N}$ which has cardinality at most $J!$. Hence $\Cent_{\overline{G}}(\overline{N})$ has bounded index in $\overline{G}$.
Hence, after replacing $\overline{G}$ with $\Cent_{\overline{G}}(\overline{N})$, $\overline{N}$ with $\overline{N}\cap \Cent_{\overline{G}}(\overline{N})$ and $\Gamma$ with the image group $\Imag(\Cent_{\overline{G}}(\overline{N})\to \Gamma)$,
we can assume that $\overline{G}$ is a central extension of the Abelian group $\overline{N}$ by the nilpotent group $\Gamma$, whose nilpotency class is at most $c$.
Therefore we can assume that $\overline{G}$ is nilpotent of class at most $(c+1)$ (Proposition \ref{CE}).\\
Notice that $\gamma_c(\overline{G})$ maps to $\gamma_c(\Gamma)=1$, which implies that the former group is contained in $\overline{N}$. So $|\gamma_c(\overline{G})|\leqq|\overline{N}|\leqq J$.
Hence we are in the position to apply Lemma \ref{DN}, which finishes the proof.
\end{proof}
\begin{rem}
\label{NoB}
In the above proof we only used the assumption that $G$ can be generated by $m$ elements via Lemma \ref{DN}. So if we omit this condition from Theorem \ref{groupmain}, we can still prove that there exists a constant
$D=D(n)\in\mathbb{Z}^+$, only depending on $n$ (not even on $c$), such that if $G$ belongs to the corresponding family of groups, then $G$ contains a nilpotent subgroup $H\leqq G$ with nilpotency class at most $(c+2)$ and with index at most $D$.
\end{rem}
\section{Proof of the Main Theorem}
\label{PMT}
Using the techniques developed in the previous sections, we will prove our main theorem.
\begin{thm}
\label{AlmostMain}
Fix a non-uniruled complex variety $Z_0$. Let $F_{Z_0}$ be the collection of 5-tuples $(X, Z,\phi, G, e)$, where
\begin{itemize}
\item $X$ is a complex variety,
\item $Z$ is a complex variety, which is birational to $Z_0$,
\item $\phi: X\dashrightarrow Z$ is a dominant rational map for which there exist open subvarieties $X_1\subseteq X$ and $Z_1\subseteq Z$ such that
$\phi$ descends to a morphism $\phi_1:X_1\to Z_1$ between them with rationally connected fibres,
\item $G\leqq \Bir(X)$ is a finite group of the birational automorphism group of $X$, which also acts by birational automorphisms on $Z$ in such a way that $\phi$ is $G$-equivariant,
\item $e\in\mathbb{N}$ is the relative dimension $e=\dim X-\dim Z_0$.
\end{itemize}
Then the following claims hold.
\begin{itemize}
\item
There exist constants $\{m_{Z_0}(e)\in\mathbb{Z}^+|\,e\in\mathbb{N}\}$, only depending on the birational class of $Z_0$, such that if the 5-tuple $(X,Z,\phi, G, e)$ belongs to $F_{Z_0}$, then
$G$ can be generated by $m_{Z_0}(e)$ elements.
\item
There exist constants $\{J_{Z_0}(e)\in\mathbb{Z}^+|\,e\in\mathbb{N}\}$, only depending on the birational class of $Z_0$, such that if the 5-tuple $(X,Z,\phi, G, e)$ belongs to $F_{Z_0}$, then
$G$ has a nilpotent subgroup $H\leqq G$ of nilpotency class at most $(e+1)$ and index at most $J_{Z_0}(e)$.
\end{itemize}
\end{thm}
\begin{proof}
(Proof of the First Claim) Let $(X, Z,\phi, G, e)$ be an arbitrary 5-tuple belonging to $F_{Z_0}$. By Lemma \ref{reg} we can assume that both $X$ and $Z$ are smooth projective varieties, and $G$ acts on them by regular automorphisms.
Let $\rho$ be the generic point of $Z$, and let $X_\rho$ be the generic fibre of $\phi$. $X_\rho$ is a rationally connected variety of dimension $e$ over the function field $K(Z)$.\\
Let $G_\rho\leqq G$ be the maximal subgroup of $G$ acting fibrewise. $G_\rho$ has a natural faithful action on $X_\rho$, while $G/G_\rho=G_Z$ has a natural faithful action on $Z$.
This gives a short exact sequence of groups
\[1\to G_\rho\to G\to G_Z\to 1.\]
By Theorem \ref{bgrc} there exists a constant $m_1(e)$, only depending on $e$, such that $G_\rho$ can be generated by $m_1(e)$ elements.
By Theorem \ref{bfsg} there exists a constant $m_2(Z)$, only depending on the birational class of $Z$, such that $G_Z$ can be generated by $m_2(Z)$ elements.
So $G$ can be generated by $m_{Z_0}(e)=m_1(e)+m_2(Z)$ elements. Since $m_{Z_0}(e)$ only depends on $e$ and the birational class of $Z_0$, this finishes the proof of the first claim.\\
(Proof of the Second Claim) We will apply induction on $e$. If $e=0$, then $X$ and $Z_0$ are birational, hence $G\leqq \Bir(Z_0)$ and the claim of the theorem follows from Theorem \ref{nu}.
So we can assume that $e>0$ and the claim of the theorem holds if the relative dimension is strictly smaller than $e$.\\
Let $(X, Z,\phi, G, e)$ be a 5-tuple belonging to $F_{Z_0}$.
After regularizing $\phi$ in the sense of Lemma \ref{reg}, we may assume that $X$ and $Z$ are smooth projective varieties, $G$ acts on them by regular automorphisms
and $\phi$ is a $G$-equivariant (projective) morphism.\\
Hence by Theorem \ref{MMP}, we can run a relative $G$-equivariant MMP on $\phi: X\to Z$. It results in a $G$-equivariant commutative diagram
\[
\xymatrix{
X \ar@{-->}[r]^\cong \ar[rd]_{\phi} & W \ar[r]^{\varrho}\ar[d] & Y \ar[ld]^{\psi}\\
& Z
}
\]
where $\varrho:W\to Y$ is a Mori fibre space and $\psi: Y\to Z$ is a dominant morphism with rationally connected general fibres (as $\phi$ has).
Let $H$ be the image of $G\to\Aut_{\mathbb{C}}(Y)$, and let $f$ be the relative dimension $f=\dim Y-\dim Z$. The 5-tuple $(Y, Z,\psi, H, f)$ clearly belongs to $F_{Z_0}$.
Moreover, since $f<e$, we can use the inductive hypothesis. Let $H_1\leqq H$ be the nilpotent subgroup of nilpotency class at most $(f+1)$ and index at most $J_{Z_0}(f)$. After replacing $H$ with its bounded index subgroup $H_1$
(and $G$ with the preimage of $H_1$), we can assume that $H$ is nilpotent of class at most $e$.\\
Let $\eta\cong\Spec K(Y)$ be the generic point of $Y$, and let $W_\eta$ be the generic fibre of $\varrho$. Since $\varrho:W\to Y$ is a Mori fibre space, $W_\eta$ is a Fano variety over $K(Y)$ with (at worst) terminal singularities.
Furthermore, $G$ acts on the structure morphism $W_\eta\to\Spec K(Y)$ equivariantly by scheme automorphisms. Hence we can apply Proposition \ref{Fano}, and we can embed $G$ into $\KL(n,K(Y))\cong\GL(n,K(Y))\rtimes \Aut K(Y)$ where
$n=n(e)$ only depends on $e$ (since $\dim W_\eta\leqq e$).
Moreover, the image group $\Gamma=\Imag(G\hookrightarrow\KL(n,K(Y))\twoheadrightarrow \Aut K(Y))$ corresponds to the $G$-action on $\Spec K(Y)$, therefore it corresponds to the $H$-action on $Y$.
Hence $\Gamma$ fixes all roots of unity, as $Y$ is a complex variety, and $\Gamma$ is nilpotent of class at most $e$, as $H$ is.
Furthermore, by the first claim of the theorem, every subgroup of $G$ can be generated by $m=m_{Z_0}(e)$ elements (where $m$ only depends on $e$ and the birational class of $Z_0$).
So we are in the position to apply Theorem \ref{groupmain} to the group $G$, which finishes the proof.
\end{proof}
\begin{rem}
\label{NoB2}
In accordance with Remark \ref{NoB}, we need to consider bounds on the number of generators of finite subgroups of the birational automorphism group to give a more accurate bound on the nilpotency class.
\end{rem}
To close our article, we prove our main theorem.
\begin{proof}[Proof of Theorem \ref{main}]
Let $X$ be a $d$-dimensional complex variety. We can assume that $X$ is smooth and projective. We can also assume that $X$ is non-uniruled by Theorem \ref{nu}.
Let $G\leqq\Bir(X)$ be an arbitrary finite subgroup of the birational automorphism group of $X$. Let $\phi: X\dashrightarrow Z$ be the MRC fibration, and let $e=\dim X-\dim Z$ be the relative dimension.
By the functoriality of the MRC fibration (Theorem $5.5$ of Chapter $4$ in \cite{Ko96}), $G$ acts on the base $Z$ by birational automorphisms making the rational map $\phi$ $G$-equivariant.
Hence the 5-tuple $(X,Z,\phi,G, e)$ belongs to the collection $F_Z$ defined in the previous theorem. Therefore $G$ has a nilpotent subgroup of class at most $(e+1)$ and index at most $J_Z(e)$.
Since $e<d$ (as $X$ is non-uniruled), and since both the relative dimension $e$ and the birational class of the base $Z$ depend only on the birational class of $X$, the theorem follows.
\end{proof}
\label{sec:algdesc}
Before sketching the algorithm we will first prove Lemma~\ref{lem:robustmain} and explain some of the parameter choices.
Refer to Table \ref{tab:parameters} for a list of the numerical quantities that were introduced.
\begin{proof}[Proof of Lemma~\ref{lem:robustmain}]
Set $\tau$ small enough so that $\kappa_0$, $d\kappa_1 + K^3\sigma$, $d\kappa_2+K^2\sigma^2$ and $d\kappa_3+K\sigma^3$ are all less than $\sqrt{\epsilon}/4$, and so that $\tau < \epsilon/2$. We then set $\tau_1 = \stheta(\tau)$ from Lemma~\ref{lem:decrease_regularizer} and $\tau_2 = \Theta(\sigma^{15/4})$ from Lemma~\ref{lem:add_1_direction}.
Now assume that conditions (1), (2), and (3) from the statement of the Lemma fail to hold.
We seek to show that $f(\tup) < \epsilon$.
By Lemma \ref{lem:decrease_regularizer} and our choice of $\tau_1$, we have that $R(\tup) < \tau \leq \epsilon/2$.
By Lemma \ref{lem:removing_directions}, we have that $\|\bm A_3\|_F, \|\bm B_3\|_F, \|\bm C_3\|_F$ are all less than $\gamma$.
By Lemma \ref{lem:improve_S}, we have that $\|\ten T_{1,1,1} - \point\|_F \leq \kappa_0$.
By Lemma \ref{lem:add_1_direction}, we have that $\|\ten T_{i,j,k}\|_2 < \kappa_1$ for $(i,j,k) \in \{(2,1,1), (1,2,1), (1,1,2)\}$.
By Lemma \ref{lem:add_2_directions}, we have that $\|\ten T_{i,j,k}\|_2 < \kappa_2$ for $(i,j,k) \in \{(2,2,1), (2,1,2), (1,2,2)\}$.
By Lemma \ref{lem:add_3_directions}, we have that $\|\ten T_{2,2,2}\|_2 < \kappa_3$.
Combining all of these bounds, we have
\begin{align*}
f(\tup) &= R(\tup) + \sum_{i,j,k} \|\ten S_{i,j,k}(\bm A_i, \bm B_j, \bm C_k) - \ten T_{i,j,k}\|_F^2\\
&< \epsilon/2 + \kappa_0^2 + 3(K^3\sigma + d\kappa_1)^2 + 3(K^2\sigma^2 + d\kappa_2)^2 + (K\sigma^3+d\kappa_3)^2\\
&< \epsilon/2 + \epsilon/2,
\end{align*}
as desired; the last inequality holds because each of the eight squared terms is less than $\epsilon/16$.
\end{proof}
We now sketch our algorithm in Algorithm~\ref{alg:local}. The algorithm essentially follows the main Lemma~\ref{lem:robustmain}. If the current point has a large gradient or a negative eigenvalue in the Hessian, we can just use any standard local search algorithm. When the point is a higher order saddle point, we use Algorithm~\ref{alg:add_direction} as in Lemma~\ref{lem:add_2_directions} or Lemma~\ref{lem:add_3_directions} to generate directions of improvement.
\begin{algorithm}
\begin{algorithmic}
\REQUIRE tensor $\ten T$, error threshold $\epsilon$
\STATE Choose thresholds $\tau_1,\tau_2$ according to Lemma~\ref{lem:robustmain}.
\REPEAT
\STATE Run a local search algorithm to find a $(\tau_1,\tau_2)$-second order stationary point.
\STATE Call Algorithm~\ref{alg:add_direction} for $i,j,k=1,2$ to generate improvement directions, repeating $O(\log 1/\epsilon)$ times.
\IF{any of the generated directions improve the function value by at least $\somega(\sigma^{15/8})$}
\STATE Move in the direction.
\STATE Break.
\ENDIF
\UNTIL{no direction of improvement can be found}
\end{algorithmic}
\caption{Local search algorithm for Tucker decomposition\label{alg:local}}
\end{algorithm}
Now we are ready to prove Theorem~\ref{thm:robust}.
\begin{proof}[Proof of Theorem~\ref{thm:robust}]
By Lemma~\ref{lem:robustmain}, for any $(\tau_1,\tau_2)$-second order stationary point with $f \ge \epsilon$, Lemma~\ref{lem:add_2_directions} and Lemma~\ref{lem:add_3_directions} are able to generate a direction of improvement that improves the function value by at least $\somega(\sigma^{15/8})$ with constant probability. Since the initial point has constant loss, if a direction of improvement is found in more than $\so(1/\sigma^{15/8})$ iterations, then the function value must already be smaller than $\epsilon$.
After the repetitions, the probability that we find a direction of improvement (whenever one exists) is at least $1- o(\sigma)$. By a union bound, with high probability we can find a direction of improvement in all of the iterations.
\end{proof}
\section{Escaping from High Order Saddle Points for Tucker Decomposition}
\label{sec:inexact}
As we discussed before, since our objective $f = L+\lambda R$ as in \eqref{eqn:obj} may have high order saddle points, standard local search algorithms may not be able to find a local minimum. However, in this section we show that the high order saddle points of $f$ are {\em benign}: there is a polynomial time local search algorithm that can find an approximate local and global minimum of $f$.
We will first review the guarantees of standard local search algorithms, and then describe how to escape from high order saddle points.
\subsection{Local search algorithms for second order stationary points}
For a general function $f(\bm x)$ whose first two derivatives exist, we say a point $\bm x$ is a $(\tau_1,\tau_2)$-second order stationary point if
$$
\|\nabla f(\bm x)\| \le \tau_1, \quad \lambda_{min}(\nabla^2 f(\bm x)) \ge -\tau_2.
$$
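As a concrete (purely illustrative) example, the condition is straightforward to test numerically once the gradient and Hessian of a function are available; below we check it for a toy quadratic with a strict saddle at the origin.

```python
import numpy as np

def is_second_order_stationary(grad, hess, tau1, tau2):
    """(tau1, tau2)-second order stationarity: small gradient norm and
    no Hessian eigenvalue below -tau2."""
    return (np.linalg.norm(grad) <= tau1
            and np.linalg.eigvalsh(hess).min() >= -tau2)

# f(x) = (x0^2 - 0.5 * x1^2) / 2 has a strict saddle at the origin:
# zero gradient, but a Hessian eigenvalue of -0.5.
hess = np.diag([1.0, -0.5])
x = np.zeros(2)
grad = hess @ x

print(is_second_order_stationary(grad, hess, tau1=1e-6, tau2=1.0))  # True
print(is_second_order_stationary(grad, hess, tau1=1e-6, tau2=0.1))  # False
```

With the loose threshold $\tau_2=1$ the saddle passes the test, while $\tau_2=0.1$ exposes the negative eigenvalue; this is why the thresholds must be chosen with care.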
If the function $f(\bm x)$ satisfies the gradient and Hessian Lipschitz conditions
\begin{align*}
\forall \bm x, \bm y & \quad \|\nabla f(\bm x) - \nabla f(\bm y)\| \le \rho_1 \|\bm x-\bm y\|_2,\\
\forall \bm x, \bm y & \quad \|\nabla^2 f(\bm x) - \nabla^2 f(\bm y )\| \le \rho_2 \|\bm x-\bm y\|_2,
\end{align*}
there are many local search algorithms that can find $(\tau_1,\tau_2)$-second order stationary points in polynomial time. This includes traditional second order algorithms such as cubic regularization~\citep{nesterov2006cubic}, and more recently first order algorithms such as perturbed gradient descent~\citep{jin2017escape}.
Of course, these guarantees are not enough for our objective $f$, as it has higher order saddle points. The main theorem in this section shows that there is an efficient local search algorithm that can optimize $f$.
\begin{theorem}\label{thm:robust}
Let $\lambda = 1/16r^4$, assume without loss of generality that $\|\ten T\|_F = 1$ and that the initial point satisfies $f = L+\lambda R= O(1)$\footnote{This can be achieved by initializing at 0, or at any point with norm $O(1)$.}. Then there is a local search algorithm that in $\mbox{poly}(d, r, 1/\epsilon)$ time finds a point $(\tup)$ such that $f(\tup) \le \epsilon$.
\end{theorem}
The algorithm that we will design is just a proof of concept: although its running time is polynomial, it is far from practical. We have not attempted to improve the dependencies on $d, r, 1/\epsilon$. Local search algorithms seem to perform much better for Tucker decomposition in practice, and understanding that is an interesting open problem.
To prove Theorem~\ref{thm:robust}, we will first show that sublevel sets of $f$ are all bounded (Section~\ref{subsec:bounded}).
This allows us to bound the gradient and Hessian Lipschitz constants $\rho_1$ and $\rho_2$, so we can use any of the previous local search algorithm to find a $(\tau_1,\tau_2)$-second order stationary point.
Next, we follow the steps of Theorem~\ref{thm:exact}, but we do it much more carefully to show that as long as the objective is larger than $\epsilon$, then either the point has a large gradient or a negative eigenvalue in Hessian, or there is a way to construct a direction of improvement. This is captured in our main Lemma~\ref{lem:robustmain}.
Finally we give a sketch of the algorithm and show that these local improvements are enough to guarantee the convergence in Section~\ref{sec:algdesc}.
Throughout the section, we use $\so(\cdot)$, $\somega(\cdot)$ and $\stheta(\cdot)$ to hide polynomial factors of $r$ and $d$.
We will introduce several numerical quantities in the remainder of this section; we list the most important ones in Table \ref{tab:parameters}.
\begin{table}
\centering
\begin{tabular}{ccl}
Symbol & Definition & Note\\
\hline
$\lambda$ & $\frac{1}{16r^4}$ & weight for regularizer\\
$K$ & $O^*(1)$ & universal bound for norms of $\tup,\ten T$\\
$\tau$ & $< 1$ & bound on $R (\tup)$\\
$\gamma$ & $\stheta(\tau^{1/48})$ & bound on the norm of $\bm A_3, \bm B_3, \bm C_3$, introduced in Lemma \ref{lem:removing_directions}\\
$\sigma$ & $\sqrt{\gamma}$ & singular value threshold for $\bm A_1, \bm B_1, \bm C_1$\\
$\kappa_0$ & $\sqrt{\gamma}$ & max error in $\ten T_{1,1,1}$, introduced in Lemma \ref{lem:improve_S}\\
$\kappa_1$ & $2K\sigma^{3/4}$ & max error in $\ten T_{2,1,1}$, introduced in Lemma \ref{lem:add_1_direction}\\
$\kappa_2$ & $2K\sigma^{1/8}$ & max error in $\ten T_{2,2,1}$, introduced in Lemma \ref{lem:add_2_directions}\\
$\kappa_3$ & $2K\sigma^{1/2}$ & max error in $\ten T_{2,2,2}$, introduced in Lemma \ref{lem:add_3_directions}
\end{tabular}
\caption{Notation and definitions used in Section \ref{sec:inexact}\label{tab:parameters}}
\end{table}
\subsection{Bounded Sublevel Sets}
\label{subsec:bounded}
We first establish the boundedness of sublevel sets of the objective function.
Our local search algorithm will guarantee the function value decreases in every iteration, so the trajectory of the algorithm will remain in a sublevel set. As a result, we know that the parameters remain bounded in norm at each step by some constant, say $K$.
\begin{lemma}
\label{lem:bounded_sublevel}
For all $\Gamma \geq 0$, every point $(\tup)$ with $f(\tup) \le \Gamma$ satisfies $\|\ten S\|_F, \|\bm A\|_F, \|\bm B\|_F, \|\bm C\|_F \le K$, where $K = \so((\Gamma+1)^{1/8})$.
\end{lemma}
To prove this lemma, we will first state some tools that we need.
\begin{lemma}
\label{lemma:submult}
For any parameter tuple $(\tup)$, we have
\[
\|\point\|_F \leq \|\ten S\|_F\|\bm A\|_2\|\bm B\|_2\|\bm C\|_2.
\]
\end{lemma}
\begin{proof}
This follows from the fact that $\|\cdot\|_F$ is invariant to matricization, and the fact that $\|\bm P \bm Q\|_F \le \|\bm P\|_2 \|\bm Q\|_F$.
Observe that
\[
\|\point\|_F = \|\bm A^\top \ten S(\bm I,\bm B,\bm C)_{(1)}\|_F \leq \|\bm A\|_2\|\ten S(\bm I,\bm B,\bm C)\|_F,
\] and then repeat the argument for the other modes.
\end{proof}
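As a quick numerical sanity check (not part of the proof), the submultiplicativity bound of Lemma~\ref{lemma:submult} can be verified on random instances. The sketch below assumes the Tucker-product convention $\ten S(\bm A,\bm B,\bm C)_{abc} = \sum_{ijk}\ten S_{ijk}\bm A_{ia}\bm B_{jb}\bm C_{kc}$, which matches the matricized form used in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def multilinear(S, A, B, C):
    # S(A, B, C)_{abc} = sum_{ijk} S_{ijk} A_{ia} B_{jb} C_{kc}
    return np.einsum('ijk,ia,jb,kc->abc', S, A, B, C)

r, d = 3, 5
for _ in range(200):
    S = rng.standard_normal((r, r, r))
    A, B, C = (rng.standard_normal((r, d)) for _ in range(3))
    lhs = np.linalg.norm(multilinear(S, A, B, C))       # Frobenius norm
    rhs = (np.linalg.norm(S) * np.linalg.norm(A, 2)
           * np.linalg.norm(B, 2) * np.linalg.norm(C, 2))  # spectral norms
    assert lhs <= rhs + 1e-9
print("ok")
```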
\begin{lemma}
\label{lem:norm_bound_S}
For any $\ten S \in \mathbb{R}^{r\times r \times r}$, it holds that
\[
\|\ten S(\ten S_{(1)},\ten S_{(2)},\ten S_{(3)})\|_F \geq \frac{1}{r^4}\|\ten S\|_F ^4
\]
\end{lemma}
\begin{proof}
Let $\bm u, \bm v, \bm w \in \mathbb{R}^{r}$ be unit vectors such that $\ten S(\bm u,\bm v,\bm w) = \|\ten S\|_2$.
Then
\begin{align*}
\ten S_{(3)}\text{vec}(\bm u\otimes \bm v)&= \ten S(\bm u,\bm v,\bm I) = \|\ten S\|_2\bm w,\\
\ten S_{(2)}\text{vec}(\bm u\otimes \bm w)&= \ten S(\bm u,\bm I,\bm w) = \|\ten S\|_2\bm v,\\
\ten S_{(1)}\text{vec}(\bm v\otimes \bm w)&= \ten S(\bm I,\bm v,\bm w) = \|\ten S\|_2\bm u.
\end{align*}
Then
\begin{align*}
\|\ten S(\ten S_{(1)},\ten S_{(2)},\ten S_{(3)})\|_2 &\geq
\ten S(\|\ten S\|_2\bm u, \|\ten S\|_2\bm v,\|\ten S\|_2\bm w)
= \|\ten S\|_2^3\ten S(\bm u,\bm v,\bm w)= \|\ten S\|_2^4
\end{align*}
The result then follows since $\|\cdot\|_F \geq \|\cdot\|_2$, together with the norm inequality $\|\ten S\|_F\leq r\|\ten S\|_2$.
\end{proof}
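Lemma~\ref{lem:norm_bound_S} can likewise be spot-checked numerically on random core tensors. This sketch assumes row-major mode-$i$ matricizations and the same multilinear convention as before; it is an illustration, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)

def unfold(S, mode):
    # Mode-`mode` matricization S_(mode), of shape r x r^2.
    return np.moveaxis(S, mode, 0).reshape(S.shape[mode], -1)

def multilinear(S, A, B, C):
    return np.einsum('ijk,ia,jb,kc->abc', S, A, B, C)

r = 3
for _ in range(100):
    S = rng.standard_normal((r, r, r))
    lhs = np.linalg.norm(multilinear(S, unfold(S, 0), unfold(S, 1), unfold(S, 2)))
    # Lemma: ||S(S_(1), S_(2), S_(3))||_F >= ||S||_F^4 / r^4
    assert lhs >= np.linalg.norm(S) ** 4 / r ** 4 - 1e-9
print("ok")
```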
Now we are ready to prove Lemma~\ref{lem:bounded_sublevel}:
\begin{proof}[Proof of Lemma~\ref{lem:bounded_sublevel}]
Assume that $\Gamma \geq f(\tup)$.
From $L$, we have
\begin{align*}
\sqrt{\Gamma} &\geq \|\point-\ten T\|_F\\
&\geq \|\point\|_F - \|\ten T\|_F,
\end{align*}
so that $\|\point\|_F \leq \sqrt{\Gamma} + \|\ten T\|_F$.
Next, define the following for $i = 1,2,3$: $d_i(\bm X,\ten S) = \bm X\bm X^\top - \ten S_{(i)}\ten S_{(i)}^\top$.
Note that $d_1(\bm A,\ten S), d_2(\bm B,\ten S), d_3(\bm C,\ten S)$ are each bounded above in Frobenius norm by $(\Gamma/\lambda)^{1/4} = \so(\Gamma^{1/4})$; below we write $\Gamma^{1/4}$ for this bound, absorbing the polynomial factor of $r$ into the final $\so(\cdot)$.
We have
\begin{align*}
\|\point\|_F^2 &= \langle \point,\point\rangle\\
&= \langle \ten S(\bm A \bm A^\top,\bm B\bm B^\top ,\bm C\bm C^\top ),\ten S\rangle\\
&= \langle \ten S(\ten S_{(1)}\ten S_{(1)}^\top,\ten S_{(2)}\ten S_{(2)}^\top,\ten S_{(3)}\ten S_{(3)}^\top), \ten S\rangle + g(\tup)\\
&= \|\ten S(\ten S_{(1)},\ten S_{(2)},\ten S_{(3)})\|_F^2 + g(\tup),
\end{align*}
where $g(\tup )$ is a sum of the seven remainder terms of the form
\begin{align}
\label{eq:r1}
&\langle \ten S(d_1(\bm A,\ten S), \ten S_{(2)}\ten S_{(2)}^\top,\ten S_{(3)}\ten S_{(3)}^\top),\ten S \rangle\\
\label{eq:r2}
&\langle \ten S(d_1(\bm A,\ten S), d_2(\bm B,\ten S), \ten S_{(3)}\ten S_{(3)}^\top),\ten S\rangle\\
\label{eq:r3}
&\langle \ten S(d_1(\bm A,\ten S),d_2(\bm B,\ten S),d_3(\bm C,\ten S)),\ten S\rangle
\end{align}
There are three terms of type (\ref{eq:r1}), and each can be bounded below using Cauchy-Schwarz and Lemma \ref{lemma:submult} as follows:
\[
\langle \ten S(d_1(\bm A,\ten S), \ten S_{(2)}\ten S_{(2)}^\top,\ten S_{(3)}\ten S_{(3)}^\top),\ten S \rangle \geq -\|\ten S\|_F^6\|d_1(\bm A,\ten S)\|_F \geq -\Gamma^{1/4}\|\ten S\|_F^6.
\]
Similarly, we have
\begin{align*}
\langle \ten S(d_1(\bm A,\ten S), d_2(\bm B,\ten S), \ten S_{(3)}\ten S_{(3)}^\top),\ten S\rangle &\geq -\Gamma^{1/2}\|\ten S\|_F^4,\\
\langle \ten S(d_1(\bm A,\ten S),d_2(\bm B,\ten S),d_3(\bm C,\ten S)),\ten S\rangle &\geq -\Gamma^{3/4}\|\ten S\|_F^2.
\end{align*}
Putting this together and applying Lemma \ref{lem:norm_bound_S}, we have that
\[
\frac{1}{r^8}\|\ten S\|_F^8 - 3\Gamma^{1/4}\|\ten S\|_F^6 - 3\Gamma^{1/2}\|\ten S\|_F^4 - \Gamma^{3/4}\|\ten S\|_F^2 \leq (\sqrt{\Gamma}+\|\ten T\|_F)^2,
\]
which means that $\|\ten S\|_F$ must be bounded by $\so((\Gamma+1)^{1/8})$.
From $R$, we have
\begin{align*}
\left( \frac{\Gamma}{\lambda}\right)^{1/4} + \|\ten S\|_F^2
\geq \|\bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top\|_F+ \|\ten S_{(1)}\ten S_{(1)}^\top\|_F \geq \|\bm A \bm A^\top\|_F,
\end{align*}
so $\|\bm A\|_F^2 = \operatorname{tr}(\bm A\bm A^\top) \leq \sqrt{r}\,\|\bm A\bm A^\top\|_F$, and hence $\|\bm A\|_F$ is bounded by $\so((\Gamma+1)^{1/8})$. We bound $\bm B$ and $\bm C$ similarly.
\end{proof}
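The expansion $\|\point\|_F^2 = \langle \ten S(\bm A\bm A^\top,\bm B\bm B^\top,\bm C\bm C^\top),\ten S\rangle$ used in the proof above is a purely algebraic identity. A minimal numerical check, under the same multilinear convention as before:

```python
import numpy as np

rng = np.random.default_rng(2)

def multilinear(S, A, B, C):
    return np.einsum('ijk,ia,jb,kc->abc', S, A, B, C)

r, d = 3, 5
S = rng.standard_normal((r, r, r))
A, B, C = (rng.standard_normal((r, d)) for _ in range(3))

lhs = np.linalg.norm(multilinear(S, A, B, C)) ** 2
# <S(AA^T, BB^T, CC^T), S>; the Gram matrices are r x r here.
rhs = np.sum(multilinear(S, A @ A.T, B @ B.T, C @ C.T) * S)
assert np.isclose(lhs, rhs)
print("ok")
```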
\subsection{Main step: making local improvements}
In order to prove Theorem~\ref{thm:robust}, we rely on the following main lemma:
\begin{lemma}\label{lem:robustmain}
In the same setting as Theorem~\ref{thm:robust}, there exist positive constants $q_1, q_2$ and $\tau_1 = \stheta(\epsilon^{q_1})$, $\tau_2 = \stheta(\epsilon^{q_2})$, such that for any point $(\ten S,\bm A,\bm B, \bm C)$ where $\epsilon < f(\ten S,\bm A,\bm B, \bm C) < O(1)$, one of the following is true:
\begin{enumerate}
\item $\|\nabla f(\tup)\| \ge \tau_1$,
\item $\lambda_{min}(\nabla^2 f(\tup)) \le -\tau_2$,
\item With constant probability, Algorithm \ref{alg:add_direction} constructs a direction of improvement that decreases the function value by $\mbox{poly}(\epsilon)$.
\end{enumerate}
\end{lemma}
Algorithm \ref{alg:add_direction} uses notation that we specify in the paragraphs below.
\begin{algorithm}
\begin{algorithmic}
\REQUIRE matrices $\bm A, \bm B, \bm C$, threshold $\sigma$, subspace indicator $(i,j,k) \in \{1,2\}^3$
\STATE Compute the subspaces $U_{1,i}, U_{2,j}, U_{3,k}, V_{1,i}, V_{2,j}, V_{3,k}$
\STATE Sample unit vectors $\bm a, \bm b, \bm c$ uniformly from $U_{1,i}, U_{2,j}, U_{3,k}$
\STATE {\bf if} $i=1$ {\bf then} $\bm u' = (\bm A_1^\top)^+\bm a$; {\bf else} Randomly sample nonzero $\bm u' \in V_{1,2}$
\STATE {\bf if} $j=1$ {\bf then} $\bm v' = (\bm B_1^\top)^+\bm b$; {\bf else} Randomly sample nonzero $\bm v' \in V_{2,2}$
\STATE {\bf if} $k=1$ {\bf then} $\bm w' = (\bm C_1^\top)^+\bm c$; {\bf else} Randomly sample nonzero $\bm w' \in V_{3,2}$
\STATE Return $\bm a, \bm b, \bm c, \bm u'/\|\bm u'\|_2, \bm v'/\|\bm v'\|_2, \bm w'/\|\bm w'\|_2$
\end{algorithmic}
\caption{Sampling algorithm for adding missing directions\label{alg:add_direction}}\end{algorithm}
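For concreteness, the following Python sketch mirrors Algorithm~\ref{alg:add_direction}. The helper names are ours, and we assume the subspace conventions introduced below: the $V$'s are left singular subspaces, the $U$'s are right singular subspaces, split at the threshold $\sigma$. Edge cases (e.g.\ empty subspaces) are not handled.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_unit(basis):
    # Uniform unit vector from the column span of `basis` (orthonormal columns).
    g = rng.standard_normal(basis.shape[1])
    v = basis @ g
    return v / np.linalg.norm(v)

def split(M, sigma):
    # Left (V) / right (U) singular subspaces of M above and below the
    # threshold sigma, plus the "large" part M1 (the analogue of A_1).
    W, s, Zh = np.linalg.svd(M, full_matrices=True)
    k = int(np.sum(s > sigma))
    V2 = W[:, k:]                 # analogue of V_{1,2}
    U1, U2 = Zh[:k].T, Zh[k:].T   # analogues of U_{1,1}, U_{1,2}
    M1 = W[:, :k] @ W[:, :k].T @ M
    return V2, U1, U2, M1

def add_direction(A, B, C, sigma, ijk):
    # Returns a, b, c, u, v, w as in the sampling algorithm.
    abc, uvw = [], []
    for M, idx in zip((A, B, C), ijk):
        V2, U1, U2, M1 = split(M, sigma)
        x = sample_unit(U1 if idx == 1 else U2)
        if idx == 1:
            y = np.linalg.pinv(M1.T) @ x   # e.g. u' = (A_1^T)^+ a
        else:
            y = sample_unit(V2)            # random nonzero u' in V_{1,2}
        abc.append(x)
        uvw.append(y / np.linalg.norm(y))
    return abc + uvw
```

This is only an illustrative sketch under our stated conventions; the analysis in the rest of the section is what justifies that these sampled directions make progress.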
The proof of this lemma has similar steps to the proof of Theorem~\ref{thm:exact}. However, it is more complicated because we are not looking at exact local minima. We give the details of these steps in the following subsections. A key parameter that we will use is a bound $\tau$ on the regularizer. We consider separately the cases $R(\tup) \ge \tau$ and $R(\tup) < \tau$. All of our other parameters (including $\tau_1,\tau_2,\epsilon$) will be polynomials in $\tau$.
For the analysis, it is useful to consider $\point$ and $\ten T$ projected onto various subspaces of interest.
To this end, we introduce the following notation.
Let $\sigma > 0$ be a threshold that we will specify later. For matrix $\bm A$, we let $V_{1,1}$ and $U_{1,1}$ denote the spaces spanned by the left/right singular vectors of $\bm A$ with singular value greater than $\sigma$, and let $V_{1,2} = V_{1,1}^\perp$, $U_{1,2} = U_{1,1}^\perp$. We can then write
$\bm A = \bm A_1 + \bm A_2$, where $\bm A_1 = \mbox{Proj}_{V_{1,1}} \bm A$ contains the larger singular vectors and $\bm A_2 = \mbox{Proj}_{V_{1,2}}\bm A$ contains the smaller singular vectors. Let $\bm P_1$ be the orthogonal projection onto the column-space of $\ten T_{(1)}$ and define $\bm A_3 = \bm A(\bm I - \bm P_1),$ the projection of $\bm A$ onto directions that are unrelated to the true tensor. Similarly, we define $U_{2,1}, U_{2,2}, V_{2,1}, V_{2,2}, \bm P_2, \bm B_1, \bm B_2, \bm B_3$ for matrix $\bm B$ and $U_{3,1}, U_{3,2}, V_{3,1}, V_{3,2}, \bm P_3, \bm C_1, \bm C_2, \bm C_3$ for matrix $\bm C$.
Define $\ten S_{i,j,k} = \ten S( \mbox{Proj}_{V_{1,i}}, \mbox{Proj}_{V_{2,j}}, \mbox{Proj}_{V_{3,k}})$
and $\ten T_{i,j,k} = \ten T(\mbox{Proj}_{U_{1,i}}, \mbox{Proj}_{U_{2,j}}, \mbox{Proj}_{U_{3,k}})$.
We can decompose the tensor loss as
\[
\|\point - \ten T\|_F^2 = \sum_{i,j,k \in \{1,2\}}\|\ten S_{i,j,k}(\bm A_i, \bm B_j, \bm C_k) - \ten T_{i,j,k}\|_F^2.
\]
Our analysis shows how to decrease the objective function if the regularizer or any one of the terms in the right-hand sum is sufficiently large. In particular, after finding a second-order stationary point, the only terms in this sum that may be large are when at least two of $i, j, k$ are equal to $2$. In this case, Algorithm \ref{alg:add_direction} can be used to make further progress toward a local minimum.
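The orthogonality of the pieces in this decomposition (and hence the Pythagorean identity above) can be checked numerically. In the sketch below, the projections onto $U_{m,1}$ and $U_{m,2}$ are built from the right singular subspaces of each factor, as in the definitions above:

```python
import numpy as np

rng = np.random.default_rng(4)

def multilinear(S, A, B, C):
    return np.einsum('ijk,ia,jb,kc->abc', S, A, B, C)

r, d, sigma = 3, 5, 0.8
S = rng.standard_normal((r, r, r))
T = rng.standard_normal((d, d, d))
A, B, C = (rng.standard_normal((r, d)) for _ in range(3))

# P[m][1], P[m][2]: projections onto U_{m,1} and U_{m,2} (right singular
# subspaces of the m-th factor above / below the threshold sigma).
P = []
for M in (A, B, C):
    _, s, Zh = np.linalg.svd(M, full_matrices=True)
    k = int(np.sum(s > sigma))
    P1 = Zh[:k].T @ Zh[:k]
    P.append({1: P1, 2: np.eye(d) - P1})

resid = multilinear(S, A, B, C) - T
pieces = sum(
    np.linalg.norm(np.einsum('abc,ad,be,cf->def', resid,
                             P[0][i], P[1][j], P[2][k])) ** 2
    for i in (1, 2) for j in (1, 2) for k in (1, 2))
assert np.isclose(np.linalg.norm(resid) ** 2, pieces)
print("ok")
```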
\vspace*{-0.2in}
\subsection{Decreasing the Regularizer}
We first show if the regularizer is large, then the gradient is large. This is very similar to Lemma~\ref{lem:nonzeroreg}.
\begin{lemma}
\label{lem:decrease_regularizer}
If $R(\tup) \geq \tau$, then $\|\nabla f(\tup)\|_F \geq 4\lambda\tau/K$.
\end{lemma}
\begin{proof}
By assumption, $\reg(\tup) \geq \tau^{1/2}$, and we have $\|\nabla R\|_F = \|2\reg\nabla \reg\|_F \geq 2\tau^{1/2}\|\nabla \reg\|_F$.
By Lemma \ref{lem:reg_gradient} and the Cauchy-Schwarz inequality, we have that
\[
\|\nabla \reg\|_F \geq \frac{1}{2K}\|\nabla \reg\|_F\|(\tup)\|_F \geq \frac{1}{2K}\langle \nabla \reg, (\tup)\rangle = \frac{2\reg}{K}.
\]
Then $\|\nabla R\|_F \geq 4\tau/K$. Since $\|\nabla f\|_F^2 = \|\lambda \nabla R\|_F^2 + \|\nabla L\|_F^2,$ we are done.
\end{proof}
\subsection{Removing Extraneous Directions}
We show that if the projection $\bm A_3$ of $\bm A$ onto the incorrect subspace is large, then the gradient must be large, so the point cannot be an approximate local minimum.
\begin{lemma}
\label{lem:removing_directions}
Let $\gamma = \stheta(\tau^{1/48})$.
If $R(\tup) < \tau$ and $\|\bm A_3\|_F \geq \gamma$, then
\[
\|\nabla f(\tup)\|_F = \somega(\tau^{1/6}).
\]
\end{lemma}
\begin{proof}
Set $\gamma = C\tau^{1/48}$, where we choose $C$ to be a constant such that
\[
\gamma^2 \geq \max\left(r(\tau^{1/24} +\tau^{1/4}), r^4(4K^6\tau^{1/8}+K^4\tau^{3/8})\right).
\]
This particular definition allows us to simplify inequality \eqref{ineq:SA2_total} below.
Consider the direction $\Delta \bm A = - \bm A_3$.
When we step in this direction, the first-order perturbation of $L(\ten S, \bm A + \epsilon \Delta \bm A, \bm B, \bm C)$ is $-2\epsilon\|\ten S(\bm A_3,\bm B,\bm C)\|_F^2$ (a simple calculation).
For the regularizer,
observe that since $\bm A_3 = \bm A(\bm I - \bm P_1)$ and $\bm I - \bm P_1$ is an orthogonal projection (so $\bm A_3\bm A^\top = \bm A\bm A_3^\top = \bm A_3\bm A_3^\top$), we have $(\bm A+\epsilon \Delta \bm A)(\bm A+\epsilon\Delta \bm A)^\top = \bm A\bm A^\top - (2\epsilon-\epsilon^2) \bm A_3\bm A_3^\top.$
Hence the first-order perturbation of $R$ is
\[
8\epsilon \lambda \reg(\tup)\langle \bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top,\bm A_3\bm A_3^\top\rangle \leq 8\epsilon\lambda\tau^{3/4}\|\bm A_3\|_F^2.
\]
Intuitively, we will show that the first-order decrease in $L$ is greater than the first-order increase in $R$, so that $\Delta \bm A$ is aligned negatively with $\nabla_{\bm A} f$.
First, through very similar arguments to those found in the proof of Lemmas \ref{lem:bounded_sublevel} and \ref{lem:norm_bound_S}, we have that
\begin{equation}
\label{ineq:perturb1}
\|\ten S(\bm A_3,\bm B,\bm C)\|_F^2 \geq \|\ten S(\bm A_3,\ten S_{(2)}, \ten S_{(3)})\|_F^2 - 2\tau^{1/4}K^6 - \tau^{1/2} K^4
\end{equation}
and if we set $\bm u$ to be the top left singular vector of $\bm A_3$,
\begin{align}
\label{ineq:perturb2}
\|\ten S(\bm A_3,\ten S_{(2)},\ten S_{(3)})\|_F &\geq \frac{1}{r^2}\|\ten S(\bm u,\bm I,\bm I)\|_F^3\|\bm A_3\|_F\\
\|\ten S(\bm u,\bm I,\bm I)\|_F^2 &= \bm u^\top \ten S_{(1)}\ten S_{(1)}^\top \bm u \nonumber \\
&= \bm u^\top \bm A\bm A^\top \bm u + \bm u^\top (\ten S_{(1)}\ten S_{(1)}^\top - \bm A\bm A^\top )\bm u \nonumber \\
\label{ineq:perturb3}
&\geq \frac{1}{r}\|\bm A_3\|_F^2 - \tau^{1/4}.
\end{align}
Combining inequalities \eqref{ineq:perturb1}, \eqref{ineq:perturb2}, and \eqref{ineq:perturb3}, we have
\begin{equation}
\label{ineq:SA2_total}
\|\ten S(\bm A_3,\bm B,\bm C)\|_F^2 \geq \frac{1}{r^4}\|\bm A_3\|_F^2\left(\frac{1}{r}\|\bm A_3\|_F^2 - \tau^{1/4}\right)^3 - 2\tau^{1/4}K^6 - \tau^{1/2} K^4
\end{equation}
Using the assumption $\|\bm A_3\|_F\geq \gamma$ and the choice of $\gamma$, we can simplify inequality \eqref{ineq:SA2_total} to
$\|\ten S(\bm A_3,\bm B,\bm C)\|_F^2 \geq \frac{\tau^{1/8}}{2r^4}\|\bm A_3\|_F^2$. Now using the fact that $\lambda = 1/16r^4$ and $\tau ^{1/8} > \tau^{3/4}$,
we have $\frac{\tau^{1/8}}{2r^4}\|\bm A_3\|_F^2 > 8\lambda\tau^{3/4}\|\bm A_3\|_F^2$.
Thus, we see that the first-order decrease in $L$ is greater than the first-order increase in $R$, and the overall first-order decrease in $f$ is $\somega(\tau^{1/8}\|\bm A_3\|_F^2)$.
The Taylor expansion of $f$ implies that $|\langle \Delta \bm A, \nabla_{\bm A} f\rangle| \ge \frac{\tau^{1/8}}{2r^4}\|\bm A_3\|_F^2$, so that
\[
\|\nabla f\|_F \ge |\langle \Delta \bm A, \nabla_{\bm A} f\rangle| / \|\Delta \bm A\|_F
= \somega(\tau^{1/8}\gamma) = \somega(\tau^{1/6}),
\]
which provides the desired bound on $\|\nabla f\|_F$.
\end{proof}
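The algebraic identity for $(\bm A+\epsilon\Delta\bm A)(\bm A+\epsilon\Delta\bm A)^\top$ used in this proof (with $\Delta\bm A = -\bm A_3$) is easy to confirm numerically; here $\bm P_1$ is an arbitrary orthogonal projection standing in for the projection onto the column span of $\ten T_{(1)}$:

```python
import numpy as np

rng = np.random.default_rng(5)

r, d = 3, 6
A = rng.standard_normal((r, d))
# P1: an arbitrary rank-2 orthogonal projection on R^d.
Q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
P1 = Q @ Q.T
A3 = A @ (np.eye(d) - P1)           # Delta A = -A3

for eps in (0.1, 0.5, 1.0):
    lhs = (A - eps * A3) @ (A - eps * A3).T
    rhs = A @ A.T - (2 * eps - eps ** 2) * (A3 @ A3.T)
    assert np.allclose(lhs, rhs)
print("ok")
```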
From this point forward, set $\sigma = \sqrt{\gamma}$.
An important consequence of the fact that $\bm A_3$ is small is that if $\ten T_{2,1,1}$ is large enough, then $\bm A_1$ must be rank deficient. This rank deficiency allows us to readily construct a direction of improvement when we are near a saddle point corresponding to a single missing direction. This is also true when we are near saddle points corresponding to two or three missing directions; see Section \ref{subsec:missing_dirs}.
To prove the rank deficiency, we use subspace perturbation bounds.
The technical tool we use here is Wedin's Theorem~\citep{wedin1972perturbation,stewart1998perturbation}.
\begin{theorem*}[Wedin's Theorem, adapted from~\cite{stewart1998perturbation}]
Let $\tilde {\bm A}, \bm A, \bm E \in \mathbb{R}^{d\times r}$ with $d \geq r$ and $\tilde{\bm A} = \bm A + \bm E$.
Write the singular value decompositions
\[
\bm A = \begin{pmatrix}
\bm U_1 &\bm U_2
\end{pmatrix}
\begin{pmatrix}
\bm \Sigma\\
\bm 0
\end{pmatrix}
\bm V^\top
\qquad
\tilde{\bm A} = \begin{pmatrix}
\tilde{\bm U_1} &\tilde{\bm U_2}
\end{pmatrix}
\begin{pmatrix}
\tilde{\bm \Sigma}\\
\bm 0
\end{pmatrix}
\tilde{\bm V}^\top
\]
Let $\bm \Theta$ and $\bm \Phi$ denote the matrices of principal angles between the column spans of $\bm U_1, \tilde{\bm U}_1$ and
$\bm V, \tilde{\bm V}$, respectively.
If there exists some $\delta > 0$ such that $\min \sigma(\tilde{\bm \Sigma}) \geq \delta$, then
\[
\sqrt{\|\sin \bm \Theta\|_F^2 + \|\sin \bm \Phi\|_F^2} \leq \frac{\sqrt{2}\|\bm E\|_F}{\delta}.
\]
\end{theorem*}
\begin{lemma}
\label{lem:perturbation}
Let $\bm M \in \mathbb{R}^{r\times d}$, and let $\bm M = \bm M_1 + \bm M_2$, where $\text{rank}(\bm M) = \text{rank}(\bm M_1) = r$.
Let $\bm P, \bm P_1 \in \mathbb{R}^{d\times d}$ be the orthogonal projections onto the row spans of $\bm M$ and $\bm M_1$,
respectively.
Let $\sigma$ be the smallest nontrivial singular value of $\bm M$.
Then
\[
\|\bm P - \bm P_1\|_F \leq \frac{2\|\bm M_2\|_F}{\sigma}.
\]
\end{lemma}
\begin{proof}
This is a corollary of Wedin's Theorem.
Set $\bm A = \bm M_1^\top$, $\tilde{\bm A} = \bm M^\top$, and $\bm E = \bm M_2^\top$.
Note that $\bm A$ and $\tilde{\bm A}$ have full row rank, so we have the SVDs
\[
\bm A = \begin{pmatrix}
\bm U_1 &\bm U_2
\end{pmatrix}
\begin{pmatrix}
\bm \Sigma\\
\bm 0
\end{pmatrix}
\bm V^\top
\qquad
\tilde{\bm A} = \begin{pmatrix}
\tilde{\bm U_1} &\tilde{\bm U_2}
\end{pmatrix}
\begin{pmatrix}
\tilde{\bm \Sigma}\\
\bm 0
\end{pmatrix}
\tilde{\bm V}^\top
\]
where $\bm V, \tilde{\bm V}$ are $r\times r$ orthogonal matrices, $\bm \Sigma, \tilde{\bm\Sigma}$ are $r \times r$, $\bm U_1, \tilde{\bm U}_1$ are $d\times r$, and
$\bm U_2, \tilde{\bm U}_2$ are $d\times (d-r)$.
Since $\bm V$ and $\tilde{\bm V}$ have the same column spans, we have that $\sin \bm \Phi =\bm 0$.
Further, it is a fact that $\|\bm P-\bm P_1\|_F = \sqrt{2}\|\sin \bm\Theta\|_F$.
By assumption, $\min\sigma(\tilde {\bm\Sigma}) = \sigma$.
Then Wedin's Theorem states that
\[
\sqrt{\|\sin \bm\Theta \|_F^2 + \|\sin \bm\Phi\|_F^2} \leq \frac{\sqrt{2}\|\bm E\|_F}{\sigma},
\]
and our result follows immediately.
\end{proof}
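As a sanity check on the constant in Lemma~\ref{lem:perturbation}, the bound can be tested on random perturbations, with row-space projections computed via the pseudoinverse ($\bm P = \bm M^+\bm M$):

```python
import numpy as np

rng = np.random.default_rng(6)

def row_proj(M):
    # Orthogonal projection onto the row span of M.
    return np.linalg.pinv(M) @ M

r, d = 3, 8
for _ in range(200):
    M1 = rng.standard_normal((r, d))
    M2 = 0.05 * rng.standard_normal((r, d))   # small perturbation
    M = M1 + M2                               # rank(M) = rank(M1) = r generically
    sigma = np.linalg.svd(M, compute_uv=False)[-1]
    gap = np.linalg.norm(row_proj(M) - row_proj(M1))
    assert gap <= 2 * np.linalg.norm(M2) / sigma + 1e-9
print("ok")
```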
\begin{lemma}
\label{lem:rank_bound}
Let $\bm P$ be the orthogonal projection onto the row-span of $\bm A_1$.
If $\text{rank}(\bm A_1) =r$ and $\|\bm A_3\|_F \leq \gamma$, then $\|\ten T(\bm I - \bm P, \bm I, \bm I)\|_F < 2K\sqrt{\gamma}$.
In particular, if any of the $\ten T_{2,j,k}$ ($j,k=1,2$) is large, the rank of $\bm A_1$ must be less than $r$.
\end{lemma}
\begin{proof}
Recall we set $\sigma = \sqrt{\gamma}$.
Let $\bm P_1$ be the orthogonal projection onto the column-span of $\ten T_{(1)}$.
Write $\bm A_{1,1} = \bm A_1\bm P_1$, $\bm A_{1,2} = \bm A_1(\bm I - \bm P_1)$.
Observe that $\|\bm A_{1,2}\|_F \leq \|\bm A_3\|_F \leq \gamma < \sigma$.
Note that $\|\bm A_1 - \bm A_{1,1}\|_F = \|\bm A_{1,2}\|_F < \sigma$, which means that $\text{rank}(\bm A_{1,1}) = r$,
since $\bm A_1$ has distance at least $\sigma$ to the closest lower-rank matrix.
Since $\bm A_{1,1}$ has rank $r$, its rows form a basis for the column-span of $\ten T_{(1)}$,
and so $\bm P_1$ is also the orthogonal projection onto the row-span of $\bm A_{1,1}$.
Then
\begin{align*}
\|\ten T(\bm I - \bm P, \bm I, \bm I)\|_F &= \|\ten T(\bm P_1 - \bm P, \bm I, \bm I)\|_F\\
&\leq \|\ten T\|_F\|\bm P_1 - \bm P\|_F\\
&\leq K\frac{2\|\bm A_{1,2}\|_F}{\sigma}\\
&<2K\sqrt{\gamma},
\end{align*}
where the penultimate line follows from Lemma \ref{lem:perturbation}.
\end{proof}
\subsection{Improving $S$}
Unlike the proof of Theorem~\ref{thm:exact}, we will first focus on the simple case of improving the core tensor $\ten S$. Note that here we only try to make sure we get close to $\ten T_{1,1,1}$ as the components $\bm A,\bm B, \bm C$ may still be missing directions.
\begin{lemma}
\label{lem:improve_S}
Set $\kappa_0 = \sqrt{\gamma}$.
Assume $R(\tup) < \tau$. Then
\[
\|\ten T_{1,1,1} - \ten S(\bm A_1, \bm B_1, \bm C_1)\|_F > \kappa_0\,\, \Rightarrow\,\, \|\nabla f(\tup)\|_F = \Omega(\gamma^{2.5}).
\]
\end{lemma}
\begin{proof}
Define $\ten S^* = \ten T(\bm A_1^+, \bm B_1^+,\bm C_1^+)$, so that $\ten S^*(\bm A_1,\bm B_1,\bm C_1) = \ten T_{1,1,1}$.
We consider the direction $\Delta \ten S = \ten S^* - \ten S_{1,1,1}$.
Observe that $\Delta \ten S(\bm A, \bm B, \bm C) =\ten T_{1,1,1} - \ten S(\bm A_1, \bm B_1, \bm C_1)$.
We can write
\begin{align*}
\ten S(\bm A, \bm B, \bm C) - \ten T = \sum_{i,j,k \in \{1,2\}} \ten S(\bm A_i, \bm B_j, \bm C_k) - \ten T_{i,j,k},
\end{align*}
and this is a sum of mutually orthogonal tensors.
Hence, the first-order perturbation of $L(\ten S + \epsilon \Delta \ten S, \bm A, \bm B, \bm C)$ is
\begin{align*}
2\langle \point - \ten T, \Delta \ten S(\bm A, \bm B, \bm C) \rangle &= -2 \|\Delta\ten S(\bm A, \bm B,\bm C)\|_F^2.
\end{align*}
The first-order perturbation in the regularizer $\langle \nabla_{\ten S} R, \Delta \ten S\rangle$ is bounded by $O(\tau^{3/4}\sigma^{-3}) = o(\sigma)$, since $\|\ten S^*\|_F = O(\sigma^{-3})$.
Therefore, the decrease in the tensor loss dominates all other first-order perturbations, so we have a viable direction of improvement.
In particular, by moving in direction $\epsilon \Delta \ten S$,
we decrease the objective function by
\[
\epsilon\cdot \Omega(\|\ten T_{1,1,1} - \ten S(\bm A_1, \bm B_1, \bm C_1)\|_F^2) = \Omega(\epsilon\kappa_0^2).
\]
The direction of movement has norm bounded by
\[\|\ten T_{1,1,1}\|_F \|\bm A_1^+\|_2\|\bm B_1^+\|_2\|\bm C_1^+\|_2 \le K\sigma^{-3}.\] By Cauchy-Schwarz, the gradient has norm at least $\Omega(\kappa_0^2)\times \sigma^3 = \Omega(\gamma^{5/2}).$
\end{proof}
\subsection{Adding missing directions}
\label{subsec:missing_dirs}
Finally, we try to add missing directions to $\bm A, \bm B, \bm C$. As before, we separate into the cases of one, two, or three missing directions. The first case (one missing direction) is easy, as it corresponds to an ordinary saddle point whose Hessian has a negative eigenvalue.
\begin{lemma}
\label{lem:add_1_direction}
Set $\kappa_1 = 2K\sigma^{3/4}$.
Assume $R(\tup) < \tau$ and $\|\bm A_3\|_F$, $\|\bm B_3\|_F$, and $\|\bm C_3\|_F$ are all less than $\gamma$.
If $\|\ten T_{2,1,1}\|_2 \geq \kappa_1$,
then $\nabla^2 f$ has a negative eigenvalue of at most $-\Omega(\sigma^{15/4})$.
\end{lemma}
\begin{proof}
Since $\kappa_1 > 2K\sqrt{\gamma}$, by Lemma \ref{lem:rank_bound}, we have $\text{rank}(\bm A_1) < r$.
By assumption, there exist unit vectors $\bm a \in U_{1,2}$, $\bm b \in U_{2,1}$, and $\bm c \in U_{3,1}$ such that
$\ten T(\bm a,\bm b,\bm c) \geq \kappa_1$.
Take unit vectors $\bm u, \bm v, \bm w \in \mathbb{R}^r$ such that $\bm A_1^\top \bm u = \bm 0$, $\bm B_1^\top \bm v = \alpha_1 \bm b$, $\bm B_2^\top \bm v =\bm 0$, $\bm C_1^\top \bm w = \alpha_2 \bm c$, and $\bm C_2^\top \bm w = \bm 0$, where
$\alpha_i \geq \sigma$ for $i=1,2$.
In this situation, we are near a second-order saddle point, so we seek to demonstrate a direction with sufficient negative curvature in the objective function.
To this end, define
\begin{align*}
\Delta \bm A = \sigma \bm u\bm a^\top \qquad \Delta \ten S = \bm u\otimes \bm v\otimes \bm w.
\end{align*}
For a step size $\epsilon > 0$, our source of improvement in the tensor loss comes from the second-order perturbation of $L$ in this direction. We aim to compare the second-order decrease in $L$ against the second-order increases in $L$ and $R$.
The second-order perturbation in the tensor loss $\nabla^2 L$ applied to $(\Delta \ten S, \Delta \bm A, \bm 0, \bm 0)$ is
\[
2\langle \diff, \Delta \ten S(\Delta \bm A, \bm B, \bm C)\rangle + \|\Delta \point + \ten S(\Delta \bm A, \bm B, \bm C)\|_F^2.
\]
The magnitude of \emph{decrease} in this perturbation is given by
\begin{align*}
\langle \ten T,\Delta \ten S(\Delta \bm A, \bm B, \bm C)\rangle &= \sigma\ten T(\bm a,\alpha_1 \bm b , \alpha_2 \bm c )\\
&= \sigma \alpha_1\alpha_2\ten T(\bm a,\bm b,\bm c)\\
&\geq \sigma \alpha_1\alpha_2\kappa_1.
\end{align*}
To bound the magnitude of the \emph{increase}, observe that
\begin{align*}
\|\bm B\bm B^\top \bm v\|_F &= \| \alpha_1\bm B_1\bm b\|_F \leq \alpha_1K; \quad \|\bm C\bm C^\top \bm w\|_F \leq \alpha_2K
\end{align*}
Then we have
\begin{align}
\langle \point ,\Delta \ten S(\Delta \bm A, \bm B, \bm C)\rangle &= \langle \point, \sigma \bm a \otimes \bm B^\top \bm v \otimes \bm C^\top \bm w\rangle \nonumber\\
&= \sigma \ten S(\bm A\bm a, \bm B\bm B^\top \bm v, \bm C\bm C^\top \bm w) \nonumber\\
\label{ineq:bound1}
&\leq \sigma^2\alpha_1\alpha_2K^3
\end{align}
Additionally,
\begin{align*}
\|\Delta \point\|_F^2 &= \|\bm A_2^\top \bm u\otimes \alpha_1 \bm b \otimes \alpha_2 \bm c \|_F^2\\
&\leq \sigma^2\alpha_1^2\alpha_2^2\\
\|\ten S_{(1)}^\top \bm u\|_F^2 &= \bm u^\top(\ten S_{(1)}\ten S_{(1)}^\top - \bm A\bm A^\top )\bm u + \bm u^\top\bm A\bm A^\top \bm u\\
&\leq \tau^{1/4} + \sigma^2\\
\|\ten S(\Delta \bm A,\bm B,\bm C)\|_F^2 &=\sigma^2\|\bm a\bm u^\top \ten S_{(1)}(\bm B \otimes \bm C)\|_F^2\\
&\leq \sigma^2 \|\ten S_{(1)}^\top \bm u\|_F^2\|\bm B\|_F^2\|\bm C\|_F^2\\
&\leq\sigma^2 K^4(\tau^{1/4}+\sigma^2)
\end{align*}
Putting this together, we bound $\| \Delta \point + \ten S( \Delta \bm A, \bm B, \bm C)\|_F^2$ above by
\begin{align}
\label{ineq:bound2}
\left(\sigma\alpha_1\alpha_2 + \sigma K^2\sqrt{\tau^{1/4}+\sigma^2}\right)^2
\end{align}
In light of the definition of $\kappa_1$ and inequalities (\ref{ineq:bound1}) and (\ref{ineq:bound2}), the second-order perturbation in $L$ is $-\Omega(\sigma\alpha_1\alpha_2\kappa_1)$, i.e. $L$ decreases to second-order in this direction.
Now we turn our attention to the regularizer.
We need to show that the second-order increase in the regularizer doesn't overwhelm the decrease in $L$.
Note that the regularizer is degree 4 with respect to $\|\bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top\|_F$ (and same terms for $\bm B$ and $\bm C$), so the second order derivatives have a quadratic term in $\|\bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top\|_F$,
which is $O(\tau^{1/4}) = o(\sigma\alpha_1\alpha_2\kappa_1)$; higher-order terms are negligible in comparison.
We've shown that the loss function decreases by at least $\Omega(\sigma\alpha_1\alpha_2\kappa_1)\cdot \epsilon^2$. Since our direction of improvement has constant norm, this implies that $\nabla^2 f$ has an eigenvalue that is smaller than $-\Omega(\sigma\alpha_1\alpha_2\kappa_1) = - \Omega(\sigma^{15/4})$.
\end{proof}
Next, we need to deal with the high order saddle points. Here our main observation is that if we choose directions randomly in the correct subspace, then the perturbation is going to have a reasonable correlation with the residual tensor with constant probability. This is captured by the following anti-concentration property:
\begin{lemma}
\label{lem:anti}
Let $\ten X \in \mathbb{R}^{d_1\times d_2\times d_3}$, and let $\bm a\in\mathbb{R}^{d_1}, \bm b\in\mathbb{R}^{d_2}, \bm c\in\mathbb{R}^{d_3}$ be independent, uniformly distributed unit vectors. There exist positive numbers $C_1 = \Omega(1/\sqrt{d_1d_2d_3}) = \somega(1), C_2 = \Omega(1)$ such that
\[
\text{Pr}[|\ten X(\bm a,\bm b,\bm c)| \geq C_1\|\ten X\|_F] > C_2.
\]
\end{lemma}
\begin{proof}
Although our Algorithm \ref{alg:add_direction} for sampling missing directions only requires uniform unit vectors (from appropriate subspaces), we construct these vectors as normalized Gaussian vectors for this lemma in order to apply a Gaussian polynomial anti-concentration result (Theorem 8 in \cite{carbery2001distributional}).
As such, let $\bm a', \bm b', \bm c'$ be independent standard Gaussian random vectors (of appropriate dimension) and set $\bm a = \bm a'/\|\bm a'\|_2$, $\bm b = \bm b'/\|\bm b'\|_2$, and $\bm c = \bm c'/\|\bm c'\|_2$.
Note that there exists some constant $p> 0$ such that $\|\bm a'\|_2 \le 2\sqrt{d_1}$, $\|\bm b'\|_2\le 2\sqrt{d_2}$, and $\|\bm c'\| \le 2\sqrt{d_3}$ with probability at least $p$.
Next, note that $\mathbb{E} \ten X(\bm a', \bm b', \bm c') = 0$ and
\begin{align*}
\text{Var}[\ten X(\bm a', \bm b', \bm c')] &= \mathbb{E}(\ten X(\bm a', \bm b', \bm c')^2)\\
&= \mathbb{E}\langle \ten X(\bm a'\bm a'^\top, \bm b'\bm b'^\top, \bm c'\bm c'^\top),\ten X\rangle\\
&= \langle \ten X(\bm I, \bm I, \bm I),\ten X\rangle\\
&= \|\ten X\|_F^2.
\end{align*}
Now $\ten X(\bm a',\bm b',\bm c')/\|\ten X\|_F$ is a degree three polynomial function with unit variance, so the anti-concentration inequality
implies that for any $\epsilon > 0$,
\begin{align*}
\text{Pr}[|\ten X(\bm a',\bm b',\bm c')/\|\ten X\|_F| \leq \epsilon] \leq O(1)\epsilon^{1/3}.
\end{align*}
Simply choosing a constant $\epsilon$ and re-arranging terms completes the proof.
\end{proof}
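A quick Monte Carlo experiment illustrates the anti-concentration property of Lemma~\ref{lem:anti}; the constant $C_1$ below is a conservative, hypothetical choice, not the optimal one:

```python
import numpy as np

rng = np.random.default_rng(7)

d1, d2, d3 = 4, 5, 6
X = rng.standard_normal((d1, d2, d3))
# A conservative choice for the constant C1 in the lemma.
C1 = 0.1 / np.sqrt(d1 * d2 * d3)

trials, hits = 2000, 0
for _ in range(trials):
    a, b, c = (rng.standard_normal(n) for n in (d1, d2, d3))
    a, b, c = a / np.linalg.norm(a), b / np.linalg.norm(b), c / np.linalg.norm(c)
    if abs(np.einsum('ijk,i,j,k->', X, a, b, c)) >= C1 * np.linalg.norm(X):
        hits += 1
# The empirical hit rate stands in for the constant probability C2.
print(hits / trials)
```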
Using this idea, when $\ten T_{2,2,1}$ is large, we show how to get a direction of improvement.
\begin{lemma}
\label{lem:add_2_directions}
Set $\kappa_2 = 2K\sigma^{1/8}$.
Assume $R(\tup) < \tau$ and $\|\bm A_3\|_F$, $\|\bm B_3\|_F$, and $\|\bm C_3\|_F$ are all less than $\gamma$.
Further assume that $\|\ten T_{2,1,1}\|_2$, $\|\ten T_{1,2,1}\|_2$, and $\|\ten T_{1,1,2}\|_2$ are each less than $\kappa_1$.
Let $\bm a, \bm b, \bm c, \bm u, \bm v, \bm w$ be the output of Algorithm \ref{alg:add_direction} given the input $\bm A, \bm B, \bm C, \sigma, (2,2,1)$.
Define the directions $\Delta \bm A = \bm u\bm a^\top$, $\Delta \bm B = \bm v \bm b^\top$, $\Delta \ten S = \bm u \otimes \bm v \otimes \bm w$.
If $\|\ten T_{2,2,1}\|_2 \geq \kappa_2$, then with constant probability, a step in these directions decreases the objective function
by $\Omega^*(\sigma^{15/8})$.
\end{lemma}
\begin{proof}
First, observe that $\kappa_2 \geq 2K\sqrt{\gamma}$, which implies that $\text{rank}(\bm A_1) < r$ and $\text{rank}(\bm B_1)< r$ by Lemma \ref{lem:rank_bound}.
Set $\alpha$ such that $\alpha \bm c = \bm C_1^\top \bm w$, and note that $\alpha \geq \sigma$.
By Lemma \ref{lem:anti}, with constant probability, $|\ten T(\bm a,\bm b, \bm c)|$ is within a constant factor of $\|\ten T_{2,2,1}\|_2$. Therefore, with constant probability, $|\ten T(\bm a,\bm b, \bm c)| \geq C\kappa_2$ for some positive constant $C$.
Observe that $p(\delta) := f(\ten S+\delta \Delta \ten S, \bm A+\delta \Delta \bm A, \bm B+\delta\Delta \bm B, \bm C)$ defines a degree $8$ polynomial in $\delta$.
Set $\delta = \sigma^{1/4}$.
For convenience, define the following expressions related to the perturbations of $L$:
\begin{align*}
L_0 &= \diff\\
L_1 &= \Delta \point + \ten S(\Delta \bm A, \bm B, \bm C) + \ten S(\bm A,\Delta \bm B, \bm C) \\
L_2 &= \Delta \ten S(\Delta \bm A, \bm B, \bm C) + \Delta \ten S(\bm A,\Delta \bm B, \bm C) +\ten S(\Delta \bm A, \Delta \bm B, \bm C)\\
L_3 &= \Delta\ten S(\Delta\bm A, \Delta\bm B,\bm C)
\end{align*}
We can upper bound each of these terms in norm, e.g.
\begin{align*}
\|L_1\|_F &= \| \bm A^\top \bm u \otimes \bm B^\top \bm v \otimes \bm C^\top \bm w + \ten S(\bm u\bm a^\top,\bm B,\bm C) + \ten S(\bm A,\bm v\bm b^\top,\bm C)\|_F\\
&\leq \sigma^2\alpha + 2K^2\sqrt{\tau^{1/4}+\sigma^2}\\
&= O(\alpha\sigma^2 + \sigma).
\end{align*}
Through similar calculations, we have $\|L_2\|_F = O(\alpha\sigma + \sigma)$ and $\|L_3\|_F = O(\alpha)$.
The perturbation in the tensor loss is then
\begin{equation}
\label{eq:loss_perturb}
\delta^3 \langle L_0,L_3\rangle + \delta \langle L_0,L_1\rangle + \delta^2\langle L_0,L_2\rangle + \|\delta L_1+ \delta^2 L_2+\delta^3 L_3\|_F^2.
\end{equation}
Here the first term is responsible for the decrease in tensor loss:
\begin{align*}
\delta^3\langle L_0,L_3\rangle &= \delta^3\alpha\langle \point - \ten T,\bm a \otimes \bm b \otimes \bm c\rangle\\
&\leq \delta^3\alpha(K^2\sigma^2 - \ten T(\bm a,\bm b,\bm c))\\
&= -\delta^3\alpha\Omega(\kappa_2).
\end{align*}
For the other terms, we show that they are small enough so they will not cancel this improvement.
Observe that
\begin{align*}
\langle L_0,L_1\rangle &= \langle L_0, \Delta\ten S(\bm A, \bm B, \bm C)\rangle + \langle L_0, \ten S(\bm u\bm a^\top, \bm B, \bm C) + \ten S(\bm A, \bm v\bm b^\top, \bm C)\rangle\\
&= O(\alpha \sigma^2) + O(\kappa_1\sigma).
\end{align*}
The $O(\kappa_1\sigma)$ term appears because $\|\ten S_{2,1,1}(\bm A, \bm B, \bm C) - \ten T_{2,1,1}\|_F = O(\kappa_1)$ and
$\|\ten S_{1,2,1}(\bm A, \bm B, \bm C) - \ten T_{1,2,1}\|_F = O(\kappa_1)$.
For the next term, we note that $\langle L_0, L_2\rangle$ is a sum of three inner products, any two of which we can make nonpositive by flipping the sign of $\Delta \ten S$ and one of $\Delta \bm A$, $\Delta \bm B$ (doing so doesn't change the amount by which the tensor loss decreases).
Hence, by design of Algorithm~\ref{alg:add_direction}, with constant probability we know that $\langle L_0,L_2\rangle \leq 0$.
As a result, we know \eqref{eq:loss_perturb} is at most $-\delta^3 \alpha \Omega(\kappa_2)$.
We now consider the perturbations of the regularizer.
Define the following terms:
\begin{align*}
\reg_{0,1} &= \bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top,\quad \reg_{0,2} =\bm B\bm B^\top - \ten S_{(2)}\ten S_{(2)}^\top,\quad \reg_{0,3} =\bm C\bm C^\top - \ten S_{(3)}\ten S_{(3)}^\top\\
\reg_{1,1} &= \bm A\Delta\bm A^\top + \Delta\bm A\bm A^\top - \ten S_{(1)}(\Delta\ten S)_{(1)}^\top - (\Delta\ten S)_{(1)}\ten S_{(1)}^\top\\
\reg_{1,2} &= \bm B\Delta\bm B^\top + \Delta\bm B\bm B^\top - \ten S_{(2)}(\Delta\ten S)_{(2)}^\top - (\Delta\ten S)_{(2)}\ten S_{(2)}^\top\\
\reg_{1,3} &= - \ten S_{(3)}(\Delta\ten S)_{(3)}^\top - (\Delta\ten S)_{(3)}\ten S_{(3)}^\top,\quad \reg_{2,3} = - (\Delta\ten S)_{(3)}(\Delta\ten S)_{(3)}^\top
\end{align*}
We bound these terms in norm as follows:
\begin{align*}
\|\reg_{1,1}\|_F &= \|\bm A\bm a\bm u^\top + \bm u\bm a^\top \bm A^\top - \bm u\ten S(\bm I,\bm v,\bm w)^\top - \ten S(\bm I,\bm v,\bm w)\bm u^\top\|_F\\
&\leq 2\|\bm A_2\|_F + 2\|\ten S(\bm I,\bm v,\bm w)\|_F\\
&\leq 2\sigma + 2\sqrt{\tau^{1/4}+\sigma^2}\\
&= O(\sigma).
\end{align*}
Likewise, $\|\reg_{1,2}\|_F = O(\sigma)$, $\|\reg_{1,3}\|_F = O(\sigma)$, and $\|\reg_{2,3}\|_F = O(1)$.
Also note that $\|\reg_{0,i}\|_F \leq \tau^{1/4}$ for $i=1,2,3$.
Using this,
we have
\begin{align*}
\|\reg_{0,1} + \delta \reg_{1,1}\|_F &\leq O(\tau^{1/4}) +O(\delta\sigma)\\
\|\reg_{0,2} + \delta \reg_{1,2}\|_F &\leq O(\tau^{1/4}) +O(\delta\sigma)\\
\|\reg_{0,3} + \delta \reg_{1,3} + \delta^2 \reg_{2,3}\|_F &\leq O(\tau^{1/4}) + O(\delta\sigma) + O(\delta^2).
\end{align*}
All of these terms are dominated by $O(\delta^2)$, and so the perturbed regularizer is bounded by $O(\delta^8) = O(\sigma^2) = o(\sigma^{15/8})$. Hence, we see that the decrease in the tensor loss dominates the increase in the regularizer, as desired.
\end{proof}
We next address the case where $\ten T_{2,2,2}$ is large, which corresponds to $\bm A_1, \bm B_1, \bm C_1$ being rank deficient.
\begin{lemma}
\label{lem:add_3_directions}
Set $\kappa_3 = 2K\sigma^{1/2}$.
Assume $R(\tup) < \tau$ and $\|\bm A_3\|_F$, $\|\bm B_3\|_F$, and $\|\bm C_3\|_F$ are all less than $\gamma$.
Further assume that $\|\ten T_{2,1,1}\|_2$, $\|\ten T_{1,2,1}\|_2$, and $\|\ten T_{1,1,2}\|_2$ are each less than $\kappa_1$,
while $\|\ten T_{2,2,1}\|_2$, $\|\ten T_{2,1,2}\|_2$, and $\|\ten T_{1,2,2}\|_2$ are each less than $\kappa_2$.
Let $\bm a, \bm b, \bm c, \bm u, \bm v, \bm w$ be the output of Algorithm \ref{alg:add_direction} with input $\bm A, \bm B, \bm C, \sigma, (2,2,2)$.
Define the directions $\Delta \bm A = \bm u \bm a^\top$, $\Delta \bm B = \bm v \bm b^\top$, $\Delta \bm C = \bm w \bm c^\top$, $\Delta \ten S = \bm u \otimes \bm v \otimes \bm w$.
If $\|\ten T_{2,2,2}\|_2 \geq \kappa_3$, then with constant probability, a step in these directions decreases the objective function
by $\Omega^*(\sigma^{3/4})$.
\end{lemma}
\begin{proof}
First observe that $\kappa_3 \geq 2K\sqrt{\gamma}$, which by Lemma \ref{lem:rank_bound} means that $\text{rank}(\bm A_1)$, $\text{rank}(\bm B_1)$, and $\text{rank}(\bm C_1)$ are all strictly less than $r$.
Then $\bm A_1$, $\bm B_1$, and $\bm C_1$ are all missing directions from the relevant subspaces of $\ten T$,
and we are near a fourth-order saddle point.
By Lemma \ref{lem:anti}, with constant probability, $|\ten T(\bm a,\bm b,\bm c)| > C\kappa_3$ for some positive constant $C$.
Again let $p(\delta) = f(\ten S+\delta \Delta \ten S, \bm A+\delta \Delta \bm A, \bm B+\delta\Delta \bm B, \bm C+\delta\Delta \bm C)$,
and set $\delta = \sigma^{1/8}$.
As in the proof of Lemma \ref{lem:add_2_directions}, for $i=0,\ldots,4$, let $L_i$ denote the $i$-th order perturbation term in
\[
(\ten S + \Delta \ten S)(\bm A + \Delta \bm A, \bm B + \Delta \bm B, \bm C + \Delta \bm C) - \ten T.
\]
We can upper bound each of these terms in norm, e.g.
\begin{align*}
\|L_1\|_F &= \|\bm A^\top \bm u \otimes \bm B^\top \bm v \otimes \bm C^\top \bm w + \ten S(\bm u\bm a^\top,\bm B,\bm C) + \ten S(\bm A,\bm v\bm b^\top,\bm C)\\
& + \ten S(\bm A,\bm B,\bm w\bm c^\top)\|_F\\
&\leq 8\sigma^3 + K^2(\|\ten S(\bm u,\bm I,\bm I)\|_F + \|\ten S(\bm I,\bm v,\bm I)\|_F+\|\ten S(\bm I,\bm I,\bm w)\|_F)\\
&\leq 8\sigma^3 + 3K^2\sqrt{\tau^{1/4}+2\sigma^2}\\
&= O(\sigma).
\end{align*}
Through similar calculations, we have $\|L_2\|_F = O(\sigma)$ and $\|L_3\|_F = O(\sigma)$.
On the other hand, $\|L_4\|_F \leq 1$ and $\|L_0\|_F\leq 2K$.
The perturbation in the tensor loss is then
\begin{equation}
\label{eq:loss_perturb2}
\sum_{i,j = 0}^4 \langle L_i,L_j\rangle \delta^{i+j}
\end{equation}
The decrease in the tensor loss is due to the following term:
\begin{align*}
\delta^4\langle L_0,L_4\rangle &= \delta^4\langle \point - \ten T,\bm a \otimes \bm b \otimes \bm c\rangle\\
&\leq \delta^4( K\sigma^3 - \ten T(\bm a,\bm b,\bm c))\\
&= -\delta^4\Omega(\kappa_3)\\
&= -\Omega(\sigma^{3/4}).
\end{align*}
By a simple Cauchy-Schwarz bound, the other perturbation terms in (\ref{eq:loss_perturb2})
are all bounded by $O(\sigma+ \delta^8) = O(\sigma) = o(\sigma^{3/4})$.
Now we analyze the perturbations of the regularizer.
As before, define the terms
\begin{align*}
\reg_{0,1} &= \bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top,\quad \reg_{0,2} =\bm B\bm B^\top - \ten S_{(2)}\ten S_{(2)}^\top,\quad \reg_{0,3} =\bm C\bm C^\top - \ten S_{(3)}\ten S_{(3)}^\top\\
\reg_{1,1} &= \bm A\Delta\bm A^\top + \Delta\bm A\bm A^\top - \ten S_{(1)}(\Delta\ten S)_{(1)}^\top - (\Delta\ten S)_{(1)}\ten S_{(1)}^\top\\
\reg_{1,2} &= \bm B\Delta\bm B^\top + \Delta\bm B\bm B^\top - \ten S_{(2)}(\Delta\ten S)_{(2)}^\top - (\Delta\ten S)_{(2)}\ten S_{(2)}^\top\\
\reg_{1,3} &= \bm C\Delta\bm C^\top + \Delta\bm C\bm C^\top - \ten S_{(3)}(\Delta\ten S)_{(3)}^\top - (\Delta\ten S)_{(3)}\ten S_{(3)}^\top
\end{align*}
We bound these terms in norm as follows:
\begin{align*}
\|\reg_{1,1}\|_F &= \|\bm A\bm a\bm u^\top + \bm u\bm a^\top \bm A^\top - \bm u\ten S(\bm I,\bm v,\bm w)^\top - \ten S(\bm I,\bm v,\bm w)\bm u^\top\|_F\\
&\leq 2\|\bm A_2\|_F + 2\|\ten S(\bm I,\bm v,\bm w)\|_F\\
&\leq 2\sigma + 2\sqrt{\tau^{1/4}+\sigma^2}\\
&= O(\sigma).
\end{align*}
Likewise, $\|\reg_{1,i}\|_F = O(\sigma)$ for $i=2,3$, and of course $\|\reg_{0,i}\|_F \leq \tau^{1/4}$ for $i=1,2,3$.
Again,
\begin{align*}
\|\reg_{0,i} + \delta \reg_{1,i}\|_F \leq O(\tau^{1/4}) + O(\delta\sigma)
\end{align*}
and using this, we can bound the perturbed regularizer as $O(\delta^4\sigma^4) = o(\sigma^{3/4})$.
Hence, the decrease in the tensor loss dominates all other perturbations, and we improve the objective function by $\Omega(\sigma^{3/4})$.
\end{proof}
\section{Conclusion}
In this paper we showed that the standard nonconvex objective for Tucker decomposition with appropriate regularization does not have any spurious local minima. We further gave a local search algorithm that can optimize a regularized version of the objective in polynomial time.
There are still many open problems for the optimization of the Tucker decomposition objective.
For example, in many applications, the low rank tensor $\ten T$ is not known exactly. We either have significant additive noise $\ten T + \ten E$, or observe only a subset of entries of $\ten T$ (tensor completion). Local search algorithms on the nonconvex objective are able to handle similar settings for matrices~\citep{chi2019nonconvex}. We hope our techniques in this paper can be extended to give stronger guarantees for noisy Tucker decomposition and tensor completion.
\section{Preliminaries}
\subsection{Tensor Notation and Basic Facts}
\label{sec:tensordef}
We use bold lower-case letters like $\bm{u}$ to denote vectors, bold upper-case letters like $\bm A$ to denote matrices, and bold calligraphic upper-case letters like $\ten T$ to denote tensors. We reserve the symbol $\bm I$ to denote the identity matrix; its particular dimension will be clear from context.
Given a third order tensor $\ten S \in \mathbb{R}^{r_1\times r_2 \times r_3}$ and matrices $\bm A \in \mathbb{R}^{r_1\times d_1}$, $\bm B \in \mathbb{R}^{r_2\times d_2}$,
$\bm C \in \mathbb{R}^{r_3\times d_3}$, we define $\tuk{S}{A}{B}{C} \in \mathbb{R}^{d_1\times d_2\times d_3}$ by
\[
[\tuk{S}{A}{B}{C}]_{ijk} = \sum_{xyz} \ten S_{xyz}\bm A_{xi}\bm B_{yj}\bm C_{zk}.
\]
In the special case where one or more of $r_1, r_2, r_3$ equals $1$, we view $\tuk{S}{A}{B}{C}$ appropriately as a matrix, column vector, or scalar.
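As a concrete illustration (a sketch of ours, not part of the formal development; all dimensions below are arbitrary), this multilinear form is a single \texttt{einsum} contraction in NumPy, and a brute-force loop over the entrywise definition confirms the contraction:

```python
import numpy as np

def multilinear(S, A, B, C):
    """[S(A,B,C)]_{ijk} = sum_{x,y,z} S_{xyz} A_{xi} B_{yj} C_{zk}."""
    return np.einsum('xyz,xi,yj,zk->ijk', S, A, B, C)

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 3, 4))
A = rng.standard_normal((2, 5))
B = rng.standard_normal((3, 6))
C = rng.standard_normal((4, 7))

T = multilinear(S, A, B, C)  # shape (5, 6, 7)

# Brute-force check against the entrywise definition.
brute = np.zeros((5, 6, 7))
for x in range(2):
    for y in range(3):
        for z in range(4):
            brute += S[x, y, z] * np.multiply.outer(np.multiply.outer(A[x], B[y]), C[z])
assert np.allclose(T, brute)
```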
We equip $\mathbb{R}^{d_1\times d_2\times d_3}$ with the Frobenius inner product $\langle \cdot, \cdot \rangle$ and associated norm $\|\cdot \|_F$ given by
\begin{align*}
\langle \ten X, \ten Y \rangle &= \sum_{i,j,k = 1}^{d_1,d_2,d_3} \ten X_{ijk}\ten Y_{ijk} & \|\ten X\|_F = \sqrt{\langle \ten X, \ten X \rangle}
\end{align*}
We also define the operator $2$-norm $\|\cdot \|_2$ (i.e. the spectral norm) by
\[
\|\ten X\|_2 = \sup\left\{\ten X (\bm u,\bm v,\bm w) \,:\, \|\bm u\|_2 = \|\bm v\|_2 = \|\bm w\|_2 = 1\right\}
\]
These two norms are related as follows \citep{wang2017operator}:
$$
\left(\frac{\max(d_1,d_2,d_3)}{d_1d_2d_3}\right)^{1/2}\|\ten X\|_F \leq \|\ten X\|_2 \leq \|\ten X\|_F.
$$
In the special case of $d_1=d_2=d_3 = d$, we have $\|\ten X\|_F \leq d\|\ten X\|_2$.
Another important fact is that for $\sigma = \|\ten X\|_2$, there exist unit vectors $\bm u \in \mathbb{R}^{d_1}$, $\bm v \in \mathbb{R}^{d_2}$, and $\bm w \in \mathbb{R}^{d_3}$
such that the following hold \citep{lim2005singular}:
\begin{align*}
\tuk{X}{u}{v}{w} = \sigma\quad \tuk{X}{I}{v}{w} = \sigma \bm u\quad \tuk{X}{u}{I}{w}= \sigma \bm v\quad \tuk{X}{u}{v}{I} = \sigma \bm w
\end{align*}
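These identities can be checked numerically on a rank-1 tensor, where the spectral norm and the extremal vectors are known in closed form (an illustrative sketch of ours; computing $\|\cdot\|_2$ for a general tensor is hard):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
# Unit vectors u, v, w; for X = sigma * u (x) v (x) w, ||X||_2 = sigma
# and (u, v, w) attain the supremum.
u, v, w = (x / np.linalg.norm(x) for x in rng.standard_normal((3, d)))
sigma = 3.0
X = sigma * np.einsum('i,j,k->ijk', u, v, w)

# The four identities, checked by contracting the appropriate modes:
assert np.isclose(np.einsum('ijk,i,j,k->', X, u, v, w), sigma)   # X(u,v,w) = sigma
assert np.allclose(np.einsum('ijk,j,k->i', X, v, w), sigma * u)  # X(I,v,w) = sigma*u
assert np.allclose(np.einsum('ijk,i,k->j', X, u, w), sigma * v)  # X(u,I,w) = sigma*v
assert np.allclose(np.einsum('ijk,i,j->k', X, u, v), sigma * w)  # X(u,v,I) = sigma*w
```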
Let $\ten X_{(i)} \in \mathbb{R}^{d_i\times \Pi_{j\neq i}d_j}$ denote the factor-$i$ flattening of $\ten X$ (for $i=1,2,3$).
We say $\ten X$ has multilinear rank $(r_1,r_2,r_3)$ if
there exists a tensor $\ten S \in \mathbb{R}^{r_1\times r_2\times r_3}$ and matrices $\bm A \in \mathbb{R}^{r_1\times d_1}$, $\bm B \in \mathbb{R}^{r_2\times d_2}$, and $\bm C \in \mathbb{R}^{r_3\times d_3}$ of minimal dimension such that $\ten X = \tuk{S}{A}{B}{C}$.
The tuple $(\ten S,\bm A,\bm B,\bm C)$ gives a \emph{Tucker decomposition} of $\ten X$.
Note that $\ten X_{(i)}$ has rank $r_i$ and $\tuk{X}{A}{B}{C}_{(1)} = \bm A^\top \ten X_{(1)} (\bm B\otimes \bm C)$ where $\otimes$ denotes the Kronecker product of matrices.
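The flattening identity above can be verified numerically, taking the row-major reshape as the flattening convention (a convention we assume here; it matches the block ordering of \texttt{np.kron} below):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((2, 3, 4))
A = rng.standard_normal((2, 5))
B = rng.standard_normal((3, 6))
C = rng.standard_normal((4, 7))

X = np.einsum('xyz,xi,yj,zk->ijk', S, A, B, C)

# Mode-1 flattening (row-major reshape) versus A^T S_(1) (B kron C).
lhs = X.reshape(5, 6 * 7)
rhs = A.T @ S.reshape(2, 3 * 4) @ np.kron(B, C)
assert np.allclose(lhs, rhs)
```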
The space of parameters for our objective function is $\mathbb{R}^{r_1\times r_2\times r_3} \times \mathbb{R}^{r_1\times d_1}\times \mathbb{R}^{r_2\times d_2}\times \mathbb{R}^{r_3\times d_3}$. We write a point in this space as $(\tup)$, and equip it with inner product $\langle (\tup), (\ten S', \bm A', \bm B', \bm C')\rangle = \langle \ten S, \ten S'\rangle +\langle \bm A, \bm A'\rangle + \langle \bm B, \bm B'\rangle + \langle \bm C, \bm C'\rangle$ and associated norm $$\|(\tup)\|_F = \sqrt{\|\ten S\|_F^2+\|\bm A\|_F^2+\|\bm B\|_F^2+\|\bm C\|_F^2}.$$
\subsection{Optimization Problem}
\label{sec:optimizationproblem}
For simplicity, in this paper we assume $r_1=r_2=r_3 = r$, and $d_1=d_2=d_3 = d$. It is easy to generalize the result to the case with different $r_i$'s and $d_i$'s.
Let $\ten T \in \mathbb{R}^{d\times d\times d}$ be a fixed third order tensor with multilinear rank $(r,r,r)$ for $r < d$. A simple objective for tensor decomposition can be defined as:
\begin{equation}\label{eq:loss}
L(\ten S,\bm A,\bm B,\bm C) = \|\tuk{S}{A}{B}{C} - \ten T\|_F^2.
\end{equation}
Suppose $\ten T = \ten S^* (\bm A^*, \bm B^*, \bm C^*)$, then Equation \eqref{eq:loss} has a global minimum at $(\ten S^*, \bm A^*, \bm B^*, \bm C^*)$ with the minimum possible $L$ value 0. In fact, due to symmetry, we know there are many more global minimizers of $L$: for any invertible matrices $\bm {Q_A}, \bm {Q_B}, \bm {Q_C} \in \mathbb{R}^{r\times r}$, let $\ten S = \tuk{S^*}{Q_A}{Q_B}{Q_C}$, and $\bm A = \bm {Q_A}^{-1} \bm A^*$, $\bm B =\bm {Q_B}^{-1} \bm B^*$ and $\bm C = \bm {Q_C}^{-1} \bm C^*$, then we also have $\ten T = \tuk{S}{A}{B}{C}$. Therefore, the loss $L$ has infinitely many global optimal solutions.
The existence of many equivalent global optimal solutions causes problems for local search algorithms, especially simpler ones like gradient descent. The reason is that if we scale $\bm A, \bm B, \bm C$ with a large constant $c$, and scale $\ten S$ with $1/c^3$, the tensor $\tuk{S}{A}{B}{C}$ does not change. However, after this scaling the partial gradient of $\ten S$ is multiplied by $c^3$, while the partial gradients of $\bm A,\bm B, \bm C$ are multiplied by $1/c$. When $c$ is large one has to choose a very small step size for gradient descent, and this results in very slow convergence.
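This imbalance is easy to observe numerically. The sketch below (ours, with arbitrary dimensions) uses the partial-gradient formula $\nabla_{\ten S} L = 2(\tuk{S}{A}{B}{C}-\ten T)(\bm A^\top,\bm B^\top,\bm C^\top)$, which is derived in Section~\ref{sec:landscape}:

```python
import numpy as np

rng = np.random.default_rng(3)
r, d = 2, 4
S = rng.standard_normal((r, r, r))
A, B, C = (rng.standard_normal((r, d)) for _ in range(3))
T = rng.standard_normal((d, d, d))

def tuck(S, A, B, C):
    return np.einsum('xyz,xi,yj,zk->ijk', S, A, B, C)

def grad_S(S, A, B, C):
    # nabla_S L = 2 (S(A,B,C) - T)(A^T, B^T, C^T)
    resid = tuck(S, A, B, C) - T
    return 2 * np.einsum('ijk,xi,yj,zk->xyz', resid, A, B, C)

c = 10.0
# The represented tensor is invariant under the rescaling ...
assert np.allclose(tuck(S, A, B, C), tuck(S / c**3, c * A, c * B, c * C))
# ... but the partial gradient with respect to S blows up by a factor of c^3.
ratio = (np.linalg.norm(grad_S(S / c**3, c * A, c * B, c * C))
         / np.linalg.norm(grad_S(S, A, B, C)))
assert np.isclose(ratio, c**3)
```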
We address the problem of scaling by introducing a regularizer $\reg(\tup)$ given by
\begin{equation} \label{eq:reg}
\|\bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top\|_F^2 + \|\bm B\bm B^\top - \ten S_{(2)}\ten S_{(2)}^\top\|_F^2 + \|\bm C\bm C^\top - \ten S_{(3)}\ten S_{(3)}^\top\|_F^2.
\end{equation}
Intuitively, the three terms in the regularizer ensure that $\bm A$ and $\ten S$ (similarly, $\bm B, \bm C$ and $\ten S$) have similar norms. Similar regularizers were used for analyzing the optimization landscape of asymmetric matrix problems~\citep{park2016non}, where the same scaling problem exists. However, to the best of our knowledge, this regularizer has not previously been used for Tucker decomposition.
For technical reasons that will become clear in Section~\ref{sec:landscape} (especially in Lemma~\ref{lem:reg_perturb}), we actually use $R(\ten S,\bm A,\bm B,\bm C) = \reg(\ten S,\bm A,\bm B,\bm C)^2$ as the regularizer with weight $\lambda > 0$, so the final optimization problem we consider is:
\begin{equation} \label{eqn:obj}
\underset{\ten S, \bm A, \bm B, \bm C}{\min} L(\ten S,\bm A,\bm B,\bm C) + \lambda R(\ten S,\bm A,\bm B,\bm C).
\end{equation}
Note that even for Equation~\eqref{eqn:obj}, there are still infinitely many global minimizers. In particular, one can rotate $\bm A$ and $\ten S$ (similarly, $\bm B, \bm C$ and $\ten S$) simultaneously to get equivalent solutions.
A priori it is unclear whether there always exists a global minimizer that achieves 0 loss for Equation~\eqref{eqn:obj}. Our proof in Section~\ref{sec:landscape} implicitly shows that such a solution must exist.
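For concreteness, the full regularized objective \eqref{eqn:obj} can be written in a few lines of NumPy (an illustrative sketch of ours, with dimensions and variable names of our choosing):

```python
import numpy as np

def tuck(S, A, B, C):
    return np.einsum('xyz,xi,yj,zk->ijk', S, A, B, C)

def flat(X, m):
    # mode-m flattening X_(m)
    return np.moveaxis(X, m, 0).reshape(X.shape[m], -1)

def objective(S, A, B, C, T, lam=1.0):
    L = np.linalg.norm(tuck(S, A, B, C) - T) ** 2
    phi = sum(np.linalg.norm(M @ M.T - flat(S, m) @ flat(S, m).T) ** 2
              for m, M in enumerate((A, B, C)))
    return L + lam * phi ** 2  # R = phi^2

rng = np.random.default_rng(4)
r, d = 2, 4
S_star = rng.standard_normal((r, r, r))
A_star, B_star, C_star = (rng.standard_normal((r, d)) for _ in range(3))
T = tuck(S_star, A_star, B_star, C_star)

# The tensor loss vanishes at a ground-truth decomposition ...
val = objective(S_star, A_star, B_star, C_star, T, lam=0.0)
assert np.isclose(val, 0.0)
# ... while a generic ground truth does not make the regularizer vanish.
assert objective(S_star, A_star, B_star, C_star, T, lam=1.0) > 0.0
```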
\section{Introduction}
Tensor decompositions have been widely applied in data analysis and machine learning. In this paper we focus on Tucker decomposition~\citep{hitchcock1927expression,tucker1966some}. Tucker decomposition has been applied to TensorFaces~\citep{vasilescu2002multilinear}, data compression~\citep{wang2004compact}, handwritten digits~\citep{savas2007handwritten} and more recently to word embeddings~\citep{frandsen2019understanding}.
Unlike CP/PARAFAC~\citep{carroll1970analysis,harshman1970foundations} decomposition, Tucker decomposition can be computed efficiently if the original tensor has low rank. For example, this can be done by high-order SVD~\citep{de2000multilinear}. Many other algorithms have also been proposed for tensor Tucker decomposition, see for example~\citep{de2000best,elden2009newton,phan2014fast}.
In modern applications, the dimension of the tensor and the amount of data available are often quite large. In practice, simple local search algorithms such as stochastic gradient descent are often used. Even for matrix problems where exact solutions can be computed, local search algorithms are often applied directly to a nonconvex objective~\citep{koren2009bellkor,recht2013parallel}. Recently, a line of work~\citep{ge2015escaping,bhojanapalli2016global,sun2016complete,ge2016matrix,sun2016geometric,bandeira2016low} showed that although these problems have nonconvex objectives, they can still be solved by local search algorithms, because they have a simple {\em optimization landscape}. In particular, for matrix problems such as matrix sensing~\citep{bhojanapalli2016global,park2016non} and matrix completion~\citep{ge2016matrix,ge2017no},
it was shown that all local minima are globally optimal. Similar results were also known for special cases of tensor CP decomposition~\citep{ge2015escaping}.
In this paper, we prove similar results for Tucker decomposition. Given a tensor $\ten T \in \mathbb{R}^{d\times d\times d}$ with multilinear rank $(r,r,r)$, the Tucker decomposition of the tensor $\ten T$ has the form
$$
\ten T = \tuk{S^*}{A^*}{B^*}{C^*},
$$
where $\ten S^*\in \mathbb{R}^{r\times r\times r}$ is a core tensor, $\bm A^*, \bm B^*, \bm C^*\in \mathbb{R}^{r\times d}$ are three components (factor matrices). The notation $\tuk{S^*}{A^*}{B^*}{C^*}$ is a multilinear form defined in Section~\ref{sec:tensordef}.
To find a Tucker decomposition by local search, the most straightforward idea is to directly optimize the following nonconvex objective:
$$
L(\ten S,\bm A,\bm B,\bm C) = \|\ten T - \tuk{S}{A}{B}{C}\|_F^2.
$$
Clearly, $(\ten S^*,\bm A^*, \bm B^*, \bm C^*)$ is a global minimizer. However, since the optimization problem is nonconvex, it is unclear whether any local search algorithm can efficiently find a globally optimal solution. Our first result (Theorem~\ref{thm:exact}) shows that with an appropriate regularizer (designed in Section~\ref{sec:optimizationproblem}), all local minima of Tucker decomposition are globally optimal.
The main difficulty of analyzing the optimization landscape of Tucker decomposition comes from the existence of {\em high order saddle points}. For example, when $\ten S, \bm A,\bm B,\bm C$ are all equal to 0, any local movement of norm $\epsilon$ will only change the objective by at most $O(\epsilon^4)$.
Characterizing the possible locations of such high order saddle points, and showing that they cannot become local minima is one of the major technical contributions of this paper.
In general, even if all local minima are globally optimal, a local search algorithm may still fail to find a globally optimal solution due to high order saddle points. It is known that 3rd order saddle points can be escaped efficiently, while 4th order saddle points are hard to escape in the worst case~\citep{anandkumar2016efficient}. The objective $L$ has 4th order saddle points. However, our next result (Theorem~\ref{thm:robust}) shows that a specifically designed local search algorithm can find an approximate globally optimal solution in polynomial time.
\section{Characterization of Optimization Landscape}
\label{sec:landscape}
In this section, we analyze the optimization landscape for the objective \eqref{eqn:obj} for Tucker decomposition. In particular, we establish the following result.
\begin{theorem} \label{thm:exact}
For any fixed $\lambda > 0$, all local minima of the objective function $f= L + \lambda R$ as in Equation \eqref{eqn:obj} have loss 0.
\end{theorem}
Note that the theorem would not hold for $\lambda = 0$ (when there is no regularizer). A counter-example is when $\ten T = \bm a^*\otimes \bm b^*\otimes \bm c^*$ for some unit vectors $\bm a^*, \bm b^*, \bm c^*$, and $\ten S = \bm 0, \bm A = \bm a^\top, \bm B = \bm b^\top, \bm C = \bm c^\top$ where $\bm a,\bm b,\bm c$ are unit vectors that are orthogonal to $\bm a^*, \bm b^*, \bm c^*$ respectively. A local change will have no effect if the new $\ten S$ is still $\bm 0$, and will make the objective function larger if the new $\ten S$ is nonzero.
In order to prove this theorem, we demonstrate a direction of improvement for all points $(\tup)$ that don't achieve the global optimum.
A direction of improvement is a tuple $(\Delta\ten S, \Delta\bm A, \Delta\bm B, \Delta\bm C)$ such that
\[
f(\ten S+\epsilon \Delta\ten S,\bm A + \epsilon \Delta\bm A,\bm B + \epsilon \Delta\bm B, \bm C + \epsilon \Delta\bm C) < f(\tup)
\]
for all sufficiently small $\epsilon > 0$. Clearly, if a point $(\tup)$ has a direction of improvement, then it cannot be a local minimum.
Throughout the section,
let $\bm P_1$ ($\bm P_2, \bm P_3$) be the projection onto the column span of $\ten T_{(1)}$ ($\ten T_{(2)}, \ten T_{(3)}$).
Let $\bm A_1 = \bm A\bm P_1$ and $\bm A_2 = \bm A(\bm I-\bm P_1)$ (similarly for $\bm B,\bm C$). The proof works in the following 4 steps:
\vspace*{0.1in}
{\noindent \bf Bounding the regularizer} First we show that when $\nabla f = \bm 0$, the regularizer $R$ must be equal to 0 (Lemma~\ref{lem:nonzeroreg} in Section~\ref{sec:nonzeroreg}). At a high level, this is because the gradient of regularizer $R$ is always orthogonal to the gradient of main term $L$. Therefore if the gradient of the entire objective is $\bm 0$, the gradient of $R$ must also be $\bm 0$. We complete the proof by showing that $\nabla R = \bm 0$ implies $R = 0$.
\vspace*{0.1in}
{\noindent \bf Removing extraneous directions}
Next, we show that when $\nabla f = \bm 0$, the projections onto the wrong subspaces $\bm A_2,\bm B_2, \bm C_2$ are all equal to $\bm 0$.
This is because directly removing the component in the wrong subspace $\bm A_2$ gives a direction of improvement (see Lemma~\ref{lem:a21}).
\vspace*{0.1in}
{\noindent \bf Adding missing directions} After the previous steps, we know that the rows of $\bm A$ are in the column span of $\ten T_{(1)}$. However, the row span of $\bm A$ might be smaller. In this case, there exist directions $\bm a, \bm b, \bm c$ such that $\ten T(\bm a,\bm b,\bm c) > 0$, and $\bm A\bm a = \bm 0$. We will show that in this case we can always add the missing directions into $\bm A$ and $\ten S$.
This is the most technical part of our proof, and high order stationary points may appear when $\bm B\bm b$ or $\bm C\bm c$ are also $\bm 0$. See Section~\ref{sec:missingdirections}.
\vspace*{0.1in}
{\noindent \bf Fixing $\ten S$} Finally, we know that the components $\bm A,\bm B, \bm C$ must span the correct subspaces. Our final step shows that in this case, if $L > 0$ then it is easy to find a direction of improvement, see Section~\ref{sec:handlings}.
\subsection{Direction of Improvement for Points with Nonzero Regularizer}\label{sec:nonzeroreg}
We show any point with nonzero regularizer must also have a nonzero gradient, therefore the (negative) gradient itself is a direction of improvement.
\begin{lemma} \label{lem:nonzeroreg}
For any $\tup$, if $R(\tup) > 0$ then $\|\nabla f\| > 0$.
\end{lemma}
To prove this, we first show that if the regularizer is nonzero, then its gradient is nonzero.
\begin{lemma}
\label{lem:reg_gradient}
The function $\reg$ satisfies
\[
4\reg(\tup) = \langle \nabla_{\bm A} \reg, \bm A\rangle + \langle \nabla_{\bm B} \reg, \bm B\rangle + \langle \nabla_{\bm C} \reg, \bm C\rangle + \langle \nabla_{\ten S} \reg, \ten S\rangle
\]
\end{lemma}
\begin{proof}
Note the following calculations:
\begin{align*}
\langle \nabla_{\bm A} \reg, \bm A\rangle &= \langle 4(\bm A \bm A^\top- \ten S_{(1)}\ten S_{(1)}^\top)\bm A,\bm A\rangle\\
&= 4\langle \bm A \bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top,\bm A\bm A^\top\rangle\\
\langle 4\ten S(\ten S_{(1)}\ten S_{(1)}^\top - \bm A\bm A^\top,\bm I,\bm I), \ten S\rangle &= -4\langle \bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top,\ten S_{(1)}\ten S_{(1)}^\top\rangle
\end{align*}
The left-hand side above is one of the terms in $\nabla_{\ten S}\reg$. Doing the same calculation for the other modes and then adding everything together yields the result.
\end{proof}
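Lemma~\ref{lem:reg_gradient} is an instance of Euler's identity for the degree-4 homogeneous function $\reg$. This homogeneity is easy to confirm numerically (an illustrative sketch of ours, with random dimensions): scaling every argument by $c$ scales $\reg$ by $c^4$.

```python
import numpy as np

def phi(S, A, B, C):
    # The regularizer phi: sum of squared Frobenius norms of the mode-wise mismatches.
    flat = lambda X, m: np.moveaxis(X, m, 0).reshape(X.shape[m], -1)
    return sum(np.linalg.norm(M @ M.T - flat(S, m) @ flat(S, m).T) ** 2
               for m, M in enumerate((A, B, C)))

rng = np.random.default_rng(5)
r, d = 2, 4
S = rng.standard_normal((r, r, r))
A, B, C = (rng.standard_normal((r, d)) for _ in range(3))

# phi is homogeneous of degree 4, which is exactly what Euler's identity exploits:
assert np.isclose(phi(2 * S, 2 * A, 2 * B, 2 * C), 16 * phi(S, A, B, C))
```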
We next show that the gradient of the regularizer is always orthogonal to the gradient of the main term (i.e. the tensor loss $L$).
\begin{lemma}
\label{lem:reg_orthogonality}
For any $\tup$, $\langle \nabla L(\tup), \nabla R(\tup)\rangle = 0$.
\end{lemma}
\begin{proof}
We start by calculating the partial gradients for $L$ and $\reg$. We have
\begin{align*}
&\begin{aligned}[c]
\nabla_{\bm A} L &= 2\ten S_{(1)}(\bm B\otimes \bm C)(\diff)_{(1)}^\top\\
\nabla_{\bm B} L &= 2\ten S_{(2)}(\bm A\otimes \bm C)(\diff)_{(2)}^\top\\
\nabla_{\bm C} L &= 2\ten S_{(3)}(\bm A\otimes \bm B)(\diff)_{(3)}^\top\\
\end{aligned}\quad
\begin{aligned}[c]
\nabla_{\bm A} \reg &= 4(\bm A\bm A^\top- \ten S_{(1)}\ten S_{(1)}^\top)\bm A\\
\nabla_{\bm B} \reg &= 4(\bm B \bm B^\top - \ten S_{(2)}\ten S_{(2)}^\top)\bm B\\
\nabla_{\bm C} \reg &= 4(\bm C\bm C^\top - \ten S_{(3)}\ten S_{(3)}^\top)\bm C\\
\end{aligned}\\
&\nabla_{\ten S} L = 2(\tuk{S}{A}{B}{C}-\ten T)(\bm A^\top,\bm B^\top,\bm C^\top)\\
&\nabla_{\ten S} \reg = 4\ten S(\ten S_{(1)}\ten S_{(1)}^\top - \bm A\bm A^\top,\bm I,\bm I) + 4\ten S(\bm I,\ten S_{(2)}\ten S_{(2)}^\top-\bm B\bm B^\top,\bm I)\\
&\qquad\quad+4\ten S(\bm I,\bm I,\ten S_{(3)}\ten S_{(3)}^\top-\bm C\bm C^\top)
\end{align*}
We now compute the following:
\begin{align*}
\langle \nabla_{\bm A} L, \nabla_{\bm A} \reg \rangle &=
8\langle \ten S_{(1)}(\bm B\otimes \bm C)(\diff)_{(1)}^\top, (\bm A\bm A^\top- \ten S_{(1)}\ten S_{(1)}^\top)\bm A \rangle\\
&= 8\langle (\diff)(\bm A^\top,\bm B^\top,\bm C^\top), \ten S(\bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top,\bm I,\bm I)\rangle
\end{align*}
From here it is easy to see that
$\langle \nabla_{\ten S} L, \nabla_{\ten S} \reg\rangle = - \langle \nabla_{\bm A} L, \nabla_{\bm A} \reg\rangle - \langle \nabla_{\bm B} L, \nabla_{\bm B} \reg\rangle - \langle \nabla_{\bm C} L, \nabla_{\bm C} \reg\rangle$, therefore $\langle \nabla L, \nabla \reg\rangle = 0$.
Since $\nabla R = 2\reg\nabla \reg$, the result follows.
\end{proof}
Now we are ready to prove Lemma~\ref{lem:nonzeroreg}:
\begin{proof}
By Lemma \ref{lem:reg_orthogonality}, we know
$\|\nabla f\|_F^2 = \|\nabla L\|_F^2 + \|\nabla R\|_F^2$.
On the other hand, by Lemma \ref{lem:reg_gradient} and an application of the Cauchy-Schwarz inequality, we see that
\[
\|\nabla \reg\|_F\|(\tup)\|_F \geq 4\reg(\tup),
\]
which means that $\|\nabla \reg\|_F > 0$ whenever $R = \reg^2 > 0$.
But $\nabla R = 2\reg\nabla \reg$, so we have that $\|\nabla R\|_F > 0$, whence $\nabla f \neq \bm 0$.
\end{proof}
To facilitate later proofs, we also show that if one perturbs a solution with 0 regularizer, then the regularizer remains very small.
\begin{lemma}\label{lem:reg_perturb}
If $R = 0$, and $\|\Delta \bm A\|_F+\|\Delta \bm B\|_F + \|\Delta \bm C\|_F+\|\Delta \ten S\|_F \le O(1)$, then $R(\ten S+\epsilon\Delta \ten S, \bm A+\epsilon \Delta \bm A, \bm B+\epsilon \Delta \bm B, \bm C+\epsilon \Delta \bm C) = O(\epsilon^4)$ for sufficiently small $\epsilon$.
\end{lemma}
\begin{proof}
It suffices to check that $\|(\bm A+\epsilon \Delta \bm A)(\bm A+\epsilon \Delta \bm A)^\top - (\ten S+\epsilon\Delta \ten S)_{(1)}(\ten S+\epsilon\Delta \ten S)_{(1)}^\top\|_F = O(\epsilon)$, as the other two terms are bounded symmetrically, and $R = \reg^2$ is of degree 4 in these terms. This is clear because
we know $\|\bm A\bm A^\top - \ten S_{(1)}\ten S_{(1)}^\top\|_F = 0$ (since $R=0$), and all the remaining terms are bounded by $O(\epsilon)$.
\end{proof}
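The $O(\epsilon^4)$ rate of Lemma~\ref{lem:reg_perturb} can be observed numerically. Below we construct a point with $R=0$ by matching each factor to a zero-padded Cholesky factor of the corresponding $\ten S_{(i)}\ten S_{(i)}^\top$ (an illustrative construction of ours, not from the paper), perturb it randomly, and check that shrinking $\epsilon$ by $10$ shrinks $R$ by roughly $10^4$:

```python
import numpy as np

rng = np.random.default_rng(7)
r, d = 2, 5
S = rng.standard_normal((r, r, r))
flat = lambda X, m: np.moveaxis(X, m, 0).reshape(X.shape[m], -1)

def factor_matching(Sm, d):
    # Build an r x d matrix M with M M^T = Sm Sm^T (pad a Cholesky factor with zeros).
    L = np.linalg.cholesky(Sm @ Sm.T)
    return np.hstack([L, np.zeros((Sm.shape[0], d - Sm.shape[0]))])

A, B, C = (factor_matching(flat(S, m), d) for m in range(3))

def R(S, A, B, C):
    phi = sum(np.linalg.norm(M @ M.T - flat(S, m) @ flat(S, m).T) ** 2
              for m, M in enumerate((A, B, C)))
    return phi ** 2

assert np.isclose(R(S, A, B, C), 0.0)  # the constructed point has zero regularizer

dS = rng.standard_normal(S.shape)
dA, dB, dC = (rng.standard_normal((r, d)) for _ in range(3))

def R_eps(eps):
    return R(S + eps * dS, A + eps * dA, B + eps * dB, C + eps * dC)

# R grows like eps^4: shrinking eps by 10 shrinks R by roughly 10^4.
ratio = R_eps(1e-2) / R_eps(1e-3)
assert 5e3 < ratio < 2e4
```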
\subsection{Removing Extraneous Directions}\label{sec:extradirections}
In this section, we show that if $\bm A$ (respectively $\bm B$, $\bm C$) has a direction in its row-space that is perpendicular to the column-space of $\ten T_{(1)}$ (respectively $\ten T_{(2)}$, $\ten T_{(3)}$), then we have a direction of improvement.
In particular, our goal is to show $\bm A_2 = \bm 0$ for all local minima (symmetric arguments will then show $\bm B_2 = \bm C_2 = \bm 0$). We first show that $\ten S(\bm A_2,\bm B,\bm C) = \bm 0$.
\begin{lemma} \label{lem:a21}
Assume that $R(\tup) = 0$. If $\ten S(\bm A_2,\bm B,\bm C) \neq \bm 0$, then $\Delta \bm A = -\bm A_2$ is a direction of improvement.
\end{lemma}
\begin{proof}
Set $\Delta \bm A = - \bm A_2$.
Then for $\epsilon > 0$
\begin{align*}
L(\ten S,\bm A+\epsilon \Delta \bm A,\bm B,\bm C) &= \|\diff + \epsilon \ten S(\Delta \bm A,\bm B,\bm C)\|_F^2\\
&= L(\tup) -2\epsilon\|\ten S(\bm A_2,\bm B,\bm C)\|_F^2 + O(\epsilon^2),
\end{align*}
since $\langle \diff, \ten S(\bm A_2,\bm B,\bm C)\rangle = \langle \ten S(\bm A_2,\bm B,\bm C), \ten S(\bm A_2,\bm B,\bm C)\rangle$.
Hence, for all sufficiently small $\epsilon$, $L(\ten S,\bm A+\epsilon \Delta \bm A,\bm B,\bm C) < L(\tup)$.
By Lemma~\ref{lem:reg_perturb} we know $R(\ten S,\bm A+\epsilon \Delta \bm A, \bm B, \bm C) = O(\epsilon^4)$.
Hence, for sufficiently small $\epsilon$, the decrease in $L$ will exceed any increase in $R$.
This shows that $\Delta \bm A$ is a direction of improvement.
\end{proof}
We next establish that $R(\tup) =0$ and $\ten S(\bm A_2,\bm B,\bm C) = \bm 0$ together imply that $\bm A_2 = \bm 0$.
\begin{lemma} \label{lem:zeroa2}
If $R(\tup) = 0$ and $\ten S(\bm A_2,\bm B,\bm C) = \bm 0$, then $\bm A_2 = \bm 0$.
\end{lemma}
\begin{proof}
Since $R(\tup) = 0$, we have $\bm B\bm B^\top = \ten S_{(2)} \ten S_{(2)}^\top$ and $\bm C\bm C^\top = \ten S_{(3)} \ten S_{(3)}^\top$. This means the column span of $\ten S_{(2)}$ ($\ten S_{(3)}$) is the same as column span of $\bm B$ ($\bm C$).
Let $\bm B^+$ and $\bm C^+$ denote the pseudoinverses of $\bm B$ and $\bm C$.
Note that the orthogonal projections onto the column-space of $\bm B$ and $\bm C$ are given by
$\bm P_{\bm B} := \bm B\bm B^+$ and $\bm P_{\bm C} := \bm C\bm C^+$, respectively.
Using these facts along with $\ten S(\bm A_2,\bm B,\bm C) =\bm 0$, we have
\begin{align*}
\bm 0 &= \ten S(\bm A_2,\bm B,\bm C)(\bm I,\bm B^+,\bm C^+) = \ten S(\bm A_2,\bm P_{\bm B},\bm P_{\bm C}) = \ten S(\bm A_2,\bm I,\bm I).
\end{align*}
Using the fact that $\ten S_{(1)}\ten S_{(1)}^\top = \bm A \bm A^\top = \bm A_1 \bm A_1^\top + \bm A_2 \bm A_2^\top$, we have
\[
\|\bm A_2\bm A_2^\top\|_F^2 \le \langle\bm A_2\bm A_2^\top, \ten S_{(1)}\ten S_{(1)}^\top\rangle = \|\ten S(\bm A_2, \bm I,\bm I)\|_F^2 = 0,
\]
which, in particular, means that $\bm A_2 = \bm 0$.
\end{proof}
\subsection{Adding Missing Directions}
\label{sec:missingdirections}
We now consider the case where the row-spans of $\bm A$, $\bm B$, and $\bm C$ are not equal to the column-spans of $\ten T_{(1)}$,$ \ten T_{(2)}$, and $\ten T_{(3)}$, respectively. Again by symmetry, we focus on the case when row-span of $\bm A$ is not equal to column-span of $\ten T_{(1)}$.
\begin{lemma}\label{lem:missingdirections} If the row-span of $\bm A$ is a strict subset of the column-span of $\ten T_{(1)}$ and $R = 0$, then there is a direction of improvement.
\end{lemma}
\begin{proof}
If the row-span of $\bm A$ is a strict subset of column-span of $\ten T_{(1)}$, we must have a vector $\bm a$ that is in the column-span of $\ten T_{(1)}$, but $\bm A \bm a = \bm 0$. For this vector we know $\ten T(\bm a, \bm I, \bm I) \ne \bm 0$, therefore there must exist vectors $\bm b,\bm c$ such that $\ten T(\bm a,\bm b, \bm c) > 0$.
This is true even if we restrict $\bm b$ to be either in the row span of $\bm B$ or to satisfy $\bm B \bm b = \bm 0$ (and similarly for $\bm c$), as we can decompose the matrix $\ten T(\bm a, \bm I, \bm I)$ into four parts based on the projections of its columns onto the row span of $\bm B$ (and of its rows onto the row span of $\bm C$).
In particular, if we let $\bm b_1$ be the projection of $\bm b$ onto the row-span of $\bm B$ and $\bm c_1$ be the projection of $\bm c$ onto the row-span of $\bm C$, and set $\bm b_2 = \bm b - \bm b_1$ and $\bm c_2 = \bm c - \bm c_1$, then we have $\ten T(\bm a, \bm b, \bm c) = \sum_{i,j\in \{1,2\}} \ten T(\bm a, \bm b_i, \bm c_j).$
Hence, $\ten T(\bm a, \bm b_i, \bm c_j) > 0$ for some choice of $i, j \in \{1,2\}$.
\vspace*{0.1in}
{\noindent \bf One missing direction}
In this case $\bm b$ and $\bm c$ are in row span of $\bm B,\bm C$ respectively.
Choose unit vectors $\bm u,\bm v,\bm w\in \mathbb{R}^r$ such that $\bm A^\top \bm u = \bm 0$, $\bm B^\top \bm v = \alpha_1 \bm b$, and $\bm C ^\top \bm w = \alpha_2\bm c$, where $\alpha_1$ and $\alpha_2$ are positive real numbers.
Consider the directions $\Delta \bm A =\bm u\bm a^\top$, $\Delta \ten S = \bm u\otimes \bm v\otimes \bm w$.
Observe that $\Delta \point = \bm A^\top\bm u\otimes \bm B^\top\bm v\otimes \bm C^\top\bm w = \bm 0$ and $\ten S(\Delta \bm A,\bm B, \bm C) = \bm 0$ since the column-space of $\ten S_{(1)}$ is equal to the column-space of $\bm A$.
Moreover, $\Delta \ten S(\Delta \bm A, \bm B, \bm C) = \bm a\otimes \bm B^\top \bm v\otimes \bm C^\top \bm w = \alpha_1\alpha_2\bm a\otimes \bm b \otimes \bm c$.
Hence, for $\epsilon > 0$, we have
\begin{align*}
L(\ten S + \epsilon \Delta \ten S, \bm A + \epsilon \Delta \bm A, \bm B, \bm C) &= \|\diff + \epsilon^2\Delta\ten S(\Delta \bm A, \bm B, \bm C)\|_F^2\\
&= L(\tup)-2\epsilon^2\alpha_1\alpha_2\ten T(\bm a,\bm b,\bm c) + O(\epsilon^4).
\end{align*}
On the other hand, by Lemma~\ref{lem:reg_perturb} we have $R(\ten S + \epsilon\Delta\ten S, \bm A + \epsilon \Delta \bm A, \bm B, \bm C) = O(\epsilon^4)$, since $R(\tup) = 0$.
Hence, for small enough $\epsilon$, the improvement in the tensor loss dominates all other perturbations, so we have a direction of improvement.
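To make the bookkeeping above concrete, here is a small numerical sanity check in Python. It is not part of the original argument: the matrices, core, and vectors below are illustrative choices (with $r = d = 3$ and $\alpha_1 = \alpha_2 = 1$), arranged so that all cross terms vanish exactly and the $O(\epsilon^4)$ remainder is exactly $\epsilon^4$.

```python
import numpy as np

rng = np.random.default_rng(0)

def tucker(core, A, B, C):
    # Reconstruction S(A, B, C): point_{ijk} = sum_{pqr} S_{pqr} A_{pi} B_{qj} C_{rk}
    return np.einsum('pqr,pi,qj,rk->ijk', core, A, B, C)

d = 3
T = rng.standard_normal((d, d, d))
A = np.diag([1.0, 1.0, 0.0])          # row-span of A misses a = e3
B, C = np.eye(d), np.eye(d)
S = rng.standard_normal((d, d, d))
S[2, :, :] = 0.0                      # makes u^T S_(1) = 0, so S(dA, B, C) = 0

a = b = c = u = v = w = np.eye(d)[2]  # A a = 0, A^T u = 0, B^T v = b, C^T w = c
dA = np.outer(u, a)
dS = np.einsum('i,j,k->ijk', u, v, w)

def L(core, A_, B_, C_):
    return float(np.sum((tucker(core, A_, B_, C_) - T) ** 2))

L0 = L(S, A, B, C)
Tabc = float(np.einsum('ijk,i,j,k->', T, a, b, c))
for eps in (0.3, 0.1, 0.01):
    lhs = L(S + eps * dS, A + eps * dA, B, C)
    rhs = L0 - 2 * eps**2 * Tabc + eps**4
    assert abs(lhs - rhs) < 1e-9
```

In this toy instance the third mode-1 slice of the core is zeroed out to mimic the span alignment implied by $R = 0$, so the expansion is an exact identity rather than an asymptotic one.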
\vspace*{0.1in}
{\noindent \bf Two missing directions}
Now assume that $\bm A\bm a = \bm B\bm b = \bm 0$, and $\bm c$ is in the row span of $\bm C$.
Choose unit vectors $\bm u,\bm v,\bm w \in \mathbb{R}^r$ such that $\bm A^\top \bm u = \bm B^\top \bm v =\bm 0$ and $\bm C^\top \bm w = \alpha \bm c$ where $\alpha > 0$.
Consider the directions $\Delta \bm A = \bm u\bm a^\top$, $\Delta \bm B = \bm v\bm b^\top$, $\Delta \ten S = \bm u\otimes \bm v\otimes \bm w$.
By a calculation very similar to the previous case,
\[
L(\ten S+\epsilon \Delta \ten S, \bm A+\epsilon \Delta \bm A, \bm B+\epsilon \Delta \bm B,\bm C) = L(\tup) - 2\epsilon^3\alpha \ten T(\bm a,\bm b,\bm c) + \epsilon^6\alpha^2.
\]
As before, by Lemma~\ref{lem:reg_perturb} $R(\ten S + \epsilon\Delta\ten S, \bm A + \epsilon \Delta \bm A, \bm B +\epsilon \Delta \bm B, \bm C) = O(\epsilon^4)$.
Hence, the decrease in the tensor loss dominates all other perturbations for sufficiently small $\epsilon$, and so this is a direction of improvement. Note that in this case the amount of improvement is $\Theta(\epsilon^3)$, so the point is a 3rd order saddle point.
The case where $\bm C \bm c = \bm 0$ and $\bm b$ is in the row-span of $\bm B$ is similar, and likewise yields a direction of improvement.
\vspace*{0.1in}
\noindent
{\bf Three missing directions}
Now assume that $\bm A\bm a = \bm B\bm b = \bm C\bm c = \bm 0$, and choose unit vectors $\bm u,\bm v,\bm w \in \mathbb{R}^r$ such that $\bm A^\top \bm u = \bm B^\top \bm v = \bm C^\top \bm w = \bm 0$.
Consider the directions $\Delta \bm A = \bm u\bm a^\top$, $\Delta \bm B = \bm v\bm b^\top$, $\Delta \bm C = \bm w\bm c^\top$, and $\Delta \ten S = \bm u\otimes\bm v\otimes \bm w$.
Once again, most perturbations in the tensor loss vanish, and we have
\[
L(\ten S+\epsilon \Delta \ten S, \bm A+\epsilon\Delta \bm A, \bm B+\epsilon \Delta \bm B,\bm C+\epsilon \Delta \bm C) = L(\tup) - 2\epsilon^4 \ten T(\bm a,\bm b,\bm c) + \epsilon^8.
\]
In this case, the regularizer doesn't change at all,
since $\Delta \ten S_{(i)}\ten S_{(i)}^\top = \bm 0$ for $i = 1,2,3$, $\Delta \bm A \bm A^\top = \Delta \bm B \bm B^\top = \Delta \bm C \bm C^\top =\bm 0$,
and $\Delta \bm A\Delta \bm A^\top - \Delta \ten S_{(1)}\Delta \ten S_{(1)}^\top = \bm u\bm u^\top - \bm u \bm u^\top =\bm 0$ (and the two other analogous terms likewise vanish).
Hence, for sufficiently small $\epsilon$, the objective function decreases, so this is a direction of improvement. This point is a 4th order saddle point.
\end{proof}
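The three-missing-directions case can also be checked numerically. The following Python sketch is an illustrative toy instance, not the general setting of the proof: all factors are $3\times 3$ and the core has its index-3 slices zeroed in every mode. It verifies the exact identity $L(\tup) - 2\epsilon^4\ten T(\bm a,\bm b,\bm c) + \epsilon^8$ and the vanishing of the regularizer perturbation terms claimed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def tucker(core, A, B, C):
    return np.einsum('pqr,pi,qj,rk->ijk', core, A, B, C)

d = 3
T = rng.standard_normal((d, d, d))
A = B = C = np.diag([1.0, 1.0, 0.0])
S = rng.standard_normal((d, d, d))
S[2, :, :] = S[:, 2, :] = S[:, :, 2] = 0.0    # span alignment implied by R = 0

a = b = c = u = v = w = np.eye(d)[2]
dA = dB = dC = np.outer(u, a)
dS = np.einsum('i,j,k->ijk', u, v, w)

# Pieces of the regularizer perturbation all vanish, as claimed.
assert np.allclose(dA @ A.T, 0)
assert np.allclose(np.einsum('ijk,ljk->il', dS, S), 0)   # dS_(1) S_(1)^T = 0

def L(core, A_, B_, C_):
    return float(np.sum((tucker(core, A_, B_, C_) - T) ** 2))

L0 = L(S, A, B, C)
Tabc = float(T[2, 2, 2])       # T(a, b, c) with a = b = c = e3
for eps in (0.5, 0.2, 0.05):
    lhs = L(S + eps * dS, A + eps * dA, B + eps * dB, C + eps * dC)
    assert abs(lhs - (L0 - 2 * eps**4 * Tabc + eps**8)) < 1e-9
```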
\subsection{Improving the core tensor}
\label{sec:handlings}
We finally consider the case where the matrices $\bm A, \bm B, \bm C$ have the correct row-spaces but $\point \neq\ten T$. In this situation, we can make
progress by changing only $\ten S$.
\begin{lemma}\label{lem:handlings}
If $R = 0$ and the row-spans of $\bm A,\bm B,\bm C$ are equal to the column-spans of $\ten T_{(1)},\ten T_{(2)}, \ten T_{(3)}$, respectively, but $L > 0$, then there exists a direction of improvement.
\end{lemma}
\begin{proof}
Since the spans of $\bm A,\bm B, \bm C$ are already correct, let $\bm A^+, \bm B^+, \bm C^+$ denote the pseudoinverses of $\bm A, \bm B, \bm C$. Setting $\ten S' = \ten T(\bm A^+, \bm B^+, \bm C^+)$, we have $\ten S'(\bm A,\bm B,\bm C) = \ten T$.
Consider the direction $\Delta \ten S = \ten S' - \ten S$.
\begin{align*}
L(\ten S+\epsilon \Delta \ten S, \bm A, \bm B, \bm C) &= \|(1-\epsilon)\point - (1-\epsilon)\ten T\|_F^2\\
&= (1-\epsilon)^2L(\tup).
\end{align*}
For the regularizer $R$, again by Lemma~\ref{lem:reg_perturb} we have $R(\ten S+\epsilon \Delta \ten S, \bm A, \bm B, \bm C) = O(\epsilon^4)$. Hence, this is a direction of improvement.
\end{proof}
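A quick numerical illustration of this construction (an assumed sketch; the dimensions and random instance are arbitrary) builds $\ten T$ with the correct spans by design, forms $\ten S' = \ten T(\bm A^+,\bm B^+,\bm C^+)$, and checks both $\ten S'(\bm A,\bm B,\bm C)=\ten T$ and the exact scaling $L(\ten S+\epsilon \Delta \ten S, \bm A, \bm B, \bm C) = (1-\epsilon)^2 L(\tup)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def tucker(core, A, B, C):
    return np.einsum('pqr,pi,qj,rk->ijk', core, A, B, C)

r, d = 3, 4
A, B, C = (rng.standard_normal((r, d)) for _ in range(3))
G = rng.standard_normal((r, r, r))
T = tucker(G, A, B, C)      # row-spans of A, B, C match the spans of T by construction

S = rng.standard_normal((r, r, r))          # wrong core, so L > 0 generically
Sp = np.einsum('ijk,ip,jq,kr->pqr', T,
               np.linalg.pinv(A), np.linalg.pinv(B), np.linalg.pinv(C))
assert np.allclose(tucker(Sp, A, B, C), T)  # S'(A, B, C) = T

def L(core):
    return float(np.sum((tucker(core, A, B, C) - T) ** 2))

L0, dS = L(S), Sp - S
for eps in (0.5, 0.1, 0.01):
    assert abs(L(S + eps * dS) - (1 - eps) ** 2 * L0) < 1e-8 * max(L0, 1.0)
```

Since the reconstruction is linear in the core, the $(1-\epsilon)^2$ scaling holds exactly, not just to leading order.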
\subsection{Proof of Main Theorem}
Now with all the lemmas we are ready to prove the main theorem:
\begin{proof}[Proof of Theorem~\ref{thm:exact}] The theorem follows immediately from the preceding sequence of lemmas.
First, by Lemma~\ref{lem:nonzeroreg}, any local minimum must satisfy $R = 0$. Next, by Lemma~\ref{lem:zeroa2} and Lemma~\ref{lem:a21}, the row-spans of $\bm A$, $\bm B$, $\bm C$ must be subsets of the column-spans of $\ten T_{(1)}, \ten T_{(2)}, \ten T_{(3)}$, respectively. Third, by Lemma~\ref{lem:missingdirections}, the row-spans of $\bm A$, $\bm B$, $\bm C$ must in fact be equal to the column-spans of $\ten T_{(1)}, \ten T_{(2)}, \ten T_{(3)}$, respectively. Finally, by Lemma~\ref{lem:handlings}, the loss function must equal 0 at any local minimum.
\end{proof}
\section{Introduction}
Named Data Networking (NDN)~\cite{zhang2010named} is a network layer protocol that is being actively researched with the hope of serving as a replacement for the IP protocol. nTorrent~\cite{mastorakis2017ntorrent} is an NDN peer-to-peer file sharing application. The current implementation runs with a few modifications to the base ns-3 network simulator in order to compile and run successfully. The idea behind this paper is to extend the functionality of nTorrent and make it run on top of ndnSIM~\cite{mastorakis2017evolution, mastorakis2016ndnsim, mastorakis2015ndnsim} that features full integration with the NDN Forwarding Daemon (NFD)~\cite{nfd-dev} for simulations.
Our code is available at \url{https://github.com/akshayraman/scenario-ntorrent}.
\section*{Acknowledgments}
We would like to thank Spyridon Mastorakis for providing us with all the help needed to pursue this project.
\bibliographystyle{plain}
\section{Related Work}
nTorrent has been designed to have a hierarchical file structure. At the top of the hierarchy, the .torrent file (Figure~\ref{Figure:torrent-file}) contains the name, size, and type of the torrent file, and the signature of the original publisher. It also includes the names of the file manifests that make up the torrent file. Each file manifest (Figure~\ref{Figure:file-manifest}) contains its name, the signature of the original publisher, and a list of names of the packets that make up that particular file. Using these names, Interest packets can be sent out to request the corresponding files or packets.
The fetching strategy currently implemented requests packets in order, from the first packet of the first file to the last packet of the last file. Each name also follows a naming convention that makes it easy to identify the name of the torrent file, the file names, and the individual packets in a file. This name is also used by the routers to verify the integrity of the packet.
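The in-order fetching strategy can be sketched as follows. Note that the name format below is a hypothetical simplification for illustration only; the actual nTorrent naming convention carries additional components (such as version and signature-related fields).

```python
def packet_names(torrent, files):
    """Yield packet names from the first packet of the first file
    to the last packet of the last file (in-order fetching)."""
    for fname, n_packets in files:
        for seq in range(n_packets):
            yield f"/{torrent}/{fname}/{seq}"

# Hypothetical torrent with two files of 2 and 3 packets.
names = list(packet_names("demo-torrent", [("fileA", 2), ("fileB", 3)]))
assert names[0] == "/demo-torrent/fileA/0"
assert names[-1] == "/demo-torrent/fileB/2"
```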
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/torrent-file}
\caption{\small Structure of a torrent-file}
\label{Figure:torrent-file}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/manifest}
\caption{\small Structure of a file manifest}
\label{Figure:file-manifest}
\end{figure}
|
\section{Preliminaries}
Resolving the tension between quantum theory and relativity is one of the most important problems of modern physics. In this paper the issue is presented mathematically and the goal is to find a formalism that unifies the mathematics used in both theories. More specifically, the challenge is to find a meaningful way of ``encoding'' the differential geometry of finite dimensional pseudo-Riemannian manifolds into the theory of infinite dimensional Hilbert spaces.
Provided such a ``unified'' formalism exists, it may be useful, in particular, in studying the issues of compatibility of quantum theory and relativity and the problem of emergence of the classical world in quantum theory.
The first thing that comes to mind is that the theory of representations of groups provides a partial answer to the challenge. Indeed, it allows one to represent symmetries of a physical system in terms of linear transformations on the Hilbert space of states of the system. For instance, symmetries of Minkowski space-time can be represented by unitary transformations on the Hilbert space of states of a relativistic system. However, despite the undeniable significance of representations in physics there is more to the problem than representations of groups can provide. For instance, if the symmetry group (i.e., group of isometries) of a Riemannian manifold is trivial, representations of the group do not contain any information about the manifold.
On the other hand, there is a link between the topology of a space and the algebra of continuous functions on the space that may be useful for tackling the problem. Namely, the
celebrated Gel'fand-Kolmogorov theorem (Ref.\cite{Gel}) states that an arbitrary compact Hausdorff space $X$ is homeomorphic to the space of all evaluation homomorphisms (delta functions) in the infinite-dimensional vector space dual to the Banach algebra $C(X)$ of continuous functions on $X$. In other words, $X$ can be identified with the set of all delta functions in the space dual to $C(X)$. However, $C(X)$ is not a Hilbert space, the topology carries less structure than a Riemannian metric, and the condition of compactness is too restrictive. In addition, the fact that elements of $C(X)$ are functions on $X$ makes it difficult to use $C(X)$ independently of $X$. This is a problem if one has in mind the goal of deriving the classical from the quantum theory.
In the case of a single particle system in $\mathbb{R}^{3}$ the most obvious physically meaningful embedding of $\mathbb{R}^{3}$ into the space of states of a particle is by identifying a point ${\bf a}$ in $\mathbb{R}^{3}$ with the state $\delta^{3}_{\bf a}({\bf x})=\delta^{3}({\bf x}-{\bf a})$ of the particle found at ${\bf a}$. This embedding was usefully explored in Refs.\cite{Kryukov},\cite{Kryukov3} to develop a geometric approach to quantum mechanics. Results of Ref.\cite{KryukovJMP} prove that an embedding of this kind is also ideally suited for addressing the issues of the unification of quantum mechanics and special relativity. In the present paper the main results of Ref.\cite{KryukovJMP} are summarized, updated and extended to include curved space-time manifolds of general relativity.
\section{Quantum mechanics and special relativity}
Delta functions are not in the common space $L_{2}(\mathbb{R}^{3})$ of Lebesgue square-integrable functions on $\mathbb{R}^{3}$. So to use this correspondence one first needs to find a Hilbert space of functions that contains delta functions and that is ``approximately equal'' to the space $L_{2}(\mathbb{R}^{3})$. The following theorem takes care of this task (see Refs.\cite{Kryukov3},\cite{KryukovJMP} for details on results in this section and Refs.\cite{Gel-Kos},\cite{Gross} for related original publications on rigged Hilbert spaces).
\begin{thm}
\label{1}
The Hilbert space ${\bf H}$ obtained by completing the space $L_{2}(\mathbb{R}^{3})$ in the metric defined by the inner product
\begin{equation}
\label{inner}
(\varphi, \psi)_{{\bf H}}=\left(\frac{L}{\sqrt {2\pi}}\right)^{3}\int e^{-\frac{L^{2}}{2}({\bf x}-{\bf y})^{2}}\varphi({\bf x}){\overline \psi({\bf y})} d^{3}{\bf x}d^{3}{\bf y}
\end{equation}
with a positive constant $L$ contains delta functions and their derivatives. Furthermore, for a sufficiently large $L$ the ${\bf H}$ and $L_{2}$-norms of any given function $f\in L_{2}(\mathbb{R}^{3})$ are arbitrarily close to each other.
\end{thm}
The map $\omega:{\bf a} \longrightarrow \delta^{3}_{\bf a}$ is one-to-one, so the set $\mathbb{R}^{3}$ can be identified with the set of all delta functions in ${\bf H}$. Moreover, the induced manifold structure and the metric on the image $M_{3}=\omega(\mathbb{R}^{3})\subset {\bf H}$ are those of the Euclidean space $\mathbb{R}^{3}$. In other words,
\begin{thm}
\label{1a}
The map $\omega:{\bf a} \longrightarrow \delta^{3}_{\bf a}$ is an isometric embedding of the space $\mathbb{R}^{3}$ with the Euclidean metric into the space ${\bf H}$ defined in theorem \ref{1}.
\end{thm}
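Since delta functions collapse the double integral in (\ref{inner}) to a closed form, the metric induced on $M_{3}$ can be checked numerically. The sketch below (an illustrative check, not part of the proof) uses $(\delta^{3}_{\bf a},\delta^{3}_{\bf b})_{\bf H}=(L/\sqrt{2\pi})^{3}e^{-\frac{L^{2}}{2}({\bf a}-{\bf b})^{2}}$ and verifies that the infinitesimal distance on $M_{3}$ is proportional to the Euclidean one, with proportionality constant $L(L/\sqrt{2\pi})^{3/2}$; the isometry in the theorem corresponds to absorbing this constant by an appropriate choice of units.

```python
import numpy as np

def inner(a, b, Lc=1.0):
    # (delta_a, delta_b)_H in closed form: the deltas collapse the double integral.
    c = (Lc / np.sqrt(2 * np.pi)) ** 3
    return c * np.exp(-0.5 * Lc**2 * np.sum((a - b) ** 2))

def hdist(a, b, Lc=1.0):
    return np.sqrt(inner(a, a, Lc) - 2 * inner(a, b, Lc) + inner(b, b, Lc))

a = np.array([0.3, -1.2, 0.7])
e = np.array([1.0, 0.0, 0.0])
Lc = 1.0
scale = Lc * (Lc / np.sqrt(2 * np.pi)) ** 1.5     # sqrt(c) * L
for t in (1e-2, 1e-3, 1e-4):
    ratio = hdist(a, a + t * e, Lc) / t           # tends to `scale` as t -> 0
    assert abs(ratio - scale) < 1e-3 * scale
```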
Note that the map $\omega$ is not linear. In particular, because the norm of any delta function $\delta^{3}_{\bf a}$ in ${\bf H}$ is the same, the image $\omega(\mathbb{R}^{3})$ is a submanifold of the sphere in ${\bf H}$. So the vector structure on $\mathbb{R}^{3}$ is not compatible with the vector structure on ${\bf H}$. However, one can introduce a vector structure on the image $M_{3}$ by defining the operations of addition $\oplus$ and multiplication by a scalar $\odot$ via $\omega({\bf a})\oplus\omega({\bf b})=\omega({\bf a}+{\bf b})$ and $\lambda \odot\omega({\bf a})=\omega(\lambda {\bf a})$. Moreover, because $\omega$ is a homeomorphism onto $M_{3}$, these operations are continuous in the topology of $M_{3}\subset {\bf H}$.
The Riemannian structure of the Euclidean space is now ``encoded'' into the Hilbert space ${\bf H}$.
At the same time, the space ${\bf H}$ is approximately equal to the space $L_{2}(\mathbb{R}^{3})$ (symbolically, ${\bf H}\approx L_{2}(\mathbb{R}^{3})$). That is, provided the constant $L$ in theorem \ref{1} is sufficiently large, the ${\bf H}$-norms of typical square-integrable functions will be as close as we wish to their $L_{2}(\mathbb{R}^{3})$ norms.
Accordingly, if ${\bf H}$ is used in place of $L_{2}(\mathbb{R}^{3})$ in the usual quantum mechanics, the expected values, probabilities of transition and other measured quantities remain practically the same, ensuring consistency with experiment. In the following the constant $L$ will be set to one and the needed agreement between spaces ${\bf H}$ and $L_{2}(\mathbb{R}^{3})$ will be achieved by an appropriate choice of units.
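The closeness of the ${\bf H}$- and $L_{2}$-norms for large $L$ can be illustrated numerically. The following sketch is a one-dimensional analogue (an illustrative simplification of the three-dimensional statement of theorem \ref{1}), approximating both norms for a Gaussian test function by quadrature on a grid.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2.0)                      # Gaussian test function
l2 = float(np.sum(f**2) * dx)                # ||f||_{L2}^2, about sqrt(pi)

def hnorm2(Lc):
    # 1D analogue of (1): (L/sqrt(2 pi)) * double integral of
    # exp(-L^2 (x - y)^2 / 2) f(x) f(y) dx dy
    K = (Lc / np.sqrt(2 * np.pi)) * np.exp(-0.5 * Lc**2 * (x[:, None] - x[None, :])**2)
    return float(f @ K @ f) * dx * dx

errs = [abs(hnorm2(Lc) - l2) / l2 for Lc in (1.0, 4.0, 16.0)]
assert errs[0] > errs[1] > errs[2]           # agreement improves with L
assert errs[2] < 1e-2                        # relative error below 1% at L = 16
```

As $L$ grows the kernel approaches a delta function, so the relative discrepancy shrinks (roughly like $1/L^{2}$ for this test function).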
The next step is to extend results of theorems \ref{1}, \ref{1a} to Minkowski space-time.
For this one needs to work with spaces of functions of four variables ${\bf x}, t$. Let ${\widetilde H}$ be the Hilbert space of functions $f$ of four variables $x=({\bf x},t)$ that is the completion of the space $L_{2}(\mathbb{R}^{4})$ in the metric given by the kernel $e^{-\frac{1}{2}(x-y)^{2}}$. It is easy to see that ${\widetilde H}$ is the orthogonal sum of the subspace ${\widetilde H}_{\rm ev}$ of all functions that are even in the time variable $t$ and the subspace ${\widetilde H}_{\rm odd}$ of all functions that are odd in $t$.
The following theorem generalizes the results of theorem \ref{1} to the case of Minkowski space-time.
\begin{thm}
\label{2}
Let $H$ be the set of all functions $f({\bf x},t)=e^{-t^{2}}\varphi({\bf x},t)$ with $\varphi \in \widetilde{H}$. Consider the Hermitian form $(f,g)_{H_{\eta}}$ on $H$ given by
\begin{equation}
\label{innerM}
(f,g)_{H_{\eta}}=\int e^{-\frac{1}{2}({\bf x}-{\bf y})^{2}+\frac{1}{2}(t-s)^{2}}f({\bf x},t){\overline g({\bf y},s)} d^{3}{\bf x}dtd^{3}{\bf y}ds
\end{equation}
and let $(f,f)_{H_{\eta}}\equiv \left\|f\right\|^{2}_{H_{\eta}}$ be the corresponding quadratic form, or the squared $H_{\eta}$-norm.
Then $H$ is exactly the set of functions whose even and odd components have a finite $H_{\eta}$-norm.
Moreover, $H$ furnished with the inner product $(f,g)_{H_{+}}=(\varphi, \psi)_{\widetilde{H}}$, where $f({\bf x},t)=e^{-t^{2}}\varphi({\bf x},t)$, $g({\bf x},t)=e^{-t^{2}}\psi({\bf x},t)$ is a Hilbert space.
The Hermitian form (\ref{innerM}) defines an indefinite, non-degenerate inner product on $H$, such that $\left\|f\right\|^{2}_{H_{\eta}}>0$ for all even functions $f\neq 0$ and $\left\|f\right\|^{2}_{H_{\eta}}<0$ for all odd functions $f \neq 0$ in $H$. Finally, $H$ contains the delta functions $\delta^{4}_{a}(x)=\delta^{4}(x-a)$ and their derivatives.
\end{thm}
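The sign structure in theorem \ref{2} can be illustrated with a one-dimensional analogue (an illustrative simplification: one time variable and no spatial variables, so the indefinite kernel is $e^{+\frac{1}{2}(t-s)^{2}}$). For $f=e^{-t^{2}}\varphi$ the identity $\frac{1}{2}(t-s)^{2}-t^{2}-s^{2}=-\frac{1}{2}(t+s)^{2}$ makes the quadratic form finite for suitably decaying $\varphi$, positive for even $f$ and negative for odd $f$.

```python
import numpy as np

t = np.linspace(-8.0, 8.0, 601)
dt = t[1] - t[0]
K = np.exp(0.5 * (t[:, None] - t[None, :])**2)   # indefinite kernel e^{+(t-s)^2/2}

def q(f):
    # quadrature approximation of the indefinite form
    # int int K(t, s) f(t) f(s) dt ds
    return float(f @ K @ f) * dt * dt

f_even = np.exp(-2.0 * t**2)        # f = e^{-t^2} phi with phi(t) = e^{-t^2}, even
f_odd = t * np.exp(-2.0 * t**2)     # f = e^{-t^2} phi with phi(t) = t e^{-t^2}, odd
assert q(f_even) > 0
assert q(f_odd) < 0
```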
The space $H$ is an example of what is called the {\em Krein space}. A Krein space is a complex vector space $V$ with a Hermitian inner product $(f,g)_{V}$ and such that $V$ is the direct sum of two spaces $H_{1}, H_{2}$ that are Hilbert with respect to the inner products $(f,g)_{V}$ and $-(f,g)_{V}$ respectively and that are orthogonal in the inner product on $V$. The following analogue of theorem \ref{1a} is valid:
\begin{thm}
\label{3}
The map $\omega: N \longrightarrow H$, $\omega(a)=\delta^{4}_{a}$ is an embedding that identifies the Minkowski space $N$ with the submanifold $M_{4}$ of $H$ of all delta functions $\delta^{4}_{a}$, $a \in N$. Under the embedding the indefinite metric on $H$ yields the Minkowski metric on $M_{4}$, while the ${\widetilde H}$-metric yields the ordinary Euclidean metric on $M_{4}$.
\end{thm}
So, similarly to the space $\mathbb{R}^{3}$, the Minkowski space $N$ is now encoded into the space $H$. As before, the map $\omega$ is not linear, but the image $M_{4}$ can be furnished with a linear structure, induced from $N$.
To ``lift'' the theory of relativity from $N$ onto $H$ it turns out to be important that the set $M_{4}$ is a complete set in $H$ (i.e., no element of $H$ is orthogonal to all delta functions in $M_{4}$) and that elements of any finite subset of $M_{4}$ are linearly independent. In this sense, the set $M_{4}$ forms a basis of the space $H$. Because of that, physics on $M_{4}$ obtains a unique ``linear extension'' to the entire Hilbert space $H$.
If $\Pi$ is a Poincar{\'e} transformation, and $f$ is a function(al) in $H$, then $\delta_{\Pi}: f \longrightarrow f\circ \Pi^{-1}$ is a linear map on $H$. Because the Hilbert metric on $H$ is not invariant under general Poincar{\'e} transformations, the operator $\delta_{\Pi}$ may not be bounded as a map into $H$, so that the map $\Pi \longrightarrow \delta_{\Pi}$ is not a representation of the Poincar{\'e} group $P$. However, the set of all functions $f\circ \Pi^{-1}$ with a fixed $\Pi$ and $f \in H$ forms a Hilbert space $H'$ with the inner product defined by $(\delta_{\Pi}f, \delta_{\Pi}g)_{H'_{+}}=(f,g)_{H_{+}}$. The map $\delta_{\Pi}: H \longrightarrow H'$ is then an isomorphism of Hilbert spaces. Hilbert spaces obtained in such a way can be thought of as different realizations of one and the same abstract Hilbert space ${\bf S}$.
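The asymmetry between the two metrics can be seen at the level of the kernels: the exponent of the indefinite kernel is minus one half of the invariant interval, so that metric is Poincar{\'e}-invariant, while the Hilbert-metric weight $e^{-t^{2}}$ singles out the time axis and is not. A numerical sketch (the boost rapidity and the sample events are arbitrary):

```python
import numpy as np

def k_eta(a, b):
    """Indefinite kernel: the exponent is minus one half of the interval."""
    d = a - b                                   # event difference (x, y, z, t)
    interval = d[0]**2 + d[1]**2 + d[2]**2 - d[3]**2
    return np.exp(-0.5 * interval)

chi = 0.7                                       # rapidity of a boost along x
L = np.eye(4)
L[0, 0] = L[3, 3] = np.cosh(chi)
L[0, 3] = L[3, 0] = -np.sinh(chi)

a = np.array([1.0, 0.5, -0.3, 0.2])
b = np.array([-0.4, 0.1, 0.7, -0.6])
print(abs(k_eta(L @ a, L @ b) - k_eta(a, b)))   # ~0: indefinite kernel invariant

# The Hilbert-metric weight exp(-t^2) singles out the time axis and is not
# preserved: a boosted event generally has a different time coordinate.
w = lambda e: np.exp(-e[3] ** 2)
print(abs(w(L @ a) - w(a)))                     # generically nonzero
```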
A particular isomorphism $\Gamma: {\bf S} \longrightarrow H$ can be thought of as a coordinate chart on ${\bf S}$ (see Ref.\cite{Kryukov3}). With this in mind one can formulate the following essential result. In the theorem the expression {\em isometric embedding} refers to an embedding that preserves the indefinite metric. Likewise, {\em isomorphism} refers to an isomorphism of Hilbert spaces that in addition preserves the indefinite metric.
\begin{thm}
\label{5}
Let $\Gamma:{\bf S}\longrightarrow H$ be an isomorphism of the abstract Hilbert space ${\bf S}$ with an additional indefinite metric onto the space $H$ of functions defined in theorem \ref{2}. Let $\gamma: N \longrightarrow \mathbb{R}^{1,3}$ be a global coordinate chart from the Minkowski space-time onto the coordinate space of the observer in an inertial reference frame $K$. Let $\Pi$ be a Poincar{\'e} transformation that relates coordinates associated with frames $K$ and $K'$. Then there exists a unique isometric embedding $\Omega$ and a unique isomorphism $\delta_{\Pi}:H \longrightarrow H'$ such that the diagram
\begin{equation}
\label{diagram}
\begin{CD}
{\bf S} @ >\Gamma>> H @ >\delta_{\Pi}>> H'\\
@ AA\Omega A @ AA \omega A @ AA\omega A \\
N @ >\gamma>> \mathbb{R}^{1,3} @ >\Pi>> \mathbb{R}^{1,3}
\end{CD}
\end{equation}
\newline
is commutative. It follows that within the assumptions of the theorem the embedding $\omega$ preserves the structure of special relativity and extends it in a unique way to the abstract Hilbert space ${\bf S}$.
\end{thm}
Several remarks are in order.
\begin{enumerate}
\item
Note that $\delta_{\Pi}$ maps delta functions to delta functions, so that in accord with the diagram (\ref{diagram}) the image of the manifold $M_{4}$ in $H$ is the submanifold $M_{4}$ in $H'$. This together with the fact that $\omega$ is an isometric embedding is what allows for the usual theory of relativity on Minkowski space-time to be a part of the new framework. At the same time completeness of the set $M_{4}$ together with linear independence of its elements makes the entire construction rigid, ensuring uniqueness of the extension.
\item
The proposed method of extension of the Poincar{\'e} group action from $N$ onto ${\bf S}$ can be applied to {\em any} group acting on $N$.
Notice that an arbitrary non-linear transformation acting continuously on $N$ becomes linear when extended to ${\bf S}$. That is so because moving across $N$ corresponds to going ``across dimensions'' of $H$ so that a linear extension of the transformation becomes possible. Completeness of the set $M_{4}$ in $H$ ensures then that such an extension is unique.
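A minimal illustration of this linearization: a translation of $\mathbb{R}$ is not a linear map on $\mathbb{R}$ itself, but its lift $f \longrightarrow f(x-a)$ acts linearly on functions. A sketch on a discrete grid, with a cyclic shift standing in for translation:

```python
import numpy as np

# Translation x -> x + a moves the origin, so it is not linear on R;
# its lift to functions, f -> f(x - a), is a linear operator.
x = np.linspace(-5.0, 5.0, 1001)
a_shift = 37                                  # shift by 37 grid points

lift = lambda f: np.roll(f, a_shift)          # discrete model of f(x - a)

f = np.exp(-x**2)
g = np.sin(x)
al, be = 2.0, -0.5

lhs = lift(al * f + be * g)
rhs = al * lift(f) + be * lift(g)
print(np.allclose(lhs, rhs))                  # True: the lifted map is linear
```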
\item
In explicit terms the covariance of the construction amounts to the following: {\em(a)} the embedding preserves covariant properties of $4$-tensors (elements of the tensor algebra of Minkowski space-time); {\em(b)} the involved functional objects are also covariant under Poincar{\'e} transformations; {\em(c)} the embedding is equivariant, that is, it commutes with the action of the Poincar{\'e} group.
The first two properties simply mean that the usual $4$-tensors are also elements of the tensor algebra of ${\bf S}$ and that all considered objects are tensorial. The third property signifies that Poincar{\'e} transformations $\Pi \in P$ can be identified with morphisms $\delta_{\Pi}$ of Hilbert spaces.
All three properties follow from the diagram (\ref{diagram}). Indeed, because $\omega$ is an embedding, the differential map $d\omega$ yields an embedding of the corresponding tangent and, more generally, tensor bundles, which proves {\em(a)}. Property {\em(c)} is exactly the commutative property of the diagram. To prove {\em(b)} note that
a function $f \in H$ represents an invariant element of ${\bf S}$ and transforms as a vector under $\delta_{\Pi}$: $f'=\delta_{\Pi} f$.
Writing the law $f'=\delta_{\Pi} f$ in the form $f'(x')=(\delta_{\Pi} f)(x')=f(\Pi^{-1} x')=f(x)$, one recovers the usual law of transformation of scalar functions.
The metric operators ${\widehat G}_{H_{+}},{\widehat G}_{H_{\eta}}:H \longrightarrow H^{\ast}$, where $H^{\ast}$ is the dual of $H$ and $(f,g)_{H_{+}}=({\widehat G}_{H_{+}}f,g)$, $(f,g)_{H_{\eta}}=({\widehat G}_{H_{\eta}}f,g)$, define $2$-forms in the tensor algebra of ${\bf S}$. Their transformation law ${\widehat G}_{H'_{+}}=\delta^{\ast -1}_{\Pi}{\widehat G}_{H_{+}}\delta^{-1}_{\Pi}$ and ${\widehat G}_{H'_{\eta}}=\delta^{\ast -1}_{\Pi}{\widehat G}_{H_{\eta}}\delta^{-1}_{\Pi}$, where $\delta^{\ast}_{\Pi}:H'^{\ast}\longrightarrow H^{\ast}$ is the adjoint of $\delta_{\Pi}$, ensures invariance of the inner products. It follows that the metric operators are also covariant quantities.
Note that although no covariant equations for functional quantities were considered so far, it is clear that they simply are tensor equations for fields with values in the tensor algebra of ${\bf S}$ (see Ref.\cite{Kryukov3}).
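In a finite-dimensional model the invariance of the inner products under the stated transformation law of the metric operators can be verified directly; in the sketch below a random invertible matrix stands in for $\delta_{\Pi}$ and a random symmetric matrix for the metric operator (real entries for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
G = rng.normal(size=(n, n)); G = G + G.T        # a symmetric "metric" 2-form
D = rng.normal(size=(n, n)) + np.eye(n) * n     # invertible stand-in for delta_Pi
Dinv = np.linalg.inv(D)

# transformation law of the metric operator: G' = (D^{-1})^T G D^{-1}
Gp = Dinv.T @ G @ Dinv

f, g = rng.normal(size=n), rng.normal(size=n)
lhs = (D @ f) @ Gp @ (D @ g)     # inner product of transformed vectors in G'
rhs = f @ G @ g                  # inner product of original vectors in G
print(abs(lhs - rhs))            # ~0: the inner product is invariant
```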
\item
Let us call the realization $\Gamma:{\bf S} \longrightarrow H$ of ${\bf S}$ a $K$-representation.
The diagram (\ref{diagram}) demonstrates that under the transformation of the frame $K$ by $\gamma \longrightarrow \Pi \circ \gamma$ the $K$-representation changes in a covariant fashion to a unitarily equivalent realization $\delta_{\Pi}\circ \Gamma: {\bf S} \longrightarrow H'$ of ${\bf S}$. According to the diagram, this realization is a unique extension of the coordinate system $\Pi \circ \gamma$ of an observer in the reference frame $K'$, or the {\em $K'$-representation of ${\bf S}$}. Note that in general the spaces $H$ and $H'$ have a different functional content. However, both spaces are realizations of the same invariant abstract Hilbert space ${\bf S}$ with the invariant Hilbert and indefinite metrics on it. In other words, only a functional realization of ${\bf S}$ changes from frame to frame, not the space ${\bf S}$ itself. For applications to physics it is particularly important that the inner products of elements of ${\bf S}$ in all realizations remain the same.
In the following it will be advocated that ${\bf S}$ is an appropriate physical space of states of a quantum system. Suppose for now that this is the case and consider an observer in an arbitrary inertial frame $K'$ having access to the space ${\bf S}$ and describing it via $K'$-representation. Then the observer will not be able to use the functional content of the Hilbert space $H'$ of representation or the representation itself to determine the state of motion of the frame $K'$. Rather, similar to the ordinary special relativity, a particular functional realization of the space ${\bf S}$ is not physical, i.e., all such realizations are physically equivalent.
\item The Poincar{\'e} transformations $\Pi$ and their extensions $\delta_{\Pi}$ in the theorem are ``passive'' transformations, i.e., they describe changes in coordinate realizations of the fixed, invariant spaces $N$ and ${\bf S}$. The corresponding ``active'' version of the theorem is also possible and is given by the following analogue of diagram (\ref{diagram}):
\begin{equation}
\label{diagramA}
\begin{CD}
{\bf S'} @ <\delta_{\Pi} << {\bf S} @ >\Gamma>> H\\
@ AA\Omega A @ AA \Omega A @ AA\omega A \\
N @ <\Pi<< N @ >\gamma>> \mathbb{R}^{1,3}
\end{CD}
\end{equation}
Here the maps $\gamma, \Gamma, \omega$ and the embedding $\Omega:N \longrightarrow {\bf S}$ are the same as before. The Poincar{\'e} transformation $\Pi$ maps Minkowski space $N$ onto itself. The space ${\bf S'}$ contains the subset $\Omega(N)$ as a complete set and is otherwise defined by the diagram. The isomorphism $\delta_{\Pi}$ is the linear extension to all of ${\bf S}$ of the map $\Omega \Pi \Omega^{-1}$ defined on $\Omega(N)$.
\end{enumerate}
Let's now turn to the embedding of quantum mechanics into the same framework. The first thing to do is to relate the Hilbert space $H$ of functions of four variables ${\bf x}, t$ to the usual Hilbert spaces of functions of three variables ${\bf x}$ with $t$ as a parameter of evolution. For this consider the family of subspaces $H_{\tau}$ of $H$ each consisting of all functionals $\varphi_{\tau}({\bf x},t)=\psi({\bf x},t)\delta(t-\tau)$ for some fixed $\tau \in \mathbb{R}$.
\begin{thm}
\label{6}
Under the inclusion $i:H_{\tau}\longrightarrow H$ the indefinite inner product on $H$ yields a Hilbert metric on $H_{\tau}$ for all $\tau \in \mathbb{R}$.
Furthermore,
let ${\bf H} \approx L_{2}(\mathbb{R}^{3})$ be the Hilbert space defined in theorem \ref{1} (with $L=1$ and a sufficiently small scale to make the approximation valid).
Then for all $\tau \in \mathbb{R}$ the map $I:H_{\tau}\longrightarrow {\bf H}$ defined by $I(\varphi_{\tau})({\bf x})=\psi({\bf x}, \tau)$ is an isomorphism of Hilbert spaces.
\end{thm}
The map $I$ basically identifies each subspace $H_{\tau}$ with the usual space $L_{2}(\mathbb{R}^{3})$ of state functions on $\mathbb{R}^{3}$ considered at time $\tau$. The following result relates the dynamics on the family of subspaces $H_{\tau}$ and the usual space $L_{2}(\mathbb{R}^{3})$ of states of a spinless non-relativistic particle.
\begin{thm}
\label{7}
Let ${\widehat h}=D+V({\bf x},t)$ be a Hamiltonian, such that $D$ is a differential operator in the spatial coordinates and $V$ is a function. Then the path $\varphi_{\tau}({\bf x},t)=\psi({\bf x},t)\delta(t-\tau)$ in $H$ satisfies the equation $\frac{d\varphi_{\tau}}{d\tau}=\left(-\frac{\partial}{\partial t}-i{\widehat h}\right)\varphi_{\tau}$ if and only if the function $\psi({\bf x},t)$ satisfies the Schr{\"o}dinger equation $\frac{\partial \psi({\bf x},t)}{\partial t}=-i{\widehat h}\psi({\bf x},t)$.
At each point of the path $\varphi_{\tau}$ the components $-i{\widehat h}\varphi_{\tau}$, $-\frac{\partial \varphi_{\tau}}{\partial t}$ of the velocity vector $\frac{d \varphi_{\tau}}{d\tau}$ are orthogonal in the indefinite inner product, in the ${\widetilde H}$-inner product, and in the inner product on the space $H_{T}$ of the time co-moving representation.
\end{thm}
In the theorem, the space $H_{T}$ of the time co-moving representation is defined by application of the isomorphism
\begin{equation}
\label{timeco}
(\delta_{\Pi_{\tau}}f)({\bf x},t)=f({\bf x},t-\tau)
\end{equation}
to the space $H$ (i.e., the representation is the map $\delta_{\Pi_{\tau}}\circ \Gamma$, where $\Gamma$ is the same as in theorem \ref{5}). The theorem claims that the ordinary Schr{\"o}dinger evolution can be recovered from the evolution $\varphi_{\tau}$ in the space $H$ of functions of four variables by projecting the path $\varphi_{\tau}$ onto the ``co-moving'' subspace $H_{\tau}$ identified via $I$ with ${\bf H}\approx L_{2}(\mathbb{R}^{3})$. While the component $-i{\widehat h}\varphi_{\tau}$ of the velocity describes the motion within the subspace $H_{\tau}$, the orthogonal (``vertical'') component $-\frac{\partial \varphi_{\tau}}{\partial t}$ of the velocity is due to the motion of the subspace $H_{\tau}$ itself.
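The equivalence claimed in theorem \ref{7} can be verified symbolically in the simplest case of a free particle in one spatial dimension (units $\hbar=m=1$; a sketch assuming SymPy, with the plane-wave solution chosen only for illustration):

```python
import sympy as sp

x, t, tau, kk = sp.symbols('x t tau k', real=True)

# a free plane-wave solution (hbar = m = 1), with h = -(1/2) d^2/dx^2
psi = sp.exp(sp.I * (kk * x - kk**2 * t / 2))
h = lambda u: -sp.Rational(1, 2) * sp.diff(u, x, 2)

# psi satisfies the ordinary Schrodinger equation d psi/dt = -i h psi
assert sp.simplify(sp.diff(psi, t) + sp.I * h(psi)) == 0

# phi_tau = psi(x,t)*delta(t - tau) then satisfies the four-dimensional
# equation d phi/d tau = (-d/dt - i h) phi of the theorem
phi = psi * sp.DiracDelta(t - tau)
lhs = sp.diff(phi, tau)
rhs = -sp.diff(phi, t) - sp.I * h(phi)
assert sp.simplify(lhs - rhs) == 0
print("theorem relation verified for a free plane wave")
```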
Remarks:
\begin{enumerate}
\item
Under integration in time, the delta factor replaces the time variable with the parameter $\tau$.
In other words, for motions within the family $H_{\tau}$ the evolution parameter $\tau$ used to describe motions in the space $H$ of functions of four variables becomes identified with the usual time variable that appears in the Schr{\"o}dinger equation.
\item
The delta factor $\delta(t-\tau)$ in functions in $H_{\tau}$ removes integration in time and therefore eliminates the effect of interference in time that is present for more general elements of $H$.
In fact, the norm of superposition
$\psi_{1}({\bf x},t)\delta(t-\tau)+\psi_{2}({\bf x},t)\delta(t-\tau)$
of functions in $H_{\tau}$
in either $H_{\eta}$, ${\widetilde H}$, or $H_{T}$-metrics is equal to
$\left\|\psi_{1}({\bf x},\tau)+\psi_{2}({\bf x},\tau)\right\|_{{\bf H}}$, which approximates the standard expression due to the relationship ${\bf H}\approx L_{2}(\mathbb{R}^{3})$.
\item
The space $H$ was needed to identify Minkowski space with an isometrically embedded submanifold $M_{4} \subset H$. If this embedding is accepted, the delta factor $\delta(t-\tau)$ in the non-relativistic limit has a simple explanation.
In fact, elements of the space $H$ have the form $e^{-t^{2}}\varphi({\bf x}, t)$, where $\varphi$ is in the space ${\widetilde H}\approx L_{2}(\mathbb{R}^{4})$ (and the meaning of approximation is the same as in theorem \ref{1}). Likewise, the space $H_{T}$ of the time co-moving representation defined by Eq.(\ref{timeco}) consists of the functions $e^{-(t-\tau)^{2}}\varphi({\bf x},t)$, with $\varphi \in {\widetilde H}\approx L_{2}(\mathbb{R}^{4})$.
The variables ${\bf x},t$ enter symmetrically in the definition of $L_{2}(\mathbb{R}^{4})$, while the factor $e^{-(t-\tau)^{2}}$ breaks the symmetry between ${\bf x}$ and $t$ by making a typical element of $H_{T}$ well localized in the time variable.
In a sufficiently small scale the factor $e^{-(t-\tau)^{2}}$ as a function of $t-\tau$ quickly falls off to almost zero and can be replaced with the delta function $\delta(t-\tau)$. This yields the set of functions in the family of spaces $H_{\tau}$ and by theorems \ref{6} and \ref{7} allows for the usual formalism of quantum mechanics.
\item
Subspaces $H_{\tau}$ are not preserved under the maps $\delta_{\Pi}$ in theorem \ref{5}. In fact, $\delta_{\Pi}$ mixes space and time coordinates and therefore does not preserve the form $\varphi({\bf x},t)\delta(t-\tau)$ of elements of $H_{\tau}$ in general. This is not surprising because standard quantum mechanics is non-relativistic.
However, to provide a valid foundation of the non-relativistic quantum mechanics these subspaces must be preserved under Galileo transformations.
A Galileo transformation $G$ yields the map $\delta_{G}:H \longrightarrow H'$ defined by $\delta_{G}f=f \circ G^{-1}$ for all $f \in H$. This map
transforms the state $\varphi({\bf x}, t)\delta(t-\tau)$ into the state $\varphi(A{\bf x}+{\bf v}t+{\bf b}, t+c)\delta(t+c-\tau)$, where $A$ is an orthogonal transformation, ${\bf v}$ and $ {\bf b}$ are $3$-vectors, and $c$ is a real number. Recall now that $\varphi$ is an element of the Hilbert space ${\bf H}$ with metric given by the kernel $e^{-\frac{1}{2}({\bf x}-{\bf y})^{2}}$. This kernel is obviously invariant under Galileo transformations so that the function $\varphi(A{\bf x}+{\bf v}t+{\bf b}, t+c)$ is still an element of ${\bf H}$. One concludes that Galileo transformations yield isomorphisms between subspaces $H_{\tau}$ (and that the map $G \longrightarrow \delta_{G}$, where $\delta_{G}$ is considered as acting on ${\bf H}$ is a unitary representation of the Galileo group).
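The invariance of the kernel used in this argument is easy to confirm numerically: at a fixed time slice a Galileo map changes ${\bf x}-{\bf y}$ only by an orthogonal transformation. A sketch (the rotation, velocity, translation, and sample points are random):

```python
import numpy as np

rng = np.random.default_rng(2)

# a random rotation A (via QR), boost velocity v, and translation b
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # orthogonal factor
v, b = rng.normal(size=3), rng.normal(size=3)
t = 1.7                                        # fixed time slice

k = lambda x, y: np.exp(-0.5 * np.sum((x - y) ** 2))
G = lambda x: A @ x + v * t + b                # Galileo map at time t

x, y = rng.normal(size=3), rng.normal(size=3)
print(abs(k(G(x), G(y)) - k(x, y)))            # ~0: kernel is Galileo-invariant
```

Since $G({\bf x})-G({\bf y})=A({\bf x}-{\bf y})$ and $A$ is orthogonal, the kernel value is unchanged, exactly as used in the text.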
\item
The equation $\frac{d\varphi_{\tau}}{d\tau}=\left(-\frac{\partial}{\partial t}-i{\widehat h}\right)\varphi_{\tau}$ with usual Hamiltonian is a well known non-relativistic limit of the Stueckelberg-Schr{\"o}dinger equation in the theory of Stueckelberg Ref.\cite{Stu1} and Horwitz \& Piron Ref.\cite{HorPir}. This theory treats space and time symmetrically and predicts interference in time Refs.\cite{Hor},\cite{Hor2}.
The non-relativistic limit of Stueckelberg theory was investigated by Horwitz and Rotbart Ref.\cite{HorRot}. The approximate equality of the time variable $t$ with the evolution parameter $\tau$ obtained in Ref.\cite{HorRot} is consistent with the definition of $H_{\tau}$.
\item
Newton and Wigner Ref.\cite{NW} argue that delta functions $\delta^{4}_{a}$ cannot represent spatially localized states in a relativistic theory. However, their derivation is based on the condition of orthogonality of a localized state and its spatial displacement, which is not valid in the proposed framework. Note that the delta function locality is present in the relativistic Stueckelberg theory, which is off-shell. If
the Stueckelberg expectation value of the dynamical variable $\widehat {x^{\mu}}$ (the operator of multiplication by the variable $x^{\mu}$, $\mu=0,1,2,3$) is decomposed into a
direct integral over mass, then for each definite mass in the
integral, the Newton-Wigner operator (having Newton-Wigner localized states as eigenstates) emerges. Locality is restored in
the result of the integral Refs.\cite{HorPir},\cite{HorRot}.
The covariant property of the states $\delta^{4}_{a}$ and the operator $\widehat {x^{\mu}}$ does not by itself mean that these objects are physical. There are well known difficulties: (1) the wave packet $\delta^{3}_{\bf a}$ contains negative energy components; (2) if such a packet is allowed to evolve by the usual relativistic equations, it will evolve out of the light cone Ref.\cite{Heg}. Although these difficulties are typical for relativistic on-shell wave equations and were understood within the Stueckelberg approach Ref.\cite{HorPir}, they must be reexamined in the new setting.
\end{enumerate}
\section{Generalizing the framework to curved space-time manifolds}
So far the discussion involved only the classical $3$-dimensional Euclidean space and the Minkowski space-time. If the approach is taken seriously, it becomes essential to check its validity for more general space-times $N$.
It is also important to see whether the Hilbert space into which $N$ is embedded can be defined without specifying the manifold first. For manifolds without additional (pseudo-) Riemannian structure the issues are resolved by the following theorem.
\begin{thm}
\label{0}
Given an arbitrary real $n$-dimensional manifold $N$ there exists a Hilbert space $H_{\mathbb{R}^{n}}$ of continuous functions on $\mathbb{R}^{n}$, such that the set $M_{n}$ of all delta functions in the dual space $H_{\mathbb{R}^{n}}^{\ast}$ is an embedded submanifold of $H_{\mathbb{R}^{n}}^{\ast}$ diffeomorphic to $N$.
\end{thm}
In essence, the theorem claims that an arbitrary $n$-dimensional manifold can be ``encoded'' into an appropriate Hilbert space of functions on $\mathbb{R}^{n}$. To get an idea of how to find the Hilbert space $H_{\mathbb{R}^{n}}$, especially when the topology of the manifold is not trivial, consider the case of a circle $S^{1}$. In this case the space $H_{\mathbb{R}}$ must be a Hilbert space of continuous functions on $\mathbb{R}$. To ensure that the image $M_{1}$ of the map $\omega: \mathbb{R} \longrightarrow H_{\mathbb{R}}^{\ast}$, $\omega(a)=\delta_{a}$ is a circle, one needs $\delta_{a}=\delta_{a+2\pi}$ for all $a \in \mathbb{R}$, which means that functions in $H_{\mathbb{R}}$ must be $2\pi$-periodic. To satisfy these conditions, consider the Sobolev space of continuous $2\pi$-periodic functions on $\mathbb{R}$ with the inner product $(f,g)=\int^{\pi}_{-\pi} \left(f(x)\overline{g}(x)+f'(x)\overline{g'}(x)\right)dx$. It is easy to check that the set of all delta functions in the dual space $H_{\mathbb{R}}^{\ast}$ with the induced topology is homeomorphic to the circle $S^{1}$.
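The circle example can be probed numerically with a truncated Fourier series: in the dual space the inner product of two delta functionals is the reproducing kernel $K(a-b)=\frac{1}{2\pi}\sum_{n}\frac{e^{in(a-b)}}{1+n^{2}}$, which is $2\pi$-periodic, so $\delta_{a}$ and $\delta_{a+2\pi}$ coincide while distinct points of the circle stay apart. A sketch (the truncation order is arbitrary):

```python
import numpy as np

# Truncated Fourier series for the reproducing kernel of the Sobolev
# space of 2*pi-periodic functions with (f,g) = int (f g + f' g') dx.
N = 400
n = np.arange(-N, N + 1)

def K(u):
    """<delta_a, delta_b> in the dual space, with u = a - b."""
    return np.sum(np.cos(n * u) / (1 + n**2)) / (2 * np.pi)

def dist(a, b):
    """Norm of delta_a - delta_b in the dual space."""
    return np.sqrt(max(2 * (K(0.0) - K(a - b)), 0.0))

a = 0.3
print(dist(a, a + 2 * np.pi))   # ~0: delta_a and delta_{a+2pi} coincide
print(dist(a, a + np.pi))       # > 0: antipodal points stay apart
```

The distance depends continuously and $2\pi$-periodically on $a-b$, which is the homeomorphism with $S^{1}$ asserted in the text.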
A particular manifold in the theorem is encoded by fixing the {\em functional content} of the Hilbert space rather than fixing the domain of the functions. To put it differently, the manifold $M_{n}$ is ``made of'' functions and not points in the domain of the functions.
The problem of isometric embeddings of Riemannian and pseudo-Riemannian manifolds is now handled by the following theorem.
\begin{thm}
\label{5new}
Let $N$ be a Riemannian or pseudo-Riemannian smooth manifold of dimension $n$. For any point $x\in N$ there is a neighborhood $W$ of $x$ in $N$ and a Hilbert or Krein space $H$ that contains delta functions (evaluation functionals) $\delta^{(n)}_{a}$ for all $a$ in an open set $U$ in $\mathbb{R}^{n}$ such that the set $M_{n}$ of all these delta functions is an embedded submanifold of $H$ isometric to $W$.
\end{thm}
\begin{proof}
It is known that an arbitrary smooth Riemannian or pseudo-Riemannian manifold $N$ of dimension $n$ admits an isometric embedding into the Euclidean or pseudo-Euclidean space $\mathbb{R}^{p}$ of a sufficiently large dimension $p\ge n$, $p=k+l$, where $(k,l)$ is the signature of the metric on $\mathbb{R}^{p}$. Also, by an obvious generalization of theorems \ref{2} and \ref{3} the map $\Omega:\mathbb{R}^{p} \longrightarrow H_{p}$, $\Omega(A)=\delta^{(p)}_{A}$ is an isometric embedding of the space $\mathbb{R}^{p}$ into the Krein space $H_{p}$ defined via the inner product
\begin{equation}
\label{innerMM}
(f,g)_{H_{p}}=\int e^{-\frac{1}{2}\left((X^{1}-Y^{1})^{2}+...+(X^{k}-Y^{k})^{2}\right)+\frac{1}{2}\left((X^{k+1}-Y^{k+1})^{2}+...+(X^{p}-Y^{p})^{2}\right)}f(X^{1},...,X^{p}){\overline g(Y^{1},...,Y^{p})} d^{p}X d^{p}Y
\end{equation}
with $d^{p}X=dX^{1}\cdots dX^{p}$, $d^{p}Y=dY^{1}\cdots dY^{p}$. Note that the analogues of $H_{ev}$, $H_{odd}$ in theorem \ref{2} are obtained here by representing an arbitrary function $f(X^{1},...,X^{k},X^{k+1},...,X^{p})$ as the sum of ``even''
\begin{equation}
\frac{1}{2}\left(f(X^{1},...,X^{k},X^{k+1},...,X^{p})+f(X^{1},...,X^{k},-X^{k+1},...,-X^{p})\right)
\end{equation}
and ``odd''
\begin{equation}
\frac{1}{2}\left(f(X^{1},...,X^{k},X^{k+1},...,X^{p})-f(X^{1},...,X^{k},-X^{k+1},...,-X^{p})\right)
\end{equation}
components. Otherwise the proof mimics the one given in Ref.\cite{KryukovJMP}.
Let's form a Hilbert (Krein) subspace $H_{n}$ of $H_{p}$ in the following fashion. Let $x\in N$ be a point and let $X^{q}(u)$, $q=1,...,p$, $u \in U$, and $U \subset \mathbb{R}^{n}$ be functions describing the isometric embedding of a neighborhood $W \subset N$ of $x$ into $\mathbb{R}^{p}$. By permuting indices of the coordinates $X^{1}, ..., X^{p}$ and considering a smaller neighborhood $W$ if necessary one can always ensure that $u^{1},...,u^{n}$ are just the first $n$ of the coordinates. So, consider the set $H_{n}$ of all function(al)s in $H_{p}$ that have the form $\varphi(X^{1},...,X^{p})\delta(X^{n+1}-X^{n+1}(u))\cdots \delta(X^{p}-X^{p}(u))$, or more briefly $\varphi(X)\delta^{(p-n)}(X-X(u))$ with $u \in U$. Denoting the kernel of the metric in $H_{p}$, given by Eq.(\ref{innerMM}), by $k(X,Y)$, we have for the inner product of two such functionals:
\begin{equation}
\int k(X,Y) \varphi(X)\delta^{(p-n)}(X-X(u)){\overline \psi(Y)}\delta^{(p-n)}(Y-Y(v))d^{p}Xd^{p}Y,
\end{equation}
where $Y^{1}=v^{1},...,Y^{n}=v^{n}$. The delta functions remove integration with respect to $X^{n+1},..., X^{p}$ and $Y^{n+1},..., Y^{p}$, which gives
\begin{equation}
\label{newinner}
\int k(X(u),Y(v)) \varphi(X(u)){\overline \psi(Y(v))}d^{n}ud^{n}v,
\end{equation}
where $d^{n}u=du^{1}\cdots du^{n}=dX^{1}\cdots dX^{n}$ and similarly for $d^{n}v$. The set $H_{n}$ is a closed subspace in $H_{p}$ so it is a Hilbert space. Expression (\ref{newinner}) shows that $H_{n}$ is isomorphic to the Hilbert space $H$ of all functions $\chi(u)=\varphi(X(u))$, $u \in U$, for which $\varphi(X)\delta^{(p-n)}(X-X(u))$ is in $H_{p}$, furnished with the inner product $(\chi, \rho)_{H}=\int k(X(u),Y(v)) \chi(u){\overline \rho(v)}d^{n}ud^{n}v$.
Obviously, the functionals $\varphi(u)=\delta^{(n)}(u-a)$, $a \in U$ are in $H$. It remains to show that the metric induced on the set $M_{n}$ of all such functionals in $H$ is the given (pseudo) Riemannian metric on $N$. For this consider a curve $u^{\mu}=a^{\mu}(\tau)$ in $U$ and the corresponding curve $\varphi_{\tau}(u)=\delta^{(n)}(u-a(\tau))$ in $M_{n}$. For the squared $H$-norm of the velocity vector $d \delta^{(n)}(u-a(\tau))/d\tau$ we have
\begin{equation}
\label{10}
\int k(X(u),Y(v)) \frac{d \delta^{(n)}(u-a(\tau))}{d\tau} \frac{d \delta^{(n)}(v-a(\tau))}{d\tau} d^{n}ud^{n}v.
\end{equation}
Simplifying this by the chain rule
\begin{equation}
\frac{d \delta^{(n)}(u-a(\tau))}{d\tau}=-\frac{\partial \delta^{(n)}(u-a(\tau))}{\partial u^{\mu}}\frac{d a^{\mu}(\tau)}{d\tau}
\end{equation}
followed by integration by parts (see Ref.\cite{Kryukov3} for justification), one obtains the expression
\begin{equation}
\left.\frac{\partial^{2}k(X(u), Y(v))}{\partial u^{\mu} \partial v^{\nu}}\right|_{u=v=a(\tau)}\frac{d a^{\mu}(\tau)}{d\tau}\frac{d a^{\nu}(\tau)}{d\tau}.
\end{equation}
But
\begin{equation}
\frac{\partial^{2}k(X(u), Y(v))}{\partial u^{\mu} \partial v^{\nu}}=\frac{\partial^{2}k(X(u), Y(v))}{\partial X^{r} \partial Y^{s}}\frac{\partial X^{r}}{\partial u^{\mu}}\frac{\partial Y^{s}}{\partial v^{\nu}},
\end{equation}
and for the kernel $k(X,Y)$ given by Eq.(\ref{innerMM}) one also has
\begin{equation}
\left.\frac{\partial^{2}k(X(u), Y(v))}{\partial X^{r} \partial Y^{s}}\right|_{u=v}=\eta_{rs},
\end{equation}
where $\eta_{rs}$ are components of the indefinite (Minkowski-like) metric of signature $(k,l)$ on $\mathbb{R}^{p}$. So the squared norm of the velocity vector in Eq.(\ref{10}) is equal to
\begin{equation}
g_{\mu \nu}\frac{d a^{\mu}}{d\tau}\frac{d a^{\nu}}{d\tau},
\end{equation}
where
\begin{equation}
\label{final}
g_{\mu \nu}=\eta_{rs} \left.\frac{\partial X^{r}}{\partial u^{\mu}}\frac{\partial Y^{s}}{\partial v^{\nu}}\right|_{u=v}
\end{equation}
are the components of the induced metric on $M_{n}$.
Recall now that the functions $X^{r}(u)$ describe the isometric embedding of $W \subset N$ into $\mathbb{R}^{p}$. In other words, components of the (pseudo-) Riemannian metric on $W$ are given by
\begin{equation}
{\widetilde g}_{\mu \nu}=\eta_{rs} \frac{\partial X^{r}}{\partial u^{\mu}}\frac{\partial X^{s}}{\partial u^{\nu}}.
\end{equation}
Since this expression coincides with Eq.(\ref{final}), the obtained embedding of $W$ into $H$ is isometric. This completes the proof.
\end{proof}
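The mechanism of the proof can be tested on a concrete example: the unit sphere $S^{2}$ embedded in $\mathbb{R}^{3}$ with the Euclidean kernel ($l=0$). Approximating the mixed derivative $\partial^{2}k(X(u),X(v))/\partial u^{\mu}\partial v^{\nu}$ at $u=v$ by central differences recovers the round metric $\mathrm{diag}(1,\sin^{2}\theta)$. A numerical sketch (the evaluation point and step size are arbitrary):

```python
import numpy as np

def X(u):
    """Isometric embedding of the unit sphere S^2 into R^3."""
    th, ph = u
    return np.array([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)])

k = lambda A, B: np.exp(-0.5 * np.sum((A - B) ** 2))   # Euclidean kernel (l = 0)
F = lambda u, v: k(X(u), X(v))

def induced_metric(u, h=1e-3):
    """g_{mu nu} = d^2 F / du^mu dv^nu at u = v, by central differences."""
    g = np.zeros((2, 2))
    for mu in range(2):
        for nu in range(2):
            e1 = np.zeros(2); e1[mu] = h
            e2 = np.zeros(2); e2[nu] = h
            g[mu, nu] = (F(u + e1, u + e2) - F(u + e1, u - e2)
                         - F(u - e1, u + e2) + F(u - e1, u - e2)) / (4 * h * h)
    return g

u0 = np.array([1.1, 0.4])               # a point (theta, phi)
print(induced_metric(u0))               # ~ diag(1, sin(theta)^2): round metric
```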
Several useful observations are in order.
\begin{enumerate}
\item
Theorem \ref{5new} makes it possible to extend the results of theorem \ref{3} to neighborhoods in arbitrary pseudo-Riemannian space-times. In this case the Poincar{\'e} group acting on Minkowski space-time is replaced by the group of diffeomorphisms of a particular neighborhood. This yields the following analogue of diagram (\ref{diagram})
\begin{equation}
\label{diagramB}
\begin{CD}
{\bf S} @ >\Gamma>> H @ >\delta_{D}>> H'\\
@ AA\Omega A @ AA \omega A @ AA\omega A \\
W @ >\gamma>> U @ >D>> U
\end{CD}
\end{equation}
Here $W$ is a neighborhood in curved space-time as defined in theorem \ref{5new}, $\gamma$ is a chart on $W$ and $U$ is the corresponding set in $\mathbb{R}^{4}$, $D$ is an arbitrary diffeomorphism of $U$ and $\delta_{D}$ is its extension to the space $H$ constructed in theorem \ref{5new}. As already mentioned in the remarks following theorem \ref{5}, the existence and uniqueness of the isomorphism $\delta_{D}$ and the space $H'$ can be proved as before.
\item
Recall that the set $M_{4}$ is invariant under transformations $\delta_{\Pi}$, making it possible to ``separate'' special relativity from the Hilbert space framework. In the discussion that followed theorem \ref{7} it was verified that the ``Galileo maps'' $\delta_{G}$ map subspaces $H_{\tau}$ onto themselves. This explains why the non-relativistic quantum mechanics could also be developed within a single Hilbert space of functions of three variables.
Diagrams (\ref{diagram}), (\ref{diagramA}) provide us with a ``covariant'' extension of special relativity. Likewise, diagram (\ref{diagramB}) together with its active version yield a local geometric extension of general relativity. Those extensions are based on isomorphisms of separable Hilbert spaces. If such a scheme is adopted in physics, that would mean that specific functional realizations of the abstract Hilbert space ${\bf S}$, at least within the considered class of realizations, are not physical but rather are similar to various choices of coordinates on space-time. One may disregard this point by saying that the considered isomorphisms of Hilbert spaces of functions are direct analogues of well known changes in representation in quantum theory. However, unlike changes of representation that are simply passive changes in the description of physical reality, the transformations considered here can be realized actively.
Active transformations are capable of creating a new physical reality. For instance, rotation of a massive body can change the gravitational field created by it, while rotation of the coordinate system cannot. Inclusion of active transformations signifies then that the construction is not just formally mathematical, but is capable of affecting physics as well.
\item
If the discussed embedding of the classical space $\mathbb{R}^{3}$ into ${\bf H}$ as well as the embeddings of Minkowski space-time and local embeddings of arbitrary curved space-times into the corresponding Hilbert spaces are taken seriously, then the linearity of quantum theory appears in a completely new light. In fact, the geometry of the abstract Hilbert space ${\bf S}$ and its realizations like $H$ is linear. It is the non-linearity of the submanifolds $M_{3}$ and $M_{4}$ that seems to be responsible for the non-linear way in which the classical world appears to us.
By replacing the restricted, ``space-time based'' view of the world with its extension to the space ${\bf S}$ one can perhaps obtain a tool for reconciliation of quantum theory and relativity.
\end{enumerate}
\section{Acknowledgments}
I am indebted to Larry Horwitz for his critical review of the results and an anonymous reviewer for useful recommendations and support. I would like to thank Malcolm Forster for numerous discussions and Kent Kromarek for help in improving the exposition. Part of this work was done at UW-Madison Department of Philosophy. I would like to thank the faculty of that department for their hospitality. This work was supported by the NSF grant SES-0849418.
\section{Introduction}
\IEEEPARstart{W}{ith} global climate change and the increasing demand for food crops and renewable energy sources, developing crops that can improve or sustain yields in harsh environments is becoming an integral part of the solution to this worldwide challenge. While modern sequencing technologies offer unprecedented information about the genotype, the phenotypic outcomes of plant $\times$ environment interactions are hardly predictable. To close this knowledge gap, plant phenotyping is an increasingly important area of research.
Many imaging-based technologies have been adapted to measure multiple parameters of a plant under various conditions with high throughput. For example, the Scanalyzer (LemnaTec GmbH) bench-top system can image hundreds of \textit{Arabidopsis} plants grown in well plates each day, while a custom imaging system at the Danforth Center (Bellwether Foundation Phenotyping Facility, http://www.danforthcenter.org/scientists-research/core-technologies/phenotyping) can image a thousand plants in the growth chamber per day using an automated conveyor system and imaging stations.
While imaging of leaves and aboveground structures has been made easier by these technologies, studying belowground roots remains a challenge. Some groups grow plants in transparent media such as gel to visualize the root structure. Most of the current root imaging systems are light-based\cite{Clark2013}\cite{Slovak2014} and can only provide morphological information about the subjects. Thus, imaging methods that reveal physiological information non-invasively with good temporal resolution are of great value to enrich the tool sets for future plant phenotyping\cite{Li2014}.
PET is a functional and molecular imaging technique that provides \textit{in vivo} measurement of the dynamic radio-tracer distribution in a whole plant non-invasively. These dynamic PET images reveal the temporal physiological processes happening inside the plant. With plants grown in a transparent gel medium, the anatomical changes of the plant root can be precisely captured by a low-cost optical imaging system\cite{Topp2013}. The study presented here explores the potential applications of this combined multi-modality imaging system for plant root phenotyping.
\section{Materials and methods}
\subsection{Plant PET imaging system}
The dedicated plant PET system\cite{Wang2014} as shown in Figure \ref{fig:PETSystem} is designed with two unique features: (1) configurable system geometry to accommodate plants of different sizes and shapes; (2) the ability to control the environment in which the plants will be grown and studied.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{PETSystem.eps}
\caption{ A dedicated plant PET imager (left) sits inside a growth chamber (right). A fume hood adjacent to the plant growth chamber and lead-lined radioactive gas delivery lines are used for radio-tracer administration.}
\label{fig:PETSystem}
\end{figure}
This plant PET system also provides $\sim$1.25 mm spatial resolution, which is especially important for imaging small young plants with complex root structures. The system sensitivity at the center of the field of view (FOV) is 1.3\%. The imager has a 15 cm trans-axial and 10 cm axial FOV. With the automatic radioactive gas delivery system, the same subjects can be imaged repeatedly without disturbance, which is important for plant studies that are very sensitive to environmental change.
\subsection{Optical projection tomography system}
Figure \ref{fig:OPTSystem} shows the setup of the optical projection tomography (OPT) system in our plant imaging lab. A maize plant is germinated and grown inside a cylindrical glass jar filled with transparent gel. During an optical imaging experiment, this jar sits inside a rectangular water tank to compensate for refraction-induced distortion in the optical images. A small rotation stage controls the rotation angle of the glass jar through magnetic coupling. A DSLR camera captures projection images from different angles, with a laptop PC synchronizing object rotation.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{OPTSystem.eps}
\caption{ Optical imaging system: (a) DSLR camera; (b) controlling laptop for synchronizing object rotation and image capturing; (c) water tank and rotation stage for optical imaging.}
\label{fig:OPTSystem}
\end{figure}
With the captured projection images (usually from 72 angles with a 5-degree step size), a 3D root image can be reconstructed with dedicated reconstruction codes such as Rootwork\cite{Gu2011} or RootReader3D\cite{Clark2014}. Trait analysis can then be conducted in 3D, e.g., root system volume, surface area, total root length, number of branches, etc.
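The tomographic principle behind such reconstructions can be illustrated with a minimal unfiltered back-projection sketch (the packages cited above implement far more sophisticated pipelines; restricting the views to multiples of 90 degrees keeps the rotations exact, and all names are illustrative):

```python
import numpy as np

def project(image, k):
    """Parallel-beam projection of a 2D image viewed at angle k*90 degrees."""
    return np.rot90(image, k).sum(axis=0)

def backproject(projections, size):
    """Unfiltered back-projection: smear each projection uniformly along its
    viewing direction and accumulate over all views."""
    recon = np.zeros((size, size))
    for k, proj in enumerate(projections):
        smear = np.tile(proj, (size, 1))   # constant along the ray direction
        recon += np.rot90(smear, -k)       # rotate back to the lab frame
    return recon / len(projections)

# a single bright voxel (a "root tip") in the center of a 9x9 slice
img = np.zeros((9, 9))
img[4, 4] = 1.0
sino = [project(img, k) for k in range(4)]  # views at 0, 90, 180, 270 degrees
recon = backproject(sino, 9)
# the reconstruction peaks at the true position of the point
assert np.unravel_index(recon.argmax(), recon.shape) == (4, 4)
```

With many views the back-projected peak sharpens; filtered variants suppress the characteristic star-shaped smearing visible in this toy example.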
\subsection{Imaging protocol}
Figure \ref{fig:OPT_PET_compare} shows a young maize plant with a structural image acquired from the OPT system and a functional image acquired from the plant PET system with $^{11}$CO$_2$ labeling. The two modalities exhibit similar root structure, but also reveal some differences, such as the hot spots representing photosynthetic carbon molecules (mainly sucrose) that appear around the root tips in the PET image at a later time point (around 111 minutes). Intuitively, these hot spots should be correlated with the most actively growing roots, since the plant must allocate carbon resources to these root tips. With the optical images, the actual root growth rate can be measured.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{OPT_PET_compare.eps}
\caption{ PET (left) and corresponding optical (right) images of the maize roots on the 6th day after germination.}
\label{fig:OPT_PET_compare}
\end{figure}
A series of studies with a total of 3 subjects was carried out consecutively with the same imaging protocol to quantify the correlation between root growth rate and activity concentration around root tips. PET scanning started on the day the first green leaf appeared, and was carried out once a day for 5 days. Each morning, around 10 mCi of $^{11}$CO$_2$ was administered into a custom-built plant labeling chamber. After 6 minutes of labeling, residual activity was flushed out with fresh air before a 2$\sim$3 hour PET scan. The same plant was imaged 3 times a day by the OPT system (morning, afternoon and evening, 8 hours apart).
\subsection{Image processing}
The OPT images are reconstructed using photos from 72 angles. PET images are reconstructed with an ML-EM algorithm\cite{Mathews2013}. The PET and OPT images are aligned using the AMIDE open source software\cite{Loening2003}. Figure \ref{fig:Coregister_PET_OPT} shows the reconstructed 3D OPT and PET images and the co-registered images, which are well aligned. Small fine root structures shown in the optical image cannot be clearly seen in the PET image, which is partly attributable to the relatively low spatial resolution and partial volume effect in PET imaging\cite{Soret2007a} and is also likely related to the biological fact that less carbon is allocated there. Some root tips growing close to the wall of the glass jar are absent in the optical image, but can be clearly seen in the PET images. This relates to the refraction-induced distortion in optical images that cannot be fully compensated with the rectangular water tank.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Coregister_PET_OPT.eps}
\caption{ Left: reconstructed 3D optical root image. Right: reconstructed 3D whole plant PET image of a young maize labeled with $^{11}$CO$_2$. Middle: Co-registered 3D PET and OPT images using AMIDE software. Activity concentration represented in PET data is color coded and 3D root images captured from OPT system are in white.}
\label{fig:Coregister_PET_OPT}
\end{figure}
The main roots of each subject are selected from the optical images, and these images are also used to guide the region of interest (ROI) contouring in the PET images. The 3D coordinates of the main roots are tracked at different time points, and the growth rates (mm/day) of the selected roots are calculated based on these time series data points. PET images are first decay corrected, and the activity concentration is measured with ROIs, each of size 6.4 mm x 6.4 mm x 6.4 mm (8 x 8 x 8 pixels in the image).
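The two derived quantities can be sketched as follows (a hedged illustration: the 20.4-minute value is the nominal $^{11}$C half-life, and the function names are ours, not those of an actual analysis package):

```python
import numpy as np

T_HALF_C11 = 20.4  # nominal 11C half-life in minutes (assumed value)

def decay_correct(activity, t_min):
    """Correct a measured activity back to a common reference time t = 0."""
    return activity * 2.0 ** (t_min / T_HALF_C11)

def growth_rate(tip_xyz, t_days):
    """Root growth rate (mm/day) from 3D tip coordinates tracked over time:
    total path length of the tip divided by the elapsed time."""
    tip_xyz = np.asarray(tip_xyz, float)
    steps = np.linalg.norm(np.diff(tip_xyz, axis=0), axis=1)  # mm per interval
    return steps.sum() / (t_days[-1] - t_days[0])

# after one half-life, the corrected activity is twice the measured one
assert np.isclose(decay_correct(1.0, T_HALF_C11), 2.0)
# a tip moving 3 mm and then 4 mm (perpendicular) over one day: 7 mm/day
tips = [(0, 0, 0), (3, 0, 0), (3, 4, 0)]
assert np.isclose(growth_rate(tips, [7.0, 7.5, 8.0]), 7.0)
```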
\section{Results}
\subsection{Dynamic 3D PET images}
Each PET image is created with data from 3 different bed positions to cover the entire plant (providing a $\sim$28 cm axial FOV). Times presented in Figure \ref{fig:Dynamic_3DPET} and Figure \ref{fig:PEToverDays} are referenced to the beginning of the $^{11}$CO$_2$ injection. The duration of the first 6 image frames is 12 minutes each, divided into 0.5 minutes for the first bed position to cover the shoot part of the plant, 5 minutes for each of the other two positions to image the stem and root parts, and 0.5 minutes for completing the needed mechanical motion. The duration of the last 4 frames is increased to 22 minutes (1 minute, 10 minutes and 10 minutes for the 3 bed positions, respectively, plus 1 minute for mechanical motion) to collect enough events for reconstructing clear PET images. The total duration, including the labeling and PET scan, reaches 2.5 hours to make sure enough activity has been translocated to the root tips.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Dynamic_3DPET.eps}
\caption{ Dynamic PET images with 10 frames acquired with a duration of 160 minutes. The time marked on each frame refers to the start time for acquiring the frame.}
\label{fig:Dynamic_3DPET}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{PEToverDays.eps}
\caption{ PET images for Subject 1 showing the last frames of all five days and the corresponding optical image.}
\label{fig:PEToverDays}
\end{figure}
Figure \ref{fig:Dynamic_3DPET} shows the 3D dynamic PET images of Subject 1 acquired on Day 8. Translocation of the $^{11}$C to the root part starts around 30 minutes post-labeling, and the activity distribution reaches a relatively stable state after two hours. In the last frame of the PET images, a clear root structure is shown and hot spots appear around the main root tips. The 5-day PET studies for the 3 subjects show a similar dynamic change of the $^{11}$C translocation pattern. For these young maize plants, carbon starts to transfer to the root part only after a certain developmental stage is reached, and hot spots appear after that. Figure \ref{fig:PEToverDays} shows the last frame of the PET images for Subject 1 for all 5 days; the actual root growth can be clearly observed from the PET images directly.
\subsection{Correlation of activity concentration and roots growth rate}
As shown in Figure \ref{fig:PEToverDays}, the root growth rate is measured for 8 selected ROIs of Subject 1 from the optical images of Day 7 and Day 8, and the corresponding activity concentration is measured from the PET data of Day 7. A good linear correlation between activity concentration and root growth rate in this 24-hour window is shown in Figure \ref{fig:GrowthRateVsActivity}. These data clearly suggest that the activity accumulated at the root tips represents carbon allocation by the plant that drives root growth.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{GrowthRateVsActivity.eps}
\caption{ Correlation of root growth rate with activity concentration around the root tips. The data marked with red squares are based on the root growth rate measured by OPT and the activity concentration measured from PET images of selected ROIs. The data shown with blue diamonds are based on the PET images alone, from which both the root growth rate and the activity concentration are measured.}
\label{fig:GrowthRateVsActivity}
\end{figure}
As mentioned above, with these high-spatial-resolution PET images, a clear root structure appears in the later time frames. The 3D coordinates of the root tips can thus also be measured from the PET images, and the root growth rate can be calculated accordingly. Figure \ref{fig:GrowthRateVsActivity} also shows a similar linear correlation between the activity concentration around the root tips and the root growth rate measured from the PET data directly. This result indicates that this kind of study can be carried out in regular soil using PET only, which may provide more precise data for modeling the relation between carbon allocation and actual root growth.
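The correlation analysis amounts to an ordinary least-squares fit; a minimal sketch with synthetic ROI values (illustrative data only, not the measured values of the figure):

```python
import numpy as np

def linear_fit(activity, rate):
    """Least-squares line rate = a*activity + b and Pearson correlation r."""
    a, b = np.polyfit(activity, rate, 1)
    r = np.corrcoef(activity, rate)[0, 1]
    return a, b, r

# synthetic ROI data following an exact linear relation
activity = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rate = 2.0 * activity + 0.5
a, b, r = linear_fit(activity, rate)
assert np.isclose(a, 2.0) and np.isclose(b, 0.5) and np.isclose(r, 1.0)
```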
\subsection{Environmental stress induced root growth rate change}
Many of the subjects show a similar linear correlation between activity concentration and root growth rate, while some data sets show different and changing relations. Figure \ref{fig:Subject3_GrowthRate_Activtiy} shows the change of the correlation between activity concentration at the root tips and root growth rate for Subject 3 during the study. The PET data are from Day 6 to Day 8 and the optical data from Day 6 to Day 9. A good correlation still exists between activity concentration at the root tips and root growth rate on Day 6. On Day 7, however, the growth rates of roots 5 and 6 decreased considerably, followed by a further sharp decline for those two roots on the following day.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Subject3_GrowthRate_Activtiy.eps}
\caption{Change of the correlation between root growth rate and activity concentration around root tips of Subject 3 from Day 6 to Day 8. Upper: co-registered 3D PET and optical images for different days. Lower: correlation of activity concentration around root tips and growth rate of 6 selected main roots measured from Day 6 to Day 8.}
\label{fig:Subject3_GrowthRate_Activtiy}
\end{figure}
The explanation for this change can be traced back to the corresponding optical images. Figure \ref{fig:Subject3_root_track} shows the growth tracks of roots 5 and 6 in different projection planes with time stamps marked. Root 5 encountered the wall of the glass jar and started to change its growth direction; as seen in the optical image viewed in the X-Y plane, this kind of change happened even a bit earlier for root 6. These results suggest that the root apical meristem may need to consume more carbon to change its growth direction.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Subject3_root_track.eps}
\caption{Root growth tracks of root 5 and 6 from different projection planes of the Subject 3 maize plant.}
\label{fig:Subject3_root_track}
\end{figure}
This new observation demonstrates the potential of the combined imaging technique for measuring the up/down modulation of molecular processes in plant studies when environmental stresses arise.
\subsection{Temporal information revealed by PET images predicts root growth ahead of time}
PET provides near real-time in situ measurement of molecular processes in plants that often precede visible morphological changes. This is also observed in our preliminary studies with maize plants. Figure \ref{fig:Predict_root_growth} shows that signatures of carbon allocation can predict lateral root outgrowth around 48 hours prior to the micro-morphological change.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Predict_root_growth.eps}
\caption{Carbon allocation measured by PET (left) can identify locations of lateral root emergence 48 hours before morphological change can be observed (right).}
\label{fig:Predict_root_growth}
\end{figure}
\section{Discussion and Conclusion}
PET provides \textit{in vivo} measurement of the dynamic radiotracer distribution in a whole plant non-invasively. The combined PET and optical imaging study of maize roots shows a correlation between the activity concentration in the root tips and the root growth rate over days. PET also aids in answering plant physiological puzzles by providing 3D dynamic and functional information on the whole plant.
More applications will be explored by collaborating with plant biologists and by combining additional imaging modalities, such as x-ray CT and hyper-spectral imaging\cite{Furbank2011}\cite{Fiorani2013a}.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Strongly correlated quantum lattice models pose some of the most intriguing physical questions and technical challenges, due to the fact that the number of degrees of freedom increases exponentially with the system size. Classifying the intricacy of calculating ground state energies of such systems has become a vivid branch of complexity theory \cite{Kempe2006-35,Oliveira2008-8,Gottesman2009_05}.
Especially for the analysis of ground state properties in one-dimensional systems, the density-matrix renormalization-group (DMRG) \cite{White1992-11,Schollwoeck2005} provides a numerical approach that is often extraordinarily accurate. It works by variational optimization of a suitable class of states, so-called matrix product states \cite{Accardi1981,Fannes1991,Rommer1997}.
For two- and three-dimensional systems, quantum Monte-Carlo methods (e.g., positive-definite path integral \cite{Suzuki1977-58,Hirsch1982-26} or stochastic series expansion \cite{Sandvik1991-43} representation) are extremely successful for bosonic and unfrustrated spin models, but are bothered by the sign problem \cite{Hirsch1982-26,Takasu1986-75} for some interesting frustrated spin and fermionic models, including the notorious Fermi-Hubbard model
\begin{equation*}
\hat H= -\sum_{\langle i,j\rangle,\sigma } (\hat f_{i\sigma}^\dag \hat f_{j\sigma}^{\phantom{\dag}}+h.c.)
+ U\sum_i \hat n_{i\uparrow}\hat n_{i\downarrow} -\mu\sum_{i,\sigma}\hat n_{i\sigma}
\end{equation*}
which is a candidate for the description of the essential physics of high-temperature superconductivity. Recently, new tools such as the diagrammatic Monte Carlo method have been developed \cite{Prokofev1998-81,VanHoucke2008}, which have a less severe sign problem and have, e.g., been demonstrated to give precise results for the repulsive Fermi-Hubbard model in the (correlated) Fermi liquid regime \cite{Kozik2009}.
In a complementary development, generalizations of DMRG ideas to higher dimensions have been put forward. To this purpose, first, one needs to give an ansatz for the many-particle state for which the number of degrees of freedom scales only polynomially with the system size but which is (hopefully) still appropriate to describe, e.g., the ground states of the higher-dimensional system. Second, a way of efficiently evaluating interesting local observables or correlators with respect to the ansatz states needs to be identified.
Third, a corresponding algorithm to determine or approximate the ground state within the ansatz class on a classical computer needs to be worked out.
Focusing first on spin (or equivalently qudit) lattices, several suggestions have been put forward, such as {\it tensor product ans\"atze} or {\it projected entangled pair states} (PEPS) \cite{Niggemann1997-104,Nishino2000-575,Martin-Delgado2001-64,Verstraete2004-7,Isacsson2006-74,Verstraete2008-57},
{\it tree tensor networks} (TTN) \cite{Shi2006-74},
or {\it multiscale entanglement renormalization ans\"atze} (MERA) \cite{Vidal-2005-12,Dawson2008-100,Cincio2008-100,Evenbly2009-79,Giovannetti2009-79}.
In this article we address the question of how higher-dimensional \emph{fermionic} systems can be studied via ansatz states. If one maps the system to a spin model by expressing states and operators in the occupation number representation with respect to a fixed ordering of the modes, inevitably long-range ($O(L^{d-1})$, where $L$ is the linear size of the $d$-dimensional lattice) interaction terms occur, rendering simulation unfeasible:
The spin representation of a term $\hat f_j^\dagger\hat f_k$, $j<k$,
under the Jordan-Wigner transformation \cite{Jordan1928} is for instance
\begin{equation*}
\sigma_j^- \otimes
\bigotimes_{j<l<k} \sigma_l^z
\otimes
\sigma^+_{k},
\end{equation*}
containing a so-called Jordan-Wigner string.
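The string structure can be made concrete with a small numerical sketch (our own illustration, in one common convention: qubit $|0\rangle$ is the empty mode and the string factor on a mode is $\sigma^z$). It verifies the canonical anticommutation relations on three modes and exhibits the string inside a hopping term between non-adjacent modes:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])            # (-1)^n on one mode, basis |0>, |1>
an = np.array([[0.0, 1.0],
               [0.0, 0.0]])          # local annihilation operator

def f(j, n):
    """Jordan-Wigner matrix of f_j on n ordered modes (j = 0..n-1):
    a sigma^z string on all earlier modes, the local annihilator on mode j."""
    return reduce(np.kron, [sz] * j + [an] + [I2] * (n - j - 1))

n = 3
ops = [f(j, n) for j in range(n)]
Id = np.eye(2 ** n)
# canonical anticommutation relations {f_i, f_j^dag} = delta_ij (real matrices)
for i in range(n):
    for j in range(n):
        anti = ops[i] @ ops[j].T + ops[j].T @ ops[i]
        assert np.allclose(anti, Id if i == j else 0.0 * Id)
# the hopping term f_0^dag f_2 carries a sigma^z string on the middle mode
hop = ops[0].T @ ops[2]
assert np.allclose(hop, reduce(np.kron, [an.T, sz, an]))
```

For a $d$-dimensional lattice flattened into one mode ordering, such strings acquire the $O(L^{d-1})$ length mentioned above, which is precisely what the FOC formalism avoids.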
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{fig001.pdf}
\caption{(a) The graphical representation of a FOC as a directed graph. The nodes represent fermionic operators. The arcs (directed edges) represent (partial) multiplications, partial traces, and open indices. Each arc is labeled by the set of modes it corresponds to. The operator corresponding to a certain vertex maps from the modes of all incoming arcs to the modes of the outgoing arcs. In the example, the arc ``$e$'' corresponds to a partial multiplication, arc ``$p$'' to a partial trace, and arcs ``$a$'' and ``$b$'' to open incoming and outgoing indices, respectively. The node at the top corresponds to a ket vector from $\mathcal{F}_{m\cup n}$ and the node at the bottom left to a bra vector (element of the dual of $\mathcal{F}_{f\cup i\cup j}$).
As a whole, the circuit is a fermionic operator mapping from $\mathcal{F}_a$ to $\mathcal{F}_b$.
(b) A FOC for the calculation of the expectation value of a local observable (square in the center) with respect to a MERA state with two renormalization steps. The hatched flat rectangles represent isometries which correspond to a coarse graining step in a (realspace) renormalization procedure. The other rectangles represent unitaries that are supposed to reduce entanglement of adjacent blocks before a coarse graining step. The circuit contains only those unitaries and isometries of the MERA that lie inside the so-called causal cone of the observable; all others cancel out.
(c) FOC for a tree tensor network (TTN) state, here for a genuine tree system, the Bethe lattice with coordination number $z=3$. To have the value of a FOC well-defined, one needs to specify an ordering among the operators, assigning to each operator a number $\tau=1,2,\dotsc$. In example (a), we arbitrarily chose $\tau$ to increase from the bottom to the top. In example (b), a natural ordering, motivated by the picture of subsequent renormalization steps, is also directed from the bottom to the top; as we will explain later, the ordering inside one layer is irrelevant, as the contained isometries are all parity-preserving and operate on disjoint sets of modes. Analogously in example (c), we can choose $\tau$ to increase in radial direction, starting from the central node.}
\label{fig:foc}
\end{figure*}
Accompanied by first numeric results, very recently, fermionic generalizations of MERA states were suggested in Refs.\ \cite{Corboz2009_04,Pineda2009_05} and for PEPS in Ref.~\cite{Kraus2009_04}.
Specifically, in Ref.\ \cite{Pineda2009_05}, also an algorithm for fermionic MERA is given by \emph{dynamical reordering}. It exploits the possibility to change the ordering of the fermionic modes during the algorithm to confine all occurring Jordan-Wigner strings to a sublattice of finite extent, the \emph{causal cone} of, e.g., a local observable in the MERA. Going beyond that result, here, we pose the question whether a given general circuit of fermionic operators (FOC, examples in Fig.~\ref{fig:foc}) can be contracted with the \emph{same} efficiency as a corresponding circuit of qudit operators (QUOC). This is answered in the affirmative for the case where each operator in the FOC is \emph{parity-symmetric} (either fermion number parity preserving or changing): We show constructively that the elementary contraction operations for such a FOC can be executed in an arbitrary sequence and give a detailed account of the algorithm.
As compared to the requirements for the contraction of a certain QUOC with a given contraction scheme, the number of operations and memory requirements for the same contraction scheme, applied to a corresponding FOC, increase only by a marginal amount.
This allows one to translate the algorithms already developed for spin systems (for PEPS, e.g., in Refs.\ \cite{Nishino2000-575,Verstraete2004-7,Gu2008-78,Jordan2008-101,Orus2009_05}, for MERA, e.g., in Refs.\ \cite{Dawson2008-100,Rizzi2008-77,Evenbly2009-79,Giovannetti2009-79}) to the fermionic case without loss of computational efficiency.
Giving further details for the case of PEPS, we argue that application of the FOC scheme to fermionic PEPS appears to provide a more efficient algorithm than that presented in Ref.\ \cite{Kraus2009_04} where a mapping to a spin system was employed by choice of a fixed mode ordering.
In Sec.~\ref{sec:FOC}, the idea of the FOC is introduced and given a proper definition. Rules for the execution of the elementary contraction operations for two or one operators are derived in Sec.~\ref{sec:contractions}, after which the importance of a predefined order among the operators constituting the FOC is pointed out in Sec.~\ref{sec:operatorOrder}. It is also explained how this operator order can be modified at marginal computational cost, allowing the elementary contractions to be executed efficiently in an arbitrary sequence. The implications for computational efficiency and locality considerations are summarized in Sec.~\ref{sec:costs}. Sec.~\ref{sec:furtherOp} introduces further useful operations on FOCs that are employed in an efficient contraction algorithm for fermionic (i)PEPS in Sec.~\ref{sec:PEPS}. The article closes with a short discussion.
\section{Fermionic operator circuit}\label{sec:FOC}
\subsection{General structure}
A fermionic operator circuit (FOC) is a product of (not necessarily physical, i.e., in general not particle number parity preserving) fermionic operators $\hat A_i:\mathcal{F}_m\to\mathcal{F}_n$ of in general different support, specified by sets of mode labels $m,n\subset\cal L$.
Further elements of FOCs are partial traces and partial projections.
Each mode label $x\in\mc L$ occurs at most twice, once for an incoming mode of some operator and, the second time, for an outgoing mode of the same or another operator. This means for graphical representations of FOCs as graphs, where each vertex corresponds to one operator $\hat A_i$, that each arc (directed edge) of the graph carries a set of unique mode labels. As explained in Sec.~\ref{sec:FOC-definition} this convention allows for a convenient definition of the FOC such that it has a well-defined value.
Prominent examples of FOCs are fermionic versions of known qudit operator circuits (QUOC), important for the simulation of strongly correlated $d$-dimensional systems:
\emph{multiscale entanglement renormalization ans\"atze} (MERA) \cite{Vidal-2005-12} and \emph{tree tensor networks} (TTN) \cite{Shi2006-74}; Fig.~\ref{fig:foc}. As we show in Sec.~\ref{sec:PEPS} also the fermionic variants of {\it tensor product ans\"atze} or {\it projected entangled pair states} (PEPS) \cite{Niggemann1997-104,Nishino2000-575,Martin-Delgado2001-64,Verstraete2004-7} are covered in the FOC framework; Fig.~\ref{fig:fPEPS_a}.
For a MERA, a possible choice for mode labels are the renormalization step $\tau$ combined with a site label from the corresponding lattice.
For numerical purposes, each fermionic operator $\hat A:\mathcal{F}_{m}\to\mathcal{F}_{n}$ of the circuit is stored in an occupation number representation with respect to certain orderings $\mf{m}$ and $\mf{n}$ of the sets of modes $m,n\subset\mc L$. We consider such orderings as \emph{bijective enumerations} $\mf m:\{1,\dots,|m|\}\to m$ and $\mf n:\{1,\dots,|n|\}\to n$ of the sets, where $|m|$ denotes the number of elements in $m$. We may also treat such enumerations as vectors.
For a chosen ordering $\mf{n}$ of the modes in $n$, we denote the basis states of the Fock space $\mathcal{F}_n$ by
\begin{equation}
\Ket{\vec{n}}{\mf{n}}= \Ket{n_1,\dots,n_{|n|}}{\mf{n}}:= (\hat f_{\mf{n}_1}^\dag)^{n_1}\dots(\hat f_{\mf{n}_{|n|}}^\dag)^{n_{|n|}} \Ket{\text{\o}}{n},
\end{equation}
where $\Ket{\text{\o}}{n}$ labels the vacuum state of the Fock space $\mathcal{F}_n$ and $\hat f_i$ are the corresponding anticommuting ladder operators with $\{\hat f_i,\hat f_j^\dag\}=\delta_{ij}$.
The operator $\hat A$ can hence be stored as the complex $2^{|n|}\times2^{|m|}$ matrix
\begin{equation}\label{eq:JWrep}
J_{\mf{n},\mf{m}}(\hat A) = \sum_{\vec{n},\vec{m}} |\vec{n})\Bra{\vec{n}}{\mf{n}} \hat A \Ket{\vec{m}}{\mf{m}}(\vec{m}|.
\end{equation}
This is an occupation number representation or Jordan-Wigner transform \cite{Jordan1928} of the operator $\hat A$.
Of course it is also possible to restrict (for each set of modes) to a reduced basis. The only information about the basis states actually needed is their particle number parity; see Sec.~\ref{sec:modeReordering}.
The states occurring in \eqref{eq:JWrep} are elements of different Hilbert spaces:
$\Ket{\vec{m}}{\mf{m}}\in \mathcal{F}_{m}$,
$\Ket{\vec{n}}{\mf{n}}\in \mathcal{F}_{n}$,
$|\vec{m})\in \mathcal{B}_{|m|}$, \text{ and}
$|\vec{n})\in \mathcal{B}_{|n|}$, where $\mathcal{B}_{|n|}$ denotes the $|n|$-qubit Hilbert space
\begin{equation}
\mathcal{B}_{|n|}=({\mathbbm{C}}^2)^{\otimes |n|}.
\end{equation}
A similar approach can be used for anyonic systems \cite{Fradkin1989-63}.
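The role of the parity information can be illustrated by the only extra bookkeeping a fermionic circuit requires relative to a qudit circuit: when the mode ordering of a basis state is permuted, the state picks up the sign $(-1)^{\#\{\text{inverted pairs of occupied modes}\}}$. A small sketch of this rule (our own illustration, names purely illustrative):

```python
def reorder_sign(occ, perm):
    """Sign acquired by a basis state with occupations occ = (n_1, ..., n_k)
    when the modes are brought into a new order, where perm[i] is the old
    position of the mode now at slot i: (-1)^(inversions of occupied modes)."""
    occupied_order = [p for p in perm if occ[p] == 1]
    inversions = sum(
        1
        for i in range(len(occupied_order))
        for j in range(i + 1, len(occupied_order))
        if occupied_order[i] > occupied_order[j]
    )
    return (-1) ** inversions

# swapping two occupied modes gives a minus sign ...
assert reorder_sign((1, 1), (1, 0)) == -1
# ... while moving an empty mode past an occupied one does not
assert reorder_sign((1, 0), (1, 0)) == 1
# a cyclic shift of |1,1,1> is an even permutation of the occupied modes
assert reorder_sign((1, 1, 1), (2, 0, 1)) == 1
```

Since only the parity of each basis label enters such signs, storing the parity alongside a reduced basis, as remarked above, suffices.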
\subsection{Definition of a FOC} \label{sec:FOC-definition}
A fermionic operator circuit is specified by a set of fermionic operators $\{\hat A_i:\mathcal{F}_{m_i}\to\mathcal{F}_{n_i}\}$, where each mode label occurs at most twice, once as an incoming mode of an operator $\hat A_i$ and once as an outgoing mode of an operator $\hat A_j$.
Mode labels which occur two times in this fashion imply a (partial) multiplication, Fig.~\ref{fig:operations}a, or (partial) trace, Fig.~\ref{fig:operations}b, of the corresponding operators with respect to that set of modes. Both operations together define a general contraction of two operators, namely contraction of some outgoing modes of $\hat A$ with some incoming modes of $\hat B$ and, simultaneously, of some incoming modes of $\hat A$ with some outgoing modes of $\hat B$; see Fig.~\ref{fig:operations}c.
Mode labels, which occur only once, correspond to modes that the FOC as a whole maps from or maps to.
To have the value of a FOC well-defined, one needs to specify an ordering of the contained operators $\{\hat A_i\}$. The value of the FOC is then defined by the one resulting from doing the contractions in the order $\hat A_N\circ\dotsc\circ\hat A_2\circ\hat A_1$, where ``$\hat B\circ \hat A$'' denotes the contraction of all common modes of the operators $\hat A$ and $\hat B$; see Fig.~\ref{fig:operations}c. As discussed in Sec.~\ref{sec:operatorOrder}, this operation is associative but in general not commutative, $\hat B\circ\hat A\neq \hat A\circ\hat B$.
\subsection{Remarks on the definition} \label{sec:FOC-remarks}
\begin{figure}[t]
\centering
\includegraphics[width=0.86\linewidth]{fig002.pdf}
\caption{(Color online)
The operator order goes from the bottom to the top. Left: Example for an operator circuit on a lattice $L=m\cup n\cup p\cup q$. It corresponds to the expression $\operatorname{Tr}_{m\cup q}( \Bra{\text{\o}}{n} \hat{\mc A}_4 \cdot \dotsc \cdot \hat{\mc A}_1 \Ket{\text{\o}}{p})$, cmp.\ Eq.~\eqref{eq:latticeCircuit}, where, e.g., $\hat{\mc A}_1=\hat{a}_1\otimes \operatorname{Id}_{p\cup q}$. For convenience, we require in the definition of fermionic operator circuits, Sec.~\ref{sec:FOC-definition}, that each mode occurs at most twice, once as an incoming mode and once as an outgoing mode of some operators. This can be achieved by a relabeling of the modes, yielding the FOC $\hat A_4\circ\dotsc\circ\hat A_1$ (right). This does not change the matrix elements of the operators and the FOC. One has for example
$\Bra{\vec{m}'\vec{n}'}{\mf{m}^{(2)}\oplus\mf{n}^{(2)}}\hat A_1\Ket{\vec{m}\vec{n}}{\mf{m}^{(1)}\oplus\mf{n}^{(1)}}
=\Bra{\vec{m}'\vec{n}'}{\mf{m}\oplus\mf{n}}\hat a_1\Ket{\vec{m}\vec{n}}{\mf{m}\oplus\mf{n}}$,
where $\mf{m}$, $\mf{n}$, $\mf{m}^{(i)}$, $\mf{n}^{(i)}$ are orderings of the sets of modes $m$, $n$, $m_i$, and $n_i$. Here, with the relabeling, also the partial projections for operators $\hat a_2$ and $\hat a_3$ have been executed.}
\label{fig:latticeCircuit}
\end{figure}
In Sec.~\ref{sec:contractions}, as for the partial contraction operation, we will also give a rule for a partial projection of some modes to basis states (i.e., $\{\hat n_i\}$ eigenstates). This is actually already covered by the contraction operation but perhaps useful to have explicitly, as such projections are frequently used in considerations on operator circuits.
Note that the operators $\{\hat A_i\}$ are not assumed to be from the so-called algebra of physical operators -- i.e., particle number parity preserving. This is for example useful when calculating correlators of the form $\langle \hat f_i^\dag\hat f_j\rangle$ with respect to MERA or TTN states. In such a calculation, the operators $\hat f_i^\dag$ and $\hat f_j$ become (clearly not parity preserving) elements of a FOC.
However, it will be explained in Sec.~\ref{sec:operatorOrder} that in order to be able to do the contraction of the FOC in an arbitrary sequence (necessary to get optimum numerical efficiency), i.e., to be able to deviate from the order $\hat A_N\circ\dotsc\circ\hat A_2\circ\hat A_1$, it is in general necessary that each $\hat A_i$ is either parity preserving or parity changing.
That mode labels are required to be unique is not a limitation. Consider for example an operator circuit that is defined on a lattice $L$ and does not have that property,
\begin{equation}\label{eq:latticeCircuit}
\operatorname{Tr}_t( \Bra{\text{\o}}{o} \hat{\mc A}_N \cdot \dotsc \cdot \hat{\mc A}_2 \cdot \hat{\mc A}_1 \Ket{\text{\o}}{i}).
\end{equation}
Here $t\subset L$ denotes a subset of modes that are traced out, and $i$ and $o\subset L$ denote subsets of modes that are projected out; $t\cap (i\cup o)=\emptyset$. The circuit hence maps from $\mathcal{F}_{L\setminus(t\cup i)}$ to $\mathcal{F}_{L\setminus(t\cup o)}$. Each operator $\hat{\mc A}_i$ acts nontrivially on a subset of the modes: $\hat{\mc A}_i=\hat a_i\otimes \operatorname{Id}_{L\setminus \ell_i}$ with $\hat a_i:\mathcal{F}_{\ell_i}\to\mathcal{F}_{\ell_i}$ where $\ell_i\subset L$.
Now, relabeling the modes to make them unique, as depicted in Fig.~\ref{fig:latticeCircuit}, does of course not change the matrix elements of the FOC. It yields a proper FOC $\hat A_N\circ \dotsc\circ\hat A_2\circ\hat A_1$, where each operator $\hat A_i$ has the same matrix elements as the corresponding $\hat a_i$ (partial projections onto the vacuum can be executed in the same step, as in our example, or introduced as separate elements of the FOC). The contraction rules in Sec.~\ref{sec:contractions} are constructed such that this FOC and \eqref{eq:latticeCircuit} have the same matrix elements, i.e., are related by a trivial relabeling of incoming and outgoing modes.
\subsection{Rationale behind calculations and derivations}
\begin{itemize}
\item The fermionic operators are maps from one Fock space of ``incoming modes'' to another (in general unrelated) Fock space of ``outgoing modes''. In general, these two Fock spaces have different dimensions.
\item Each arc (directed edge) in a graphical representation of a FOC corresponds to a set of unique fermionic modes.
\item Vacuum states are mode specific, and ladder operators of other, unrelated modes can formally be commuted past them. Moving a ladder operator past a basis state produces the parity sign of that state. Take for example $n=\{1,2\}$ and $\mf{n}=(1,2)$; then
\begin{eqnarray}
\hat f_3^\dag \Ket{n_1n_2}{\mf{n}}
&=&\hat f_3^\dag (\hat f_1^\dag)^{n_1} (\hat f_2^\dag)^{n_2} \Ket{\text{\o}}{n} \nonumber\\
&=&(-1)^{n_1+n_2} (\hat f_1^\dag)^{n_1} (\hat f_2^\dag)^{n_2} \Ket{\text{\o}}{n}\cdot \hat f_3^\dag \nonumber\\
&=&(-1)^{n_1+n_2} \Ket{n_1n_2}{\mf{n}}\cdot \hat f_3^\dag
\label{eq:permutationTrick}
\end{eqnarray}
The rationale behind this is that, given an expression $\Bra{\text{\o}}{m\cup n} \hat A_m \hat A_n \Ket{\text{\o}}{m\cup n}$ for disjoint sets of modes $m$ and $n$, where $\hat A_m$ and $\hat A_n$ are polynomials in the ladder operators of the modes in $m$ and $n$, respectively, the expression factorizes as
\begin{equation*}
\Bra{\text{\o}}{n}\Bra{\text{\o}}{m} \hat A_m \hat A_n \Ket{\text{\o}}{m}\Ket{\text{\o}}{n}
=\Bra{\text{\o}}{m} \hat A_m \Ket{\text{\o}}{m} \Bra{\text{\o}}{n} \hat A_n \Ket{\text{\o}}{n}
\end{equation*}
\end{itemize}
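As a concrete check of the sign in Eq.~\eqref{eq:permutationTrick}, the following minimal Python/NumPy sketch builds a Jordan-Wigner matrix representation for three modes (the matrix construction and mode ordering are our own illustrative choices, not part of the formalism) and verifies that moving $\hat f_3^\dag$ past a basis state of the modes $\{1,2\}$ produces the parity sign of that state:

```python
import numpy as np

# Jordan-Wigner representation of three fermionic modes (ordering 1, 2, 3).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
f = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

f1, f2, f3 = kron(f, I2, I2), kron(Z, f, I2), kron(Z, Z, f)

vac = np.zeros(8)
vac[0] = 1.0                              # |0 0 0>

def basis_state(n1, n2, n3=0):
    # |n1 n2 n3> = (f1^dag)^n1 (f2^dag)^n2 (f3^dag)^n3 |vac>
    s = vac.copy()
    for op, n in ((f3, n3), (f2, n2), (f1, n1)):
        if n:
            s = op.T @ s
    return s

# Moving f3^dag past a basis state of modes {1, 2} yields the parity sign:
# f3^dag |n1 n2 0> = (-1)^(n1 + n2) |n1 n2 1>
for n1 in (0, 1):
    for n2 in (0, 1):
        assert np.allclose(f3.T @ basis_state(n1, n2, 0),
                           (-1) ** (n1 + n2) * basis_state(n1, n2, 1))
```

The sketch is of course only a consistency check of the sign convention; in actual FOC contractions no such global matrices are ever built.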
\subsection{Notation}
We use the Einstein summation convention, i.e., basis state labels that occur twice in an expression imply summation over that basis.
Basis states for a certain set $m\subset\cal L$ of $|m|$ fermionic modes and an ordering $\mf{m}$ of those modes will be denoted by $\Ket{\vec{m}}{\mf{m}}= (\hat F^{\vec{m}}_{\mf{m}})^\dag \Ket{\text{\o}}{m}$, where $\vec{m}\in \{0,1\}^{|m|}$ and
\begin{equation*}
\hat F^{\vec{m}}_{\mf{m}}:= (\hat f_{\mf{m}_{|m|}})^{m_{|m|}}\cdot\dotsc\cdot (\hat f_{\mf{m}_2})^{m_2}(\hat f_{\mf{m}_1})^{m_1}.
\end{equation*}
The number of particles in a basis state $\Ket{\vec{m}}{\mf{m}}$ is denoted by
\begin{equation}
\bar{m}:= \sum_i m_i.
\end{equation}
The parity of the basis state is $(-1)^{\bar m}$.
Whenever we refer to Fock spaces for unions of sets of modes, as in $\mathcal{F}_{m\cup n}$, it is implied that those sets of modes are disjoint, i.e., $m\cap n=\emptyset$ in that case.
With $\hat B\cdot_n\hat A$, a partial multiplication is denoted: only the outgoing modes $n$ of $\hat A$ are contracted with the corresponding incoming modes $n$ of $\hat B$. Correspondingly, $\operatorname{Tr}_r\hat B$ denotes a partial trace, the contraction of incoming modes $r$ with outgoing modes $r$.
By $\hat B\circ\hat A$, we denote a (partial) contraction of all common incoming/outgoing modes of $\hat A$ with corresponding outgoing/incoming modes of $\hat B$.
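For numerical work, these quantities map onto small helper routines. The following Python sketch (the bit-ordering convention, with the first mode in $\mf m$ taken as the most significant bit, is our own illustrative choice) computes $\bar m$, the parity $(-1)^{\bar m}$, and a linear index for the basis label $|\vec m)$:

```python
import numpy as np

def bar(m_vec):
    # Particle number bar(m) of the basis state |m_vec>.
    return int(np.sum(m_vec))

def parity(m_vec):
    # Parity (-1)^bar(m) of the basis state.
    return (-1) ** bar(m_vec)

def index(m_vec):
    # Linear index of |m_vec) in the occupation number representation;
    # the first mode in the ordering is taken as the most significant bit.
    idx = 0
    for n in m_vec:
        idx = 2 * idx + int(n)
    return idx

assert bar([1, 0, 1]) == 2 and parity([1, 0, 1]) == 1
assert index([1, 0, 1]) == 5
```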
\section{Contractions} \label{sec:contractions}
In the following, rules are given for all elementary contraction operations needed during the evaluation of a FOC. No non-local Jordan-Wigner transformations occur. The only mode reordering necessary concerns the incoming or outgoing modes of single operators, directly before the partial multiplication, trace, etc.\ that affects them.
\subsection{Reordering of modes} \label{sec:modeReordering}
\begin{figure}[t]
\centering
\includegraphics[width=0.65\linewidth]{fig003.pdf}
\caption{(Color online)
To implement contraction schemes for FOCs on a computer, we represent every operator $\hat A:\mathcal{F}_{m}\to\mathcal{F}_{n}$ in an occupation number representation $J_{\mf{n},\mf{m}}(\hat A)$. The primitive contraction rules, given in Sec.~\ref{sec:contractions}, pose some preconditions on the orderings of modes (to get simple formulae). Hence, before applying those rules, it is in general necessary to change, e.g., from $J_{\mf{n},\mf{m}}(\hat A)$ to a representation $J_{\mf{n}',\mf{m}'}(\hat A)$ with different mode ordering. In the depicted example, the order of the outgoing modes changes from $\mf{n}=(\mf{n}_1,\mf{n}_2,\mf{n}_3)$ to $\mf{n}'=(\mf{n}_2,\mf{n}_3,\mf{n}_1)$. As explained in Sec.~\ref{sec:modeReordering}, this requires application of the swap matrix $S$ [Eq.~\eqref{eq:swapOperator}] -- in this example two times.}
\label{fig:modeReordering}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{fig004.pdf}
\caption{(Color online)
Listing of all (contraction) operations that are needed to evaluate a FOC:
(a) partial multiplication $\hat B\cdot_n\hat A$, (b) partial trace $\operatorname{Tr}_r\hat A$, (c) partial contraction $\hat B\circ\hat A=\operatorname{Tr}_r \hat B\cdot_n\hat A$, and (d) partial projection. The latter two operations are not primitives, but are rather applications of partial multiplication and trace. For numerical purposes, it is however useful to implement them. In all cases, the lower operator is defined to come first in the operator ordering.}
\label{fig:operations}
\end{figure*}
Assume we are given a fermionic operator $\hat A:\mathcal{F}_{m}\to\mathcal{F}_{n}$ in the occupation number representation $J_{\mf{n},\mf{m}}(\hat A)$. The contraction rules to follow will pose some preconditions on the orderings of modes (to obtain simple formulae). We hence need to be able to derive from $J_{\mf{n},\mf{m}}(\hat A)$ representations $J_{\mf{n}',\mf{m}'}(\hat A)$ with different mode orders.
All reorderings can be written as sequences of two-mode swaps. Let us assume that $\mf{m}'=\mf{m}$ and that the orderings $\mf{n}$ and $\mf{n}'$ differ only in the modes $\mf{n}_j$ and $\mf{n}_k$ (for $1\leq j<k\leq |n|$), i.e., $\mf{n}_j'=\mf{n}_k$ and $\mf{n}_k'=\mf{n}_j$. While in the old representation $|\vec{n})$ corresponds to the state $\Ket{\vec{n}}{\mf{n}}=(\hat f^\dag_{\mf{n}_1})^{n_1}\dots (\hat f^\dag_{\mf{n}_j})^{n_j} \dots (\hat f^\dag_{\mf{n}_k})^{n_k}\dots \Ket{\text{\o}}{n}$, in the new representation it corresponds to $\Ket{\vec{n}}{\mf{n}'}=(\hat f^\dag_{\mf{n}_1})^{n_1}\dots (\hat f^\dag_{\mf{n}_k})^{n_j} \dots (\hat f^\dag_{\mf{n}_j})^{n_k}\dots \Ket{\text{\o}}{n}$.
To derive the corresponding transformation on the representations of $\hat A$, note that an operator $\hat S_{jk}$ that swaps the modes, i.e., $\hat S_{jk} \hat f_j \hat S_{jk}^\dag= \hat f_k$ and $\hat S_{jk} \hat f_k \hat S_{jk}^\dag= \hat f_j$ is given by \cite{Bravyi2002-1}
\begin{equation}
\hat S_{jk}=\mathbbm{1}-\hat f_j^\dag \hat f^{\phantom{\dag}}_j -\hat f_k^\dag \hat f^{\phantom{\dag}}_k +\hat f_j^\dag \hat f^{\phantom{\dag}}_k + \hat f_k^\dag \hat f^{\phantom{\dag}}_j.
\end{equation}
With $\hat S_{jk} \Ket{\text{\o}}{n}=\Ket{\text{\o}}{n}$, we have hence
\begin{equation}
J_{\mf{n}',\mf{m}}(\hat A) = J_{\mf{n},\mf{m}}(\hat S_{jk} \hat A)
= J_{\mf{n},\mf{n}}(\hat S_{jk}) J_{\mf{n},\mf{m}}(\hat A).
\end{equation}
The occupation number representation (Jordan-Wigner transform) of a term $\hat f_j^\dagger
\hat f_k$ is $\sigma^-_j \otimes (\bigotimes_{l=j+1}^{k-1} \sigma_l^z) \otimes \sigma^+_{k}$, where the $\sigma^{\alpha}$ denote the Pauli matrices. The swap operator for two consecutive modes is in the relevant subspace
\begin{eqnarray}
S&:=&J_{(i,i+1),(i,i+1)}(\hat S_{i,i+1})\nonumber\\
&=& \phantom{+}|0,0)(0,0|-|1,1)(1,1|\nonumber\\
&& +|0,1)(1,0| + |1,0)(0,1|.
\label{eq:swapOperator}
\end{eqnarray}
In practice one may choose to execute all mode reorderings by application of corresponding sequences of swap operators for consecutive modes; see Fig.~\ref{fig:modeReordering}.
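As a quick numerical sanity check (a minimal sketch with our own Jordan-Wigner matrix convention for two consecutive modes and the basis ordering $|00),|01),|10),|11)$), one can verify that the matrix $S$ of Eq.~\eqref{eq:swapOperator} indeed exchanges the two modes:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
f = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

# Jordan-Wigner representations of two consecutive modes.
f1 = np.kron(f, I2)
f2 = np.kron(Z, f)

# Swap matrix in the occupation basis (|00>, |01>, |10>, |11>):
# |00)(00| - |11)(11| + |01)(10| + |10)(01|
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, -1.0]])

# S implements the mode swap: S f1 S^dag = f2 and vice versa.
assert np.allclose(S @ f1 @ S.T, f2)
assert np.allclose(S @ f2 @ S.T, f1)
```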
Whole sets of modes can be swapped as well, which is useful, e.g., when retaining reduced bases. Consider an operator $\hat B:\mathcal{F}_{m}\to\mathcal{F}_{u\cup v\cup x\cup z}$ given in the representation $J_{\mf{n},\mf{m}}(\hat B)$ with $\mf{n}=\mf{u}\oplus\mf{v}\oplus\mf{x}\oplus\mf{z}$, where $\mf{u}$, $\mf{v}$, $\mf{x}$, $\mf{z}$ are orderings for the modes in $u$, $v$, $x$, and $z$. Swapping $\mf{v}$ and $\mf{x}$ is achieved by $(\vec{uxvz}|J_{\mf{n}',\mf{m}}(\hat B)|\vec{m}) = (-1)^{\bar{x}\bar{v}}(\vec{uvxz}|J_{\mf{n},\mf{m}}(\hat B)|\vec{m})$,
where $\mf{n}'=\mf{u}\oplus\mf{x}\oplus\mf{v}\oplus\mf{z}$.
\subsection{Contraction of some outgoing modes of $\hat A$ with the corresponding incoming modes of $\hat B$} \label{sec:multiplyAB}
The partial multiplication of two operators is depicted in Fig.~\ref{fig:operations}a.
Let $\hat A:\mathcal{F}_{m}\to\mathcal{F}_{n\cup p}$ and $\hat B:\mathcal{F}_{n\cup q}\to\mathcal{F}_{k}$, i.e., the operators' outgoing/incoming supports overlap in the modes $n$. Let $\mf{m}$, $\mf{n}$, $\mf{p}$, and $\mf{q}$ be orderings for the modes in $m$, $n$, $p$, and $q$. Assuming we have the two operators in representations $A=J_{\mf{a},\mf{m}}(\hat A)$ and $B=J_{\mf{k},\mf{b}}(\hat B)$ with $\mf{a}=\mf{n}\oplus\mf{p}$ and $\mf{b}=\mf{n}\oplus\mf{q}$, the resulting operator $\hat C:= \hat B\cdot_{n}\hat A:\mathcal{F}_{m\cup q}\to\mathcal{F}_{k\cup p}$ with orderings $\mf{c}_1=\mf{k}\oplus\mf{p}$, $\mf{c}_2=\mf{m}\oplus\mf{q}$ is
\begin{eqnarray}
\hat C&=&\hat B\cdot_{n}\hat A \nonumber\\
&=& \Ket{\vec{k}}{\mf{k}}(\vec{k}|B|\vec{n}'\vec{q})\Bra{\vec{n}'\vec{q}}{\mf{b}} \cdot \Ket{\vec{n}\vec{p}}{\mf{a}}(\vec{n}\vec{p}|A|\vec{m})\Bra{\vec{m}}{\mf{m}} \nonumber\\
&=& (-1)^{\bar{p}\bar{q}+(\bar{p}+\bar{q})(\bar{n}+\bar{n}')}
\Ket{\vec{k}}{\mf{k}}(\vec{k}|B|\vec{n}'\vec{q}) \nonumber\\
&& \times \Bra{\text{\o}}{q}\Bra{\text{\o}}{n}
(F^{\vec{p}}_\mf{p})^\dag F^{\vec{n}'}_\mf{n} (F^{\vec{n}}_\mf{n})^\dag F^{\vec{q}}_\mf{q}
\Ket{\text{\o}}{n}\Ket{\text{\o}}{p} \nonumber\\
&&\times (\vec{n}\vec{p}|A|\vec{m})\Bra{\vec{m}}{\mf{m}} \nonumber\\
&=&(-1)^{\bar{p}\bar{q}} \cdot \Ket{\vec{k}\vec{p}}{\mf{c}_1}(\vec{k}|B|\vec{n}\vec{q}) (\vec{n}\vec{p}|A|\vec{m}) \Bra{\vec{m}\vec{q}}{\mf{c}_2} \nonumber\\
&=:& \Ket{\vec{k}\vec{p}}{\mf{c}_1}(\vec{k}\vec{p}|C|\vec{m}\vec{q})\Bra{\vec{m}\vec{q}}{\mf{c}_2},
\label{eq:partialMultiply}
\end{eqnarray}
where $C$ is the representation $C=J_{\mf{c}_1,\mf{c}_2}(\hat C)$. In short, the transformation rule for the occupation number representations reads
\begin{equation}
(\vec{k}\vec{p}|C|\vec{m}\vec{q}) = (-1)^{\bar{p}\bar{q}}(\vec{k}|B|\vec{n}\vec{q}) (\vec{n}\vec{p}|A|\vec{m}).
\end{equation}
In appendix~\ref{sec:multiplyAB_alternative}, an alternative derivation of this rule is given, where the support of operators $\hat A$ and $\hat B$ is extended prior to the contraction such that there is no need for applying the commutation prescription \eqref{eq:permutationTrick}. The result is the same.
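The sign rule acts directly on the occupation number tensors. The following minimal NumPy sketch (restricted, purely for illustration, to single modes $n$, $p$, $m$, $q$, $k$, so that $\bar p=p$ etc.) applies the rule via \texttt{einsum} and cross-checks it against an explicit loop over all occupation indices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single modes n, p, m, q, k with occupation indices 0 or 1;
# A stored as (n p |A| m), B as (k |B| n q).
A = rng.normal(size=(2, 2, 2))
B = rng.normal(size=(2, 2, 2))

# Rule: (k p |C| m q) = (-1)^(p*q) (k |B| n q)(n p |A| m), summed over n.
sign = (-1.0) ** np.einsum('p,q->pq', np.arange(2), np.arange(2))
C = np.einsum('pq,knq,npm->kpmq', sign, B, A)

# Cross-check against an explicit loop over all occupation indices.
C_loop = np.zeros((2, 2, 2, 2))
for k in range(2):
    for p in range(2):
        for m in range(2):
            for q in range(2):
                for n in range(2):
                    C_loop[k, p, m, q] += (-1.0) ** (p * q) * B[k, n, q] * A[n, p, m]
assert np.allclose(C, C_loop)
```

For multi-mode sets, the occupation indices become multi-indices and the sign is $(-1)^{\bar p\bar q}$ with $\bar p$, $\bar q$ the particle numbers; the structure of the contraction is unchanged.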
\subsection{Partial trace of an operator}
The partial trace of an operator is depicted in Fig.~\ref{fig:operations}b.
Let $\hat A:\mathcal{F}_{m\cup r}\to\mathcal{F}_{n\cup r}$, i.e., the operator's outgoing and incoming supports overlap in the modes $r$. Such operators can always be decomposed in the form
\begin{equation}
\hat A=\hat A_+ + \hat A_-,
\end{equation}
where $\hat A_+$ is the particle number parity preserving and $\hat A_-$ the parity changing component, i.e.,
\begin{equation}
(-1)^{\hat N_n+\hat N_r}\hat A_\pm= \pm \hat A_\pm(-1)^{\hat N_m+\hat N_r}
\end{equation}
with $\hat N_r:= \sum_{i\in r} \hat f^\dag_i\hat f_i$.
The correct expression for the partial trace follows from its defining property that $\operatorname{Tr}(\hat A\hat B)=\operatorname{Tr}(\operatorname{Tr}_r(\hat A)\hat B)$ for all operators $\hat B$ that have no support on modes $r$. Hence, let us
consider such an operator $\hat B:\mathcal{F}_{n\cup r}\to\mathcal{F}_{m\cup r}$ with no support on $r$, i.e., $\hat f_{i}\hat B_\pm=\pm \hat B_\pm\hat f_{i}$ $\forall_{i\in r}$.
Let $\mf{m}$, $\mf{n}$, $\mf{r}$ be orderings for the modes in $m$, $n$, and $r$. Further let $\mf{a}=\mf{m}\oplus\mf{r}$ and $\mf{b}=\mf{n}\oplus\mf{r}$.
The operator's matrix elements obey
\begin{eqnarray}
&&\Bra{\vec{m}\vec{r}'}{\mf{a}} \hat B\Ket{\vec{n}\vec{r}}{\mf{b}} \nonumber\\
&&=
\Bra{\text{\o}}{a} \hat F^{\vec{r}'}_\mf{r}\hat F^{\vec{m}}_\mf{m}\hat B (\hat F^{\vec{n}}_\mf{n})^\dag(\hat F^{\vec{r}}_\mf{r})^\dag
\Ket{\text{\o}}{b} \nonumber\\
&&=(-1)^{\bar{r}'\bar{m}+\bar{r}\bar{n}}
\Bra{\text{\o}}{m}\Bra{\text{\o}}{r} \hat F^{\vec{m}}_\mf{m}\hat F^{\vec{r}'}_\mf{r}\hat B (\hat F^{\vec{r}}_\mf{r})^\dag(\hat F^{\vec{n}}_\mf{n})^\dag
\Ket{\text{\o}}{r} \Ket{\text{\o}}{n} \nonumber\\
&&=\delta_{\vec{r}\vec{r}'} (-1)^{\bar{r}(\bar{m}+\bar{n})} \Bra{\vec{m}}{\mf{m}} \big(\hat B_+ +(-1)^{\bar{r}}\hat B_-\big)\Ket{\vec{n}}{\mf{n}} \nonumber\\
&&=\delta_{\vec{r}\vec{r}'} \Bra{\vec{m}}{\mf{m}} \hat B\Ket{\vec{n}}{\mf{n}}.
\label{eq:matrixElement}
\end{eqnarray}
Requiring that
\begin{equation*}
\operatorname{Tr}(\hat A\hat B)
= \Bra{\vec{n}\vec{r}}{\mf{b}} \hat A \Ket{\vec{m}\vec{r}}{\mf{a}} \Bra{\vec{m}}{\mf{m}} \hat B\Ket{\vec{n}}{\mf{n}}
=\operatorname{Tr}((\operatorname{Tr}_r \hat A)\hat B),
\end{equation*}
holds for all operators $\hat B$ with the properties stated above leads to the conclusion that the partial trace over the modes $r$ is simply given by the expression
\begin{equation}\label{eq:partialTrace}
\operatorname{Tr}_r \hat A = \sum_{\vec{r}}
\Ket{\vec{n}}{\mf{n}}\Bra{\vec{n}\vec{r}}{\mf{b}} \hat A \Ket{\vec{m}\vec{r}}{\mf{a}}\Bra{\vec{m}}{\mf{m}}.
\end{equation}
Hence, assuming we have the operator in the representation $J_{\mf{b},\mf{a}}(\hat A)$, the resulting operator $\operatorname{Tr}_r \hat A:\mathcal{F}_{m}\to\mathcal{F}_{n}$ is in the occupation number representation
\begin{equation}\label{eq:partialTraceJW}
(\vec{n}|J_{\mf{n},\mf{m}}(\operatorname{Tr}_r \hat A )|\vec{m}) = (\vec{n}\vec{r}|J_{\mf{b},\mf{a}}(\hat A)|\vec{m}\vec{r}).
\end{equation}
Please note that we have chosen the orderings of the modes such that, in Eq.~\eqref{eq:matrixElement}, two sign factors compensate: the one from a mode reordering and the one from commuting $\hat F^{\vec{r}}_\mf{r}$ past the operator $\hat B$.
A sign factor $(-1)^{\bar{r}(\bar{m}+\bar{n})}$ would occur in the expressions for the partial trace, had we swapped the order of $\mf{m}$ ($\mf{n}$) and $\mf{r}$ in the ordering of the incoming (outgoing) modes, i.e., chosen $\mf{a}=\mf{r}\oplus\mf{m}$ ($\mf{b}=\mf{r}\oplus\mf{n}$) instead. In such a case, the preparative mode reordering accounts for the sign factor; once its preconditions are met, one then applies rule \eqref{eq:partialTraceJW}.
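Both statements can be illustrated numerically. The following minimal sketch (single modes $n$, $m$, $r$, illustrative conventions of our own) applies the sign-free trace rule and then recomputes the trace from the representation with $\mf a=\mf r\oplus\mf m$, $\mf b=\mf r\oplus\mf n$, where the block-swap signs and the factor $(-1)^{\bar r(\bar m+\bar n)}$ appear:

```python
import numpy as np

rng = np.random.default_rng(1)
b = np.arange(2)

# Single modes n, m and one traced mode r; with the orderings b = n + r and
# a = m + r, A is stored as (n r' |A| m r).
A = rng.normal(size=(2, 2, 2, 2))

# Sign-free rule: identify r' with r and sum.
TrA = np.einsum('nrmr->nm', A)

# Same operator in the orderings b = r + n, a = r + m instead: the block-swap
# signs (-1)^(r'*n), (-1)^(r*m) enter the representation, and the partial trace
# then carries the factor (-1)^(r*(m + n)).
swap = (-1.0) ** np.einsum('r,n->rn', b, b)
A2 = np.einsum('sn,rm,nsmr->snrm', swap, swap, A)   # (r' n |A| r m)
r_, n_, m_ = np.indices((2, 2, 2))
sgn = (-1.0) ** (r_ * (n_ + m_))
TrA_alt = np.einsum('rnm,rnrm->nm', sgn, A2)

assert np.allclose(TrA, TrA_alt)
```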
\subsection{Contraction of some outgoing modes of $\hat A$ with the corresponding incoming modes of $\hat B$ and vice versa}
Combining partial multiplication \eqref{eq:partialMultiply} with partial trace \eqref{eq:partialTrace} we obtain a general partial contraction, namely, that of some outgoing modes $n$ of operator $\hat A$ with the corresponding incoming modes of $\hat B$ and, simultaneously, contraction of some outgoing modes $r$ of $\hat B$ with the corresponding incoming modes of $\hat A$.
This corresponds to the partial contraction depicted in Fig.~\ref{fig:operations}c.
Let $\hat A:\mathcal{F}_{m\cup r}\to\mathcal{F}_{n\cup p}$ and $\hat B:\mathcal{F}_{n\cup q}\to\mathcal{F}_{k\cup r}$, i.e., the operators' outgoing/incoming supports overlap in the modes $n$ and $r$.
Let $\mf{m}$, $\mf{n}$, $\mf{r}$, $\mf{p}$, $\mf{q}$, $\mf{k}$ be orderings for the modes in $m$, $n$, $r$, $p$, $q$, and $k$. Assuming we have the two operators in representations $A=J_{\mf{n}\oplus\mf{p},\mf{m}\oplus\mf{r}}(\hat A)$ and $B=J_{\mf{k}\oplus\mf{r},\mf{n}\oplus\mf{q}}(\hat B)$, with $\mf{a}=\mf{k}\oplus\mf{p}$ and $\mf{b}=\mf{m}\oplus\mf{q}$,
the resulting operator $\hat C:\mathcal{F}_{m\cup q}\to\mathcal{F}_{k\cup p}$ is
\begin{eqnarray*}
\hat C&=&\operatorname{Tr}_r\hat B\cdot_{n}\hat A \nonumber\\
&=&(-1)^{\bar{p}\bar{q}+\bar{r}(\bar{p}+\bar{q})}\cdot \Ket{\vec{k}\vec{p}}{\mf{a}}(\vec{k}\vec{r}|B|\vec{n}\vec{q}) (\vec{n}\vec{p}|A|\vec{m}\vec{r}) \Bra{\vec{m}\vec{q}}{\mf{b}},
\end{eqnarray*}
i.e.,
\begin{multline}
(\vec{k}\vec{p}|J_{{\mf{a}},{\mf{b}}}(\hat C)|\vec{m}\vec{q})\\
=(-1)^{\bar{p}\bar{q}+\bar{r}(\bar{p}+\bar{q})}\cdot (\vec{k}\vec{r}|B|\vec{n}\vec{q}) (\vec{n}\vec{p}|A|\vec{m}\vec{r}).
\label{eq:partialContract}
\end{multline}
In the following, $\hat B\circ\hat A$ denotes a (partial) contraction of all common incoming/outgoing modes of $\hat A$ with corresponding outgoing/incoming modes of $\hat B$ according to Eq.~\eqref{eq:partialContract}.
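The combined sign $(-1)^{\bar{p}\bar{q}+\bar{r}(\bar{p}+\bar{q})}$ can be checked against the two-step route of partial multiplication, block reordering, and partial trace. The following minimal NumPy sketch (single modes throughout, our own illustrative tensor layout) verifies the agreement:

```python
import numpy as np

rng = np.random.default_rng(2)

# Single modes k, r, n, q, m, p with occupation indices 0 or 1;
# A stored as (n p |A| m r), B as (k r |B| n q).
A = rng.normal(size=(2, 2, 2, 2))
B = rng.normal(size=(2, 2, 2, 2))

# Direct application of the combined rule: sign (-1)^(p*q + r*(p + q)).
p_, q_, r_ = np.indices((2, 2, 2))
sign = (-1.0) ** (p_ * q_ + r_ * (p_ + q_))
C_direct = np.einsum('pqr,krnq,npmr->kpmq', sign, B, A)

# Two-step route: partial multiplication over n with sign (-1)^(p*q), block-swap
# signs (-1)^(r*p) and (-1)^(r*q) to move the traced modes to the last position,
# then the sign-free partial trace over r.
pq = (-1.0) ** np.einsum('p,q->pq', np.arange(2), np.arange(2))
D = np.einsum('pq,krnq,npms->krpmsq', pq, B, A)     # s labels the incoming r
D = np.einsum('rp,sq,krpmsq->kprmqs', pq, pq, D)    # block-reordering signs
C_twostep = np.einsum('kprmqr->kpmq', D)            # trace, identifying s with r

assert np.allclose(C_direct, C_twostep)
```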
\subsection{Partial projection}\label{sec:partialProjection}
The partial projection for an operator is depicted in Fig.~\ref{fig:operations}d.
Let $\hat A:\mathcal{F}_{m}\to\mathcal{F}_{r\cup n}$.
Let $\mf{r}$, $\mf{m}$, $\mf{n}$ be orderings for the modes in $r$, $m$, and $n$. Further let $\mf{a}=\mf{r}\oplus\mf{n}$.
After projection of modes $r$ onto a basis state ($\{\hat n_i\}_{i\in r}$ eigenstate) $\Ket{\vec{r}'}{\mf{r}}= (\hat F^{\vec{r}'}_\mf{r})^\dag \Ket{\text{\o}}{r}$, the resulting operator $\hat A':\mathcal{F}_{m}\to\mathcal{F}_{n}$ is
\begin{eqnarray}
\hat A'&=&\Bra{\vec{r}'}{\mf{r}} \cdot \Ket{\vec{r}\vec{n}}{\mf{a}}(\vec{r}\vec{n}|J_{\mf{a},\mf{m}}(\hat A)|\vec{m})\Bra{\vec{m}}{\mf{m}} \nonumber\\
&=& \Ket{\vec{n}}{\mf{n}}(\vec{r}'\vec{n}|J_{\mf{a},\mf{m}}(\hat A)|\vec{m})\Bra{\vec{m}}{\mf{m}},
\end{eqnarray}
i.e.,
\begin{equation}
(\vec{n}|J_{\mf{n},\mf{m}}(\hat A')|\vec{m}) = (\vec{r}'\vec{n}|J_{\mf{a},\mf{m}}(\hat A)|\vec{m}).
\label{eq:partialProjection}
\end{equation}
A sign factor $(-1)^{\bar{r}'\bar{n}}$ would occur if we swapped the order of the modes $\mf{r}$ and $\mf{n}$ in the ordering $\mf{a}$ of the outgoing modes.
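In tensor form, with $r$ ordered first, the projection is a plain slice; the following minimal sketch (single modes $r$, $n$, $m$, our own conventions) also checks that the sign $(-1)^{\bar r'\bar n}$ for the opposite ordering recovers the same result:

```python
import numpy as np

rng = np.random.default_rng(3)
n_idx = np.arange(2)

# Single modes r, n, m with the outgoing ordering r + n: A stored as (r n |A| m).
A = rng.normal(size=(2, 2, 2))

# With the outgoing ordering n + r instead (block-swap sign (-1)^(n*r)), the
# projection picks up the additional sign factor (-1)^(r'*n).
swap_sign = (-1.0) ** np.einsum('n,r->nr', n_idx, n_idx)
A2 = np.einsum('nr,rnm->nrm', swap_sign, A)

for rp in (0, 1):
    proj_first = A[rp]                                           # plain slice
    proj_last = ((-1.0) ** (rp * n_idx))[:, None] * A2[:, rp, :]  # with sign
    assert np.allclose(proj_first, proj_last)
```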
\begin{figure}[t]
\centering
\includegraphics[width=0.55\linewidth]{fig005.pdf}
\caption{The most general FOC with three operators. To verify that the contraction of operators as given by rule \eqref{eq:partialContract} is associative, one needs to compare the results of $\hat C\circ(\hat B\circ\hat A)$ and $(\hat C\circ\hat B)\circ\hat A$. Both do indeed agree.}
\label{fig:associativity}
\end{figure}
\section{Operator order and contraction sequence}\label{sec:operatorOrder}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{fig006.pdf}
\caption{(Color online)
To allow for arbitrary contraction sequences, one needs to be able to change the operator ordering. In the diagrams, the operator order is defined to increase from the bottom to the top. If each operator is parity-symmetric (either preserves or changes the fermion number parity; $s=0$ or $s=1$), swapping of operators can be done and the resulting sign factors taken account of efficiently. (a) The generic rule \eqref{eq:operatorOrderSwap} for swapping two operators that are neighbors in the ordering. (b) Identities for the most generic FOC with three operators, the same as in Fig.~\ref{fig:associativity}, depicted in a slightly different fashion. A minus sign at the contraction arc for a mode set $n$ indicates that a sign factor $(-1)^{\bar n}$ is to be inserted in the contraction formula (see text).}
\label{fig:operatorReordering}
\end{figure*}
In Sec.~\ref{sec:FOC-definition}, the value of the FOC was defined as the value resulting from executing the contractions of the constituting operators $\hat A_i$ with respect to a certain operator order, $\hat A_N\circ\dotsc\circ\hat A_2\circ\hat A_1$. This definition is only sufficient if the contraction \eqref{eq:partialContract} of operators, as depicted in Fig.~\ref{fig:operations}c, is indeed associative. For the most general FOC of three operators $\hat A:\mathcal{F}_{a\cup b\cup c}\to\mathcal{F}_{d\cup e\cup f}$, $\hat B:\mathcal{F}_{f\cup g\cup h}\to\mathcal{F}_{c\cup k\cup n}$, and $\hat C:\mathcal{F}_{e\cup j\cup k}\to\mathcal{F}_{a\cup h\cup m}$ one finds indeed (see Fig.~\ref{fig:associativity})
\begin{equation}
\hat C\circ(\hat B\circ\hat A) = (\hat C\circ\hat B)\circ\hat A,
\end{equation}
confirming the consistency of the contraction rule \eqref{eq:partialContract}.
Numerically, it may be more efficient to, for example, first execute the contraction between $\hat A_1$ and $\hat A_3$ and contract the result with $\hat A_2$ afterwards. To be able to choose an arbitrary sequence for the contractions, as is possible for the corresponding QUOCs, we need to be able to change the ordering of the operators without changing the value of the FOC.
In the elementary contractions, the ordering of the affected operators matters, i.e., for two operators $\hat A:\mathcal{F}_{m\cup r}\to\mathcal{F}_{n\cup p}$ and $\hat B:\mathcal{F}_{n\cup q}\to\mathcal{F}_{k\cup r}$, we have in general $\operatorname{Tr}_r \hat B\cdot_n \hat A \neq \operatorname{Tr}_n \hat A\cdot_r \hat B$. However, if each of the two operators is either parity preserving ($s=0$) or parity changing ($s=1$), we find the simple relation
\begin{equation} \label{eq:operatorOrderSwap}
\operatorname{Tr}_r \hat B\cdot_n \hat A = (-1)^{s_A s_B}\operatorname{Tr}_n[ (\hat P_n \cdot_n \hat A \cdot_r \hat P_r )\cdot_r \hat B],
\end{equation}
where $\hat P_n: \mathcal{F}_n\to \mathcal{F}_n $ with $\Bra{\vec{n}'}{\mf{n}}\hat P_n\Ket{\vec{n}}{\mf{n}}=\delta_{\vec{n}\vec{n}'}(-1)^{\bar n}$. In the more compact notation, this reads $\hat B\circ\hat A = (-1)^{s_A s_B} \hat P_n \circ \hat A \circ \hat P_r \circ \hat B$. In an implementation, instead of inserting the $\hat P_n$ in this fashion as operators or applying them directly to $\hat A$ or $\hat B$, one may, more efficiently, introduce a binary counter (with initial state 0) for each contraction arc -- in this case, for the contraction with respect to modes $n$. Whenever a factor $\hat P_n$ arises when swapping the order of operators that both have support on $n$, the state of the binary counter is inverted. Once the contraction with respect to modes $n$ is executed, one inserts the factor $(-1)^{\bar{n}}$ into the corresponding expression if the state of the counter is 1. In the graphical representation of a FOC, we denote the state 1 of the counter by a minus sign at the corresponding contraction arc, as exemplified in Fig.~\ref{fig:operatorReordering}. The numerical overhead for keeping track of those signs is marginal.
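Equation \eqref{eq:operatorOrderSwap} can be checked numerically. The following minimal sketch does so in the degenerate case where all spectator mode sets ($m$, $p$, $q$, $k$) are empty and $n$, $r$ are single modes; the operator construction is our own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_parity_op(s):
    # Random single-mode operator that preserves (s = 0) or changes (s = 1)
    # the fermion number parity.
    mask = np.eye(2) if s == 0 else np.array([[0.0, 1.0], [1.0, 0.0]])
    return rng.normal(size=(2, 2)) * mask

P = np.diag([1.0, -1.0])             # parity operator (-1)^n for a single mode

# Degenerate case with only the contracted modes n and r:
# A: F_r -> F_n with elements (n|A|r), B: F_n -> F_r with elements (r|B|n).
for sA in (0, 1):
    for sB in (0, 1):
        A = random_parity_op(sA)
        B = random_parity_op(sB)
        lhs = np.einsum('rn,nr->', B, A)                         # Tr_r(B ._n A)
        rhs = (-1) ** (sA * sB) * np.einsum('nr,rn->', P @ A @ P, B)
        assert np.isclose(lhs, rhs)
```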
In the following, operators $\hat A:\mathcal{F}_m\to\mathcal{F}_n$ that are either fermion number parity preserving or changing,
\begin{equation} \label{eq:paritySymmetric}
(-1)^{\hat N_n}\hat A=\pm\hat A(-1)^{\hat N_m},
\end{equation}
are called \emph{parity-symmetric}. Also FOCs that contain only parity-symmetric operators are called parity-symmetric.
Using the above result, it is possible to do the operator contractions of a parity-symmetric FOC in an arbitrary sequence. One starts with the predefined operator order. To execute the contraction of two (arbitrary) operators of the FOC:
\begin{itemize}
\item Apply rule \eqref{eq:operatorOrderSwap} to bring the two operators into direct neighborhood in the operator order, keeping track of the resulting sign factors for the contraction arcs and of the global sign,
\item apply mode swapping operators as described in Sec.~\ref{sec:modeReordering}, to bring the occupation number representations of the two operators into accord with the precondition of the general contraction rule \eqref{eq:partialContract}, and
\item replace the two operators by their contraction according to the rule \eqref{eq:partialContract}.
\end{itemize}
Consequently, the contraction of a FOC can be done efficiently -- with the same sequence of partial contractions as for a corresponding qudit operator circuit. No non-local Jordan-Wigner transformations occur. Marginal computational overheads result from keeping track of certain sign factors, when contractions are done in a sequence that deviates from the ordering of the circuit's operators, and from reordering the incoming or outgoing modes of single operators directly before the partial multiplication, trace, etc.\ that affects them.
The operator order is part of the definition of a FOC. For the example of the fermionic MERA it can be chosen to agree with the physical interpretation as consecutive renormalization steps; i.e., the operator order is increasing with the renormalization number. As all unitaries (isometries) of a particular renormalization stage commute, the ordering among those can be chosen arbitrarily. In Sec.~\ref{sec:PEPS} a useful operator ordering for fermionic PEPS is presented.
\section{Computational costs and locality} \label{sec:costs}
Given a contraction sequence for a qudit operator circuit (QUOC), the same sequence can be used for a corresponding parity-symmetric FOC (for which all qudit operators are replaced by parity-symmetric fermionic operators of identical dimension). There is hence no memory or computational overhead \emph{per se}. For the elementary contraction operations stated in Sec.~\ref{sec:contractions}, a certain ordering of the modes was assumed prior to the operation.
If one uses the contraction operations as stated there, one gets a marginal overhead from the corresponding preparative mode reorderings; see Sec.~\ref{sec:modeReordering}. The number of numerical operations needed for a reordering is proportional to the size of the operator matrix: every reordering can be achieved by a sequence of swaps of consecutive modes. The product of the appropriate swaps yields a reordering operator that is sparse, with exactly one entry $\pm 1$ in each row and column. Applying such an operator to either side of $\hat A:\mathcal{F}_{m}\to\mathcal{F}_{n}$ requires only $\chi_m\chi_n$ operations, where $\chi_m$ and $\chi_n$ are the dimensions of the (possibly reduced) incoming and outgoing Hilbert spaces.
Any contraction of the operator, except for partial traces or projections, already requires a larger number of numerical operations. The computational overhead of the reorderings is hence marginal, and there is no overhead in memory requirements.
All considerations about locality hence carry over directly from the known QUOCs (for instance, the qudit MERA) to the corresponding FOCs (e.g., the fermionic MERA).
In the calculation of local expectation values w.r.t.\ a MERA, only operators inside a causal cone of the observable enter the actual calculation (all others cancel). That Jordan-Wigner strings outside the causal cone can be avoided for the fermionic MERA has already been shown by an alternative approach in Ref.\ \cite{Pineda2009_05}, see also Ref.\ \cite{Corboz2009_04}.
\section{Further operations on FOCs} \label{sec:furtherOp}
\begin{figure}[t]
\centering
\includegraphics[width=0.92\linewidth]{fig007.pdf}
\caption{(Color online)
In all subplots, the operator order is defined to increase from the bottom to the top.
(a) It is possible to reverse contraction arcs. The resulting operators can be expressed in terms of matrix elements of the original operators; see Sec.~\ref{sec:reversing}. Reversing the arc for modes $n$ yields the sign factor $(-1)^{\bar n(\bar p+\bar q)}$.
(b--d) It is possible to decompose operators ($\hat A$) by singular value decomposition, resulting in circuits of the form (c) or (d). This also allows for the reduction of retained Hilbert space dimensions: Contract operators $\hat B$ and $\hat C$ to obtain an operator $\hat A$, apply the singular value decomposition to it and truncate (some of the smallest) singular values, to obtain an approximation of $\hat C\circ\hat B$.
Reversing contraction arcs and truncation of Hilbert spaces via singular value decomposition are for example employed in the contraction algorithm for fermionic PEPS in Sec.~\ref{sec:PEPS}.}
\label{fig:reverse-n-svd}
\end{figure}
\subsection{Hermitian conjugation}\label{sec:herm}
The Hermitian conjugate of a FOC is simply given by
\begin{equation}
(\hat A_N\circ\dotsc\circ\hat A_1)^\dag=\hat A_1^\dag\circ\dotsc\circ\hat A_N^\dag .
\end{equation}
The operator order is reversed and one has to take the Hermitian conjugate of each fermionic operator in the circuit. In the representation as a directed graph, all arcs are reversed.
The Hermitian conjugate is for example of interest when calculating expectation values with respect to a (pure) FOC state. Fig.~\ref{fig:fPEPS_a}a shows it for the example of a fermionic PEPS.
\subsection{Reversing contraction arcs}\label{sec:reversing}
For algorithms operating on FOCs, such as the one for fermionic PEPS presented in Sec.~\ref{sec:PEPS}, it is sometimes useful to reverse contraction arcs, i.e., to change outgoing modes of one operator to incoming modes and vice versa at the operators it is contracted with; see Fig.~\ref{fig:reverse-n-svd}a. Let $\hat A:\mathcal{F}_{m\cup r}\to\mathcal{F}_{n\cup s\cup p}$ and $\hat B:\mathcal{F}_{n\cup s\cup q}\to\mathcal{F}_{k\cup r}$, i.e., the operators' outgoing/incoming supports overlap in the modes $n$, $r$, and $s$.
Let $\mf{m}$, $\mf{n}$, $\mf{r}$, $\mf{s}$, $\mf{p}$, $\mf{q}$, $\mf{k}$ be orderings for the modes in $m$, $n$, $r$, $s$, $p$, $q$, and $k$.
For reversing the arc corresponding to modes $n$, i.e., changing the modes $n$ to be incoming (outgoing) at operator $\hat A$ ($\hat B$), the relations between $\hat A$ and $\hat B$ and the resulting operators (as depicted in Fig.~\ref{fig:reverse-n-svd}a) are
\begin{gather*}
\Bra{\vec{kr}}{\mf{k}\oplus\mf{r}}\hat B\Ket{\vec{nsq}}{\mf{n}\oplus\mf{s}\oplus\mf{q}}
= (-1)^{\bar n\bar q}
\Bra{\vec{knr}}{\mf{k}\oplus\mf{n}\oplus\mf{r}}\hat B'\Ket{\vec{sq}}{\mf{s}\oplus\mf{q}},\\
\Bra{\vec{nsp}}{\mf{n}\oplus\mf{s}\oplus\mf{p}}\hat A\Ket{\vec{mr}}{\mf{m}\oplus\mf{r}}
= (-1)^{\bar n\bar p}
\Bra{\vec{sp}}{\mf{s}\oplus\mf{p}}\hat A'\Ket{\vec{mnr}}{\mf{m}\oplus\mf{n}\oplus\mf{r}},
\end{gather*}
such that $\hat B\circ\hat A = \hat B'\circ\hat A'$.
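The consistency $\hat B\circ\hat A = \hat B'\circ\hat A'$ can be verified numerically. The following minimal sketch (single modes $n$, $p$, $q$ with all other mode sets empty; tensor layout of our own choosing) reverses the arc for $n$ and checks that the contracted circuit is unchanged:

```python
import numpy as np

rng = np.random.default_rng(5)
b = np.arange(2)

# Minimal case: the reversed arc n plus one extra outgoing mode p at A and one
# extra incoming mode q at B; all other mode sets are empty.
A = rng.normal(size=(2, 2))          # (n p |A|)
B = rng.normal(size=(2, 2))          # (|B| n q)

# Original circuit: contract the outgoing modes n of A; sign (-1)^(p*q).
pq = (-1.0) ** np.einsum('p,q->pq', b, b)
C1 = np.einsum('pq,nq,np->pq', pq, B, A)

# Reversed arc: n becomes incoming at A' and outgoing at B', with the sign
# factors (-1)^(n*p) and (-1)^(n*q) from the relations above.
sgn = pq                              # (-1)^(n*p) and (-1)^(n*q) are the same matrix
Ap = np.einsum('np,np->pn', sgn, A)   # (p |A'| n)
Bp = sgn * B                          # (n |B'| q)

# Contracting B' with A' (n now of the r type in the general rule) must agree.
n_, p_, q_ = np.indices((2, 2, 2))
sgn2 = (-1.0) ** (p_ * q_ + n_ * (p_ + q_))
C2 = np.einsum('npq,nq,pn->pq', sgn2, Bp, Ap)
assert np.allclose(C1, C2)
```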
\subsection{Singular value decomposition and truncation}\label{sec:svd}
It is possible to decompose an operator $\hat A:\mathcal{F}_{m\cup n}\to\mathcal{F}_{u\cup v}$ by singular value decomposition with respect to arbitrary splittings of the incoming and outgoing modes. The resulting circuits can be chosen to be of the form $\hat C\circ\hat B$ or $\hat C\circ\hat \Lambda\circ\hat B$, where $\hat \Lambda:\mathcal{F}_z\to\mathcal{F}_x$ ($|x|=|z|$) is a diagonal operator encoding the singular values; see Figs.~\ref{fig:reverse-n-svd}b--\ref{fig:reverse-n-svd}d.
This also allows for truncation of modes (or the reduction of Hilbert space dimensions): Contract two operators $\hat C:\mathcal{F}_{m\cup x}\to\mathcal{F}_{u}$ and $\hat B:\mathcal{F}_{n}\to\mathcal{F}_{x\cup v}$, as in Fig.~\ref{fig:reverse-n-svd}c to obtain an operator $\hat A$, apply the singular value decomposition to it and truncate (some of the smallest) singular values, to obtain an approximation of $\hat C\circ\hat B$ where the dimension of the retained Hilbert space for the modes in $x$ has been reduced.
Let $\mf{m}$, $\mf{n}$, $\mf{u}$, $\mf{v}$, $\mf{x}$, and $\mf{z}$ be orderings of modes in $m$, $n$, $u$, $v$, $x$, and $z$.
The contraction of the FOC $\hat C\circ \hat B$, as depicted in Fig.~\ref{fig:reverse-n-svd}c yields
\begin{multline}
\hat C\circ \hat B = (-1)^{\bar m\bar v}\Ket{\vec{uv}}{\mf{u}\oplus\mf{v}}\Bra{\vec{u}}{\mf{u}}\hat C\Ket{\vec{xn}}{\mf{x}\oplus\mf{n}}\\
\times \Bra{\vec{xv}}{\mf{x}\oplus\mf{v}}\hat B\Ket{\vec{n}}{\mf{n}}\Bra{\vec{nm}}{\mf{n}\oplus\mf{m}}.
\end{multline}
With the occupation number representation $A:=J_{\mf{u}\oplus\mf{v},\mf{n}\oplus\mf{m}}(\hat A)$ of $\hat A$, we can hence decompose the operator by applying the singular value decomposition to the matrix $\tilde A$ defined by
\begin{gather}
(\vec{uv}|\tilde A|\vec{nm}) := (-1)^{\bar m\bar v}(\vec{uv}|A|\vec{nm}),\\
\tilde A = U \Lambda V,
\end{gather}
where $U$ and $V$ are unitary and $\Lambda$ is the diagonal matrix of singular values. The operators of the resulting circuit $\hat C\circ \hat B$ can then be chosen as ($0<\alpha<1$)
\begin{equation}
J_{\mf{u},\mf{x}\oplus\mf{m}}(\hat C)=U\Lambda^{\alpha},\quad
J_{\mf{x}\oplus\mf{v},\mf{n}}(\hat B)=\Lambda^{1-\alpha}V.
\end{equation}
When the singular values are to be separated into a third operator $\hat \Lambda$ as depicted in Fig.~\ref{fig:reverse-n-svd}d, the operators of the resulting circuit $\hat C\circ\hat \Lambda \circ\hat B$ are given by
\begin{equation}
J_{\mf{u},\mf{x}\oplus\mf{m}}(\hat C)=U,\quad
J_{\mf{x},\mf{z}}(\hat \Lambda) = \Lambda,\quad
J_{\mf{z}\oplus\mf{v},\mf{n}}(\hat B)=V.
\end{equation}
Reduction of Hilbert space dimensions (\emph{truncation}) via singular value decomposition is employed, for example, in the algorithm for evaluating expectation values with respect to fermionic PEPS in an approximative fashion; see Sec.~\ref{sec:PEPS}.
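To make the truncation step concrete, the following sketch (Python/NumPy) splits an operator matrix by SVD into the two factors $U\Lambda^{\alpha}$ and $\Lambda^{1-\alpha}V$, keeping only the $\chi$ largest singular values, and checks that the resulting error is optimal in the Frobenius norm. It is an illustration only: a generic dense matrix stands in for the occupation-number representation $\tilde A$ (the sign factor $(-1)^{\bar m\bar v}$ is assumed already absorbed), and the function name \texttt{split\_truncate} is ours; actual implementations would additionally exploit the parity block structure.

```python
import numpy as np

def split_truncate(A_tilde, dims_out, dims_in, chi, alpha=0.5):
    """SVD-split a matrix A_tilde (rows: outgoing modes, cols: incoming
    modes) into C ~ U * Lambda^alpha and B ~ Lambda^(1-alpha) * V,
    keeping only the chi largest singular values."""
    U, s, V = np.linalg.svd(A_tilde.reshape(dims_out, dims_in),
                            full_matrices=False)
    chi = min(chi, s.size)
    U, s, V = U[:, :chi], s[:chi], V[:chi, :]
    C = U * s**alpha                     # representation of the C factor
    B = (s**(1 - alpha))[:, None] * V    # representation of the B factor
    return C, B, s

# toy example: random 8x8 "operator", keep chi = 4 singular values
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
C, B, s = split_truncate(A, 8, 8, chi=4)
err = np.linalg.norm(A - C @ B)                    # truncation error
opt = np.sqrt((np.linalg.svd(A, compute_uv=False)[4:]**2).sum())
# SVD truncation is optimal in the Frobenius norm (Eckart-Young)
assert np.isclose(err, opt)
```

Separating the singular values into a third diagonal factor, as in the circuit $\hat C\circ\hat\Lambda\circ\hat B$, simply amounts to returning $U$, $\mathrm{diag}(s)$, and $V$ instead.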
\section{Fermionic PEPS}\label{sec:PEPS}
\begin{figure*}[p]
\centering
\includegraphics[width=\textwidth]{fig008.pdf}
\caption{(Color online)
(a) A fermionic PEPS can be constructed as a FOC, where fermionic operators are assigned to each lattice site. As chosen here for a square lattice, each operator has two sets of incoming modes from operators on neighboring sites and two outgoing sets of modes to operators of the remaining nearest neighbors. One outgoing set of modes corresponds to the physical site Hilbert space. The Hermitian conjugate of the circuit $\hat A_N\circ\cdots\circ\hat A_1$ is $\hat A_1^\dag\circ\cdots\circ\hat A_N^\dag$. All contraction arcs and the operator order (gray line below/above the circuit) are reversed. This side effect can be reverted (without changing the value of the FOC) by applying Eq.~\eqref{eq:operatorOrderSwap} and the rule derived in Sec.~\ref{sec:reversing} with only a marginal computational overhead.
(b) To evaluate a local expectation value, the FOCs for bra, local observable, and ket have to be composed. The operator order can again be changed for later convenience -- in this case no additional sign factors occur, as all swapped operators have no common contraction arcs. For the definition of the objects on the right hand side, see also Fig.~\ref{fig:fPEPS_b}a.}
\label{fig:fPEPS_a}
\end{figure*}
\begin{figure*}[p]
\centering
\includegraphics[width=\textwidth]{fig009.pdf}
\caption{(Color online)
(a) Definition of the objects on the right hand side of Fig.~\ref{fig:fPEPS_a}b -- here, in particular, for the site where the local observable acts nontrivially. (b) The FOC for the evaluation of a local observable is contracted by considering the first row of the FOC as a fermionic state $\ket{\chi_1}$ and applying the other rows as operators to it $\ket{\chi_{y}}=\hat T_y \ket{\chi_{y-1}}$. Doing this in an exact manner, the number of degrees of freedom per site for the states $\ket{\chi_{y}}$ would in general increase exponentially with $y$. One can decrease them during the algorithm for the case of a finite (infinite, translationally invariant) lattice by applying the DMRG (iTEBD) algorithm.}
\label{fig:fPEPS_b}
\end{figure*}
The FOC framework incorporates a fermionic version of the class of qudit states called \emph{tensor product ans\"atze} \cite{Niggemann1997-104,Nishino2000-575,Martin-Delgado2001-64} or \emph{projected entangled pair states} (PEPS) \cite{Verstraete2004-7}. In Ref.\ \cite{Kraus2009_04} it was suggested to obtain fermionic PEPS by applying fermionic parity-symmetric (projection) operators to a tensor product of maximally entangled pair states. The detour over maximally entangled states is not necessary (but also not harmful); as depicted on the left hand side of Fig.~\ref{fig:fPEPS_a}a, a fermionic PEPS on a square lattice, equivalently, can be defined by assigning to each lattice site $(x,y)$ (away from the boundaries) a parity-symmetric fermionic operator $\hat A:\mathcal{F}_{a\cup m}\to\mathcal{F}_{b\cup n\cup s}$ where $a$ and $m$ are sets of incoming modes from operators on neighboring sites $(x+1,y)$ and $(x,y+1)$, and $b$ and $n$ are outgoing modes to operators on sites $(x-1,y)$ and $(x,y-1)$. The set of modes $s$ composes the local physical Hilbert space of site $(x,y)$. In the FOC framework, the generalization to more complicated or higher-dimensional lattices is straightforward. The choice of the direction of the contraction arcs is (an arbitrary) part of the definition of the state and can also be changed later as described in Sec.~\ref{sec:reversing}.
To complete the definition of the fermionic PEPS one needs to specify an (initial) operator order. An example is given on the left hand side of Fig.~\ref{fig:fPEPS_a}a, where the gray line below the lattice indicates the lexicographic order with respect to lattice coordinates $(-x,y)$.
In Ref.\ \cite{Kraus2009_04} it was described how the FOC of a fermionic PEPS can be mapped to a QUOC by choosing a fixed ordering of all modes. This was achieved with one additional bond per horizontal contraction arc (i.e., a factor of four in the number of degrees of freedom per site) and a correspondingly reduced computational efficiency (a factor of several powers of four) for the evaluation of expectation values, calculation of ground states etc.
The approach presented here is an alternative one, emphasizing that the mapping to a QUOC (with a fixed mode order) is not necessary. All manipulations and contractions on fermionic PEPS can be done according to the rules described in Secs.~\ref{sec:contractions}, \ref{sec:operatorOrder}, and \ref{sec:furtherOp}. In that case, compared to the same operations on a corresponding qudit PEPS (replacing the fermionic operators with qudit operators of identical dimensions), only marginal computational overheads arise.
Fig.~\ref{fig:fPEPS_a} shows graphically how the FOC for the evaluation of a local expectation value $\bra{\psi}\hat O\ket{\psi}$ can be constructed.
For the bra vector (dual vector) $\bra{\psi}$, operator order and contraction lines reverse as a side effect of taking the Hermitian conjugate; Sec.~\ref{sec:herm}. For later convenience this is reverted by applying Eq.~\eqref{eq:operatorOrderSwap} and the rule derived in Sec.~\ref{sec:reversing}. After composing bra, observable, and ket, the operator order can again be changed conveniently, this time without any sign factors occurring, as all swapped operators share no common contraction arcs; Fig.~\ref{fig:fPEPS_a}b. As in the qudit case \cite{Verstraete2004-7}, the contraction of the resulting circuit can be executed row by row, i.e., by treating the lowest row as a one-dimensional fermionic state $\ket{\chi_1}$ to which the operators of the following row $\hat T_y$ (\emph{row transfer matrix}) are applied; $\ket{\chi_{y}}=\hat T_y \ket{\chi_{y-1}}$. No additional sign factors occur due to operator reorderings (Fig.~\ref{fig:fPEPS_b}), but only due to mode reorderings (Sec.~\ref{sec:modeReordering}) before contractions (marginal overhead). An essential aspect of PEPS algorithms is that contractions, e.g., for the evaluation of expectation values, cannot be executed exactly, as the stepwise application of the row transfer matrices would in general lead to an exponential growth in the number of modes per site for $\ket{\chi_y}$. As suggested in Ref.\ \cite{Verstraete2004-7}, this can be circumvented by applying a variant of the density-matrix renormalization-group (DMRG) algorithm \cite{White1992-11,Schollwoeck2005} to each state $\ket{\chi_y}$, before executing the contractions to the next row. The only purpose of the DMRG procedure is here to reduce the number of degrees of freedom in each step to a manageable number, and hence, do contractions in an approximative fashion. The essential operation is to do Schmidt decompositions of $\ket{\chi_y}$. This can be done for FOCs as described in Sec.~\ref{sec:svd}.
The FOC framework also allows one to simulate infinite fermionic PEPS. To this purpose, the fermionic PEPS is defined by repetition of an elementary cell FOC; cf.\ Ref.\ \cite{Jordan2008-101} for the qudit case. The algorithm does not deviate substantially from the finite-size case. The biggest difference is that, for the reduction of degrees of freedom in the states $\ket{\chi_y}$, one has to use a translationally invariant formulation of the DMRG algorithm, basically the iTEBD algorithm as described in Ref.\ \cite{Orus2008-78}, again based on the ability to do singular value decompositions (Sec.~\ref{sec:svd}).
With this, one has a translation of the algorithms for the calculation of approximative ground state or time-evolved qudit (i)PEPS \cite{Verstraete2004-7,Jordan2008-101} to the fermionic case without reduction of the computational efficiency, as those algorithms are based on the ability to contract operator circuits just as in our example.
\section{Discussion} \label{sec:discussion}
In Ref.\ \cite{Pineda2009_05} it was shown that contractions of fermionic unitary circuits with a causal cone (for instance the evaluation of local observables w.r.t.\ a MERA) can be done without the occurrence of any Jordan-Wigner strings outside the causal cone. Here, this result was extended by proving that arbitrary parity-symmetric fermionic operator circuits can actually be contracted with the \emph{same} computational effort and memory requirements as a corresponding QUOC. This remarkable result follows from the fact that a given contraction sequence for a QUOC can be implemented for a corresponding FOC with essentially the same number of computational operations. We have presented the required contraction primitives and discussed the marginal computational overheads.
This makes it possible to translate algorithms on QUOCs into corresponding algorithms on FOCs.
For example, in the algorithm for the scale-invariant MERA as studied in Refs.\ \cite{Giovannetti2009-79,Montangero2008-10,Pfeifer2009-79}, the super operator simply becomes a fermionic super operator. Its iterative application to an observable yields the expectation value of the observable in the thermodynamic limit.
For the special example of the FOC being a MERA, first numerical results were presented in Ref.\ \cite{Corboz2009_04} (postponing a description of the algorithm to a later publication). A scheme for fermionic PEPS was suggested in Ref.\ \cite{Kraus2009_04}. The mapping to a QUOC suggested there seems numerically less efficient than the contraction scheme presented here. Instead of encoding the fermionic sign factors by increasing tensor dimensions, they can be taken into account during contractions, specifically in preparative mode reorderings and operator order swaps. The resulting marginal overhead appears smaller.
It will be interesting to see to what extent variational ans\"atze like fermionic variants of PEPS or MERA, both satisfying entropic area laws \cite{Amico2008-80,Eisert2008,Latorre2009}, will be able to appropriately grasp the correlations present in critical, strongly correlated fermionic models, which are known to violate such area laws logarithmically \cite{Wolf2005,Gioev2005,Barthel2006-74,Li2006,Cramer2006}.
First numerical results \cite{Corboz2009_04,Kraus2009_04,Pineda2009_05} seem promising.
It is the hope that the framework discussed in this work will help in constructing fermionic variants of variational approaches to simulate strongly correlated fermions in higher dimensions.
\acknowledgments
We thank V.\ Giovannetti, M.\ Rizzi, U.\ Schollw{\"o}ck, and S.-Y.~Jang for discussions. This work has been supported by the EU (QAP, MINOS) and the EURYI.
\IEEEPARstart{T}{he} genetic toggle switch is a fundamental component in synthetic biology as it plays a major role in cell differentiation and decision making \cite{alon2006introduction,chen2010modeling}. Its importance comes from its ability to endow host cells with memory of some previous stimulus, reporting this information as a high expression rate of a specific repressor protein \cite{gardner2000construction,tian2006stochastic,wu2013engineering}.
The genetic toggle switch as first designed in \cite{gardner2000construction} consists of two repressor proteins, each repressing the other's promoter, so that only one protein is fully expressed at any time. From a modelling viewpoint, the genetic toggle switch is a bistable dynamical system, possessing two stable equilibria, each associated with a fully expressed protein, and a saddle equilibrium point, whose stable manifold is the boundary separating the basins of attraction of the other two.
Different approaches have been presented to control the response of genetic toggle switches. Examples include methods based on piecewise affine approximations \cite{chaves2011exact}, pulse shaping of the external inputs based on monotone systems theory \cite{sootla2016shaping}, and the analysis of the stationary probability distributions of the outputs in different working conditions \cite{petrides2017understanding}.
Recently, the problem of dynamically ``balancing'' a genetic toggle switch (based on the LacI/TetR promoters in \emph{E.coli}, schematically shown in Figure \ref{fig:ecoli_ts}) in an undecided state somewhere in between its two stable equilibrium points has been studied in \cite{lugagne2017balancing}. The expression level of the two repressing proteins can be controlled by regulating the concentration of two inducer molecules, aTc and IPTG.
The former, aTc, binds to TetR, increasing the rate of production of LacI, and therefore causing the cell to commit to the stable equilibrium point corresponding to high expression of LacI (high LacI/low TetR). The latter, IPTG, binds instead to LacI, causing the commitment of the cell to the other stable equilibrium point (high TetR/low LacI). From a dynamical systems viewpoint, varying the two input signals causes the occurrence of two saddle-node bifurcations changing the phase portrait of the system from bistability to monostability (Figure \ref{fig:nullclines}).
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{ecoli_ts}
\caption{Genetic toggle switch embedded in \emph{E.coli} considered in \cite{lugagne2017balancing} (Figure reused under Creative Commons license).}
\label{fig:ecoli_ts}
\end{center}
\end{figure}
In their work Lugagne et al. \cite{lugagne2017balancing} focus on both the problem of controlling a single cell and that of taming the behavior of the whole population. Their approach is based on considering the toggle switch as a multi-input control system and is aimed at using both inputs to keep the switch evolving in a neighborhood of its saddle point, a problem they propose as a test-bed scenario in synthetic biology similar to that of stabilizing an inverted pendulum in classical control.
When implementing single cell control, the fluorescence levels of the reporter proteins in a single cell are measured and compared to their reference values. Two different classes of controllers were used in \cite{lugagne2017balancing}, PI and bang-bang, both designed independently for each control input (aTc and IPTG). Using PI controllers on both input channels, it is possible to make the single cell evolve (oscillate) near the saddle point. Although the controlled cell follows (on average) the desired reference, the rest of the population is observed to drift away, converging instead to some other equilibrium point.
Surprisingly, it is reported in \cite{lugagne2017balancing} that this undesired effect is absent when the \emph{single} cell is controlled by two independent bang-bang inputs with the rest of the population exhibiting an evolution similar to the target cell in this case.
To further explore this effect, the authors then consider an open-loop \emph{periodic stimulation} (two mutually exclusive pulse waves with prescribed width) to control the whole population. Again the whole population is shown to converge to some periodic orbit surrounding the saddle point with a remarkable level of coherence in terms of both mean and standard deviation despite cell-to-cell variability and other phenotypic differences between cells.
Using an in-silico model this effect is explained in \cite{lugagne2017balancing} as due to the phase portrait of the forced system periodically changing from one presenting a unique high-LacI equilibrium point to another with a unique high-TetR equilibrium point.
Heuristically, this results in an \emph{average} phase portrait having a unique attractor in between the former two, given that, as conjectured in \cite{lugagne2017balancing}, the cell dynamics and the periodic excitation act on different time-scales. Also, changing the characteristics of the periodic PWM forcing (such as the period, width, and amplitude of the pulses) shifts the position of the average attractor, causing cells to evolve towards a different target solution.
Despite providing some qualitative explanation of the experimental observations, several open questions remain: for instance, what causes the massive reduction in standard deviation between different cells in the population, and what the period and duty cycle of the control inputs should be to achieve the desired behavior. Also, the challenge remains of designing better multi-input {\em feedback} strategies to control populations of host cells endowed with synthetic toggle switches.
In this letter, we address some of these open problems by providing an analytical investigation of the phenomena reported in \cite{lugagne2017balancing}. We start by deriving a \emph{quasi-steady state model} of the toggle-switch system proposed therein. Using formal averaging techniques for nonlinear systems \cite{khalil2002nonlinear}, we derive an autonomous \emph{average vector field}, whose solutions, under some conditions, approximate those of the original time-varying system.
To simplify the analysis, we assume that the diffusion of the inducer molecules across the cell membrane is \emph{instantaneous}.
We prove that if the average vector field has a unique attracting equilibrium point $\bar{x}_\mathrm{av}$, whose position in state space depends on the duty cycle $D$ and on the amplitude of the forcing pulse waves $u_{\mathrm{aTc}}(t)$ and $u_{\mathrm{IPTG}}(t)$, then every solution of the original time-varying system asymptotically converges to a periodic orbit in some neighborhood of $\bar{x}_\mathrm{av}$. We compare our model predictions with the experimental observations made in \cite{lugagne2017balancing} and with the mean-value trajectories of the original model proposed therein. We use the model and its analysis to provide some indications on how the parameters of the toggle switch may be tuned to enhance its response to the class of periodic inputs of interest, and exploit the results to synthesize an external control strategy to regulate the mean-value of the measured fluorescence of the reporter proteins in the cell at some desired value.
We wish to emphasize that the analysis provided in this {letter} can be instrumental for the design of further control strategies for this particularly relevant class of synthetic devices and to investigate the effects at the population level of different types of periodic stimuli to the cells.
\section{Mathematical model of the toggle switch}
\label{sec:model_and_input}
\subsection{Transcription-translation model}
The deterministic model of the toggle switch that we start from can be given as follows \cite{lugagne2017balancing}
\footnotesize
\begin{align}
\label{eq:transcr_laci}
& \begin{aligned}
\frac{d\, mRNA_{\mathrm{LacI}}}{dt}=\; & \kappa_\mathrm{L}^\mathrm{m0} + \frac{\kappa_\mathrm{L}^\mathrm{m}}{1+ \left( \frac{TetR}{\theta_{\mathrm{TetR}} } \cdot \frac{1}{1 + \left( aTc/\theta_{\mathrm{aTC}} \right)^{\eta_{\mathrm{aTc}}} } \right)^{\eta_{\mathrm{TetR}}} } \\
& - g_\mathrm{L}^\mathrm{m} \cdot mRNA_{\mathrm{LacI}}
\end{aligned}
\\
\label{eq:trascr_tetr}
& \begin{aligned}
\frac{d\, mRNA_{\mathrm{TetR}}}{dt}=\; & \kappa_\mathrm{T}^\mathrm{m0} + \frac{\kappa_\mathrm{T}^\mathrm{m}}{1+ \left( \frac{LacI}{\theta_{\mathrm{LacI}} } \cdot \frac{1}{1 + \left( IPTG/\theta_{\mathrm{IPTG}} \right)^{\eta_{\mathrm{IPTG}}} } \right)^{\eta_{\mathrm{LacI}}} } \\
& - g_\mathrm{T}^\mathrm{m} \cdot mRNA_{\mathrm{TetR}}
\end{aligned}
\\
\label{eq:transl_laci}
& \frac{d\, LacI}{dt}= \kappa_\mathrm{L}^\mathrm{p} \cdot mRNA_{\mathrm{LacI}} - g_\mathrm{L}^\mathrm{p} \cdot LacI\\
\label{eq:transl_tetr}
& \frac{d\, TetR}{dt}= \kappa_\mathrm{T}^\mathrm{p} \cdot mRNA_{\mathrm{TetR}} - g_\mathrm{T}^\mathrm{p} \cdot TetR
\end{align}
\normalsize
In the above equations the variables denote concentrations of molecules inside the cell, and the parameters $\kappa_\mathrm{L/T}^\mathrm{m0}$, $\kappa_\mathrm{L/T}^\mathrm{m}$, $\kappa_\mathrm{L/T}^\mathrm{p}$, $g_\mathrm{L/T}^\mathrm{m}$, $g_\mathrm{L/T}^\mathrm{p}$ are leakage transcription, transcription, translation, mRNA degradation, and protein degradation rates, respectively. All parameter values are provided in Supplementary Table 1 and are also the same used in \cite{lugagne2017balancing}.
The inducer molecules diffuse in and out of the cell across the membrane with non-symmetrical exchange dynamics modeled by
\footnotesize
\begin{align}
\label{eq:diffusion_atc}
\frac{d\, aTc}{dt}= &
\begin{cases}
k^{\mathrm{in}}_{\mathrm{aTc}} (u_{\mathrm{aTc}} - aTc), & \mbox{ if }\ u_{\mathrm{aTc}} > aTc\\
k^{\mathrm{out}}_{\mathrm{aTc}} (u_{\mathrm{aTc}} - aTc), & \mbox{ if }\ u_{\mathrm{aTc}} \leq aTc
\end{cases},\\
\label{eq:diffusion_iptg}
\frac{d\, IPTG}{dt}= &
\begin{cases}
k^{\mathrm{in}}_{\mathrm{IPTG}} (u_{\mathrm{IPTG}} - IPTG), & \mbox{ if }\ u_{\mathrm{IPTG}} > IPTG\\
k^{\mathrm{out}}_{\mathrm{IPTG}} (u_{\mathrm{IPTG}} - IPTG), & \mbox{ if }\ u_{\mathrm{IPTG}} \leq IPTG
\end{cases},
\end{align}
\normalsize
where $aTc$ and $IPTG$ denote the concentrations of the inducer molecules inside the cell, while $u_{\mathrm{aTc}}$ and $u_{\mathrm{IPTG}}$ those in the growth medium.
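The rate switching in \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg} depending on the sign of the concentration gradient can be illustrated with a short numerical sketch (plain Euler integration; the exchange rates and concentrations below are placeholder values, not those of Supplementary Table 1):

```python
def diffusion_rate(u, c, k_in, k_out):
    """Asymmetric membrane exchange: influx rate k_in when the external
    concentration u exceeds the internal one c, efflux rate k_out otherwise."""
    return (k_in if u > c else k_out) * (u - c)

# Euler integration of the internal aTc concentration under a constant
# external input (placeholder rates, not those of Suppl. Table 1)
k_in, k_out, dt = 1.0e-1, 2.0e-2, 0.1
c, u = 0.0, 50.0                  # internal / external conc. (ng/ml)
for _ in range(2000):
    c += dt * diffusion_rate(u, c, k_in, k_out)
assert abs(c - u) < 1e-3          # equilibrates to the external level
```

With $k^{\mathrm{in}} \neq k^{\mathrm{out}}$, the same gradient magnitude is followed at different speeds depending on its sign, which is why the membrane acts as a non-symmetrical first-order filter on the input signals.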
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.75\columnwidth]{nullclines}
\caption{Nullclines of the toggle switch system \eqref{eq:sys_nondim}. Main picture: bistability: two stable and one saddle equilibrium points. Reference values $aTc=20\,\mathrm{ng/ml}$, $IPTG=0.25\,\mathrm{mM}$. Insets: a) monostability: unique high LacI/low TetR equilibrium point. $aTc=50\,\mathrm{ng/ml}$, $IPTG=0.25\,\mathrm{mM}$; b) monostability: unique high TetR/low LacI equilibrium point. $aTc=20\,\mathrm{ng/ml}$, $IPTG=0.50\,\mathrm{mM}$ }
\label{fig:nullclines}
\end{center}
\end{figure}
\subsection{Quasi-steady state model}
Assuming that the concentrations of the mRNA molecules reach steady state more rapidly than their corresponding proteins, that LacI and TetR proteins degrade at the same rate, that is $g_\mathrm{L}^\mathrm{p}=g_\mathrm{T}^\mathrm{p}=g^\mathrm{p}$, and using the following dimensionless variables (similarly as done in \cite{kuznetsov2004synchrony,nikolaev2016quorum})
\begin{equation}
\label{eq:adim_variables}
t'=g^\mathrm{p}\, t, \ \ x_1=\frac{LacI}{\theta_{\mathrm{LacI}} }, \ \ x_2=\frac{TetR}{\theta_{\mathrm{TetR}}},
\end{equation}
we obtain the following nondimensional quasi-steady state model of the genetic toggle switch
\begin{equation}
\label{eq:sys_nondim}
\begin{split}
\frac{dx_1}{dt'} &= k_1^0 + \frac{k_1}{1+ x_2^2 \cdot w_1(t'/g^\mathrm{p}) } - x_1\\
\frac{dx_2}{dt'} &= k_2^0 + \frac{k_2}{1+ x_1^2 \cdot w_2(t'/g^\mathrm{p}) } - x_2
\end{split}
\end{equation}
where
\begin{equation}
\label{eq:parameter_adim_1}
k_1^0=\frac{\kappa_\mathrm{L}^\mathrm{m0}\,\kappa_\mathrm{L}^\mathrm{p} }{g_\mathrm{L}^\mathrm{m}\,\theta_{\mathrm{LacI}}\, g^\mathrm{p} }, \quad k_1=\frac{ \kappa_\mathrm{L}^\mathrm{m}\,\kappa_\mathrm{L}^\mathrm{p}}{g_\mathrm{L}^\mathrm{m}\,\theta_{\mathrm{LacI}}\, g^\mathrm{p} },
\end{equation}
and
\begin{equation}
\label{eq:parameter_adim_2}
k_2^0=\frac{\kappa_\mathrm{T}^\mathrm{m0}\,\kappa_\mathrm{T}^\mathrm{p} }{g_\mathrm{T}^\mathrm{m}\,\theta_{\mathrm{TetR}}\, g^\mathrm{p} }, \quad k_2=\frac{\kappa_\mathrm{T}^\mathrm{m}\,\kappa_\mathrm{T}^\mathrm{p} }{g_\mathrm{T}^\mathrm{m}\,\theta_{\mathrm{TetR}}\, g^\mathrm{p} },
\end{equation}
are dimensionless parameters, and we have set $\eta_{\mathrm{LacI}}=\eta_{\mathrm{TetR}}=2$.
The steps of the previous derivation are reported in the Supplementary Material.
The nonlinear functions $w_1(t)$ and $w_2(t)$ in \eqref{eq:sys_nondim} take into account the static relationship between each repressor protein (TetR or LacI) and its regulator molecule (aTc or IPTG, respectively). They are shown in Figure \ref{fig:w_functions_sq} and are defined as
\begin{align}
\label{eq:w1_function}
w_1(aTc(t))= & \frac{1}{\left(1 + \left( \frac{aTc(t)}{\theta_{\mathrm{aTC}}} \right)^{\eta_{\mathrm{aTc}}} \right)^{\eta_{\mathrm{TetR}}} } \\
\label{eq:w2_function}
w_2(IPTG(t))= & \frac{1}{\left(1 + \left( \frac{IPTG(t)}{\theta_{\mathrm{IPTG}}} \right)^{\eta_{\mathrm{IPTG}}} \right)^{\eta_{\mathrm{LacI}}}}
\end{align}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{w_functions_sq}
\caption{Top: Static nonlinear functions $w_1(aTc)$ and $w_2(IPTG)$ as in \eqref{eq:w1_function} and \eqref{eq:w2_function}. Bottom: Pulse wave $s_\mathrm{q}(t)$: period $1$, duty cycle $D\in[0,1]$.}
\label{fig:w_functions_sq}
\end{center}
\end{figure}
System \eqref{eq:sys_nondim} with the static relations \eqref{eq:w1_function}-\eqref{eq:w2_function} and diffusion dynamics across the cell membrane \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg} can be represented in block form as in Figure \ref{fig:block_plant}. The cell membrane acts as a linear (non-symmetrical) first order low-pass filter for the signals $u_{\mathrm{aTc}}(t)$ and $u_{\mathrm{IPTG}}(t)$ with a cut-off frequency that depends on the diffusion exchange rates $k_{\mathrm{aTc}}^{\mathrm{in/out}}$ and $k_{\mathrm{IPTG}}^{\mathrm{in/out}}$. Hence, $aTc(t)$ and $IPTG(t)$ are filtered versions of their respective input signals whose attenuation depends both on the cut-off frequency and on their spectral density.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{block_plant2}
\caption{Block diagram of system \eqref{eq:sys_nondim} with diffusion dynamics across the cell membrane \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg}.}
\label{fig:block_plant}
\end{center}
\end{figure}
In our analysis we make the following simplifying assumption.
\begin{assumption}
The diffusion dynamics of the inducer molecules, aTc and IPTG, across the cell membrane is instantaneous, that is
\label{ass:diffusion}
\begin{align}
aTc(t)&=u_{\mathrm{aTc}}(t),\\
IPTG(t)&=u_{\mathrm{IPTG}}(t),
\end{align}
for every $t\geq t_0$.
\end{assumption}
Later in Section \ref{sec:simulations}, we will compare our results derived from system \eqref{eq:sys_nondim} under the above Assumption \ref{ass:diffusion} with the solutions of the complete toggle switch model \eqref{eq:transcr_laci}-\eqref{eq:transl_tetr} with more realistic diffusion dynamics given by \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg}.
\section{Averaging analysis of the toggle switch under PWM input signals}
\subsection{Forcing signals}
Following \cite{lugagne2017balancing}, the concentrations of the inducers in the growth medium are selected as two mutually exclusive pulse waves of period $T$, duty cycle $D\in[0,1]$, and amplitudes $\bar{u}_{\mathrm{aTc}}$ and $\bar{u}_{\mathrm{IPTG}}$, respectively, that is
\begin{align}
\label{eq:u_atc_squarewave}
u_{\mathrm{aTc}}(t)&= \bar{u}_{\mathrm{aTc}} \cdot \left(1-s_\mathrm{q}\left( t/T \right) \right)\\
\label{eq:u_iptg_squarewave}
u_{\mathrm{IPTG}}(t)&= \bar{u}_{\mathrm{IPTG}} \cdot s_\mathrm{q}\left( t/T \right)
\end{align}
where $s_\mathrm{q}(t)$ is the pulse wave taking values 0 and 1, with period $1$ and duty cycle $D$, reported in Figure \ref{fig:w_functions_sq}. In the experiments described in \cite{lugagne2017balancing}, the amplitudes $\bar{u}_{\mathrm{aTc}}$ and $\bar{u}_{\mathrm{IPTG}}$ were allowed to take values between $0$ and $100\,\mathrm{ng/ml}$, and $0$ and $1\, \mathrm{mM}$, respectively.\\
Note that $D=0$ corresponds to ``high aTc/no IPTG'' in the growth medium, which in turn results in full steady-state expression of LacI (high $x_1$). Likewise, $D=1$ corresponds to ``no aTc/high IPTG'', yielding full expression of TetR (high $x_2$). Therefore, the duty cycle can be used to control the ratio between the activation times of the two monostable systems associated with the presence or absence of the two inducer molecules, whose nullclines are shown in the insets in Figure \ref{fig:nullclines}.
Under Assumption \ref{ass:diffusion} it follows that
\begin{equation}
\label{eq:atc_squarewave}
\begin{split}
w_1(t)& = w_1(aTc(t))
= w_1\left( \bar{u}_{\mathrm{aTc}} \cdot \left(1-s_\mathrm{q}\left( t/T \right) \right) \right)\\
& = \bar{w}_1 + (1- \bar{w}_1) \cdot s_\mathrm{q}\left(t/T\right),
\end{split}
\end{equation}
where $\bar{w}_1=w_1( \bar{u}_{\mathrm{aTc}} )$, and
\begin{equation}
\label{eq:iptg_squarewave}
\begin{split}
w_2(t)& = w_2(IPTG(t))
= w_2\left( \bar{u}_{\mathrm{IPTG}} \cdot s_\mathrm{q}\left( t/T \right) \right)\\
& = 1 - (1-\bar{w}_2) \cdot s_\mathrm{q}\left(t/T\right),
\end{split}
\end{equation}
where $\bar{w}_2=w_2( \bar{u}_{\mathrm{IPTG}} )$.
Therefore, $w_i(t)$ is a pulse wave taking values between $1$ and $\bar{w}_i$.
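The reduction \eqref{eq:atc_squarewave} can be verified numerically: composing the Hill-type nonlinearity $w_1$ of \eqref{eq:w1_function} with the PWM input \eqref{eq:u_atc_squarewave} yields exactly the two-level pulse wave $\bar{w}_1 + (1-\bar{w}_1)\,s_\mathrm{q}$. The following sketch uses placeholder Hill parameters (the actual values are in the Supplementary Material):

```python
import numpy as np

# placeholder Hill parameters (the actual values are in Suppl. Table 1)
theta_aTc, eta_aTc, eta_TetR = 10.0, 2.0, 2.0

def w1(aTc):
    """Hill-type static nonlinearity w1(aTc): repression relief by aTc."""
    return 1.0 / (1.0 + (aTc / theta_aTc)**eta_aTc)**eta_TetR

def sq(t, D):
    """Unit pulse wave: 1 on [0, D), 0 on [D, 1), period 1."""
    return 1.0 if (t % 1.0) < D else 0.0

D, u_aTc = 0.4, 50.0
w1_bar = w1(u_aTc)
for t in np.linspace(0.0, 3.0, 301):
    lhs = w1(u_aTc * (1.0 - sq(t, D)))         # w1 composed with PWM input
    rhs = w1_bar + (1.0 - w1_bar) * sq(t, D)   # reduced two-level form
    assert np.isclose(lhs, rhs)
```

The analogous identity for $w_2$ under \eqref{eq:u_iptg_squarewave} follows in the same way, with the roles of the two levels exchanged.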
\subsection{Average vector field}
By rescaling time setting $\tau=\frac{t'}{T g^\mathrm{p}}$, system \eqref{eq:sys_nondim} can be recast as
\begin{equation}
\label{eq:sys_orig}
\begin{split}
\frac{d x_1}{d\tau} & = \varepsilon \left[ k_1^0 + \frac{k_1}{1+ x_2^2 \cdot w_1(\tau T) } - x_1 \right]\\
\frac{d x_2}{d\tau} & = \varepsilon \left[ k_2^0 + \frac{k_2}{1+ x_1^2 \cdot w_2(\tau T) } - x_2 \right]
\end{split}
\end{equation}
with $\varepsilon=T g^\mathrm{p}$. The vector field in \eqref{eq:sys_orig} is time-varying in $\tau$ with period $1$, and it is now in a form amenable to periodic averaging analysis (see Supplementary Material).
In particular, the average vector field, say $f_\mathrm{av}(x)$, can be obtained by integrating the vector field in \eqref{eq:sys_orig} over a period, yielding
\begin{equation*}
\begin{split}
f_{\mathrm{av},1}(x)&= \frac{1}{1}\int_0^1 \left( k_1^0 + \frac{k_1}{1+ x_2^2 \cdot w_1(\tau T) } - x_1 \right) d\tau\\
& = k_1^0 + k_1 \!\! \left( \! \int_0^D \!\!\!\!\! \frac{1}{1+x_2^2 \! \cdot \! 1} d\tau \! + \!\! \int_D^1 \!\! \frac{1}{1+x_2^2 \! \cdot \! \bar{w}_1} d\tau \!\! \right) \! - \! x_1\\
& = k_1^0+ k_1 \left( \frac{D}{1+x_2^2}+ \frac{1-D}{1+x_2^2 \!\cdot \! \bar{w}_1} \right) -x_1,
\end{split}
\end{equation*}
where we used \eqref{eq:atc_squarewave}, and similarly for $f_{\mathrm{av},2}(x)$,
\begin{equation*}
\begin{split}
f_{\mathrm{av},2}(x)&= \frac{1}{1}\int_0^1 \left( k_2^0 + \frac{k_2}{1+ x_1^2 \cdot w_2(\tau T) } - x_2 \right) d\tau\\
& = k_2^0 + k_2 \!\! \left( \! \int_0^D \!\!\!\!\! \frac{1}{1+x_1^2 \!\cdot\! \bar{w}_2} d\tau \! + \!\! \int_D^1 \!\! \frac{1}{1+x_1^2 \! \cdot \! 1} d\tau \!\! \right) \! - \! x_2\\
& = k_2^0+ k_2 \left( \frac{D}{1+x_1^2 \!\cdot\! \bar{w}_2}+ \frac{1-D}{1+x_1^2} \right) -x_2,
\end{split}
\end{equation*}
where we used \eqref{eq:iptg_squarewave}.
Hence, the resulting \emph{average system} is
\begin{equation}
\label{eq:sys_average}
\begin{split}
\frac{d x_1}{d\tau} & = \varepsilon \left[k_1^0+ k_1 \left( \frac{D}{1+x_2^2}+ \frac{1-D}{1+x_2^2\cdot \bar{w}_1} \right) -x_1 \right]\\
\frac{d x_2}{d\tau} & = \varepsilon \left[ k_2^0+ k_2 \left( \frac{D}{1+x_1^2 \cdot \bar{w}_2}+ \frac{1-D}{1+x_1^2} \right) -x_2 \right]
\end{split}
\end{equation}
Let $x(\tau,\varepsilon)$ and $x_\mathrm{av}(\varepsilon\tau)$ denote the solutions to \eqref{eq:sys_orig} and \eqref{eq:sys_average}, respectively. Assume $\bar{x}_\mathrm{av}$ is an exponentially stable equilibrium point of the average system \eqref{eq:sys_average}. Let $\Omega$ be a compact subset of its basin of attraction, and assume $x_\mathrm{av}(0)\in\Omega$, and $x(0,\varepsilon)-x_\mathrm{av}(0)=O(\varepsilon)$. Then, from \cite[Theorem 10.4]{khalil2002nonlinear}, there exists a positive parameter $\varepsilon^\ast=T^\ast g^\mathrm{p}$ such that for all $0<\varepsilon<\varepsilon^\ast$
\begin{equation}
\label{eq:averaging_bound}
x(\tau,\varepsilon)-x_\mathrm{av}(\varepsilon\tau)=O(\varepsilon)
\end{equation}
for all $\tau>0$.
That is, solutions $x(\tau,\varepsilon)$ to system \eqref{eq:sys_orig} can be approximated by solutions $x_\mathrm{av}(\varepsilon\tau)$ to \eqref{eq:sys_average} with an error that is proportional to $\varepsilon$. As a consequence, if $\bar{x}_\mathrm{av}$ is the unique equilibrium point of system \eqref{eq:sys_average}, then for all $0<\varepsilon<\varepsilon^\ast$ system \eqref{eq:sys_orig} has a unique, exponentially stable, periodic solution $\bar{x}(\tau,\varepsilon)$ in a $O(\varepsilon)$-neighborhood of $\bar{x}_\mathrm{av}$.
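As an illustration of the bound \eqref{eq:averaging_bound}, the following sketch integrates a system of the form \eqref{eq:sys_orig} alongside its average \eqref{eq:sys_average} and compares the two states at the final time. All numerical values ($k_1^0$, $k_1$, $\bar{w}_1$, $\bar{w}_2$, $D$, $\varepsilon$) are illustrative placeholders, not the identified model constants.

```python
import math

# Illustrative placeholder parameters (NOT the identified model constants)
k10, k1, k20, k2 = 0.2, 10.0, 0.2, 10.0
w1bar, w2bar = 0.02, 0.02      # attenuated repression weights on the "off" phase
D, eps = 0.5, 0.05             # duty cycle and time-scale ratio

def f_orig(tau, x):
    """Time-varying field of the form of the original system (period 1 in tau)."""
    phase = tau % 1.0
    w1 = 1.0 if phase < D else w1bar
    w2 = w2bar if phase < D else 1.0
    return (eps * (k10 + k1 / (1.0 + x[1] ** 2 * w1) - x[0]),
            eps * (k20 + k2 / (1.0 + x[0] ** 2 * w2) - x[1]))

def f_avg(tau, x):
    """Autonomous average field: duty-cycle-weighted mixture of the two phases."""
    f1 = k10 + k1 * (D / (1 + x[1] ** 2) + (1 - D) / (1 + x[1] ** 2 * w1bar)) - x[0]
    f2 = k20 + k2 * (D / (1 + x[0] ** 2 * w2bar) + (1 - D) / (1 + x[0] ** 2)) - x[1]
    return (eps * f1, eps * f2)

def rk4(f, x, tau, h, steps):
    """Classical fixed-step Runge-Kutta 4 integrator."""
    for _ in range(steps):
        a = f(tau, x)
        b = f(tau + h / 2, tuple(xi + h / 2 * ai for xi, ai in zip(x, a)))
        c = f(tau + h / 2, tuple(xi + h / 2 * bi for xi, bi in zip(x, b)))
        d = f(tau + h, tuple(xi + h * ci for xi, ci in zip(x, c)))
        x = tuple(xi + h / 6 * (ai + 2 * bi + 2 * ci + di)
                  for xi, ai, bi, ci, di in zip(x, a, b, c, d))
        tau += h
    return x

h, steps = 0.01, 20000         # integrate up to tau = 200, i.e. eps*tau = 10
x = rk4(f_orig, (1.0, 1.0), 0.0, h, steps)
x_av = rk4(f_avg, (1.0, 1.0), 0.0, h, steps)
gap = max(abs(a - b) for a, b in zip(x, x_av))
print(gap)                     # remains of order eps
```

For the placeholder values above the final-time gap is of the order of $\varepsilon$, consistently with \eqref{eq:averaging_bound}.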
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\linewidth]{equilibria_varying_equally}
\caption{Equilibrium points $\bar{x}_\mathrm{av}$ of \eqref{eq:sys_average} as a function of duty cycle $D$ rescaled in arbitrary fluorescence units using \eqref{eq:adim_variables}. Each dot represents the location of the unique stable equilibrium point of system \eqref{eq:sys_average} evaluated for $D$ taking values in the interval $[0,1]$ with increments of $0.01$. }
\label{fig:equilibria_varying_equally}
\end{center}
\end{figure}
The number and position in state space of the equilibrium points $\bar{x}_\mathrm{av}$ of the average system \eqref{eq:sys_average} depend on the specific choice of the amplitudes $\bar{u}_{\mathrm{aTc}}$ and $\bar{u}_{\mathrm{IPTG}}$ of the pulse waves, and also on the value of the duty cycle $D$.
For example, for the reference values $\bar{u}_{\mathrm{aTc}}=50\, \mathrm{ng/ml}$ and $\bar{u}_{\mathrm{IPTG}}=0.5\, \mathrm{mM}$, system \eqref{eq:sys_average} is monostable and the position of the equilibrium point $\bar{x}_\mathrm{av}$ varies monotonically with $D$ as reported in Figure \ref{fig:equilibria_varying_equally} (blue dots).
Hence, given certain values of $\bar{u}_{\mathrm{aTc}}$ and $\bar{u}_{\mathrm{IPTG}}$, it is possible to move the position of $\bar{x}_\mathrm{av}$ on the corresponding curve by varying $D$ (Supplementary Figure S1).
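The curve of Figure \ref{fig:equilibria_varying_equally} can be reproduced by solving $f_{\mathrm{av}}(\bar{x}_{\mathrm{av}})=0$ for each value of $D$. Below is a minimal sketch based on a damped fixed-point iteration; the constants are again illustrative placeholders rather than the identified model parameters.

```python
# Illustrative placeholder constants (NOT the identified model parameters)
k10, k1, k20, k2 = 0.2, 10.0, 0.2, 10.0
w1bar, w2bar = 0.02, 0.02

def F(x1, x2, D):
    """The average right-hand side rewritten as a fixed-point map x = F(x)."""
    F1 = k10 + k1 * (D / (1 + x2 ** 2) + (1 - D) / (1 + x2 ** 2 * w1bar))
    F2 = k20 + k2 * (D / (1 + x1 ** 2 * w2bar) + (1 - D) / (1 + x1 ** 2))
    return F1, F2

def equilibrium(D, x=(1.0, 1.0), damping=0.5, iters=2000):
    """Damped fixed-point iteration; converges when the map is contracting."""
    for _ in range(iters):
        F1, F2 = F(x[0], x[1], D)
        x = ((1 - damping) * x[0] + damping * F1,
             (1 - damping) * x[1] + damping * F2)
    return x

# sample the equilibrium curve over the duty cycle, as in the figure
curve = {d / 10: equilibrium(d / 10) for d in range(11)}
x1, x2 = curve[0.5]
F1, F2 = F(x1, x2, 0.5)
print(max(abs(F1 - x1), abs(F2 - x2)))   # residual of the equilibrium condition
```

With these placeholder values the curve is monotone in $D$, as in the monostable case of the figure.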
The phase portrait of the average system \eqref{eq:sys_average} together with a representative solution of the time-varying system \eqref{eq:sys_orig} for $D$ equal to $0.5$ are depicted in Figure \ref{fig:no_approx_dc_05}, while for $D$ equal to $0.2$ and $0.8$ are reported in Supplementary Figure S2.
The parameter $\varepsilon$ has been set to $0.1$, which corresponds to a forcing period $T=\varepsilon/g^\mathrm{p}\approx 6\, \mathrm{min}$, and the system has been simulated for $t_f=\tau_f\, T\approx 50 \cdot 6= 300 \, \mathrm{min}$.
Larger values of $\varepsilon$ correspond to larger values of the forcing period $T$. In turn, from \eqref{eq:averaging_bound}, this also implies that the solution $x(\tau,\varepsilon)$ of \eqref{eq:sys_orig} will asymptotically converge to a periodic solution $\bar{x}(\tau,\varepsilon)$ contained in a larger set (Figure \ref{fig:no_approx_dc_05_eps_10}), and hence to a worse approximation (see also Supplementary Figure S6 for their time evolution).
\begin{figure}
\centering
\subfigure[$D=0.5$, $T\approx 6\,\mathrm{min}$ ($\varepsilon=0.1$).]
{
\includegraphics[width=0.7\linewidth]{no_approx_dc_05}
\label{fig:no_approx_dc_05}
}
\\
\subfigure[$D=0.5$, $T\approx 180\,\mathrm{min}$ ($\varepsilon=3$).]
{
\includegraphics[width=0.7\linewidth]{no_approx_dc_05_eps_3}
\label{fig:no_approx_dc_05_eps_10}
}
\caption{Background: phase portrait of the average system \eqref{eq:sys_average}. Red line: the solution of the time-varying system \eqref{eq:sys_orig} with $\bar{u}_{\mathrm{aTc}}=50\, \mathrm{ng/ml}$ and $\bar{u}_{\mathrm{IPTG}}=0.5\, \mathrm{mM}$ from initial condition ${[1,1]}^{\mathsf{T}}$.}
\label{fig:pplane_2}
\end{figure}
\section{Diffusion effects}
\label{sec:simulations}
The analysis in the previous section was conducted under Assumption \ref{ass:diffusion}. As already mentioned before, the cell membrane acts as a low-pass filter, hence, when Assumption \ref{ass:diffusion} is dropped, $aTc(t)$ and $IPTG(t)$ will not anymore be ideal pulse waves but their filtered versions through the cell membrane. Therefore, in order for the average system \eqref{eq:sys_average} to continue being a good approximation of the actual cell response, the cut-off frequency of the two low-pass filters should be sufficiently higher than the fundamental frequency $1/T$ of the input pulse waves. However, due to the inevitable attenuation of high-frequency harmonics, there will always be a mismatch between the actual mean response of the cell and the value predicted by \eqref{eq:sys_average}.
The effects of relaxing Assumption \ref{ass:diffusion} on the time response of system \eqref{eq:sys_orig} can be observed in Supplementary Figures S4-S5.
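To see the filtering effect qualitatively, the sketch below drives a generic first-order low-pass filter, used here only as a stand-in for the membrane diffusion dynamics \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg}, with a pulse wave, and measures the steady-state ripple for a fast and a slow forcing period; the time constant $\theta$ is an illustrative placeholder.

```python
def filtered_ripple(T, theta, D=0.5, ubar=1.0, n_periods=60, steps=2000):
    """Steady-state peak-to-peak ripple of a first-order low-pass filter
    dy/dt = (u(t) - y)/theta driven by a pulse wave of period T, duty cycle D."""
    dt = T / steps
    y = 0.0
    last_period = []
    for p in range(n_periods):
        for k in range(steps):
            u = ubar if k / steps < D else 0.0
            y += dt * (u - y) / theta          # explicit Euler step
            if p == n_periods - 1:
                last_period.append(y)
    return max(last_period) - min(last_period)

theta = 30.0                                   # placeholder time constant [min]
r_fast = filtered_ripple(T=6.0, theta=theta)   # 1/T well above the cut-off
r_slow = filtered_ripple(T=240.0, theta=theta) # 1/T at or below the cut-off
print(r_fast, r_slow)
```

When $1/T$ is well above the cut-off $1/\theta$, the filter output stays close to the mean of the pulse wave; when $1/T$ is comparable to or below the cut-off, the output swings between the ON and OFF levels.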
The mean steady-state response of the complete four-dimensional system \eqref{eq:transcr_laci}-\eqref{eq:transl_tetr} with diffusion dynamics \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg} is compared in Figure \ref{fig:sim50} with the corresponding equilibrium point $\bar{x}_{\mathrm{av}}(D)$ predicted by the autonomous two-dimensional average system \eqref{eq:sys_average}, for a representative value of the PWM amplitudes and different values of $D$ (see Supplementary Figure S3 for a different choice).
Although, as expected, the two do not match perfectly, the observed behavior is well captured by the average system. Note that in regulation problems this mismatch can be compensated by designing an adequate feedback action.
When, on the other hand, the cut-off frequency of one of the filters is lower than the frequency $1/T$ of the input pulse waves, the input signal will be highly attenuated, resulting in the simple regulation of the toggle switch to either one of the stable equilibrium points (a phenomenon that was reported in the experiments described in \cite[Supplementary Figure 8]{lugagne2017balancing}). A similar phenomenon can also occur when the duty cycle is close to $0$ or $1$. Indeed, close to these values, the amplitude of the harmonics of the pulse wave is $|a_n|=\left| \frac{2 \bar{u}}{n\,\pi} \sin(n\pi D)\right| \approx 2\bar{u}D$ (respectively $\approx 2\bar{u}(1-D)$ for $D$ close to $1$), therefore low-frequency harmonics will have amplitudes similar to those of high-frequency ones, and the pulse wave will be highly attenuated.
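The harmonic amplitude formula above can be checked against a direct numerical computation of the Fourier coefficients of an ideal pulse wave (period normalised to $1$, value $\bar{u}$ on $[0,D)$ and $0$ on $[D,1)$):

```python
import cmath, math

def harmonic_amplitude(n, D, ubar=1.0, M=50_000):
    """Amplitude 2|c_n| of the n-th harmonic of a pulse wave of period 1,
    equal to ubar on [0, D) and 0 on [D, 1), via a Riemann sum."""
    c = 0j
    for k in range(M):
        t = k / M
        u = ubar if t < D else 0.0
        c += u * cmath.exp(-2j * math.pi * n * t) / M
    return 2 * abs(c)

def closed_form(n, D, ubar=1.0):
    # |a_n| = |2*ubar/(n*pi) * sin(n*pi*D)|
    return abs(2 * ubar / (n * math.pi) * math.sin(n * math.pi * D))

for n in (1, 2, 3):
    print(n, harmonic_amplitude(n, 0.1), closed_form(n, 0.1))
```

For $D$ close to $0$ the closed form reduces to $2\bar{u}D$ for the low harmonics, as used in the argument above.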
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.7\linewidth]{sim50}
\caption{Orange dots: Mean-value, evaluated at regime, of the response of system \eqref{eq:transcr_laci}-\eqref{eq:transl_tetr} (with membrane dynamics \eqref{eq:diffusion_atc}-\eqref{eq:diffusion_iptg}) to PWM inputs with $T=240\,\mathrm{min}$ and varying $D$ from $0.05$ to $0.95$ with increments of $0.05$. Blue dots: corresponding equilibrium point $\bar{x}_{\mathrm{av}}(D)$ of system \eqref{eq:sys_average} rescaled in a.u. using \eqref{eq:adim_variables}. Amplitude of pulse waves set to $\bar{u}_{\mathrm{aTc}}=50\, \mathrm{ng/ml}$ and $\bar{u}_{\mathrm{IPTG}}=0.5\, \mathrm{mM}$.}
\label{fig:sim50}
\end{center}
\end{figure}
\section{Perspectives for control}
\label{sec:control}
We wish to emphasize that the analytical results derived here can be exploited for the synthesis of external controllers to regulate the mean-value of the output response of the genetic toggle switch. Specifically, we propose the control schematic shown in Figure \ref{fig:block_controller} which is currently under development and will be presented elsewhere. Indeed, as done in Figure \ref{fig:equilibria_varying_equally}, it is possible to numerically compute $\bar{x}_\mathrm{av}$ as a function of $\bar{u}_\mathrm{aTc}$, $\bar{u}_\mathrm{IPTG}$ and $D$, and get interpolating curves $\Gamma_{\bar{u}_\mathrm{aTc},\bar{u}_\mathrm{IPTG}}(D)$. From these one can then obtain, for given values of the amplitude $\bar{u}_\mathrm{aTc}$ and $\bar{u}_\mathrm{IPTG}$, the duty cycle $D_\mathrm{ref}$ corresponding to the desired average set-point $\bar{x}_\mathrm{av}^\mathrm{ref}$, that is $D_{\mathrm{ref}}=\Gamma_{\bar{u}_\mathrm{aTc},\bar{u}_\mathrm{IPTG}}^{-1}(\bar{x}_\mathrm{av}^\mathrm{ref})$. The mismatch $e$ between the measured mean-value of the plant outputs and $\bar{x}_\mathrm{av}^\mathrm{ref}$ is then projected by $\pi$ onto the curve $\Gamma_{\bar{u}_\mathrm{aTc},\bar{u}_\mathrm{IPTG}}$ and compensated by a PI controller. The control scheme should also take into account the effects of the sampling time and of the slow transients.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{block_controller2}
\caption{External controller for the regulation of the mean-response of a genetic toggle switch.}
\label{fig:block_controller}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We derived and analyzed a model to capture the response of the genetic toggle switch to mutually exclusive PWM inputs observed experimentally in \cite{lugagne2017balancing}. The analysis was based on the assumption that the diffusion of inducer molecules across the cell membrane is instantaneous. From this, using the periodic averaging method for nonlinear systems, we derived an autonomous vector field that describes the dynamics of the mean-value of the periodic solutions of the original system. After discussing the predictions of the model under the assumption of instantaneous diffusion, we relaxed this assumption, so that the input signals become filtered versions of themselves, which worsens the predictions.
However, even if it is not possible to eliminate the attenuation due to the cell membrane, our analysis shows that to mitigate its effects the frequency $1/T$ of the input pulse waves should be chosen sufficiently lower than the cut-off frequency of the low-pass membrane filter, and extreme values of the duty cycle $D$ should be avoided.
At the same time, we find that to avoid large oscillations around $\bar{x}_\mathrm{av}$, the parameter $\varepsilon=T g^\mathrm{p}$, that is the ratio between the time-scales of the forcing inputs and system dynamics, should be taken as small as possible, e.g., for fixed $T$, by cooling down the temperature of the growth medium and thus reducing the cell growth rate and therefore $g^\mathrm{p}$.
Future work will be aimed at quantifying the effects of the attenuation of the input signals due to the cell membrane to improve the predictions of our model, and at implementing and validating (in-silico and in-vivo) external controllers, also capable of modulating the ON/OFF values of the pulse waves.
Furthermore, we also plan to investigate the effect that different classes of periodic forcing could have on the variance of the response of a population of cells with extrinsic noise.
\footnotesize
\section*{ACKNOWLEDGMENT}
The authors wish to acknowledge support from the research project COSY-BIO (Control Engineering of Biological Systems for Reliable Synthetic Biology Applications) funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 766840.
\bibliographystyle{IEEEtran}
\section{Introduction}
After the pioneering works due to Baras and Goldstein \cite{BG84a}, \cite{BG84b}, the heat equation with inverse-square potential in bounded and unbounded domains has attracted considerable attention during the last decades; we cite \cite{AFP17}, \cite{FM15}, \cite{Gul02}, \cite{IKM19}, \cite{IO19}, \cite{Mar03}, \cite{MS10} and \cite{VZ00} to name only a few.
Our aim is to contribute to the study of the heat equation by incorporating more singular potentials. The major obstacle for considering general coefficients is related to the multiplication problem for distributions \cite{Sch54}. There are several ways to overcome this problem. One way is to use the notion of very weak solutions.
The concept of very weak solutions was introduced in \cite{GR15} for the analysis of second order hyperbolic equations with non-regular time-dependent coefficients, and was applied to the study of several physical models in \cite{MRT19}, \cite{RT17a}, and \cite{RT17b}. In these papers the very weak solutions are presented for equations with time-dependent coefficients. In the recent paper \cite{Gar20}, the author introduces the concept of a very weak solution for the wave equation with a space-dependent coefficient. Here we study the Cauchy problem for the heat equation with a non-negative potential; we allow the potential to be discontinuous or even less regular, and we apply the concept of very weak solutions to establish a well-posedness result. Also, we note that very weak solutions for fractional Klein-Gordon equations with singular masses were considered in \cite{ARST21}.
In this paper we consider the heat equation with strongly singular potentials, in particular, with a $\delta$-function and with a behaviour like the "multiplication" of $\delta$-functions. An existence result for very weak solutions is proved. Also, we show the uniqueness of the very weak solution and its consistency with the classical solution in appropriate senses. The cases of positive and negative potentials are studied and numerical simulations are given. Finally, one observes so-called "laser heating and cooling" effects depending on the sign of the potential.
\section{Part I: Non-negative potential}
In this section we consider the case when the potential $q$ is non-negative. First, let us fix some notation. For convenience, we write $f\lesssim g$ to mean that there exists a positive constant $C$ such that $f \leq Cg$. Also, let us define
\begin{equation*}
\Vert u(t,\cdot)\Vert_{k}:= \Vert \nabla u(t,\cdot)\Vert_{L^2} + \sum_{l=0}^{k}\Vert \partial_{t}^{l}u(t,\cdot)\Vert_{L^2},
\end{equation*}
for all $k\in\mathbb Z_{+}$. In the case when $k=0$, we simply use $\Vert u(t,\cdot)\Vert$ instead of $\Vert u(t,\cdot)\Vert_{0}$.
Fix $T>0$. In the domain $\Omega:=\left(0,T\right)\times \mathbb{R}^{d}$ we consider the heat equation
\begin{equation}
\label{Equation}
\partial_{t}u(t,x)-\Delta u(t,x) + q(x)u(t,x)=0, \,\, (t,x)\in\Omega,
\end{equation}
with the Cauchy data $u(0,x)=u_{0}(x),$ where the potential $q$ is assumed to be non-negative and singular.
In the case when the potential is a regular function, we have the following lemma.
\begin{lem}\label{Lemma 1}
Let $u_{0}\in H^{1}(\mathbb{R}^d)$ and suppose that $q\in L^{\infty}(\mathbb{R}^d)$ is non-negative. Then, there is a unique solution $u\in C^{1}(\left[0,T\right]; L^{2}) \cap C(\left[0,T\right]; H^{1})$ to (\ref{Equation}) and it satisfies the energy estimate
\begin{equation}
\Vert u(t,\cdot)\Vert \lesssim \left(1+\Vert q\Vert_{L^{\infty}}\right)\Vert u_{0}\Vert_{H^{1}}. \label{Energy estimate}
\end{equation}
\end{lem}
\begin{proof}
By multiplying the equation (\ref{Equation}) by $u_t$ and integrating with respect to $x$, we obtain
\begin{equation}
Re \left(\langle u_{t}(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2} + \langle -\Delta u(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2} + \langle q(\cdot)u(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2} \right)=0. \label{Energy functional}
\end{equation}
One observes
\begin{equation*}
Re \langle u_{t}(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2}=\langle u_{t}(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2}=\Vert u_{t}(t,\cdot)\Vert_{L^2}^{2}.
\end{equation*}
Also, we see that
\begin{equation*}
Re \langle -\Delta u(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2}=\frac{1}{2}\partial_{t}\langle \nabla u(t,\cdot),\nabla u(t,\cdot)\rangle_{L^2}=\frac{1}{2}\partial_{t}\Vert \nabla u(t,\cdot)\Vert_{L^2}^{2}
\end{equation*}
and
\begin{equation*}
Re \langle q(\cdot)u(t,\cdot),u_{t}(t,\cdot)\rangle_{L^2}=\frac{1}{2}\partial_{t}\langle q^{\frac{1}{2}}(\cdot)u(t,\cdot),q^{\frac{1}{2}}(\cdot)u(t,\cdot)\rangle_{L^2}=\frac{1}{2}\partial_{t}\Vert q^{\frac{1}{2}}(\cdot) u(t,\cdot)\Vert_{L^2}^{2}.
\end{equation*}
It follows from (\ref{Energy functional}) that
\begin{equation}
\partial_{t}\left[\Vert \nabla u(t,\cdot)\Vert_{L^2}^{2}+\Vert q^{\frac{1}{2}}(\cdot) u(t,\cdot)\Vert_{L^2}^{2}\right] =-2 \Vert u_{t}(t,\cdot)\Vert_{L^2}^{2}. \label{Energy functional 1}
\end{equation}
Let us denote by
\begin{equation*}
E(t):=\Vert \nabla u(t,\cdot)\Vert_{L^2}^{2}+\Vert q^{\frac{1}{2}}(\cdot) u(t,\cdot)\Vert_{L^2}^{2},
\end{equation*}
the energy functional.
It follows from (\ref{Energy functional 1}) that $E^{\prime}(t) \leq 0$, and thus
\begin{equation*}
E(t)\leq E(0).
\end{equation*}
By taking into account that
$\Vert q^{\frac{1}{2}}(\cdot)u_{0}(\cdot)\Vert_{L^2}^{2}$ can be estimated by
\begin{equation*}
\Vert q^{\frac{1}{2}}(\cdot)u_{0}(\cdot)\Vert_{L^2}^{2}\leq \Vert q(\cdot)\Vert_{L^{\infty}}\Vert u_{0}(\cdot)\Vert_{L^2}^{2},
\end{equation*}
we get
\begin{equation*}
\Vert \nabla u(t,\cdot)\Vert_{L^2}^{2}+\Vert q^{\frac{1}{2}}(\cdot) u(t,\cdot)\Vert_{L^2}^{2} \leq \Vert \nabla u_{0}\Vert_{L^2}^{2}+\Vert q(\cdot)\Vert_{L^{\infty}}\Vert u_{0}\Vert_{L^2}^{2}.
\end{equation*}
Thus, we have
\begin{equation}
\Vert q^{\frac{1}{2}}(\cdot) u(t,\cdot)\Vert_{L^2}^{2}\leq \Vert \nabla u_{0}\Vert_{L^2}^{2}+\Vert q(\cdot)\Vert_{L^{\infty}}\Vert u_{0}\Vert_{L^2}^{2} \label{Estimate qxu}
\end{equation}
and
\begin{equation*}
\Vert \nabla u(t,\cdot)\Vert_{L^2}^{2}\leq \Vert \nabla u_{0}\Vert_{L^2}^{2}+\Vert q(\cdot)\Vert_{L^{\infty}}\Vert u_{0}\Vert_{L^2}^{2},
\end{equation*}
and consequently, it can be seen that
\begin{equation}
\Vert \nabla u(t,\cdot)\Vert_{L^2}\leq \left(1+\Vert q\Vert_{L^{\infty}}^{\frac{1}{2}}\right)^{2}\Vert u_{0}\Vert_{H^{1}}. \label{Estimate grad u}
\end{equation}
To obtain the estimate for $u$, we rewrite the equation (\ref{Equation}) as follows
\begin{equation}
\label{Equation Duhamel}
u_{t}(t,x)-\Delta u(t,x)= -q(x)u(t,x) ,~~~(t,x)\in\left(0,T\right)\times \mathbb{R}^{d}.
\end{equation}
Here, considering $-q(x)u(t,x)$ as a source term, we denote it by $f(t, x):=-q(x)u(t,x)$. By using Duhamel's principle (see, e.g. \cite{Eva98}), we represent the solution to (\ref{Equation Duhamel}) in the form
\begin{equation}
u(t,x)=\phi_{t}\ast u_{0}(x) + \int_{0}^{t}\phi_{t-s}\ast f_{s}(x)ds, \label{Sol Eq Duhamel}
\end{equation}
where $f_{s}=f(s, \cdot)$ and $\phi_{t}=\phi(t, \cdot)$. Here, $\phi$ is the fundamental solution (heat kernel) to the heat equation, and it satisfies
\begin{equation*}
\Vert \phi(t, \cdot)\Vert_{L^{1}}=1.
\end{equation*}
Now, taking the $L^{2}$-norm in (\ref{Sol Eq Duhamel}) and using Young's inequality, we arrive at
\begin{align*}
\Vert u(t,\cdot)\Vert_{L^{2}} & \leq \Vert \phi_{t}\Vert_{L^{1}}\Vert u_{0}\Vert_{L^{2}} + \int_{0}^{t}\Vert \phi_{t-s}\Vert_{L^{1}}\Vert f_{s}\Vert_{L^{2}} ds\\
& \leq \Vert u_{0}\Vert_{L^{2}} + \int_{0}^{T}\Vert f_{s}\Vert_{L^{2}} ds\\
& \leq \Vert u_{0}\Vert_{L^{2}} + \int_{0}^{T}\Vert q(\cdot)u(s,\cdot)\Vert_{L^{2}} ds.
\end{align*}
We estimate the term $\Vert q(\cdot)u(s,\cdot)\Vert_{L^{2}}$ as
\begin{equation*}
\Vert q(\cdot)u(s,\cdot)\Vert_{L^{2}} \leq \Vert q\Vert_{L^{\infty}}^{\frac{1}{2}}\Vert q^{\frac{1}{2}}u(s,\cdot)\Vert_{L^{2}},
\end{equation*}
and using the estimate (\ref{Estimate qxu}), one observes
\begin{equation}
\Vert u(t,\cdot)\Vert_{L^2}\lesssim \left(1+\Vert q\Vert_{L^{\infty}}^{\frac{1}{2}}\right)^{2}\Vert u_{0}\Vert_{H^{1}}. \label{Estimate u}
\end{equation}
Summing the estimates proved above, we conclude (\ref{Energy estimate}).
\end{proof}
\begin{rem}
We can also prove that the estimate
\begin{equation*}
\Vert \partial_{t}^{k}u(t,\cdot)\Vert_{L^2}\lesssim \left(1+\Vert q\Vert_{L^{\infty}}\right)^{k+1}\Vert u_{0}\Vert_{H^{2k+1}},
\end{equation*}
is valid for all $k\geq 0$, by requiring higher regularity on $u_0$. To do so, we denote by $v_{0}:=u$ and its derivatives by $v_{k}:=\partial_{t}^{k}u$, where $u$ is the solution of the Cauchy problem (\ref{Equation}). Using (\ref{Estimate u}) and the property that if $v_{k}$ solves the equation
\begin{equation*}
\partial_{t}v_{k}(t,x)-\Delta v_{k}(t,x) + q(x)v_{k}(t,x)=0,
\end{equation*}
with the initial data $v_{k}(0,x)$, then $v_{k+1}=\partial_{t}v_{k}$ solves the same equation with the initial data
\begin{equation*}
v_{k+1}(0,x)=\Delta v_{k}(0,x)-q(x)v_{k}(0,x),
\end{equation*}
we get our estimate for $\partial_{t}^{k}u$ for all $k\geq 0$.
\end{rem}
To prove the uniqueness and consistency of the very weak solution, we will also need the following lemma.
\begin{lem}
\label{Lemma 2}
Let $u_{0}\in H^{1}(\mathbb{R}^{d})$ and assume that $q\in L^{\infty}(\mathbb{R}^d)$ is non-negative. Then, the estimate
\begin{equation}
\label{Energy estimate 2}
\Vert u(t,\cdot)\Vert_{L^2} \lesssim \Vert u_{0}\Vert_{L^2},
\end{equation}
holds for the unique solution $u\in C^{1}(\left[0,T\right];L^{2})\cap C(\left[0,T\right];H^{1})$ of the Cauchy problem (\ref{Equation}).
\end{lem}
\begin{proof}
Again, by multiplying the equation (\ref{Equation}) by $u$ and integrating over $\mathbb{R}^{d}$ in $x$, we derive
\begin{equation*}
Re \left(\langle u_{t}(t,\cdot),u(t,\cdot)\rangle_{L^2} + \langle -\Delta u(t,\cdot),u(t,\cdot)\rangle_{L^2} + \langle q(\cdot)u(t,\cdot),u(t,\cdot)\rangle_{L^2} \right)=0.
\end{equation*}
Using the similar arguments as in Lemma \ref{Lemma 1}, we obtain
\begin{equation}
\label{Energy functional 2}
\partial_{t}\Vert u(t,\cdot)\Vert_{L^2}^{2} = - 2\Vert \nabla u(t,\cdot)\Vert_{L^2}^{2} - 2\Vert q^{\frac{1}{2}}(\cdot)u(t,\cdot)\Vert_{L^2}^{2} \leq0.
\end{equation}
This ends the proof of the lemma.
\end{proof}
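The monotone decay of the $L^2$ norm underlying \eqref{Energy estimate 2} can also be observed numerically. The following sketch uses an explicit finite-difference scheme on an interval with zero boundary values (standing in for the decay at infinity) and an arbitrary bounded non-negative potential:

```python
import math

# explicit scheme for u_t = u_xx - q(x) u on (0,1), u = 0 at the boundary
N = 100
dx = 1.0 / N
dt = 0.4 * dx * dx                     # CFL: dt <= dx^2/2 keeps the scheme stable
xs = [i * dx for i in range(N + 1)]
q = [5.0 if 0.4 < xi < 0.6 else 0.0 for xi in xs]   # bounded, non-negative potential
u = [math.sin(math.pi * xi) for xi in xs]           # initial data

def l2norm(v):
    return math.sqrt(dx * sum(vi * vi for vi in v))

norms = [l2norm(u)]
for _ in range(2500):                  # integrate up to t = 0.1
    new = [0.0] * (N + 1)
    for i in range(1, N):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
        new[i] = u[i] + dt * (lap - q[i] * u[i])
    u = new
    norms.append(l2norm(u))

print(norms[0], norms[-1])
```

Under the CFL restriction the discrete evolution operator is a contraction in the discrete $L^2$ norm, so the computed norms are non-increasing in time, mirroring \eqref{Energy functional 2}.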
Now, let us show that the Cauchy problem (\ref{Equation}) has a very weak solution. We start by regularising the coefficient $q$ and the initial data $u_0$ using a suitable mollifier $\psi$, generating families of smooth functions $(q_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$. Namely,
\begin{equation*}
q_{\varepsilon}(x)=q\ast \psi_{\varepsilon }(x),~~~~u_{0,\varepsilon}(x)=u_{0}\ast \psi_{\varepsilon }(x),
\end{equation*}
where
\begin{equation*}
\psi_{\varepsilon }(x)=\omega(\varepsilon)^{-d}\psi(x/\omega(\varepsilon)), ~~~\varepsilon\in\left(0,1\right],
\end{equation*}
and $\omega(\varepsilon)$ is a positive function converging to $0$ as $\varepsilon \rightarrow 0$ to be chosen later. The function $\psi$ is a Friedrichs-mollifier, i.e. $\psi\in C_{0}^{\infty}(\mathbb{R}^{d})$, $\psi\geq 0$ and $\int\psi =1$.
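As a one-dimensional illustration, take $q=\delta$ and $\omega(\varepsilon)=\varepsilon$: then $q_{\varepsilon}=\psi_{\varepsilon}$ and $\Vert q_{\varepsilon}\Vert_{L^{\infty}}=\varepsilon^{-1}\psi(0)$, so the regularisation is moderate with $N=1$. A numerical sketch with the standard bump function:

```python
import math

def bump(x):
    """Standard compactly supported bump profile on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# normalise so that the integral equals 1 (trapezoid rule; bump vanishes at +-1)
M = 20000
h = 2.0 / M
grid = [-1.0 + k * h for k in range(M + 1)]
Z = h * sum(bump(xi) for xi in grid)
psi = lambda x: bump(x) / Z                 # Friedrichs mollifier

def sup_psi_eps(eps):
    """sup of psi_eps(x) = eps^{-1} psi(x/eps), attained at x = 0."""
    return psi(0.0) / eps

for eps in (0.1, 0.01, 0.001):
    # for q = delta we have q_eps = psi_eps, so the sup norm grows like 1/eps:
    print(eps, sup_psi_eps(eps), eps * sup_psi_eps(eps))
```

The product $\varepsilon \cdot \Vert \psi_{\varepsilon}\Vert_{L^{\infty}}$ stays constant, which is exactly the moderateness assumption (\ref{Moderetness hyp coeff}) with $N=1$ in dimension $d=1$.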
\begin{assum}
On the regularisation of the coefficient $q$ and the initial data $u_{0}$ we make the following assumptions:
there exist $N, N_{0}\in \mathbb{N}_{0}$ such that
\begin{equation}
\label{Moderetness hyp data}
\Vert u_{0,\varepsilon}\Vert_{H^1}\leq C_{0}\omega(\varepsilon)^{-N_0},
\end{equation}
and
\begin{equation}
\label{Moderetness hyp coeff}
\Vert q_{\varepsilon}\Vert_{L^{\infty}}\leq C\omega(\varepsilon)^{-N},
\end{equation}
for $\varepsilon\in(0, 1]$.
\end{assum}
\begin{rem}
We note that such assumptions are natural for distributions. Indeed, by the structure theorems for distributions (see, e.g. \cite{FJ98}), we know that every compactly supported distribution can be represented by a
finite sum of (distributional) derivatives of continuous functions. Precisely, for $T\in \mathcal{E}'(\mathbb{R}^{d})$ we can find $n\in \mathbb{N}$ and functions $f_{\alpha}\in C(\mathbb{R}^{d})$ such that $T=\sum_{\vert \alpha\vert \leq n}\partial^{\alpha}f_{\alpha}$. The convolution of $T$ with a mollifier yields
\begin{equation*}
T\ast\psi_{\varepsilon}=\sum_{\vert \alpha\vert \leq n}\partial^{\alpha}f_{\alpha}\ast\psi_{\varepsilon}=\sum_{\vert \alpha\vert \leq n}f_{\alpha}\ast\partial^{\alpha}\psi_{\varepsilon}=\sum_{\vert \alpha\vert \leq n}\omega(\varepsilon)^{-\vert\alpha\vert}f_{\alpha}\ast\left(\omega(\varepsilon)^{-1}\partial^{\alpha}\psi(x/\omega(\varepsilon))\right).
\end{equation*}
It is clear that $T$ satisfies the above assumptions.
\end{rem}
\subsection{Existence of very weak solutions}
In this subsection we deal with the existence of very weak solutions. We start by recalling the definition of moderateness.
\begin{defn}[Moderateness] \label{Def:Moderetness}
Let $X$ be a Banach space with the norm $\|\cdot\|_{X}$. Then we say that a net of functions $(f_{\varepsilon})_{\varepsilon}$ from $X$ is $X$-moderate, if there exist $N\in\mathbb{N}_{0}$ and $c>0$ such that
\begin{equation*}
\Vert f_{\varepsilon}\Vert_{X} \leq c\omega(\varepsilon)^{-N}.
\end{equation*}
\end{defn}
In what follows, we will use particular cases of $X$, namely ${H^1}$-moderate, ${L^{\infty}}$-moderate, and $C(\left[0,T\right];H^{1})$-moderate families. For the latter, we simply write $C$-moderate.
\begin{rem}
By assumptions, $(u_{0,\varepsilon})_{\varepsilon}$ and $(q_{\varepsilon})_{\varepsilon}$ are moderate.
\end{rem}
Let us now fix a convention: by writing $q\geq 0$, we mean that all regularisations $q_\varepsilon$ in our calculus are non-negative functions.
\begin{defn}
Let $q\geq 0$. The net $(u_{\varepsilon})_{\varepsilon}$ is said to be a very weak solution to the Cauchy problem (\ref{Equation}), if there exist an ${L^{\infty}}$-moderate regularisation $(q_{\varepsilon})_{\varepsilon}$ of the coefficient $q$ and $H^1$-moderate regularisation $(u_{0,\varepsilon})_{\varepsilon}$ of the initial function $u_0$, such that $(u_{\varepsilon})_{\varepsilon}$ solves the regularized equation
\begin{equation}
\label{Regularized equation}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) + q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0, ~~~(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},
\end{equation}
with the Cauchy data $u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x),$ for all $\varepsilon\in\left(0,1\right]$, and is $C$-moderate.
\end{defn}
With this setup the existence of a very weak solution becomes straightforward. But we will also analyse its properties later on.
\begin{thm}[Existence of a very weak solution]
Let $q\geq 0$. Assume that the regularisations of the coefficient $q$ and the Cauchy data $u_{0}$ satisfy the assumptions (\ref{Moderetness hyp data}) and (\ref{Moderetness hyp coeff}). Then the Cauchy problem (\ref{Equation}) has a very weak solution.
\end{thm}
\begin{proof}
Using the moderateness assumptions (\ref{Moderetness hyp data}), (\ref{Moderetness hyp coeff}), and the energy estimate (\ref{Energy estimate}), we arrive at
\begin{align*}
\Vert u_{\varepsilon}(t,\cdot)\Vert & \lesssim \omega(\varepsilon)^{-N} \times \omega(\varepsilon)^{-N_{0}}\\
& \lesssim \omega(\varepsilon)^{-N-N_{0}},
\end{align*}
concluding that $(u_{\varepsilon})_{\varepsilon}$ is $C$-moderate.
\end{proof}
\subsection{Uniqueness results}
In this subsection we discuss uniqueness of the very weak solution to the Cauchy problem (\ref{Equation}) for different cases of regularity of the potential $q$.
\subsubsection{\textbf{The classical case}} In the case when $q\in C^{\infty}(\mathbb{R}^{d})$, we require further conditions on the mollifiers to ensure uniqueness.
In the sequel, we are interested in families of mollifiers with $n$ vanishing moments, defined as follows.
\begin{defn} \label{defn moments}
\leavevmode
\begin{itemize}
\item We denote by $\mathbb{A}_{n}$, the set of mollifiers defined by
\begin{equation}
\mathbb{A}_{n}=\left\lbrace \text{Friedrichs-mollifiers } \psi ~:~ \int_{\mathbb{R}^d}x^{k}\psi(x)dx=0 ~\text{ for }~ 1\leq k\leq n\right\rbrace. \label{Moments condition}
\end{equation}
\item We say that $\psi\in \mathbb{A}_{\infty}$, if $\psi\in \mathbb{A}_{n}$ for all $n\in \mathbb{N}$.
\end{itemize}
\end{defn}
\begin{rem}
To construct such sets of mollifiers, we consider a Friedrichs-mollifier $\psi$ and set
\begin{equation*}
\Phi(x)=a_{0}\psi(x)+a_{1}\psi^{\prime}(x)+\dots+a_{n-1}\psi^{(n-1)}(x),
\end{equation*}
where the constants $a_{0}, \dots, a_{n-1}$ are determined by the conditions in (\ref{Moments condition}).
\end{rem}
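In the one-dimensional symmetric case this construction can be made explicit: odd moments of an even mollifier vanish automatically, and since $\int x^{2}\psi^{\prime\prime}(x)\,dx=2\int\psi(x)\,dx=2$, the choice $\Phi=\psi+a_{2}\psi^{\prime\prime}$ with $a_{2}=-\frac{1}{2}\int x^{2}\psi(x)\,dx$ removes the second (and, by symmetry, the third) moment. A numerical check:

```python
import math

def bump(x):
    # standard bump profile supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

M = 4000
h = 2.4 / M
grid = [-1.2 + k * h for k in range(M + 1)]
Z = h * sum(bump(x) for x in grid)
psi = [bump(x) / Z for x in grid]            # normalised symmetric mollifier

# second derivative by central differences (psi vanishes near the interval ends)
d2 = [0.0] * (M + 1)
for i in range(1, M):
    d2[i] = (psi[i - 1] - 2 * psi[i] + psi[i + 1]) / (h * h)

def moment(f, k):
    return h * sum(x ** k * fi for x, fi in zip(grid, f))

a2 = -moment(psi, 2) / 2.0                   # from  mu_2 + 2*a_2 = 0
Phi = [p + a2 * d for p, d in zip(psi, d2)]  # candidate with vanishing moments

print([moment(Phi, k) for k in range(4)])    # ~ unit mass, moments 1-3 vanish
```

The resulting $\Phi$ keeps unit mass while its first three moments vanish, so it belongs to $\mathbb{A}_{3}$.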
\begin{lem} \label{lem q_eps-q}
For $N\in \mathbb{N}$, let $\psi\in\mathbb{A}_{N-1}$ and assume that $q\in C^{\infty}(\mathbb{R}^{d})$. Then, the estimate
\begin{equation}
\vert q_{\varepsilon}(x)-q(x)\vert \leq C\omega^{N}(\varepsilon) \label{Estimate q_eps-q}
\end{equation}
holds true for all $x\in \mathbb{R}^{d}$.
\end{lem}
\begin{proof}
Let $x\in \mathbb{R}^{d}$. We have
\begin{equation*}
\vert q_{\varepsilon}(x)-q(x)\vert \leq \omega^{-d}(\varepsilon)\int_{\mathbb{R}^{d}}\vert q(y)-q(x)\vert\psi\left(\omega^{-1}(\varepsilon)(y-x)\right) dy.
\end{equation*}
Making the change of variables $z=\omega^{-1}(\varepsilon)(y-x)$, we get
\begin{equation*}
\vert q_{\varepsilon}(x)-q(x)\vert \leq \int_{\mathbb{R}^{d}}\vert q(x+\omega(\varepsilon)z)-q(x)\vert\psi(z) dz.
\end{equation*}
Expanding $q$ in a Taylor series to order $N-1$, we get
\begin{equation*}
q(x+\omega(\varepsilon)z)-q(x)=\sum_{k=1}^{N-1} \frac{1}{k!}D^{(k)}q(x)(\omega(\varepsilon)z)^{k}+\mathcal{O}(\omega^{N}(\varepsilon)).
\end{equation*}
We get our estimate provided that the first $N-1$ moments of the mollifier $\psi$ vanish, finishing the proof of the lemma.
\end{proof}
To make things clear in what follows, we briefly repeat our regularisation nets. We regularise the coefficient $q$ and the initial data $u_0$ using suitable mollifiers $\psi, \Tilde{\psi}$, generating families of smooth functions $(q_{\varepsilon})_{\varepsilon}, (\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}, (\Tilde{u}_{0,\varepsilon})_{\varepsilon}$. Namely,
\begin{equation*}
q_{\varepsilon}(x)=q\ast \psi_{\varepsilon }(x),~~~~u_{0,\varepsilon}(x)=u_{0}\ast \psi_{\varepsilon }(x),
\end{equation*}
\begin{equation*}
\Tilde{q}_{\varepsilon}(x)=q\ast \Tilde{\psi}_{\varepsilon }(x),~~~~\Tilde{u}_{0,\varepsilon}(x)=u_{0}\ast \Tilde{\psi}_{\varepsilon }(x),
\end{equation*}
where
\begin{equation*}
\psi_{\varepsilon }(x)=\omega(\varepsilon)^{-d}\psi(x/\omega(\varepsilon)), ~~~\varepsilon\in\left(0,1\right],
\end{equation*}
\begin{equation*}
\Tilde{\psi}_{\varepsilon }(x)=\omega(\varepsilon)^{-d}\Tilde{\psi}(x/\omega(\varepsilon)), ~~~\varepsilon\in\left(0,1\right],
\end{equation*}
and $\omega(\varepsilon)$ is a positive function converging to $0$ as $\varepsilon \rightarrow 0$ to be chosen later.
\begin{defn}
We say that the very weak solution to the Cauchy problem (\ref{Equation}) is unique, if for all $\psi, \Tilde{\psi}\in \mathbb{A}_{\infty}$, such that
\begin{equation}\label{A-negl}
\Vert u_{0,\varepsilon} - \Tilde{u}_{0,\varepsilon}\Vert_{L^2} \lesssim \omega^{k}(\varepsilon) \,\, \left(\hbox{and} \,\, \Vert q_{\varepsilon} - \Tilde{q}_{\varepsilon}\Vert_{L^{\infty}} \lesssim \omega^{k}(\varepsilon)\right),
\end{equation}
for all $k>0$, we have
\begin{equation*}
\Vert u_{\varepsilon}(t,\cdot)-\Tilde{u}_{\varepsilon}(t,\cdot)\Vert_{L^{2}} \leq \omega^{N}(\varepsilon),
\end{equation*}
for all $N\in \mathbb{N}$, where $(u_{\varepsilon})_{\varepsilon}$ and $(\Tilde{u}_{\varepsilon})_{\varepsilon}$ solve, respectively, the families of the Cauchy problems
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) + q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x),
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}\Tilde{u}_{\varepsilon}(t,x)-\Delta \Tilde{u}_{\varepsilon}(t,x) + \Tilde{q}_{\varepsilon}(x)\Tilde{u}_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
\Tilde{u}_{\varepsilon}(0,x)=\Tilde{u}_{0,\varepsilon}(x).
\end{array}
\right.
\end{equation*}
Also, we call families of functions satisfying the properties \eqref{A-negl} $\mathbb A_{\infty}$--negligible initial functions and coefficients, respectively.
\end{defn}
\begin{rem}
\label{Rem-negl}
We note that for any two $\psi, \Tilde{\psi}\in \mathbb{A}_{\infty}$ the difference of the corresponding regularisations of the coefficient $q\in C^{\infty}(\mathbb{R}^{d})$ is an $\mathbb A_{\infty}$--negligible function, that is,
$$
\Vert q_{\varepsilon} - \Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\lesssim \omega^{k}(\varepsilon),
$$
for all $k>0$, for all $\varepsilon\in(0, 1]$. Moreover, $(q_{\varepsilon}-q)_{\varepsilon\in(0, 1]}$ is also an $\mathbb A_{\infty}$--negligible family of functions.
\end{rem}
Note that the result of this remark holds for smooth functions. In general, it also makes sense for other classes of regular functions and distributions. For a more detailed analysis of the topic, the reader is referred to the paper \cite{GR15}.
\begin{thm}
\label{thm unicity classic}
Let $T>0$. Assume that a non-negative function $q\in C^{\infty}(\mathbb{R}^{d})$ and $u_{0}\in H^{1}(\mathbb{R}^{d})$ satisfy the conditions (\ref{Moderetness hyp coeff}) and (\ref{Moderetness hyp data}), respectively. Then, the very weak solution of the Cauchy problem (\ref{Equation}) is unique.
\end{thm}
\begin{proof}
Let $\psi, \Tilde{\psi}\in \mathbb{A}_{\infty}$ and consider $(q_{\varepsilon})_{\varepsilon}, (\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, $(\Tilde{u}_{0,\varepsilon})_{\varepsilon}$ the regularisations of the coefficient $q$ and the data $u_0$ with respect to $\psi$ and $\Tilde{\psi}$. Assume that
\begin{equation}
\Vert u_{0,\varepsilon} - \Tilde{u}_{0,\varepsilon}\Vert_{L^2} \leq C_{k}\omega^{k}(\varepsilon),
\end{equation}
for all $k>0$. Then, $u_{\varepsilon}$ and $\Tilde{u}_{\varepsilon}$, the solutions to the corresponding Cauchy problems, satisfy
\begin{equation}
\left\lbrace
\begin{array}{l}
\partial_{t}(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x)-\Delta (u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x) + q_{\varepsilon}(x)(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x)=f_{\varepsilon}(t,x),\\
(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(0,x)=(u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon})(x), \label{Equation uniqueness classic}
\end{array}
\right.
\end{equation}
with
\begin{equation*}
f_{\varepsilon}(t,x)=(\Tilde{q}_{\varepsilon}(x)-q_{\varepsilon}(x))\Tilde{u}_{\varepsilon}(t,x).
\end{equation*}
Let us denote by $U_{\varepsilon}(t,x):=u_{\varepsilon}(t,x)-\Tilde{u}_{\varepsilon}(t,x)$ the solution to the problem (\ref{Equation uniqueness classic}). By Duhamel's principle, $U_{\varepsilon}$ is given by
\begin{equation*}
U_{\varepsilon}(t, x)=W_{\varepsilon}(t, x) + \int_{0}^{t}V_{\varepsilon}(x,t-s;s)ds,
\end{equation*}
where $W_{\varepsilon}(t, x)$ is the solution to the problem
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}W_{\varepsilon}(t, x)-\Delta W_{\varepsilon}(t, x) + q_{\varepsilon}(x)W_{\varepsilon}(t, x)=0,\\
W_{\varepsilon}(0, x)=(u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon})(x),
\end{array}
\right.
\end{equation*}
and $V_{\varepsilon}(x,t;s)$ solves
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}V_{\varepsilon}(x,t;s)-\Delta V_{\varepsilon}(x,t;s) + q_{\varepsilon}(x)V_{\varepsilon}(x,t;s)=0,\\
V_{\varepsilon}(x,0;s)=f_{\varepsilon}(s,x).
\end{array}
\right.
\end{equation*}
Taking the $L^{2}$-norm of $U_{\varepsilon}$ and using (\ref{Energy estimate 2}) to estimate $V_{\varepsilon}$ and $W_{\varepsilon}$, we arrive at
\begin{align*}
\Vert U_{\varepsilon}(t, \cdot)\Vert_{L^2} & \leq \Vert W_{\varepsilon}(t, \cdot)\Vert_{L^2} + \int_{0}^{T}\Vert V_{\varepsilon}(\cdot,t-s;s)\Vert_{L^2} ds\\
& \lesssim \Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^2} + \int_{0}^{T}\Vert f_{\varepsilon}(s,\cdot)\Vert_{L^2} ds\\
& \lesssim \Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^2} + \Vert \Tilde{q}_{\varepsilon}-q_{\varepsilon}\Vert_{L^{\infty}}\int_{0}^{T}\Vert \Tilde{u}_{\varepsilon}(s,\cdot)\Vert_{L^2} ds.
\end{align*}
Since the net $(\Tilde{u}_{\varepsilon})_{\varepsilon}$ is moderate, the uniqueness of the very weak solution follows from the assumption that $(u_{0,\varepsilon} - \Tilde{u}_{0,\varepsilon})_{\varepsilon\in(0, 1]}$ is an $\mathbb A_{\infty}$--negligible family of initial functions, that is,
\begin{equation*}
\Vert u_{0,\varepsilon} - \Tilde{u}_{0,\varepsilon}\Vert_{L^2} \leq C_{k}\omega^{k}(\varepsilon) \text{~~~for all~~} k>0,
\end{equation*}
together with Lemma \ref{lem q_eps-q} and Remark \ref{Rem-negl}, which yield the $\mathbb A_{\infty}$--negligibility of the difference of the coefficient families $(\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(q_{\varepsilon})_{\varepsilon}$. This ends the proof of the theorem.
\end{proof}
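The Duhamel decomposition used in the proof has an exact discrete analogue, which is easy to sanity-check numerically. The following sketch (with an illustrative grid, potential and source of our own choosing, none of which come from the text) verifies that direct time stepping of an inhomogeneous problem coincides with the homogeneous flow of the data plus the superposed homogeneous flows of the source contributions:

```python
import numpy as np

# illustrative grid and data; all choices here are our own
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2          # CFL-stable explicit step
nt = 200

q = 1.0 + x**2            # a smooth bounded potential
f = np.sin(np.pi * x)     # time-independent source, for simplicity
u0 = x * (1.0 - x)

def step(u):
    """One explicit step of the homogeneous flow u_t - u_xx + q u = 0."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u + dt * (lap - q * u)

# direct time stepping for the inhomogeneous problem u_t - u_xx + q u = f
u = u0.copy()
for _ in range(nt):
    u = step(u) + dt * f

# Duhamel: homogeneous flow of u0 (the analogue of W), plus the accumulated
# homogeneous flows of the source contributions dt*f (the analogue of V)
w = u0.copy()
for _ in range(nt):
    w = step(w)
s = np.zeros_like(u0)
p = dt * f.copy()
for _ in range(nt):
    s = s + p
    p = step(p)

assert np.allclose(u, w + s)
```

Since the scheme is linear, the two computations agree up to rounding, mirroring the splitting $U_{\varepsilon}=W_{\varepsilon}+\int_0^t V_{\varepsilon}(\cdot,t-s;s)\,ds$.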
\subsubsection{\textbf{The singular case}}
In the case when $q$ is singular, we prove uniqueness in the sense of the following definition.
\begin{defn} \label{defn:uniqueness singular case}
We say that the very weak solution to the Cauchy problem (\ref{Equation}) is unique, if for all families $(q_{\varepsilon})_{\varepsilon}$, $(\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, $(\Tilde{u}_{0,\varepsilon})_{\varepsilon}$, regularisations of the coefficient $q$ and $u_0$, satisfying
\begin{equation*}
\Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\leq C_{k}\varepsilon^{k} \text{~~for all~~} k>0
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^{2}}\leq C_{l}\varepsilon^{l} \text{~~for all~~} l>0,
\end{equation*}
we have
\begin{equation*}
\Vert u_{\varepsilon}(t,\cdot)-\Tilde{u}_{\varepsilon}(t,\cdot)\Vert_{L^{2}} \leq C_{N}\varepsilon^{N},
\end{equation*}
for all $N>0$, where $(u_{\varepsilon})_{\varepsilon}$ and $(\Tilde{u}_{\varepsilon})_{\varepsilon}$ solve, respectively, the families of the Cauchy problems
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) + q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x),
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}\Tilde{u}_{\varepsilon}(t,x)-\Delta \Tilde{u}_{\varepsilon}(t,x) + \Tilde{q}_{\varepsilon}(x)\Tilde{u}_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
\Tilde{u}_{\varepsilon}(0,x)=\Tilde{u}_{0,\varepsilon}(x).
\end{array}
\right.
\end{equation*}
\end{defn}
We note that the hypotheses of this definition are fulfilled, in particular, when $q$ is smooth, but this is not the only case. For further suitable examples of the coefficient $q$, we refer to \cite{GR15}, where a number of classes of regular and distributional $q$ are analysed.
\begin{thm}\label{thm uniqueness}
Let $T>0$. Assume that $q\geq0$ and $u_{0}\in H^{1}(\mathbb{R}^{d})$ satisfy the moderateness assumptions (\ref{Moderetness hyp coeff}) and (\ref{Moderetness hyp data}), respectively. Then, the very weak solution to the Cauchy problem (\ref{Equation}) is unique.
\end{thm}
\begin{proof}
Let $(q_{\varepsilon})_{\varepsilon}$, $(\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, $(\Tilde{u}_{0,\varepsilon})_{\varepsilon}$ be regularisations of the coefficient $q$ and the data $u_0$, satisfying
\begin{equation*}
\Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\leq C_{k}\varepsilon^{k}, \text{~~for all~~} k>0,
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^{2}}\leq C_{l}\varepsilon^{l}, \text{~~for all~~} l>0.
\end{equation*}
Then, $(u_{\varepsilon})_{\varepsilon}$ and $(\Tilde{u}_{\varepsilon})_{\varepsilon}$, the solutions to the related Cauchy problems, satisfy
\begin{equation}
\left\lbrace
\begin{array}{l}
\partial_{t}(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x)-\Delta (u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x) + q_{\varepsilon}(x)(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x)=f_{\varepsilon}(t,x),\\
(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(0,x)=(u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon})(x), \label{Equation uniqueness}
\end{array}
\right.
\end{equation}
with
\begin{equation*}
f_{\varepsilon}(t,x)=(\Tilde{q}_{\varepsilon}(x)-q_{\varepsilon}(x))\Tilde{u}_{\varepsilon}(t,x).
\end{equation*}
Let us denote by $U_{\varepsilon}(t,x):=u_{\varepsilon}(t,x)-\Tilde{u}_{\varepsilon}(t,x)$ the solution to the equation (\ref{Equation uniqueness}). Using similar arguments as in Theorem \ref{thm unicity classic}, we get
\begin{equation*}
\Vert U_{\varepsilon}(t, \cdot)\Vert_{L^2} \lesssim \Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^2} + \Vert \Tilde{q}_{\varepsilon}-q_{\varepsilon}\Vert_{L^{\infty}}\int_{0}^{T}\Vert \Tilde{u}_{\varepsilon}(s,\cdot)\Vert_{L^2}\, ds.
\end{equation*}
Since the family $(\Tilde{u}_{\varepsilon})_{\varepsilon}$ is a very weak solution to the Cauchy problem (\ref{Equation}), it is moderate, i.e. there exists $N_0 \in \mathbb{N}_{0}$ such that
\begin{equation*}
\Vert \Tilde{u}_{\varepsilon}(s,\cdot)\Vert_{L^2} \leq c\omega^{-N_0}(\varepsilon).
\end{equation*}
On the other hand, we have that $\Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\leq C_{k}\varepsilon^{k}$, for all $k>0$, and $\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^{2}}\leq C_{l}\varepsilon^{l}$, for all $l>0$. Thus, we obtain that
\begin{equation*}
\Vert U_{\varepsilon}(t, \cdot)\Vert_{L^2}:=\Vert u_{\varepsilon}(t,\cdot)-\Tilde{u}_{\varepsilon}(t,\cdot)\Vert_{L^2} \lesssim \varepsilon^{N},
\end{equation*}
for all $N>0$, showing the uniqueness of the very weak solution.
\end{proof}
\subsection{Consistency with the classical case}
Now we show that if the classical solution of the Cauchy problem (\ref{Equation}) given by Lemma \ref{Lemma 1} exists, then the very weak solution recaptures it.
\begin{thm} \label{thm:consistency positive case}
Let $u_{0}\in H^{1}(\mathbb{R}^{d})$. Assume that $q\in L^{\infty}(\mathbb{R}^{d})$ is non-negative and consider the Cauchy problem
\begin{equation}
\left\lbrace
\begin{array}{l}
u_{t}(t,x)-\Delta u(t,x) + q(x)u(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
u(0,x)=u_{0}(x). \label{Equation with reg. coeff}
\end{array}
\right.
\end{equation}
Let $(u_{\varepsilon})_{\varepsilon}$ be a very weak solution of (\ref{Equation with reg. coeff}). Then, for any regularising families $(q_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, the net $(u_{\varepsilon})_{\varepsilon}$ converges in $L^{2}$ as $\varepsilon \rightarrow 0$ to the classical solution of the Cauchy problem (\ref{Equation with reg. coeff}).
\end{thm}
\begin{proof}
Consider the classical solution $u$ to
\begin{equation*}
\left\lbrace
\begin{array}{l}
u_{t}(t,x)-\Delta u(t,x) + q(x)u(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
u(0,x)=u_{0}(x).
\end{array}
\right.
\end{equation*}
Note that for the very weak solution there is a representation $(u_{\varepsilon})_{\varepsilon}$ such that
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) + q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left[0,T\right]\times \mathbb{R}^{d},\\
u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x).
\end{array}
\right.
\end{equation*}
Taking the difference, we get
\begin{equation}
\left\lbrace
\begin{array}{l}
\partial_{t}(u-u_{\varepsilon})(t,x)-\Delta (u-u_{\varepsilon})(t,x) + q_{\varepsilon}(x)(u-u_{\varepsilon})(t,x)=\eta_{\varepsilon}(t,x),\\
(u-u_{\varepsilon})(0,x)=(u_{0}-u_{0,\varepsilon})(x), \label{Equation consistency}
\end{array}
\right.
\end{equation}
where
\begin{equation*}
\eta_{\varepsilon}(t,x)=(q_{\varepsilon}(x)-q(x))u(t,x).
\end{equation*}
Let us denote $U_{\varepsilon}(t,x):=(u-u_{\varepsilon})(t,x)$ and let $W_{\varepsilon}(t, x)$ be the solution to the auxiliary homogeneous problem
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}W_{\varepsilon}(t, x)-\Delta W_{\varepsilon}(t, x) + q_{\varepsilon}(x)W_{\varepsilon}(t, x)=0,\\
W_{\varepsilon}(0, x)=(u_{0}-u_{0,\varepsilon})(x).
\end{array}
\right.
\end{equation*}
Then, by Duhamel's principle, the solution to (\ref{Equation consistency}) is given by
\begin{equation}
U_{\varepsilon}(t, x)=W_{\varepsilon}(t, x) + \int_{0}^{t}V_{\varepsilon}(x,t-s;s)ds, \label{Duhamel consistency}
\end{equation}
where $V_{\varepsilon}(x,t;s)$ is the solution to the problem
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}V_{\varepsilon}(x,t;s)-\Delta V_{\varepsilon}(x,t;s) + q_{\varepsilon}(x)V_{\varepsilon}(x,t;s)=0,\\
V_{\varepsilon}(x,0;s)=\eta_{\varepsilon}(t,x).
\end{array}
\right.
\end{equation*}
As in Theorem \ref{thm uniqueness}, taking the $L^{2}$-norm in (\ref{Duhamel consistency}) and using (\ref{Energy estimate 2}) to estimate $V_{\varepsilon}$ and $W_{\varepsilon}$, we get
\begin{align*}
\Vert U_{\varepsilon}(t, \cdot)\Vert_{L^2} & \leq \Vert W_{\varepsilon}(t, \cdot)\Vert_{L^2} + \int_{0}^{T}\Vert V_{\varepsilon}(\cdot,t-s;s)\Vert_{L^2} ds\\
& \lesssim \Vert u_{0}-u_{0,\varepsilon}\Vert_{L^2} + \int_{0}^{T}\Vert \eta_{\varepsilon}(s,\cdot)\Vert_{L^2} ds\\
& \lesssim \Vert u_{0}-u_{0,\varepsilon}\Vert_{L^2} + \Vert q_{\varepsilon}-q\Vert_{L^{\infty}}\int_{0}^{T}\Vert u(s,\cdot)\Vert_{L^2} ds,
\end{align*}
and taking into account that
\begin{equation*}
\Vert q_{\varepsilon}-q\Vert_{L^{\infty}} \rightarrow 0 \text{~~as~~} \varepsilon\rightarrow 0
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-u_{0}\Vert_{L^{2}} \rightarrow 0 \text{~~as~~} \varepsilon\rightarrow 0,
\end{equation*}
we conclude that $u_{\varepsilon}$ converges to $u$ in $L^{2}$ as $\varepsilon\to0$.
\end{proof}
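The convergence just proved can also be observed numerically. The sketch below (with grid, potential and mollifier of our own choosing; a hat kernel is used instead of a smooth Friedrichs mollifier purely for simplicity) solves the problem with $q$ and with its regularisation $q_{\varepsilon}$, and checks that the $L^2$-difference of the solutions decreases with $\varepsilon$:

```python
import numpy as np

def solve(q, u0, dx, dt, nt):
    """Explicit finite differences for u_t - u_xx + q(x) u = 0, zero Dirichlet BC."""
    u = u0.copy()
    for _ in range(nt):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * (lap - q * u)
    return u

nx = 401
x = np.linspace(0.0, 10.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2            # CFL-stable explicit step
nt = int(0.5 / dt)          # integrate up to t = 0.5

q = np.maximum(0.0, 1.0 - np.abs(x - 5.0))   # continuous, bounded, non-negative
u0 = np.exp(-20.0 * (x - 5.0) ** 2)
u0[0] = u0[-1] = 0.0
u_ref = solve(q, u0, dx, dt, nt)

def mollify(f, eps):
    # hat kernel of width eps, normalised to total mass 1 on the grid
    y = np.arange(-eps, eps + dx / 2, dx)
    ker = np.maximum(0.0, 1.0 - np.abs(y) / eps)
    ker = ker / ker.sum()
    return np.convolve(f, ker, mode="same")

errs = []
for eps in (0.8, 0.2):
    u_eps = solve(mollify(q, eps), u0, dx, dt, nt)
    errs.append(np.sqrt(np.sum((u_eps - u_ref) ** 2) * dx))

# smaller eps => q_eps closer to q in sup-norm => smaller L^2 error
assert errs[1] < errs[0]
```

This mirrors the estimate in the proof: the $L^2$-error of the solutions is controlled by $\Vert q_{\varepsilon}-q\Vert_{L^{\infty}}$ and $\Vert u_{0,\varepsilon}-u_{0}\Vert_{L^{2}}$.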
\section{Part II: Negative potential}
\label{NP}
In this part we aim to study the case when the potential is negative and to show that the problem is still well-posed. Namely, we consider the Cauchy problem for the heat equation
\begin{equation}
\label{Equation 2}
\left\lbrace
\begin{array}{l}
\partial_{t}u(t,x)-\Delta u(t,x) - q(x)u(t,x)=0, \,\,\,(t,x)\in\left(0,T\right)\times \mathbb{R}^{d}, \\
u(0,x)=u_{0}(x),
\end{array}
\right.
\end{equation}
where $q$ is non-negative.
In the classical case, we have the following energy estimates for the solution of the problem \eqref{Equation 2}.
\begin{lem} \label{Lemma 3}
Let $u_{0}\in L^{2}(\mathbb{R}^d)$ and suppose that $q\in L^{\infty}(\mathbb{R}^d)$ is non-negative. Then, there is a unique solution $u\in C(\left[0,T\right];L^{2})$ to (\ref{Equation 2}) and it satisfies the estimate
\begin{equation}
\Vert u(t,\cdot)\Vert_{L^2} \lesssim \exp{\left( t\Vert q\Vert_{L^{\infty}} \right)} \Vert u_0\Vert_{L^2}, \label{Energy estimate 3}
\end{equation}
for all $t\in [0,T]$.
\end{lem}
\begin{proof}
Multiplying the equation in (\ref{Equation 2}) by $u$, integrating with respect to $x$, and taking the real part, we obtain
\begin{equation*}
\mathrm{Re} \left(\langle u_{t}(t,\cdot),u(t,\cdot)\rangle_{L^2} + \langle -\Delta u(t,\cdot),u(t,\cdot)\rangle_{L^2} - \langle q(\cdot)u(t,\cdot),u(t,\cdot)\rangle_{L^2} \right)=0,
\end{equation*}
for all $t\in [0,T]$. Using similar arguments as in Lemma \ref{Lemma 1} and noting that the term $\Vert q(\cdot)u(t,\cdot)\Vert_{L^2}$ can be estimated by $\Vert q\Vert_{L^{\infty}} \Vert u(t,\cdot)\Vert_{L^2}$, we get
\begin{equation*}
\partial_{t}\Vert u(t,\cdot)\Vert_{L^2} \lesssim \Vert q\Vert_{L^{\infty}} \Vert u(t,\cdot)\Vert_{L^2},
\end{equation*}
for all $t\in [0,T]$. The desired estimate follows by the application of Gronwall's lemma.
\end{proof}
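On the discrete level, the estimate \eqref{Energy estimate 3} can be checked by a direct experiment (the grid, potential and data below are illustrative choices of ours, not taken from the text): for a CFL-stable explicit scheme, the discrete $L^2$-norm stays below $e^{t\Vert q\Vert_{L^{\infty}}}\Vert u_0\Vert_{L^2}$ at every step.

```python
import numpy as np

nx = 201
x = np.linspace(0.0, 10.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2            # CFL-stable explicit step
nt = int(1.0 / dt)          # integrate up to t = 1

q = np.exp(-(x - 5.0) ** 2)             # a bounded non-negative potential
u = np.exp(-10.0 * (x - 4.0) ** 2)      # initial data
u[0] = u[-1] = 0.0

def l2(v):
    return np.sqrt(np.sum(v**2) * dx)

norm0 = l2(u)
for n in range(nt):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (lap + q * u)          # note the plus sign: u_t - u_xx - q u = 0
    t = (n + 1) * dt
    # discrete analogue of (Energy estimate 3)
    assert l2(u) <= np.exp(t * q.max()) * norm0 * (1.0 + 1e-8)
```

The bound holds step by step because the explicit heat update is $L^2$-nonexpansive under the CFL condition, while the potential contributes at most a factor $1+\Delta t\,\Vert q\Vert_{L^\infty}\le e^{\Delta t\Vert q\Vert_{L^\infty}}$ per step, which is the discrete form of the Gronwall argument above.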
Let us now assume that the potential $q$ and the initial data $u_0$ are singular, and consider the Cauchy problem for the heat equation
\begin{equation}
\label{Equation 3}
\left\lbrace
\begin{array}{l}
\partial_{t}u(t,x)-\Delta u(t,x) - q(x)u(t,x)=0, \,\,\,(t,x)\in\left(0,T\right)\times \mathbb{R}^{d}, \\
u(0,x)=u_{0}(x).
\end{array}
\right.
\end{equation}
In order to prove the existence of a very weak solution to (\ref{Equation 3}), we proceed as in the case of the positive potential. We start by regularising the equation in (\ref{Equation 3}). In other words, using
\begin{equation*}
\psi_{\varepsilon }(x)=\omega(\varepsilon)^{-d}\psi(x/\omega(\varepsilon)), ~~~\varepsilon\in\left(0,1\right],
\end{equation*}
where $\psi$ is a Friedrichs mollifier and $\omega$ is a positive function converging to $0$ as $\varepsilon \rightarrow 0$, to be chosen later, we regularise $q$ and $u_0$, obtaining the nets $(q_{\varepsilon})_{\varepsilon}=(q\ast\psi_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}=(u_0\ast\psi_{\varepsilon})_{\varepsilon}$. Here $q$ and $u_0$ are allowed to be distributions.
\begin{assum}
\label{Assump_neg}
We assume that there exist $N_0, N_1 \in \mathbb{N}_0$ such that
\begin{equation}
\Vert q_{\varepsilon}\Vert_{L^{\infty}}\leq C_0\omega(\varepsilon)^{-N_0},
\label{Moderetness hyp coeff 1}
\end{equation}
and
\begin{equation}
\Vert u_{0,\varepsilon}\Vert_{L^2}\leq C_1\omega(\varepsilon)^{-N_1}. \label{Moderetness hyp data 1}
\end{equation}
\end{assum}
\subsection{Existence of very weak solutions}
In this subsection we give the definition of a very weak solution adapted to the problem (\ref{Equation 3}). For this, we will make use of the same notion of moderateness as in the non-negative case; nevertheless, let us recall it here.
\begin{defn}[Moderateness]
\label{Def:Moderetness 1}
Let $X$ be a Banach space with the norm $\|\cdot\|_{X}$. Then we say that a net of functions $(f_{\varepsilon})_{\varepsilon}$ from $X$ is $X$-moderate, if there exist $N\in\mathbb{N}_{0}$ and $c>0$ such that
\begin{equation*}
\Vert f_{\varepsilon}\Vert_{X} \leq c\omega(\varepsilon)^{-N}.
\end{equation*}
\end{defn}
In what follows, we will use particular cases of $X$: namely, ${L^2}$-moderate, ${L^{\infty}}$-moderate, and $C(\left[0,T\right];L^{2})$-moderate families. For the last one, we will write $C$-moderate for short.
\begin{defn}
Let $q$ be non-negative. Then the net $(u_{\varepsilon})_{\varepsilon}$ is said to be a very weak solution to the problem (\ref{Equation 3}), if there exist an ${L^{\infty}}$-moderate regularisation $(q_{\varepsilon})_{\varepsilon}$ of the coefficient $q$ and an $L^2$-moderate regularisation $(u_{0,\varepsilon})_{\varepsilon}$ of $u_0$ such that $(u_{\varepsilon})_{\varepsilon}$ solves the regularized problem
\begin{equation}
\label{Regularized equation 1}
\left\lbrace
\begin{array}{l}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) - q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0, ~~~(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},\\
u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x),
\end{array}
\right.
\end{equation}
for all $\varepsilon\in\left(0,1\right]$, and is $C$-moderate.
\end{defn}
\begin{thm}[Existence of a very weak solution]
Let $q\geq 0$. Assume that the nets $(q_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$ satisfy the assumptions (\ref{Moderetness hyp coeff 1}) and (\ref{Moderetness hyp data 1}), respectively. Then the problem (\ref{Equation 3}) has a very weak solution.
\end{thm}
\begin{proof}
The nets $(q_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$ are moderate by the assumption. To prove that a very weak solution to the Cauchy problem (\ref{Equation 3}) exists, we need to show that the net $(u_{\varepsilon})_{\varepsilon}$, a solution to the regularized problem (\ref{Regularized equation 1}), is $C$-moderate. Indeed, using the assumptions (\ref{Moderetness hyp coeff 1}), (\ref{Moderetness hyp data 1}) and the estimate (\ref{Energy estimate 3}), we get
\begin{equation*}
\Vert u_{\varepsilon}(t,\cdot)\Vert_{L^2} \lesssim \exp{\left( t\omega(\varepsilon)^{-N_0}\right)}\omega(\varepsilon)^{-N_1},
\end{equation*}
for all $t\in [0,T]$. Choosing $\omega(\varepsilon)=\left( \log \varepsilon^{-N_0}\right)^{-\frac{1}{N_0}}$,
we obtain that
\begin{align*}
\Vert u_{\varepsilon}(t,\cdot)\Vert_{L^2} & \lesssim \varepsilon^{-tN_0}\times \left( \log \varepsilon^{-N_0} \right)^{\frac{N_1}{N_0}}\\
& \lesssim \varepsilon^{-TN_0}\times \varepsilon^{-N_1},
\end{align*}
where we used that $t\in [0,T]$ and that $\log \varepsilon^{-N_0}$ can be estimated by $\varepsilon^{-N_0}$. Hence the net $(u_{\varepsilon})_{\varepsilon}$ is $C$-moderate, implying the existence of a very weak solution.
\end{proof}
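The exponent bookkeeping behind the choice $\omega(\varepsilon)=\left( \log \varepsilon^{-N_0}\right)^{-\frac{1}{N_0}}$ can be verified directly. The snippet below (the sample values of $N_0$, $t$, $\varepsilon$ are arbitrary) checks the identity $\exp\left(t\,\omega(\varepsilon)^{-N_0}\right)=\varepsilon^{-tN_0}$ and the bound $\log \varepsilon^{-N_0}\le \varepsilon^{-N_0}$ used in the proof:

```python
import math

def omega(eps, N0):
    # omega(eps) = (log eps^{-N0})^{-1/N0}, well defined for 0 < eps < 1
    return (math.log(eps ** (-N0))) ** (-1.0 / N0)

N0, t = 3, 1.5
for eps in (0.5, 0.1, 0.01):
    w = omega(eps, N0)
    # exp(t * omega(eps)^{-N0}) = eps^{-t N0}
    assert math.isclose(math.exp(t * w ** (-N0)), eps ** (-t * N0), rel_tol=1e-9)
    # log(eps^{-N0}) <= eps^{-N0}, which absorbs the logarithmic factor
    assert math.log(eps ** (-N0)) <= eps ** (-N0)
```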
\subsection{Uniqueness results}
Here, we prove the uniqueness of the very weak solution to the heat equation with a non-positive potential \eqref{Equation 3} in the spirit of Definition \ref{defn:uniqueness singular case}, adapted to our problem.
\begin{defn} \label{defn:uniqueness_2}
Let the regularisations $(q_{\varepsilon})_{\varepsilon}$ and $(\Tilde{q}_{\varepsilon})_{\varepsilon}$ of $q$ and the regularisations $(u_{0,\varepsilon})_{\varepsilon}$ and $(\Tilde{u}_{0,\varepsilon})_{\varepsilon}$ of $u_0$ satisfy Assumption \ref{Assump_neg}. Then we say that the very weak solution to the heat equation (\ref{Equation 3}) is unique, if for all families $(q_{\varepsilon})_{\varepsilon}$, $(\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, $(\Tilde{u}_{0,\varepsilon})_{\varepsilon}$, satisfying
\begin{equation*}
\Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\leq C_{k}\varepsilon^{k} \text{~~for all~~} k>0
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^{2}}\leq C_{l}\varepsilon^{l} \text{~~for all~~} l>0,
\end{equation*}
we have
\begin{equation*}
\Vert u_{\varepsilon}(t,\cdot)-\Tilde{u}_{\varepsilon}(t,\cdot)\Vert_{L^{2}} \leq C_{N}\varepsilon^{N}
\end{equation*}
for all $N>0$, where $(u_{\varepsilon})_{\varepsilon}$ and $(\Tilde{u}_{\varepsilon})_{\varepsilon}$ solve, respectively, the families of the Cauchy problems
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) - q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},\\
u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x),
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}\Tilde{u}_{\varepsilon}(t,x)-\Delta \Tilde{u}_{\varepsilon}(t,x) - \Tilde{q}_{\varepsilon}(x)\Tilde{u}_{\varepsilon}(t,x)=0 ,~~~(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},\\
\Tilde{u}_{\varepsilon}(0,x)=\Tilde{u}_{0,\varepsilon}(x).
\end{array}
\right.
\end{equation*}
\end{defn}
\begin{thm}\label{thm uniqueness negative case}
Let $T >0$. Assume that the nets $(q_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$ satisfy the assumptions (\ref{Moderetness hyp coeff 1}) and (\ref{Moderetness hyp data 1}), respectively. Then, the very weak solution to the Cauchy problem (\ref{Equation 3}) is unique.
\end{thm}
\begin{proof}
Let us consider $(q_{\varepsilon})_{\varepsilon}$, $(\Tilde{q}_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, $(\Tilde{u}_{0,\varepsilon})_{\varepsilon}$, regularisations of $q$ and $u_0$, satisfying
\begin{equation*}
\Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\leq C_{k}\varepsilon^{k}
\text{~~for all~~} k>0
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^{2}}\leq C_{l}\varepsilon^{l}
\text{~~for all~~} l>0.
\end{equation*}
Then, $(u_{\varepsilon})_{\varepsilon}$ and $(\Tilde{u}_{\varepsilon})_{\varepsilon}$, the solutions to the related Cauchy problems, satisfy
\begin{equation}
\left\lbrace
\begin{array}{l}
\partial_{t}(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x)-\Delta (u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x) - q_{\varepsilon}(x)(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(t,x)=f_{\varepsilon}(t,x),\\
(u_{\varepsilon}-\Tilde{u}_{\varepsilon})(0,x)=(u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon})(x), \label{Equation uniqueness 1}
\end{array}
\right.
\end{equation}
with
\begin{equation*}
f_{\varepsilon}(t,x)=(q_{\varepsilon}(x)-\Tilde{q}_{\varepsilon}(x))\Tilde{u}_{\varepsilon}(t,x).
\end{equation*}
Let us denote by $U_{\varepsilon}(t,x):=u_{\varepsilon}(t,x)-\Tilde{u}_{\varepsilon}(t,x)$ the solution to the equation (\ref{Equation uniqueness 1}). Arguing as in Theorem \ref{thm unicity classic} and using the estimate (\ref{Energy estimate 3}), we arrive at
\begin{equation*}
\Vert U_{\varepsilon}(t, \cdot)\Vert_{L^2} \lesssim \exp{\left( t\Vert q_{\varepsilon}\Vert_{L^{\infty}} \right)}\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon} \Vert_{L^2} + \Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon} \Vert_{L^{\infty}}\int_{0}^{T}\exp{\left( s\Vert q_{\varepsilon}\Vert_{L^{\infty}} \right)}\Vert\Tilde{u}_{\varepsilon}(s,\cdot)\Vert_{L^2} ds.
\end{equation*}
On the one hand, the net $(q_{\varepsilon})_{\varepsilon}$ is moderate by the assumption and $(\Tilde{u}_{\varepsilon})_{\varepsilon}$ is moderate as a very weak solution. On the other hand, we have that
\begin{equation*}
\Vert q_{\varepsilon}-\Tilde{q}_{\varepsilon}\Vert_{L^{\infty}}\leq C_{k}\varepsilon^{k}
\text{~~for all~~} k>0,
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-\Tilde{u}_{0,\varepsilon}\Vert_{L^{2}}\leq C_{l}\varepsilon^{l}
\text{~~for all~~} l>0.
\end{equation*}
By choosing $\omega(\varepsilon)=\left( \log \varepsilon^{-N_0}\right)^{-\frac{1}{N_0}}$ for $q_{\varepsilon}$ in \eqref{Moderetness hyp coeff 1}, it follows that
\begin{equation*}
\Vert U_{\varepsilon}(t, \cdot)\Vert_{L^2}=\Vert u_{\varepsilon}(t,\cdot)-\Tilde{u}_{\varepsilon}(t,\cdot)\Vert_{L^2} \lesssim \varepsilon^{N},
\end{equation*}
for all $N>0$, ending the proof.
\end{proof}
\subsection{Consistency with the classical case}
We conclude this section by showing that if the coefficient and the Cauchy data are regular then the very weak solution coincides with the classical one, given by Lemma \ref{Lemma 3}.
\begin{thm}
Let $u_{0}\in L^{2}(\mathbb{R}^{d})$. Assume that $q\in L^{\infty}(\mathbb{R}^{d})$ is non-negative and consider the Cauchy problem
for the heat equation
\begin{equation}
\left\lbrace
\begin{array}{l}
u_{t}(t,x)-\Delta u(t,x) - q(x)u(t,x)=0 ,~~~(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},\\
u(0,x)=u_{0}(x). \label{Equation with reg. coeff 1}
\end{array}
\right.
\end{equation}
Let $(u_{\varepsilon})_{\varepsilon}$ be a very weak solution of the heat equation (\ref{Equation with reg. coeff 1}). Then, for any regularising families $(q_{\varepsilon})_{\varepsilon}$ and $(u_{0,\varepsilon})_{\varepsilon}$, the net $(u_{\varepsilon})_{\varepsilon}$ converges in $L^{2}$ as $\varepsilon \rightarrow 0$ to the classical solution of the Cauchy problem (\ref{Equation with reg. coeff 1}).
\end{thm}
\begin{proof}
Let us denote the classical solution and the very weak one by $u$ and $(u_{\varepsilon})_{\varepsilon}$, respectively. It is clear that they satisfy
\begin{equation*}
\left\lbrace
\begin{array}{l}
u_{t}(t,x)-\Delta u(t,x) - q(x)u(t,x)=0, \,\,\,(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},\\
u(0,x)=u_{0}(x),
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\left\lbrace
\begin{array}{l}
\partial_{t}u_{\varepsilon}(t,x)-\Delta u_{\varepsilon}(t,x) - q_{\varepsilon}(x)u_{\varepsilon}(t,x)=0, \,\,\,(t,x)\in\left(0,T\right)\times \mathbb{R}^{d},\\
u_{\varepsilon}(0,x)=u_{0,\varepsilon}(x),
\end{array}
\right.
\end{equation*}
respectively. Let us denote by $V_{\varepsilon}(t,x):=(u_{\varepsilon}-u)(t,x)$. Using the estimate (\ref{Energy estimate 3}) and the same arguments as in the positive potential case, we show that
\begin{equation*}
\Vert V_{\varepsilon}(t, \cdot)\Vert_{L^2} \lesssim \exp{\left( t\Vert q_{\varepsilon}\Vert_{L^{\infty}} \right)}\Vert u_{0,\varepsilon}-u_{0} \Vert_{L^2} + \Vert q_{\varepsilon}-q \Vert_{L^{\infty}}\int_{0}^{T}\exp{\left( s\Vert q_{\varepsilon}\Vert_{L^{\infty}} \right)}\Vert u(s,\cdot)\Vert_{L^2} ds.
\end{equation*}
By taking into account that
\begin{equation*}
\Vert q_{\varepsilon}-q\Vert_{L^{\infty}} \rightarrow 0 \text{~~as~~} \varepsilon\rightarrow 0
\end{equation*}
and
\begin{equation*}
\Vert u_{0,\varepsilon}-u_{0}\Vert_{L^{2}} \rightarrow 0 \text{~~as~~} \varepsilon\rightarrow 0,
\end{equation*}
and, on the other hand, that $q_{\varepsilon}$ is bounded, being a regularisation of an essentially bounded function, and that $\Vert u(s,\cdot)\Vert_{L^2}$ is bounded, since $u$ is a classical solution, we conclude that $(u_{\varepsilon})_{\varepsilon}$ converges to $u$ in $L^{2}$ as $\varepsilon\to0$.
\end{proof}
\begin{figure}[ht!]
\begin{minipage}[h]{0.47\linewidth}
\center{\includegraphics[scale=0.35]{./u0.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.47\linewidth}
\center{\includegraphics[scale=0.35]{./positive_t=2.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.47\linewidth}
\center{\includegraphics[scale=0.35]{./positive_t=6.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.47\linewidth}
\center{\includegraphics[scale=0.35]{./positive_t=10.jpg}}
\end{minipage}
\caption{In these plots, we analyse the behaviour of the temperature in three different cases. The top left plot shows the graph of the initial function. In the remaining plots, we compare the temperature function $u$, the solution of \eqref{RE-01}, at $t=2, 6, 10$ for $\varepsilon=0.2$ in three cases. Case 1 corresponds to the potential $q$ equal to zero. Case 2 corresponds to the potential $q$ being a $\delta$-function supported at the point $40$. Case 3 corresponds to a $\delta^2$-like potential supported at the point $40$.} \label{fig1}
\end{figure}
\begin{figure}[ht!]
\begin{minipage}[h]{0.47\linewidth}
\center{\includegraphics[scale=0.35]{./positive_q=fi.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.47\linewidth}
\center{\includegraphics[scale=0.35]{./positive_q=fifi.jpg}}
\end{minipage}
\caption{In these plots, we compare the temperature function $u$ at $t=0.01, 1.0, 10.0$ for $\varepsilon=0.2$ in the second and third cases, that is, when the potential is a $\delta$-like and a $\delta^{2}$-like function supported at the point $40$, respectively. The left picture corresponds to the second case, the right picture to the third case.}
\label{fig2}
\end{figure}
\begin{figure}[ht!]
\begin{minipage}[h]{0.30\linewidth}
\center{\includegraphics[scale=0.25]{./u0.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.30\linewidth}
\center{\includegraphics[scale=0.25]{./negative_t=1.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.30\linewidth}
\center{\includegraphics[scale=0.25]{./negative_t=2.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.30\linewidth}
\center{\includegraphics[scale=0.25]{./negative_t=4.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.30\linewidth}
\center{\includegraphics[scale=0.25]{./negative_t=6.jpg}}
\end{minipage}
\hfill
\begin{minipage}[h]{0.30\linewidth}
\center{\includegraphics[scale=0.25]{./negative_t=10.jpg}}
\end{minipage}
\caption{In these plots, we analyse the behaviour of the solution of the heat equation \eqref{RE-01-Negative} with the negative potential. The top left plot shows the temperature distribution at the initial time. In the remaining plots, we compare the temperature function $u$ at $t=1, 2, 4, 6, 10$ for $\varepsilon=0.8, 0.5, 0.2$. Here, the potential is a $\delta$-like function supported at the point $30$.}
\label{fig3}
\end{figure}
\section{Numerical experiments}
In this section, we carry out some numerical experiments. We analyse our problem by regularising the distributional potential $q(x)$ with a parameter $\varepsilon$. We define
$
q_\varepsilon (x):=(q\ast\varphi_\varepsilon)(x),
$
as the convolution with the mollifier
$\varphi_\varepsilon(x)=\frac{1}{\varepsilon} \varphi(x/\varepsilon),$
where
$$
\varphi(x)=
\begin{cases}
c \exp{\left(\frac{1}{x^{2}-1}\right)}, |x| < 1, \\
0, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, |x|\geq 1,
\end{cases}
$$
with $c \simeq 2.2523$ to have
$
\int\limits_{-\infty}^{\infty} \varphi(x)dx=1.
$
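The value $c \simeq 2.2523$ can be recovered by a quick quadrature (the grid resolution below is our own choice); since the bump vanishes to all orders at $\pm 1$, even a simple Riemann sum is very accurate:

```python
import numpy as np

# approximate the integral of exp(1/(x^2 - 1)) over (-1, 1)
x = np.linspace(-1.0, 1.0, 200001)
f = np.zeros_like(x)
m = np.abs(x) < 1.0
f[m] = np.exp(1.0 / (x[m] ** 2 - 1.0))

dx = x[1] - x[0]
integral = f.sum() * dx          # endpoint values are zero, so this is the trapezoid sum
c = 1.0 / integral               # normalising constant, ~ 2.2523
assert abs(c - 2.2523) < 1e-3

# with this c, phi_eps integrates to 1 for any eps (change of variables)
eps = 0.2
xe = np.linspace(-eps, eps, 200001)
y = xe / eps
phi = np.zeros_like(xe)
m = np.abs(y) < 1.0
phi[m] = (c / eps) * np.exp(1.0 / (y[m] ** 2 - 1.0))
assert abs(phi.sum() * (xe[1] - xe[0]) - 1.0) < 1e-6
```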
Then, instead of \eqref{Equation} we consider the regularised problem
\begin{equation}\label{RE-01}
\partial_{t}u_{\varepsilon}(t,x)-\partial^{2}_{x} u_{\varepsilon}(t,x)+ q_{\varepsilon}(x) u_{\varepsilon}(t,x) =0, \; (t,x)\in[0,T]\times\mathbb R,
\end{equation}
with the initial data $u_\varepsilon(0,x)=u_0 (x)$, for all $x\in\mathbb R.$ Here, we put
\begin{equation}
\label{u_0}
u_0 (x)=
\begin{cases}
\exp{\left(\frac{1}{(x-50)^{2}-0.25}\right)}, \,\, |x-50| < 0.5, \\
0, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\, \,\,\, |x-50| \geq 0.5.
\end{cases}
\end{equation}
Note that ${\rm supp }\, u_0\subset[49.5, 50.5]$.
In the non-negative potential case, for $q$ we consider the following cases, with $\delta$ denoting the standard Dirac's delta-distribution:
\begin{itemize}
\item[Case 1:] $q(x)=0$ with $q_{\varepsilon}(x)=0$;
\item[Case 2:] $q(x)=\delta(x-40)$ with $q_{\varepsilon}(x)=\varphi_\varepsilon(x-40)$;
\item[Case 3:] $q(x)=\delta(x-40)\times\delta(x-40)$. Here, we understand $q_{\varepsilon}(x)$ as $q_{\varepsilon}(x)=\left(\varphi_\varepsilon(x-40)\right)^{2}$.
\end{itemize}
In Figure \ref{fig1}, we study the behaviour of the temperature function $u$, the solution of \eqref{RE-01}, at $t=2, 6, 10$ for $\varepsilon=0.2$ in three cases: the first corresponds to the potential $q$ equal to zero; the second to the potential $q$ being a $\delta$-function with the support at point $40$; the third to a $\delta^2$-like function potential with the support at point $40$. By comparing these cases, we observe that in the second and third cases the place of the support of the $\delta$-function cools down faster than in the zero-potential case. This phenomenon can be described as a ``point cooling'' or ``laser cooling'' effect.
In Figure \ref{fig2}, we compare the temperature function $u$ at $t=0.01, 1.0, 10.0$ for $\varepsilon=0.2$ in the second and third cases, that is, when the potential is a $\delta$-like or a $\delta^{2}$-like function with the support at point $40$, respectively. The left picture corresponds to the second case, the right picture to the third case.
In Figures \ref{fig1} and \ref{fig2}, we analyse the equation \eqref{RE-01} with positive potentials. Now, in Figure \ref{fig3}, we study the following equation with negative potentials:
\begin{equation}\label{RE-01-Negative}
\partial_{t}u_{\varepsilon}(t,x)-\partial^{2}_{x} u_{\varepsilon}(t,x) - q_{\varepsilon}(x) u_{\varepsilon}(t,x) =0, \; (t,x)\in[0,T]\times\mathbb R,
\end{equation}
with the same initial data $u_0$ as in \eqref{u_0}. In these plots, we compare the temperature function $u$ at $t=1, 2, 4, 6, 10$ for $\varepsilon=0.8, 0.5, 0.2$ for a potential given by a $\delta$-like function with the support at point $30$. Numerical simulations justify the theory developed in Section \ref{NP}. Moreover, we observe that in the negative $\delta$-potential case the place of the support of the $\delta$-function heats up. This phenomenon can be described as a ``point heating'' or ``laser heating'' effect. Also, one observes that our numerical calculations confirm the behaviour of the solution with respect to the parameter $\varepsilon$.
All numerical computations are made in C++ by using the sweep method; the plots above are produced in Matlab R2018b. For all simulations we take $\Delta t=0.2$, $\Delta x=0.01.$
\subsection{Conclusion} The analysis conducted in this article showed that numerical methods work well in situations where a rigorous mathematical formulation of the problem is difficult within the classical theory of distributions. The concept of very weak solutions eliminates this difficulty in the case of terms involving multiplication of distributions. In particular, in the case of the heat equation with a potential, we see that a $\delta$-function potential helps to lose/gain energy in less time, the latter causing a so-called ``laser cooling/heating'' effect in the positive/negative potential cases.
Numerical experiments have shown that the concept of very weak solutions is very suitable for numerical modelling. In addition, using the theory of very weak solutions, we can talk about the uniqueness of numerical solutions of differential equations with strongly singular coefficients in an appropriate sense.
\section*{Acknowledgement} This research was funded by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP09058069) and by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations. MR was supported in parts by the EPSRC Grant EP/R003025/2. AA was funded in parts by the SC MES RK Grant No. AP08052028. MS was supported by the Algerian Scholarship P.N.E. 2018/2019 during his visit to the University of Stuttgart and Ghent University. Also, Mohammed Sebih thanks Professor Jens Wirth and Professor Michael Ruzhansky for their warm hospitality.
\section{INTRODUCTION}
A growing population directly implies growing energy demands in the residential as well as the commercial sector \cite{energy_use}. To meet these demands, non-renewable resources are used, which is a leading cause of global warming\cite{archer2012global}. Energy saving and demand-side management are the need of the hour.
Load monitoring is one of the ways to collect the energy data required to devise automated energy management systems. It also helps in providing feedback to consumers to understand their consumption. This feedback enables customers to participate in energy-saving and cost-cutting activities\cite{darby2006effectiveness}.
There are two ways to perform load monitoring. One way is to put sensors on each of the appliances deployed in a building that can sample the energy consumption and store the data. The second way is to use algorithms that can segregate the appliance-level load from the aggregated units of consumption. The first approach is impractical: it is not only costly but intrusive too. The second way, known as Non-Intrusive Load Monitoring / Non-Intrusive Appliance Load Monitoring (NILM/NIALM), is more practical and scalable than the first one.
Until recently, most state-of-the-art NILM algorithms used historical appliance-level sampled data to learn models of the individual appliances used in the building. Once the model for each of the targeted devices is trained, the segregation can be performed just by estimating the device-specific loads from the sampled aggregated load. The requirement of appliances' consumption data makes the training phase intrusive.
In recent studies \cite{tabatabaei2017toward,li2016whole, singhal2018simultaneous}, NILM has been framed as a multi-label classification problem. These techniques make use of the annotated aggregated load for training the model. The annotation contains information about the ON/OFF state of each of the target devices and can be performed with the help of cameras installed on the premises. This framework circumvents the need for device-level loads, and thus the training phase does not require multiple sensors in the buildings. This modification enables a large-scale roll-out of NILM by the utilities.
Our work is motivated by the advantages of framing NILM as a multi-label classification task and by the success of deep learning as a solution to similar problems. We propose a new approach to multi-label classification based on the Restricted Boltzmann Machine (RBM)\cite{Larochelle2008CUD}. RBMs have never been used for multi-label classification so far; this is a classic example of algorithm adaptation for multi-label classification.
RBMs \cite{Smolensky1986} have been effective in learning high-level features and capturing high-order correlations of the observed variables. A typical RBM has a hidden layer in which nodes are conditionally independent given the visible state. RBMs have good reconstruction accuracy, which can be leveraged to generate individual load information in latent space. We propose that the generative property of RBMs, combined with multi-label supervision, can be used to perform NILM via state detection of appliances.
\section{Literature review}
\subsection{\textit{ Combinatorial Optimization }}
Studies in combinatorial optimization (CO) such as \cite{hart1992nonintrusive} are based on the principle that total consumption in a building can be approximated as a sum of device-level loads. So aggregated consumption in a building can be expressed as
\begin{equation}
{P_{agg}} = \sum\limits_{i = 1}^N {{s_i}{P_i}}
\end{equation}
where \textit{$P_i$} is the individual device load, \textit{$P_{agg}$} is the aggregated load, \textit{N} is the total number of appliances, and \textit{$s_i$} indicates the state of device \textit{i}, i.e., 0 for the 'OFF' state and 1 for the 'ON' state.
For load segregation, the motive is to find out the combinations of individual loads whose sum can be approximated as the aggregated load. We can formulate the task of simultaneous detection of ON/OFF state of the devices, $\hat s$, as
\begin{equation}
\hat{s} = \mathop {\arg \min }\limits_s \left| {P_{agg} - \sum\limits_{i = 1}^N {{s_i}{P_i}} } \right|
\end{equation}
Equation (2) is an NP-hard problem and quickly becomes intractable as the number of appliances scales up.
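For intuition, a brute-force implementation must enumerate all $2^N$ state vectors. A toy Python sketch (illustrative only, with hypothetical appliance power ratings):

```python
from itertools import product

def co_disaggregate(p_agg, ratings):
    """Exhaustive combinatorial optimization: find the ON/OFF vector s
    minimizing |p_agg - sum_i s_i * P_i|.  Cost grows as 2^N."""
    best_s, best_err = None, float("inf")
    for s in product((0, 1), repeat=len(ratings)):
        err = abs(p_agg - sum(si * p for si, p in zip(s, ratings)))
        if err < best_err:
            best_s, best_err = s, err
    return best_s

ratings = [60, 150, 700, 1200]        # hypothetical appliance loads in watts
print(co_disaggregate(760, ratings))  # -> (1, 0, 1, 0)
```

Already at $N=30$ appliances this loop visits over a billion combinations per reading, which is why approximate and learning-based methods are preferred.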
\subsection{\textit{ Finite State Machines }}
Apart from the computational complexity, another problem with CO is that it cannot account for the fact that one appliance can run at different power levels, e.g., an A.C., fan, or washer. However, these days most appliances (like lights, fans, A.C.s, washers) have clearly marked states, so it is fair to model the states of the devices as Hidden Markov Models (HMMs). The study \cite{kim2011unsupervised} models the aggregated load as an outcome of the interaction of a finite number of independent HMMs.
Most modern appliances, such as printers, computers, and inverters, do not have marked states; their consumption varies continuously. In such situations, the above assumption fails, which in turn leads to poor disaggregation performance.
\subsection{\textit{ Multi-Label Classification }}
The classification task where one sample may belong to one or more classes is known as multi-label classification (MLC). Hence, in this case, each sample is mapped to a binary vector of 0's and 1's, assigning 0 or 1 to each of the classes.
Since the aggregated load of a building at any instance may be an outcome of several active appliances' consumption, Tabatabaei et al. \cite{tabatabaei2017toward} and Li et al. \cite{li2016whole} framed NILM as an MLC problem. \cite{tabatabaei2017toward} compared the performance of two multi-label classifiers, viz. Multi-Label K-Nearest Neighbours (ML-kNN) and Random k-Label Sets (RAkEL), using time-domain and wavelet-domain features of appliances.
Another recent work \cite{singhal2018simultaneous} uses multi-label consistent deep dictionary learning for the simultaneous detection of active appliances from smart meter data. These methods do not directly segregate the appliance-level load; they first identify the states of the appliances, and the disaggregated load is then obtained by multiplying the average power consumption of a device by the number of instances in which it was detected to be in an active state. By far, these are the most recent and best-known techniques for multi-label classification based disaggregation.
\section{Proposed approach}
Restricted Boltzmann Machines \cite{Smolensky1986} are a type of undirected graphical model that uses hidden variables to model high-order and non-linear regularities of the data. A typical RBM is a two-layer bipartite graph with two types of units, the visible units $x$ and the hidden units $h$. An RBM represents probability distributions over the random variables under an energy-based model. The energy model of an RBM is given by $E(x,h) = -x^TWh-b^Tx-c^Th$, where $W$ is the weight matrix to be learned. The joint probability distribution over $(x,h)$ is expressed as $P(x,h) = \frac{1}{Z}\exp(-E(x,h))$, where $Z$ is the normalization factor. Learning RBMs is a difficult task due to the intractability of computing the normalization factor $Z$. Several learning algorithms have been proposed \cite{CD2002hinton, Larochelle2012, pmlr-v9-marlin10a} to address this problem. The Contrastive Divergence (CD) method proposed by Hinton et al. \cite{CD2002hinton} is efficient and widely used to learn RBMs. The generative property of RBMs makes them useful for learning latent-space representations of data where we do not have information about how the data are generated. RBMs have been used for dimensionality reduction \cite{Hinton504}, collaborative filtering \cite{Salakhutdinov2007RBM}, anomaly detection \cite{FIORE2013anomaly} and unsupervised feature learning \cite{pmlr-v15-coates11a}. The classification RBM has been used for various classification tasks in \cite{Larochelle2008CUD, Li2015ConditionalRB} and for label-consistent collaborative filtering \cite{Verma2018CollaborativeFW}.
\subsection{Multi-Label Classification RBM}
The joint probability distribution of the proposed multi-label classification RBM model shown in figure \ref{fig:1} is given by,
\begin{equation}
p(y,x,h) \propto e^{-E(y,x,h)}
\label{eq:1}
\end{equation}
where $y$ is the label unit. We define the new energy function as follows:
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\linewidth]{RBM_ICASSP_2018}
\caption{Proposed architecture for NILM using multi-label classification RBM.}
\label{fig:1}
\end{figure}
\begin{equation}
E(y,x,h) = -h^TWx - a^Tx -b^Th - c^Ty - h^TUy
\label{eq:2}
\end{equation}
with parameters $\Theta = (W,a,b,c,U)$. The model is illustrated in figure \ref{fig:1}. The conditional distributions of the hidden, visible, and label units are given by \autoref{eq:3}, \autoref{eq:4} and \autoref{eq:5}, respectively.
\begin{equation}
p(h_j=1|x,y) = \sigma(b_j+U_{jl}+\sum_iW_{ji}x_i)
\label{eq:3}
\end{equation}
\begin{equation}
p(x_i|h) = \sigma(a_i + \sum_jW_{ji}h_j)
\label{eq:4}
\end{equation}
\begin{equation}
p(y_l=1|h) = \frac{\exp(c_l+\sum_{j}U_{jl}h_j)}{\sum_{l'=1}^{C}\exp(c_{l'}+\sum_{j}U_{jl'}h_j)}
\label{eq:5}
\end{equation}
where $\sigma$ is the logistic sigmoid and $l$ is the class label out of $C$ classes. These formulations capture the predictive information about the input vector as well as the target class.
The network parameters $\Theta$ are learned using the CD \cite{CD2002hinton} algorithm,
\begin{align}
\Delta W_{ji} & = \eta \frac{\delta \log p(x,y)}{\delta W_{ji}} \nonumber \\
& = \eta (<x_ih_j>_{data}-<x_ih_j>_{model})
\label{eq:6}
\end{align}
where $\eta$ is the learning rate.
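To illustrate the CD training loop, the following is a self-contained sketch of ours of CD-1 for a plain Bernoulli RBM (the label unit is omitted for brevity, the pattern and sizes are made up, and the paper's actual implementation uses PyTorch):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cd1_update(W, a, b, x, eta, rng):
    """One CD-1 update for a Bernoulli RBM.  W[j][i] connects hidden j to
    visible i; a, b are the visible and hidden biases."""
    nh, nv = len(W), len(W[0])
    # Positive phase: p(h | x) from the data.
    ph0 = [sigmoid(b[j] + sum(W[j][i] * x[i] for i in range(nv))) for j in range(nh)]
    h0 = [1.0 if rng.random() < p else 0.0 for p in ph0]
    # Negative phase: one Gibbs step x -> h -> x' -> h'.
    px1 = [sigmoid(a[i] + sum(W[j][i] * h0[j] for j in range(nh))) for i in range(nv)]
    x1 = [1.0 if rng.random() < p else 0.0 for p in px1]
    ph1 = [sigmoid(b[j] + sum(W[j][i] * x1[i] for i in range(nv))) for j in range(nh)]
    # Gradient estimate: <x h>_data - <x h>_model.
    for j in range(nh):
        for i in range(nv):
            W[j][i] += eta * (ph0[j] * x[i] - ph1[j] * x1[i])
        b[j] += eta * (ph0[j] - ph1[j])
    for i in range(nv):
        a[i] += eta * (x[i] - x1[i])

# Train on a single made-up pattern and inspect the reconstruction.
rng = random.Random(0)
nh, nv = 4, 4
W = [[0.0] * nv for _ in range(nh)]
a, b = [0.0] * nv, [0.0] * nh
pattern = [1.0, 1.0, 0.0, 0.0]
for _ in range(500):
    cd1_update(W, a, b, pattern, 0.1, rng)
ph = [sigmoid(b[j] + sum(W[j][i] * pattern[i] for i in range(nv))) for j in range(nh)]
recon = [sigmoid(a[i] + sum(W[j][i] * ph[j] for j in range(nh))) for i in range(nv)]
print([round(r, 2) for r in recon])
```

After training, the reconstruction probabilities should be high on the units that were ON in the training pattern and low on the others.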
For the multi-label classification RBM, the above formulation changes, as we now have multi-label information for each sample. The conditional distribution of $y$ given $h$ becomes:
\begin{equation}
p(y_{l}=1|h) = \sigma(c_l + \sum_jU_{jl}h_j)
\label{eq:7}
\end{equation}
This formulation is not tractable, since $y$ now has $2^C$ possible values. Therefore, for inference we use the mean-field (MF) message-passing method for an approximate inference. The MF approach tries to approximate the joint posterior $p(y,h|x)$ by a factorial distribution $q(y,h) = \prod^C_{l=1}\mu^{y_l}_l(1-\mu_l)^{1-y_l}\prod^n_{j=1}\tau^{h_j}_j(1-\tau_j)^{1-h_j}$ that minimizes the Kullback-Leibler (KL) divergence with the true posterior. Running the following message passing procedure to convergence
\begin{align}
\mu_l & \leftarrow \sigma(c_l + \sum_jU_{jl}\tau_j) \quad \forall l \in \{1,...,C\}, \\
\tau_j & \leftarrow \sigma(b_j + \sum_lU_{jl}\mu_l+\sum_iW_{ji}x_i) \quad \forall j \in \{1,...,n\}
\end{align}
we can reach a saddle point of the KL divergence, where $\mu_l$ serves as the estimate for $p(y_l=1|x)$ and $\tau_j$ can be used to estimate $p(h_j=1|x)$.
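A direct transcription of this fixed-point iteration reads as follows (a minimal Python sketch of ours; the hand-picked weights in the toy check are illustrative, whereas in practice the learned $W$, $U$, $b$, $c$ would be used):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean_field_infer(x, W, U, b, c, n_iters=50):
    """Mean-field estimates of p(y_l = 1 | x) for the multi-label
    classification RBM.  W[j][i]: hidden-to-visible weights,
    U[j][l]: hidden-to-label weights, b: hidden biases, c: label biases."""
    n, C = len(W), len(U[0])
    mu = [0.5] * C          # estimates of p(y_l = 1 | x)
    tau = [0.5] * n         # estimates of p(h_j = 1 | x)
    wx = [sum(W[j][i] * x[i] for i in range(len(x))) for j in range(n)]
    for _ in range(n_iters):
        mu = [sigmoid(c[l] + sum(U[j][l] * tau[j] for j in range(n))) for l in range(C)]
        tau = [sigmoid(b[j] + sum(U[j][l] * mu[l] for l in range(C)) + wx[j]) for j in range(n)]
    return mu

# Toy check with hand-picked weights: label 0 should switch ON, label 1 OFF.
mu = mean_field_infer([1.0], W=[[5.0]], U=[[5.0, -5.0]], b=[0.0], c=[0.0, 0.0])
print([round(m, 3) for m in mu])
```

Thresholding the returned $\mu_l$ at $0.5$ then yields the predicted ON/OFF state of each appliance, as done at inference time in the paper.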
\section{Experimental Evaluation}
\begin{table*}[!ht]
\centering
\caption{\textbf{Appliance-Level Evaluation on REDD } }
\label{tab1}
\begin{tabular}{l cc cc cc cc}
\toprule[0.2mm]
\multirow{2}{0.5cm}{\textbf{Device}}&\multicolumn{2}{c}{\textbf{MLkNN}}&\multicolumn{2}{c}{\textbf{RAkEL}}&\multicolumn{2}{c}{\textbf{LC-DDL}}&\multicolumn{2}{c}{\textbf{MLC-RBM}} \\
&F1-Score &Error&F1-Score&Error &F1-Score&Error&F1-Score&Error\\
\midrule
Lighting &0.6476 &0.3718 &0.6760 &0.8213 &0.6216 &0.2608 &\textbf{ 0.6947}& \textbf{0.1762} \\
Kitchen &0.5081 &0.4304 &0.6108 &0.6995 &0.6411 &0.3326 &\textbf{0.7213 }& \textbf{0.1273} \\
Refrigerator &0.5292 &0.3628 &0.6724 &0.5132 &0.6118 &0.2528 &\textbf{0.7186 }& \textbf{0.1644}\\
Washer Dryer &0.3903 &0.3122 &0.5267 &0.6990 &0.4977 &0.3149 &\textbf{0.6983} & \textbf{0.1963}\\
\bottomrule[0.2mm]
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{\textbf{Appliance-Level Evaluation on Pecan Street} }
\label{tab2}
\begin{tabular}{l cc cc cc cc}
\toprule[0.2mm]
\multirow{2}{0.5cm}{\textbf{Device}}&\multicolumn{2}{c}{\textbf{MLkNN}}&\multicolumn{2}{c}{\textbf{RAkEL}}&\multicolumn{2}{c}{\textbf{LC-DDL}}&\multicolumn{2}{c}{\textbf{MLC-RBM}} \\
&F1-Score &Error&F1-Score&Error &F1-Score&Error&F1-Score&Error\\
\midrule
Air Conditioner &0.6391 &0.1720 &0.6521&0.8565 &0.5882 &\textbf{0.1051} &\textbf{0.7023} & 0.2334 \\
Dishwasher &0.6546 &0.1690 &0.6728&0.8490 &0.4871 &0.1501 &\textbf{0.7269} &\textbf{0.1341} \\
Furnace &0.6123 &0.1341 &0.6231&0.8415 &0.5572 &\textbf{0.0794} &\textbf{0.7113}&0.2224 \\
Microwave &0.5916 &\textbf{0.0727} &0.6819&0.7301 &0.5533 &0.0795 &\textbf{0.6981}&0.1985 \\
\bottomrule[0.2mm]
\end{tabular}
\end{table*}
We performed the experiments on two standard datasets, viz. the Reference Energy Disaggregation Dataset (REDD) \cite{kolter2011redd} and a subset of the Dataport dataset \cite{dp} (also known as the Pecan Street dataset) available in the non-intrusive load monitoring toolkit (NILMTK) format \cite{batra2014nilmtk}.
The REDD dataset is a moderate size publicly available dataset for electricity disaggregation. The dataset consists of
power consumption signals from six different houses, where for each house, the whole electricity consumption, as well as
electricity consumptions of about twenty different devices are recorded at every second.
The Dataport dataset contains 1-minute circuit-level and building-level electricity data from 240 houses. It contains per-minute readings from 18 different devices: air conditioner, kitchen appliances, electric vehicle, electric hot tub heater, electric water heating appliance, dishwasher, spin dryer, freezer, furnace, microwave, oven, electric pool heater, refrigerator, sockets, electric stove, waste disposal unit, security alarm and washer dryer.
We compare our results with the multi-label classification algorithms proposed so far for NILM, viz. ML-kNN, RAkEL, and LC-DDL. Both datasets were split into training, testing and cross-validation sets in a ratio of 50:30:20, respectively. The cross-validation set was used to decide the values of the hyper-parameters. We have munged the data such that each sample contains per-hour aggregated consumption and the corresponding device labels.
\begin{comment}
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{reconstruction_error}
\caption{Training reconstruction errors of proposed method.}
\label{fig:2}
\end{figure}
\end{comment}
\begin{table}[!ht]
\centering
\caption {\textbf{Performance Evaluation on REDD}}
\label{tab3}
\begin{tabular}{c c c c}
\toprule[0.2mm]
\textbf{Method} & \textbf{Macro F1-Score} &\textbf{ Micro F1-Score} \\
\midrule
MLkNN & 0.6086 & 0.6143 \\
RAkEL & 0.6290 & 0.6294 \\
LC-DDL & 0.5222 & 0.5262 \\
MLC-RBM &\textbf{0.7082} &\textbf{0.7157} \\
\bottomrule[0.2mm]
\end{tabular}
\end{table}
We use PyTorch\cite{paszke2017automatic} for the network implementation. In the proposed multi-label classification RBM, we use 60 seconds of aggregated load sampled at 1Hz as input to the model. For the hidden layer, the sizes $32$, $64$, $128$, and $256$ were tried; we find $128$ to be the best. The learning rate is set to $0.001$ for all our experiments. We use $k=2$ steps of sampling in the CD \cite{CD2002hinton} algorithm to train our model. For inference, we apply a sigmoid activation to the output of our model and threshold at $0.5$.
Macro F1 and Micro F1 scores are the two metrics commonly used to evaluate the performance of multi-label classifiers. The appliance-level energy error is computed for each device to evaluate the disaggregation performance. The Macro F1 score is the average of the individual F1 scores of all the classes, so it could be biased towards a class with fewer samples. The Micro F1 score indicates the overall performance of the classifier. It is computed by stacking up samples from all the classes. The F1 score of an individual class is given by \autoref{eqn 13},
\begin{equation}
\label{eqn 13}
F1=\frac{{2 \times TP}}{{2 \times TP + FN + FP}}
\end{equation}
where TP is the number of true positives, FN is the number of false negatives and FP is the number of false positives.
The appliance-level error, also known as the normalized energy error (NEE), is a standard metric used in almost every prior study in this area; it is given by \autoref{eqn 14},
\begin{equation}
\label{eqn 14}
NEE = \frac{{\sum\limits_t {|P_t^n - \hat P_t^n|} }}{{\sum\limits_t {P_t^n} }}
\end{equation}
where $P_t^n$ is the power consumption of appliance \textit{n} at time instant \textit{t} and $\hat P_t^n$ is its estimate.
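To make the two metrics concrete, a small Python illustration with made-up counts and readings:

```python
def f1_score(tp, fp, fn):
    """Per-class F1 from true positives, false positives, false negatives."""
    return 2 * tp / (2 * tp + fn + fp)

def nee(true_power, est_power):
    """Normalized energy error over a sequence of power readings."""
    return sum(abs(p - q) for p, q in zip(true_power, est_power)) / sum(true_power)

print(f1_score(tp=8, fp=2, fn=2))                  # 0.8
print(round(nee([100, 100, 0], [90, 110, 0]), 2))  # 0.1
```

Note that the two metrics can disagree: F1 scores the ON/OFF state decisions, while NEE scores the recovered energy, which motivates the discussion below.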
\autoref{tab1} and \autoref{tab2} present the F1-Score and correspondingly obtained disaggregation error for each target device in both the datasets. \autoref{tab3} and \autoref{tab4} contain micro and macro F1-Scores yielded by the state-of-the-art and proposed algorithm on the REDD and Pecan Street dataset respectively. Our proposed model yields the best results regarding classification measures and gives comparable disaggregation accuracy. Although best classification accuracy should reflect the least disaggregation error, here it is not so. This mismatch engenders an ambiguity in results.
We would like to clarify this with an example. Suppose the true labels for two hours of aggregate consumption of four devices are 1 0 0 1 and 0 1 1 0, whereas the predicted labels are 0 1 1 0 and 1 0 0 1, respectively. In this case, the F1-Score would be zero, as all the identified states are wrong. For the same case, the disaggregation accuracy would be 100\%, as the number of identified active appliances exactly matches the number of truly active appliances. This example explains why techniques such as LC-DDL give the best disaggregation accuracy but the worst F1-Scores. Therefore, in such a framework, the performance of an algorithm should be judged only after looking at both metrics collectively.
\begin{table}
\begin{center}
\caption {\textbf{Performance Evaluation on Pecan Street}}
\label{tab4}
\begin{tabular}{c c c c}
\toprule[0.2mm]
\textbf{Method} & \textbf{Macro F1-Score} &\textbf{ Micro F1-Score} \\
\midrule
MLkNN & 0.6183 & 0.6194 \\
RAkEL & 0.5872 & 0.6019 \\
LC-DDL & 0.5214 & 0.5332 \\
MLC-RBM &\textbf{0.7080} &\textbf{0.7123} \\
\bottomrule[0.2mm]
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
This work proposes a new technique for NILM framed as a multi-label classification problem. The proposed multi-label classification RBM has good reconstruction ability and, when combined with multi-label supervision, also provides good classification accuracy. This technique does not require any appliance-level data, which makes the task completely non-intrusive. We compare the proposed technique with all the prior works where NILM was transformed into a multi-label classification task. We have performed an experimental evaluation of the proposed work on two widely used datasets. Our proposed model yields the best results in terms of classification accuracy and comparable results regarding energy disaggregation. Although we have used the multi-label RBM for NILM, it is a generic approach and can be used for solving any multi-label classification problem. In the future, we plan to benchmark it against existing algorithms on established multi-label classification datasets.
\bibliographystyle{IEEEbib}
\section{Introduction}
\IEEEPARstart{W}e study concurrent robot behaviors encoded as Behavior Trees (BTs)~\cite{BTBook}. Robotics applications of BTs span from manipulation~\cite{rovida2017extended, zhang2019ikbt,csiszar2017behavior}
to non-expert programming~\cite{coronado2018development,paxton2017costar,shepherd2018engineering}. Other works include task planning \cite{neufeld2018hybrid}, human-robot interaction~\cite{kim2018architecture, axelsson2019modelling,ghadirzadeh2020human}, learning~\cite{sprague2018adding, banerjee2018autonomous,hannaford2019hidden,scheidelearning}, UAV~\cite{safronov2019asynchronous,sprague2018improving, ogren, bruggemann2017analysing, crofts2017behaviour,molina2020building}, multi-robot systems~\cite{biggar2020framework,tadewos2019fly,kuckling2018behavior,ozkahraman2020combining}, and system analysis~\cite{biggar2020principled,de2020reconfigurable,ogren2020convergence}.
Boston Dynamics's Spot uses BTs to model the robot's mission~\cite{spot}, and the Navigation Stack and the task planner of ROS2 use BTs to encode the high-level robot behavior~\cite{macenski2020marathon,PlanSys2}.
The particular syntax and semantics of BTs, which we will describe in the paper, allow creating complex behaviors easily by composing simpler ones.
A BT designer can compose behaviors in different ways, each with its own semantics. The \emph{Parallel} composition allows a designer to describe the concurrent execution of several sub-behaviors. In BTs, this composition scales better as the complexity of the behavior increases, compared to other control architectures where the system's complexity results from the product of its sub-systems' complexities~\cite{BTBook}. However, the Parallel composition still entails concurrency issues (e.g., race conditions, starvation, deadlocks, etc.), like any other control architecture. As a result, in the BT literature such composition gets applied only to orthogonal tasks, where the designer guarantees the absence of conflicts.
In this paper, we show how we can extend the use of BTs to address the concurrency issues above. In particular, we show how to obtain synchronized concurrent BT execution, exclusive access to resources, and predictable execution times. We define performance measures and analyze how they are affected by different design choices. We also provide reproducible experimental validation by publishing the implementation of our BT library, code, and data related to our experiments.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.6\columnwidth}
\centering
\includegraphics[width=\columnwidth]{example-intro-incorrect}
\caption{Flawed BT execution. The conflicting actions \emph{Look for Person} and \emph{Look for Landmarks} may be executed concurrently.}
\label{fig:intro:incorrect}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.7\columnwidth}
\centering
\includegraphics[width=\columnwidth]{example-intro-correct}
\caption{Correct BT execution using the classical BT nodes.}
\label{fig:intro:correct}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.6\columnwidth}
\centering
\includegraphics[width=\columnwidth]{example-intro-new}
\caption{Correct BT execution using the proposed BT nodes.}
\label{fig:intro:proposed}
\end{subfigure}
\caption{Example of flawed and correct concurrent BT execution. The gaze and navigation behaviors are executed in parallel.}
\end{figure*}
Software developers from the video game industry conceived BTs to model the behaviors of Non-Player Characters (NPCs)~\cite{isla2005handling,millington2009artificial}.
Controlled Hybrid Systems (CHSs)~\cite{lunze2009handbook}, which combine continuous and discrete dynamics, were a natural formulation of NPCs' execution and control. However, it turned out that CHSs were not fit to program an NPC, as CHSs implement the discrete dynamics in the form of Finite State Machines (FSMs). The developers realized that FSMs scale poorly, hamper modification and reuse, and have proved inadequate to represent complex deliberation~\cite{sloan2011feasibility,champandard10reasons}. In this context, the issues lie in the transitions of FSMs, which implement a \emph{one-way control transfer}, similar to a GOTO statement of computer programming. In 1968, Edsger Dijkstra observed that \say{\emph{the GOTO statement as it stands is just too primitive; it is too much an invitation to make a mess of one's program [...]
the GOTO statement should be abolished from all higher-level programming languages}}~\cite{dijkstra1968go}. In the computer programming community, Dijkstra's observation started a controversial debate about the expressive power of GOTO statements~\cite{rubin1987goto, benander1990empirical, ashenhurst1987goto}. Eventually, the community followed Dijkstra's advice, and modern software no longer contains GOTO statements. However, we can still find GOTO statements everywhere in the form of \emph{transitions} in FSMs and therefore in CHSs.
The robotics community identified similarities in the desired behaviors for NPCs and robots. In particular, both NPCs and robots act in an unpredictable and dynamic environment; both need to encode different behaviors in a hierarchy; both need a compact representation of their behaviors. BTs quickly became a modular, flexible, and reusable alternative to FSMs to model robot behaviors~\cite{iovino2020survey}. Moreover, the robotics community showed how BTs generalize successful robot control architectures such
as the Subsumption Architecture and the Teleo-Reactive
Paradigm~\cite{BTBook}.
Using BTs, the designer can compose simple robot behaviors using the so-called \emph{control flow nodes}: Sequence, Fallback, Decorator, and Parallel. As mentioned, the Parallel composition of independent behaviors can give rise to several concurrency problems in any modeling language, and BTs are no exception. However, the parallel composition of BTs remains less sensitive to dimensionality problems than classical FSMs~\cite{BTBook}.
Another similarity between computer programming and robot behavior design lies in the concurrent execution of multiple tasks. A computer programmer adopts synchronization techniques to achieve \emph{execution synchronization}, where a process has to wait until another process provides the necessary data, or \emph{data synchronization}, where multiple processes have to use or access a critical resource, and a correct synchronization strategy ensures that only one process at a time can access it~\cite{taubenfeld2006synchronization}.
The solutions adopted in concurrent programming made a tremendous impact on software development. Another desired quality of a process, particularly in real-time systems, is \emph{predictability}, that is, the ability to ensure the execution of a process regardless of outside factors that may jeopardize it. In other words, the application will behave as intended in terms of functionality, performance, and response time.
Nowadays, robot software follows a modular approach, in which computation and control use the concurrent execution of interconnected modules. This philosophy is promoted by middlewares like ROS and has become a de facto standard in robotics. The presence of concurrent behaviors requires facing the same issues affecting concurrent programming, which deals with the execution of several concurrent processes that need to be synchronized to achieve a task or simply to avoid conflicting with one another.
In this context, proper synchronization and resource management become beneficial to achieve faster and reproducible behaviors, especially at the developing stage, where actions may run at a different speed in the real world and in a simulation environment. Increasing predictability reduces the difference between simulated and real-world robot executions and increases the likelihood of task completion within a given time constraint. We believe that proper parallel task executions will bring benefits in terms of efficiency and multitasking to BTs in a similar way as in computer programming.
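To make the race condition concrete, consider a toy Python sketch (illustrative only; the class names, resource labels, and the simple acquire-on-tick policy are our own, not the authors' BT library or the proposed synchronization nodes) in which two actions under a Parallel node compete for the same resource:

```python
SUCCESS, RUNNING, FAILURE = "SUCCESS", "RUNNING", "FAILURE"

class Action:
    """Toy action: succeeds after a fixed number of ticks and holds a
    named resource (e.g. the robot's head) while RUNNING."""
    def __init__(self, name, resource, ticks_needed):
        self.name, self.resource, self.left = name, resource, ticks_needed
    def tick(self, held):
        owner = held.get(self.resource)
        if owner not in (None, self.name):
            return FAILURE                 # resource already taken: conflict
        held[self.resource] = self.name
        self.left -= 1
        if self.left <= 0:
            del held[self.resource]        # release on completion
            return SUCCESS
        return RUNNING

class Parallel:
    """Toy Parallel node: SUCCESS once all children succeeded, FAILURE as
    soon as any child fails, RUNNING otherwise."""
    def __init__(self, children):
        self.children = children
        self.done = set()
    def tick(self, held):
        for child in self.children:
            if child.name in self.done:
                continue
            status = child.tick(held)
            if status == SUCCESS:
                self.done.add(child.name)
            elif status == FAILURE:
                return FAILURE
        return SUCCESS if len(self.done) == len(self.children) else RUNNING

# Conflicting case: both actions need the head -> the Parallel node fails.
conflict = Parallel([Action("LookForPerson", "head", 2),
                     Action("LookForLandmarks", "head", 2)])
print(conflict.tick({}))              # FAILURE

# Orthogonal case: distinct resources -> succeeds after two ticks.
held = {}
ok = Parallel([Action("LookForPerson", "head", 2),
               Action("GoToPerson", "base", 2)])
print(ok.tick(held), ok.tick(held))   # RUNNING SUCCESS
```

Without an explicit synchronization mechanism, the outcome depends entirely on which action happens to grab the shared resource first; the nodes proposed in this paper make that arbitration explicit instead.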
The requirement for concurrent or predictable behaviors may also come from non-technical specifications. For example, the Human-Robot Interaction (HRI) community has stressed the importance of synchronized robot behaviors in several contexts. The literature shows evidence of more \say{believable} robot behaviors when they exhibit contingent movements~\cite{fischer2013impact} (e.g., gaze and arm movement when giving directions), coordinated robot and human movements~\cite{lee2011vision} (e.g., a rehabilitation robot moves at the patient's speed), and coordinated gestures and dialogues~\cite{kopp2006towards} (e.g., the robot's gestures synchronized during dialogues).
In this paper, we extend our previous works~\cite{colledanchise2018improving, colledanchise2019analysis}: we define Concurrent BTs (CBTs), where nodes expose information regarding progress and resources used; we define and implement two new control flow nodes that enable progress and data synchronization; and we show how to improve behavior predictability. In addition, we introduce measures to assess execution performance and show how design choices affect them.
To clarify what we mean by these concepts, we consider the following task: \emph{A robot has to follow a person}.
The robot's behavior can be encoded as the concurrent execution of two sub-behaviors: \emph{navigation} and \emph{gazing}. However, the navigation behavior is such that, whenever the robot gets lost, it moves the head, looking for visual landmarks to localize itself.
Figure~\ref{fig:intro:incorrect} encodes the BT of this behavior.
At this stage, the semantics of BTs are not required to understand the problem.
Note how, whenever the robot gets lost, two actions require the use of the head: the actions \emph{Look for Landmarks} and \emph{Look for Person}. To avoid conflicts, we have to modify the BT to be as in Figure~\ref{fig:intro:correct}. However, such a BT goes against the separation of roles, as the BT designer needs to know the actions' resources beforehand. In this paper, we propose two control flow nodes that allow the synchronization of such a BT in a less invasive fashion, as in Figure~\ref{fig:intro:proposed}.
The concurrent execution of legacy BTs represents another use case for such a synchronization mechanism.
Clearly, the design of a single action that performs both tasks represents a \say{better} synchronized solution. However, creating the single action for composite behaviors jeopardizes the advantages of BTs in terms of modular and reusable behavior.
To summarize, in this paper, we provide an extension of our previous work \cite{colledanchise2018improving} and \cite{colledanchise2019analysis}, where the new results are:
\begin{itemize}
\item We moved the synchronization logic from the parallel node to the decorator nodes. This enables a higher expressiveness of the synchronization.
\item We provide reproducible experimental validation on simulated data.
\item We provide experimental validation on three real robots.
\item We compared our approach with two alternative task synchronization techniques.
\item We provide the code to extend an existing software library of BTs, and its related GUI, to encode the proposed synchronization nodes.
\item We provide a theoretical discussion of our approach and identify the assumptions under which the property on BTs are not jeopardized by the synchronization.
\end{itemize}
The outline of this paper is as follows: We present the related work and compare it with our approach in Section~\ref{sec:related}. We overview the background in BTs in Section~\ref{sec:background}. We present the first contribution of this paper on BT synchronization in Section~\ref{sec:synchronizaton}. Then we present the second contribution on performance measures in Section~\ref{sec:measures}. In Section~\ref{sec:experimental} we provide experimental validation with numerical experiments to gather statistically significant data and compare our approach with existing ones. We made these experiments reproducible. We also validated our approach on real robots in three different setups to show the applicability of the approach to real problems. We describe the software library we developed, and we refer to the online repository in Section~\ref{sec:library}. We study the new control nodes from a theoretical standpoint and study how design choices affect the performances in Section~\ref{sec:analysis}. We conclude the paper in Section~\ref{sec:conclusions}.
\section{Related Work}
\label{sec:related}
This section shows how BT designers in the community exploit the parallel composition, and we highlight the differences with the proposed approach. We do not compare our approach with generic scheduling algorithms~\cite{brunner2019autonomous}, as our interest lies in the concurrent behaviors encoded as BTs.
The parallel composition has found relatively little use, compared to the other compositions, due to the intrinsic concurrency issues similar to the ones of computer programming, such as race conditions and deadlocks. Current applications that make use of the parallel composition work under the assumption that sub-BTs lie on orthogonal state spaces \cite{champandard2007enabling, rovida2017extended} or that sub-BTs executed in parallel have a predefined priority assigned~\cite{weber2010reactive} where, in conflicting situations, the sub-BTs with the lower priority stops. Other applications impose a mutual exclusion of actions in sub-BTs whenever they have potential conflicts (e.g., sending commands to the same actuator)~\cite{BTBook} or
they assume that sub-BTs that are executed in parallel are not in conflict by design.
The parallel composition found large use in the BT-based task planner \emph{A Behavior Language} (ABL)~\cite{mateas2002behavior} and in its further developments. ABL was originally designed for the game \emph{Fa\c{c}ade}, and it has received attention for
its ability to handle planning and acting at different deliberation layers, in particular, in Real-Time Strategy games~\cite{weber2010reactive}.
ABL executes sub-BTs in parallel and resolves conflicts between multiple concurrent actions by defining a fixed priority order. This solution threatens the reusability and modularity of BTs and introduces a latent hierarchy in the BT.
The parallel composition found use also in multi-robot applications, both with centralized~\cite{agis2020event} and distributed fashions~\cite{colledanchise2016advantages,yang2020hierarchical}, resulting in improved fault tolerance and other performances.
In these applications, the parallel node involves multiple robots, each assigned to a specific task by a task-assignment algorithm, which ensures the absence of conflicts.
None of the existing works in the BT literature adequately addresses the synchronization issues that arise when using a parallel BT node. They assume or impose strict constraints on the actions executed and often introduce undesired latent hierarchies that are difficult to debug.
A recent work~\cite{rovida2018motion} proposed BTs for executing actions
in parallel, even when they lie on the same state space (e.g.,
they use the same robot arm). The authors implement the coordination mechanism with a BT that activates and deactivates motion primitives based on their pre-conditions. Such a framework prevents multiple actions from accessing a critical resource concurrently. In our work, we are interested in synchronizing the progress of actions that a BT can execute concurrently.
We address the issues above by defining BT nodes that expose information regarding progress and resource usage. We define an absolute and a relative synchronized parallel BT node execution and a resource handling mechanism. We provide a set of statistically meaningful experiments and real-robot executions. We also provide an extension to the software library to obtain such synchronizations and real-robot examples. This makes our paper fundamentally different from the ones presented above and the BT literature.
In our previous work~\cite{colledanchise2018improving,colledanchise2019analysis}, we extended the semantics of the parallel node to allow synchronization. Figure~\ref{fig:rw:bt:old} shows an example of a synchronized BT using that approach. In this paper, we go beyond our previous work by moving the synchronization logic inside a decorator node, as in Figure~\ref{fig:rw:bt:new}. This allows the synchronization of deeper branches of the BT, as in Figure~\ref{fig:ex:absolute:bt:complex}, and multiple cross synchronizations. In Section~\ref{sec:experimental}, we will also show a synchronization experiment possible only with the new semantics.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.4\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{experimentCart-before}
\caption{BTs synchronization using the previous formulation.}
\label{fig:rw:bt:old}
\end{subfigure}
\begin{subfigure}[t]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth]{experimentCart-after}
\caption{BTs synchronization using the proposed formulation.}
\label{fig:rw:bt:new}
\end{subfigure}
\caption{BT synchronization using the previous~\cite{colledanchise2018improving} (left) and the proposed formulation (right). }
\label{fig:rw:bt:oldvsnew}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{example1-after-better}
\caption{A more complex version of the BT for Example~\ref{ex:absolute} allowed by the new formulation only.}
\label{fig:ex:absolute:bt:complex}
\end{figure}
\newpage
\section{Background}
\label{sec:background}
This section briefly presents the classical and the state-space formulation of BTs. A detailed description of BTs is available in the literature~\cite{BTBook}.
\subsection{Classical Formulation of Behavior Trees}
\label{sec:background.BT}
A BT is a graphical modeling language that represents actions orchestration. It is a directed rooted tree where the internal nodes represent behavior compositions and leaf nodes represent actuation or sensing operations. We follow the canonical nomenclature for root, parent, and child nodes.
The children of a BT node are placed below it, as in Figure~\ref{fig:ex:absolute:bt:complex}, and they are executed in the order from left to right. The execution of a BT begins from the root node. It sends \emph{ticks}, which are activation signals, with a given frequency to its child. A node in the tree is executed if and only if it receives ticks. When the node no longer receives ticks, its execution stops. The child returns to the parent a status, which can be either \emph{Success}, \emph{Running}, or \emph{Failure} according to the node's logic. Below we present the most common BT nodes and their logic.
In the classical representation, there are four operator nodes (Fallback, Sequence, Parallel, and Decorator) and two execution nodes (Action and Condition). There exist additional operators, but we will not use them in this paper.
\paragraph*{Sequence}
When a Sequence node receives ticks, it routes them to its children in order from the left. It returns Failure or Running whenever a child returns Failure or Running, respectively. It returns Success whenever all the children return Success. When child $i$ returns Running or Failure, the Sequence node does not send ticks to the next child (if any) but keeps ticking all the children up to child $i$.
The Sequence node is graphically represented by a square with the label \say{$\rightarrow$}, as in Figure~\ref{fig:ex:absolute:bt:complex}, and its pseudocode is described in Algorithm~\ref{bts:alg:sequence}.
\begin{algorithm2e}[h!]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\For{$i \gets 1$ \KwSty{to} $N$}
{
\ArgSty{childStatus} $\gets$ \ArgSty{child($i$)}.\FuncSty{Tick()}\\
\uIf{\ArgSty{childStatus} $=$ \ArgSty{Running}}
{
\Return{Running}
}
\ElseIf{\ArgSty{childStatus} $=$ \ArgSty{Failure}}
{
\Return{Failure}
}
}
\Return{Success}
}
\caption{Pseudocode of a Sequence operator with $N$ children}
\label{bts:alg:sequence}
\end{algorithm2e}
\paragraph*{Fallback}
When a Fallback node receives ticks, it routes them to its children in order from the left. It returns Success or Running whenever a child returns Success or Running, respectively. It returns Failure whenever all the children return Failure. When child $i$ returns Running or Success, the Fallback node does not send ticks to the next child (if any) but keeps ticking all the children up to child $i$.
The Fallback node is represented by a square with the label \say{$?$}, as in Figure~\ref{fig:ex:absolute:bt:complex}, and its pseudocode is described in Algorithm~\ref{bts:alg:fallback}.
\begin{algorithm2e}[h!]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\For{$i \gets 1$ \KwSty{to} $N$}
{
\ArgSty{childStatus} $\gets$ \ArgSty{child($i$)}.\FuncSty{Tick()}\\
\uIf{\ArgSty{childStatus} $=$ \ArgSty{Running}}
{
\Return{Running}
}
\ElseIf{\ArgSty{childStatus} $=$ \ArgSty{Success}}
{
\Return{Success}
}
}
\Return{Failure}
}
\caption{Pseudocode of a Fallback operator with $N$ children}
\label{bts:alg:fallback}
\end{algorithm2e}
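To make the Sequence and Fallback semantics above concrete, the following is a minimal executable sketch in Python mirroring Algorithms~\ref{bts:alg:sequence} and~\ref{bts:alg:fallback}. The class and function names are ours, for illustration only, and do not come from any specific BT library.

```python
# Minimal, illustrative Python sketch of the Sequence and Fallback tick logic.
SUCCESS, FAILURE, RUNNING = "Success", "Failure", "Running"

class Sequence:
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:      # Running or Failure stops the traversal
                return status
        return SUCCESS                 # all children succeeded

class Fallback:
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:      # Running or Success stops the traversal
                return status
        return FAILURE                 # all children failed

class Stub:
    """Leaf node that always returns a fixed status (for illustration)."""
    def __init__(self, status):
        self.status = status
    def tick(self):
        return self.status
```

For instance, `Sequence([Stub(SUCCESS), Stub(RUNNING)]).tick()` returns Running, while `Fallback([Stub(FAILURE), Stub(SUCCESS)]).tick()` returns Success, matching the node logic described above.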
\paragraph*{Parallel}
When the Parallel node receives ticks, it routes them to all its children. It returns Success if at least $M$ children return Success (where $M \leq N$ is a user-defined threshold), it returns Failure if more than $N - M$ children return Failure, and it returns Running otherwise.
The Parallel node is graphically represented by a square with the label \say{$\rightrightarrows$}, as in Figure~\ref{fig:ex:absolute:bt:complex}, and its pseudocode is described in Algorithm~\ref{bts:alg:parallel}.
\begin{algorithm2e}[h!]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\ForAll{$i \gets 1$ \KwSty{to} $N$}
{
\ArgSty{childStatus}[i] $\gets$ \ArgSty{child($i$)}.\FuncSty{Tick()}\\
}
\uIf{$\Sigma_{i: \ArgSty{childStatus}[i]=Success} \geq M$}
{
\Return{Success}
}
\ElseIf{$\Sigma_{i: \ArgSty{childStatus}[i] =Failure} > N - M $}
{
\Return{Failure}
}\Else{
\Return{Running}
}
}
\caption{Pseudocode of a Parallel operator with $N$ children}
\label{bts:alg:parallel}
\end{algorithm2e}
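The $M$-of-$N$ semantics of the Parallel node can be sketched in a few lines of Python. This is our own minimal illustration of Algorithm~\ref{bts:alg:parallel}, not an excerpt from a BT library.

```python
# Illustrative Python sketch of the M-of-N Parallel tick logic.
SUCCESS, FAILURE, RUNNING = "Success", "Failure", "Running"

class Parallel:
    def __init__(self, children, m):
        self.children = children   # the N children
        self.m = m                 # success threshold, M <= N

    def tick(self):
        statuses = [child.tick() for child in self.children]  # tick all children
        n = len(self.children)
        if statuses.count(SUCCESS) >= self.m:
            return SUCCESS
        if statuses.count(FAILURE) > n - self.m:   # M successes no longer possible
            return FAILURE
        return RUNNING

class Stub:
    """Leaf node that always returns a fixed status (for illustration)."""
    def __init__(self, status):
        self.status = status
    def tick(self):
        return self.status
```

With $N = 3$ and $M = 2$, two successes yield Success, two failures yield Failure (since $2 > N - M = 1$), and anything else yields Running.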
\paragraph*{Decorator}
A Decorator node represents a particular control flow node with only one child. When a Decorator node receives ticks, it routes them to its child according to a custom-made policy. It returns a status to its parent according to a custom-made policy. The Decorator node is graphically represented as a rhombus, as in Figure~\ref{fig:ex:absolute:bt:complex}. BT designers use decorator nodes to introduce additional semantics to the child node's execution or to change the return status sent to the parent node.
\paragraph*{Action}
An action performs some operations as long as it receives ticks. It returns Success whenever the operations are completed and Failure if the operations cannot be completed. It returns Running otherwise. When a running Action no longer receives ticks, its execution stops.
An Action node is graphically represented by a rectangle, as in Figure~\ref{fig:ex:absolute:bt:complex}, and its pseudocode is described in Algorithm~\ref{bts:alg:action}.
\begin{algorithm2e}[h]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\ArgSty{DoAPieceOfComputation()} \\
\uIf{action-succeeded}
{
\Return{Success}
}
\ElseIf{action-failed}
{
\Return{Failure}
}
\Else
{
\Return{Running}
}
}
\caption{Pseudocode of a BT Action}
\label{bts:alg:action}
\end{algorithm2e}
\paragraph*{Condition}
Whenever a Condition node receives ticks, it checks if a proposition is satisfied or not. It returns Success or Failure accordingly. A Condition is graphically represented by an ellipse, as in Figure~\ref{fig:ex:absolute:bt:complex}, and its pseudocode is described in Algorithm~\ref{bts:alg:condition}.
\begin{algorithm2e}[h!]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\uIf{condition-true}
{
\Return{Success}
}
\Else
{
\Return{Failure}
}
}
\caption{Pseudocode of a BT Condition}
\label{bts:alg:condition}
\end{algorithm2e}
\vspace*{-1em}
\subsection{Control Flow Nodes With Memory}
To avoid
the unwanted re-execution of some nodes, and save computational resources, the BT community developed control flow nodes with memory~\cite{millington2009artificial}.
Control flow nodes with memory keep stored which child has returned Success or Failure, avoiding their re-execution. Nodes with memory are graphically represented with the addition
of the symbol \say{$*$} as superscript (e.g., a Sequence node with memory is graphically represented
by a box with the label \say{$\rightarrow^*$}). The memory is cleared when the parent node returns either
Success or Failure so that, at the next activation, all children are re-considered. Every execution of a control flow node with memory can be obtained
employing the related non-memory control flow node using auxiliary conditions and shared memory~\cite{BTBook}. Provided a shared memory, these nodes become syntactic sugar.
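As an illustration of how memory can be realized, here is a minimal Python sketch of a Sequence with memory: it skips children that already returned Success and clears its memory when it returns Success or Failure. The implementation is our own simplified sketch, using the same Success/Failure/Running convention as above.

```python
# Sketch of a Sequence with memory: succeeded children are not re-ticked, and
# the memory is cleared once the node returns Success or Failure.
SUCCESS, FAILURE, RUNNING = "Success", "Failure", "Running"

class MemorySequence:
    def __init__(self, children):
        self.children = children
        self.start = 0                    # index of the first child not yet succeeded

    def tick(self):
        for i in range(self.start, len(self.children)):
            status = self.children[i].tick()
            if status == RUNNING:
                self.start = i            # remember where to resume
                return RUNNING
            if status == FAILURE:
                self.start = 0            # memory cleared on Failure
                return FAILURE
        self.start = 0                    # memory cleared on Success
        return SUCCESS

class CountingStub:
    """Leaf returning a scripted list of statuses; counts how often it is ticked."""
    def __init__(self, statuses):
        self.statuses = list(statuses)
        self.ticks = 0
    def tick(self):
        self.ticks += 1
        return self.statuses.pop(0) if self.statuses else SUCCESS
```

Ticking `MemorySequence([a, b])` twice, where `a` succeeds immediately and `b` runs once before succeeding, ticks `a` only once, whereas a plain Sequence would tick it on every traversal.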
\subsection{Asynchronous Action Execution}
Algorithm~\ref{bts:alg:action} performs a step of computation at each tick.
It implements an action execution that is synchronous to the ticks' traversal.
However, in a typical robotics system, action
nodes control the robot by sending commands to a robot's interface to execute a particular skill, such as an arm movement or a navigation skill; these skills are often executed by independent components running on a distributed system. Therefore, the action execution gets delegated to different executables that communicate via a middleware.
As discussed in the literature \cite{BTBook,colledanchise2021implementation}, the designer needs to ensure that the skills running in the robot independently from the BT get properly interrupted when the corresponding action no longer receives ticks.
To support preemption and synchronization, BT designers split the action execution into smaller steps, each executed within a \emph{quantum}, that is, a time window during which the action gets executed by the robot asynchronously with respect to the BT. During this time, the action cannot be interrupted. The action starts when the first tick is received, and it proceeds for another quantum only when the next tick is received. At the end of each quantum, a running action yields control back to the BT.
This logic resembles process scheduling, where a scheduler provides computing time to a process and then takes back control to choose the next process to run.
Figure~\ref{fig:stack} shows an example of how a BT action interacts with an external executable that controls the robot. The figure depicts two threads, one that ticks the action node (within the BT executable) and one that controls the robot (within an external executable). When the action node receives a tick from its parent, it pushes a token onto a stack, shared with the external executable that controls the robot. Such executable controls the robot if and only if there is a token in the stack. This behavior is also outlined in the algorithm in Figure~\ref{fig:stack}.
The executable checks the stack periodically: if there is a token in the stack, it consumes it and executes a control step. If the stack is empty, the executable halts the controller execution. If the BT tick frequency is faster than the controller quantum (e.g., twice the thread's frequency), this mechanism ensures that the controller operates without interruptions, but it also guarantees that the controller is halted without delay when ticks are no longer dispatched to the action node (this is achieved by setting the stack size to one).
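A minimal, deterministic Python sketch of this token hand-off follows. In a real system the two sides would run in separate threads (or executables communicating over a middleware); here we simulate one quantum per loop iteration, and all names are illustrative.

```python
# Deterministic sketch of the token-stack hand-off between the BT action node
# and the external controller: the action pushes a token when ticked; the
# controller consumes one token per quantum and halts when the stack is empty.
from collections import deque

class TokenStack:
    def __init__(self, size=1):
        self.size = size               # size one gives halt-without-delay behavior
        self.stack = deque()

    def push(self):                    # called by the BT action node on each tick
        if len(self.stack) < self.size:
            self.stack.append("token")

    def consume(self):                 # called by the controller at each quantum
        if self.stack:
            self.stack.pop()
            return True                # execute one control step
        return False                   # halt the controller

def simulate(tick_pattern, stack):
    """tick_pattern[k] is True if the BT ticked the action during quantum k.
    Returns, per quantum, whether the controller executed a control step."""
    steps = []
    for ticked in tick_pattern:
        if ticked:
            stack.push()
        steps.append(stack.consume())
    return steps
```

With a stack of size one, the controller runs exactly in the quanta where the action was ticked: `simulate([True, True, False, True], TokenStack())` yields `[True, True, False, True]`, i.e., the controller halts in the quantum where ticks stop.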
It is clear that, in BTs, the tick frequency plays an important role in action preemption. To allow \say{fast} preemption, the quantum of actions should be short and the tick frequency should be high. Intuitively, a blocking action, which cannot be interrupted throughout its entire execution, will keep control of the robot even if it no longer receives ticks. BTs orchestrate behaviors at a relatively high level of abstraction.
In general, to avoid preemption delay, the time spanned between a tick and the next one must be shorter than the smallest action quantum in the BT.
In our experience, a quantum of $\Delta T \approx 100\,\mathrm{ms}$ (i.e., an update frequency of $10\,\mathrm{Hz}$) and a tick frequency of $20\,\mathrm{Hz}$ represent a good trade-off between action responsiveness and required tick traversal frequency.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{stack}
\caption{Example of asynchronous external action execution. $f_{BT}$ is the tick frequency and $\Delta T$ is the quantum period.}
\label{fig:stack}
\end{figure}
The interaction between the BT and the executable is designed to ensure that the robot controller continues to operate if the BT ticks the action node without interruptions. Otherwise, i.e. if no ticks are received within the assigned quantum, the controller is halted.
\section{Concurrent BTs}
\label{sec:synchronizaton}
This section introduces the first contribution of the paper. We present Concurrent BTs (CBTs), an extension of classical BTs that adds the execution progress and the allocated resources to the formulation. We extend our previous work on parallel synchronization of BTs~\cite{colledanchise2018improving, colledanchise2019analysis} employing decorator nodes that allow progress and resource synchronization. Here we define these nodes by their pseudocode. We provide the source code for some of the examples.\footnote{\url{https://github.com/miccol/tro2021-code}}
In Section~\ref{sec:analysis} we provide the formal definition and the state-space formulation of the nodes. We also prove that, under some assumptions, the proposed nodes do not jeopardize the BT properties.
\paragraph*{Concurrent BTs}
A CBT is a BT whose nodes expose information about the execution progress $p(x_k)$, the resources required $Q(x_k)$, and the priority $\rho(x_k)$ at the system's state $x_k \in \mathbb{R}^n$. In addition, the nodes contain a user-defined function $g(x_k)$ that represents the priority increase applied whenever the execution of a node gets denied because a resource is not available; we will present the details in this section.
In Section~\ref{sec:analysis} we provide the formal definition of CBTs and the formulation of the Sequence and Fallback composition.
\paragraph*{ProgressSynchronization Decorator}
When a ProgressSynchronization Decorator receives a tick, it ticks its child if the child's progress at the current state $p(x_k)$ does not exceed the current barrier $b(x_k)$. The decorator returns Success to the parent if the child returns Success and Failure if the child returns Failure; it returns Running otherwise.
The ProgressSynchronization Decorator node is graphically represented by a rhombus with the label \say{$\delta^P_b$}, as in Figure~\ref{fig:pdec}, and its pseudocode is described in Algorithm~\ref{alg:progress}.
We will calculate the barrier $b(x_k)$ either in an absolute or relative fashion, as we will show in Sections~\ref{PM:AS} and~\ref{PM:RS}.
\begin{algorithm2e}[h!]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\If{\ArgSty{child}.p($x_k$) $\leq$ \ArgSty{b($x_k$)} }
{
\ArgSty{childStatus} $\gets$ \ArgSty{child.Tick()}\\
\Return{childStatus}
}
\Return{Running}
}
\caption{Pseudocode of a ProgressSynchronization Decorator.}
\label{alg:progress}
\end{algorithm2e}
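The barrier-gating logic of Algorithm~\ref{alg:progress} can be made executable in a few lines. The following Python sketch is our own illustration, with a fixed barrier function and a leaf action whose progress grows linearly per tick, as in the examples later in this section.

```python
# Minimal executable sketch of the ProgressSynchronization Decorator: the
# child is ticked only while its progress has not passed the current barrier.
SUCCESS, FAILURE, RUNNING = "Success", "Failure", "Running"

class ProgressSyncDecorator:
    def __init__(self, child, barrier_fn):
        self.child = child
        self.barrier_fn = barrier_fn       # returns the current barrier b(x_k)

    def tick(self):
        if self.child.progress() <= self.barrier_fn():
            return self.child.tick()       # child may advance
        return RUNNING                     # child waits at the barrier

class LinearAction:
    """Action whose progress grows by `rate` per tick (a linear profile)."""
    def __init__(self, rate):
        self.rate = rate
        self.p = 0.0
    def progress(self):
        return self.p
    def tick(self):
        self.p = min(1.0, self.p + self.rate)
        return SUCCESS if self.p >= 1.0 else RUNNING
```

With a fixed barrier of $0.5$ and a rate of $0.125$, the child advances until its progress first passes the barrier ($0.625$ after five ticks) and then receives no further ticks, so the decorator keeps returning Running.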
\vspace{-1em}
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.32\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth]{abs-progress-intro}
\caption{ Absolute synchronization. $B$ indicates the set of barriers.}
\label{fig:pdec}
\end{subfigure}
\begin{subfigure}[t]{0.32\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth]{rel-progress-intro}
\caption{ Relative synchronization. $\Delta$ indicates the threshold value.}
\end{subfigure}
\begin{subfigure}[t]{0.32\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth]{resource-intro}
\caption{ Resource Synchronization Decorator node.}
\label{fig:rdec}
\end{subfigure}
\caption{Graphical representation of a Synchronization Decorator nodes.}
\end{figure}
\newpage
\paragraph*{ResourceSynchronization Decorator}
When a ResourceSynchronization Decorator receives a tick, it ticks its child if the resources required by the child $i$, $Q_i(x_k)$, are either available or already assigned to that child. When the decorator ticks a child, it also assigns all the resources in $Q_i(x_k)$ to that child. Whenever the child no longer requires a resource in $Q_i(x_k)$, such a resource gets released.
The decorator returns Success to the parent if the child returns Success and Failure if the child returns Failure. It returns Running if either the child returns Running or the child is waiting for a resource. $R$ is the set of all the resources of the system. The decorator also keeps a priority value for the subtree accessing a resource, to avoid starvation, as we prove in Section~\ref{sec:analysis}. Whenever the decorator receives a tick and does not send it to the child (as the resources are not available), the priority value evolves according to $g(x_k)$. In Section~\ref{sec:experimental} we will show two examples that highlight how the choice of the function $g$ avoids starvation. The BT keeps track of the node currently using a resource $q$ via the function $\alpha(q)$. All the resource decorator nodes share the value of such a function.
The ResourceSynchronization Decorator node is graphically represented by a rhombus with the label \say{$\delta^R$}, as in Figure~\ref{fig:rdec}. Algorithm~\ref{alg:resource} describes its pseudocode: for each resource $q$ required by the decorator's child (Line~2), if the resource is assigned to another node (Line~3), then the priority of the child to get the resource $q$ increases according to the function $g$.
The algorithm then assigns the resources to the child with the highest priority (Lines 7-9) and releases a resource held by the child whenever the child no longer requires it (Lines 10-11).
\begin{remark}
We are not interested in a scheduler that fairly assigns the resources as it is done, for example, in the Operating Systems. The designer may implement a fair scheduling policy and encode it in the function $g$.
However, if an action has always higher priority than another one to get a resource, this should be modeled via a Sequence or Fallback composition.
\end{remark}
\vspace*{-1em}
\begin{algorithm2e}[h!]
\SetKwProg{Fn}{Function}{}{}
\Fn{Tick()}
{
\For{\ArgSty{q} \KwSty{in} \ArgSty{child.Q($x_k$)}}{
\If{(\ArgSty{$\alpha(q)$} \KwSty{not} = \ArgSty{child}) \KwSty{and} (\ArgSty{$\alpha(q)$} \KwSty{not} = \ArgSty{$\emptyset$}) }{
\ArgSty{child.$\rho(x_k)$} $\gets$ \ArgSty{child.$\rho(x_{k-1})$} + \ArgSty{$g(x_k)$}\\
\Return{Running}
}
}
\For{\ArgSty{q} \KwSty{in} \ArgSty{R}}{
\If{\ArgSty{q} \KwSty{in} \ArgSty{child.Q($x_k$)}}{
\If{$\alpha(q)$ \KwSty{not} = \ArgSty{child} \KwSty{and} $child.\rho(x_k) > \alpha(q).\rho(x_k)$}{\ArgSty{$\alpha(q)$} $\gets$ \ArgSty{child}}
} \ElseIf{$\alpha(q)$ = \ArgSty{child}}{
\ArgSty{$\alpha(q)$} $\gets$ \ArgSty{$\emptyset$}
}
}
\ArgSty{childStatus} $\gets$ \ArgSty{child.Tick()}\\
\Return{childStatus}
}
\caption{Pseudocode of a ResourceSynchronization Decorator.}
\label{alg:resource}
\end{algorithm2e}
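The following Python sketch illustrates the resource-arbitration idea behind Algorithm~\ref{alg:resource}. It is a simplification we wrote for illustration: a waiting node only accrues priority (with a constant increase $g$), while the full algorithm additionally reassigns a resource to the highest-priority contender; the shared map `alpha` plays the role of $\alpha(q)$.

```python
# Executable sketch of resource arbitration between ResourceSynchronization
# decorators sharing the assignment map alpha (resource -> current holder).
SUCCESS, FAILURE, RUNNING = "Success", "Failure", "Running"

alpha = {}   # shared across all resource decorators, as in the paper

class ResourceSyncDecorator:
    def __init__(self, child, resources, g=1.0):
        self.child = child
        self.resources = set(resources)   # Q: resources the child needs
        self.rho = 0.0                    # priority
        self.g = g                        # priority increase when access is denied

    def tick(self):
        # deny the tick if any required resource is held by another decorator
        for q in self.resources:
            holder = alpha.get(q)
            if holder is not None and holder is not self:
                self.rho += self.g        # starvation avoidance: priority grows
                return RUNNING
        for q in self.resources:          # acquire the free resources
            alpha[q] = self
        status = self.child.tick()
        if status != RUNNING:             # release on completion
            for q in self.resources:
                if alpha.get(q) is self:
                    del alpha[q]
        return status

class TimedAction:
    """Leaf that runs for `duration` ticks, then succeeds (for illustration)."""
    def __init__(self, duration):
        self.left = duration
    def tick(self):
        self.left -= 1
        return SUCCESS if self.left <= 0 else RUNNING
```

With two decorators contending for the resource \say{head}, the second one waits (its priority growing by $g$ per denied tick) until the first releases the resource, after which both complete and all resources are freed.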
\clearpage
\subsection{Absolute Progress Synchronization}
\label{PM:AS}
A BT achieves an absolute progress synchronization by setting, a priori, a finite ordered set $\mathcal{B}$ of progress values. These values are used as \emph{barriers} at the task level~\cite{taubenfeld2006synchronization}. Whenever a child of an AbsoluteProgressSync Decorator has progress equal to or greater than a progress barrier in $\mathcal{B}$,
it no longer receives ticks until all the other nodes whose parent is an instance of such a decorator have progress equal to or greater than the barrier's value.
We now present a use case example for the absolute progress synchronization, taking inspiration from the literature~\cite{chitta2010planning,hern2018boston}.
\begin{example}[Absolute]
\label{ex:absolute}
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.45\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{example1-before}
\caption{Without synchronization.}
\label{fig:ex:absolute:bt:unsync}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth]{example1-after}
\caption{With synchronization.}
\label{fig:ex:absolute:bt:sync}
\end{subfigure}
\caption{BT encoding the desired behavior of Example~\ref{ex:absolute}}
\label{fig:ex:absolute:bt}
\end{figure}
A robot has to pull a door open. To accomplish this task, the robot must execute two behaviors concurrently: an arm movement behavior to pull the door open and a base movement behavior to make the robot move away from the door while it opens, as in the BT in Figure~\ref{fig:ex:absolute:bt:unsync}.
The progress profiles of the two sub-BTs, \say{Pull Door} $\bt_1$ and \say{Move Away from Door} $\bt_2$, follow the equations below:
\begin{equation}
p_i(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_i(x_{k-1}) + a_i, & \text{otherwise}
\end{cases}
\label{ex:absolute:eq:progress}
\end{equation}
with $a_1 = 0.015$ (Pull Door), $a_2 = 0.01$ (Move Away from Door).
The equations describe a linear progress profile for both behaviors, with the action \say{Pull Door} faster than the action \say{Move Away from Door}.
However, to ensure that the task is correctly executed, the robot must execute the two behaviors above in a synchronized fashion. The BT in Figure~\ref{fig:ex:absolute:bt:sync} encodes such a synchronized behavior, with the following barriers:
\begin{equation}
\mathcal{B} = \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}
\end{equation}
Figure~\ref{fig:ex:absolute:progress} shows the progress profiles of the actions with and without synchronization. We see how, in the synchronized case, the arm movement keeps stopping to wait for the base movement at the progress values defined by the barriers.
\end{example}
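The profiles of Example~\ref{ex:absolute} can be reproduced numerically with a stand-alone sketch of the barrier rule. The simulation below is our own simplified implementation (not the library code), using the rates $a_1 = 0.015$, $a_2 = 0.01$ and the barriers above; a node is ticked only while its progress has not passed the lowest barrier not yet cleared by all nodes.

```python
# Stand-alone simulation of absolute progress synchronization: two linear
# actions gated by a common ordered set of barriers.
def absolute_sync(rates, barriers, steps):
    n = len(rates)
    p = [0.0] * n
    history = []
    for _ in range(steps):
        # the active barrier is the smallest one not yet cleared by everyone
        pending = [b for b in barriers if min(p) < b]
        b = pending[0] if pending else 1.0
        for i in range(n):
            if p[i] <= b:              # below the barrier: the node gets ticked
                p[i] = min(1.0, p[i] + rates[i])
        history.append(tuple(p))
    return history

barriers = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
profiles = absolute_sync([0.015, 0.01], barriers, 120)   # pull door, move away
```

In this simulation the gap between the two progress values never exceeds the barrier spacing plus one step of the faster action (about $0.115$), and both actions reach completion: the faster \say{Pull Door} repeatedly waits at a barrier for the slower \say{Move Away from Door}, as in Figure~\ref{fig:ex:absolute:progress:sync}.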
\begin{figure}[b]
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{profile-absolute-unsync}
\caption{Without synchronization.}
\label{fig:ex:absolute:progress:unsync}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{profile-absolute-sync}
\caption{With synchronization.}
\label{fig:ex:absolute:progress:sync}
\end{subfigure}
\caption{Progress profiles of the actions of Example~\ref{ex:absolute}.}
\label{fig:ex:absolute:progress}
\end{figure}
\newpage
\subsection{Relative Progress Synchronization}
\label{PM:RS}
In this case, synchronization does not follow a common progress indicator but is relative to another node's execution. A BT achieves a relative progress synchronization by setting, a priori, a threshold value $\Delta \in [0, 1]$. Whenever a child of a RelativeProgressSync Decorator exceeds by $\Delta$ the minimum progress among all the other nodes whose parent is an instance of such a decorator, it no longer receives ticks.
We now present a use case example for the relative progress synchronization, taking inspiration from the literature~\cite{fischer2013impact}. We provide an implementation of this example in Section~\ref{sec:experimental}.
\begin{example}[Relative]
\begin{figure}[h!]
\begin{subfigure}[t]{0.45\columnwidth}
\includegraphics[width=0.8\columnwidth]{example2-before}
\caption{Without synchronization.}
\label{fig:ex:relative:bt:unsync}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\columnwidth}
\includegraphics[width=0.8\columnwidth]{example2-after}
\caption{With synchronization.}
\label{fig:ex:relative:bt:sync}
\end{subfigure}
\caption{BT encoding the desired behavior of Example~\ref{ex:relative}}
\label{fig:ex:relative:bt}
\end{figure}
\label{ex:relative}
A service robot has to give directions to visitors in a museum. To make the robot's motions look natural, whenever the robot gives a direction, it points its arm and head in that direction, as in the BT in Figure~\ref{fig:ex:relative:bt:unsync}.
The progress profiles of the two sub-BTs, \say{Arm Movement} $\bt_1$ and \say{Head Movement} $\bt_2$, follow the equations below:
\begin{equation}
p_i(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_i(x_{k-1}) + a_i, & \text{otherwise}
\end{cases}
\label{ex:relative:eq:progress}
\end{equation}
with $a_1 = 0.01$ (Arm), $a_2 = 0.05$ (Head). Figure~\ref{fig:ex:relative:progress:sync} shows the progress profile of the sub-BTs.
The equations describe a linear progress profile for both behaviors, with the action \say{Move Head} faster than the action \say{Move Arm}.
However, the arm and head may require different times to perform the motion, according to the direction to point at. Hence, to look natural, the head movement must follow the arm movement to avoid the unnatural behavior where the robot looks first to a direction and then points at it, or the other way round. The BT in Figure~\ref{fig:ex:relative:bt:sync} encodes such a synchronized behavior, with $\Delta = 0.1$.
Figure~\ref{fig:ex:relative:progress} shows the progress profiles of the actions. We see how the head movement stops when its progress exceeds the arm movement's progress by more than $0.1$, around time step $k=10$.
\begin{figure}[b]
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{profile-relative-unsync}
\caption{Without synchronization.}
\label{fig:ex:relative:progress:unsync}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{profile-relative-sync}
\caption{With synchronization.}
\label{fig:ex:relative:progress:sync}
\end{subfigure}
\caption{Progress profiles of the actions of Example~\ref{ex:relative}.}
\label{fig:ex:relative:progress}
\end{figure}
\end{example}
\paragraph*{Perpetual Actions}
We can use the relative synchronized parallel node also to impose coordination between \emph{perpetual} actions (i.e., actions that, even in the ideal case, do not have a fixed duration and hence no progress profile), as in the following example taken from the literature~\cite{rovida2018motion}. We present an implementation of this example in Section~\ref{sec:experimental}.
\begin{example}[Perpetual Actions]
\label{ex:perpetual}
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.43\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth]{example3-before}
\caption{Without synchronization.}
\label{fig:ex:perpetual:bt:unsync}
\end{subfigure}
\begin{subfigure}[t]{0.43\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth]{example3-after}
\caption{With synchronization.}
\label{fig:ex:perpetual:bt:sync}
\end{subfigure}
\caption{BT encoding the desired behavior of Example~\ref{ex:perpetual}}
\label{fig:ex:perpetual:bt}
\end{figure}
An industrial manipulator has to insert a piston into a cylinder of a motor block.
It is an instance of a typical peg-in-the-hole problem with the
additional challenge of the freely swinging piston rod. To correctly insert the piston, the latter must be kept aligned during the insertion into the cylinder.
We can describe this behavior as a parallel BT composition of two sub-BTs: one for inserting the piston and one for keeping the piston aligned with the cylinder as in Figure~\ref{fig:ex:perpetual:bt:unsync}. The Insert Piston action stops when the end-effector senses that the piston hits the cylinder's base, hence its progress cannot be computed beforehand. Since the inserting behavior and the alignment behavior are executed concurrently, the robot may insert the piston too fast, resulting in a collision between the piston and the cylinder's edge.
Figure~\ref{fig:ex:perpetual:bt:sync} shows a BT of a synchronized execution, where the progress of the piston insertion (Insert Piston action) has only two values, $0$ and $1$. It equals $1$ whenever the piston is being inserted and $0$ otherwise. Similarly, for the alignment behavior (Align Piston action). The insertion behavior stops while the robot is aligning the piston.
\end{example}
\begin{remark}
In real-world scenarios, we can compute the progress either in an open-loop fashion (i.e., the progress is incremented at each tick received) or in a closed-loop fashion (i.e., using the sensors to compute the actual progress).
\end{remark}
\subsection{Resource Synchronization}
This section shows how CBTs can execute multiple actions in parallel without resource conflicts.
This synchronization becomes useful when executing in parallel BTs that have some actions in common, as shown in Example~\ref{ex:resource}, adapted from the BT literature. This often happens whenever we want to execute existing BTs concurrently.
\begin{example}
\label{ex:resource}
The BT in Figure~\ref{fig:ex:resource:bt:unsync} shows a BT for a missile evasion tactic, taken from the literature~\cite{yao2015adaptive}. The BT runs the following sub-BTs in parallel: \say{Turn on Countermeasure Maneuvers}, \say{Countermeasure Maneuvers once}, \say{Dispense Chaff and Flares Every 10 Seconds}, and \say{Turn Clockwise if an Enemy on a Range}.
The actions \say{Countermeasure Maneuver} and \say{Turn Clockwise} run in parallel and both use the aircraft's actuation.
There are cases in which both actions receive ticks, resulting in possible conflicts. We can use the resource decorator node to avoid such conflicts, as in the BT in Figure~\ref{fig:ex:resource:bt:sync}.
The authors of~\cite{yao2015adaptive} did not address the concurrency issue above; however, taking advantage of the composability of BTs, we could make the modification easily.
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth]{example-aircraft-before}
\caption{Without synchronization.}
\label{fig:ex:resource:bt:unsync}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth]{example-aircraft-after}
\caption{With synchronization.}
\label{fig:ex:resource:bt:sync}
\end{subfigure}
\caption{BT encoding the desired behavior of Example~\ref{ex:resource}, adapted from~\cite{yao2015adaptive}}
\label{fig:ex:resource:bt}
\end{figure}
\end{example}
\subsection{Improving Predictability}
\label{sec:predictability}
We can use progress synchronization to impose a given progress profile constraint. The idea is to define an artificial action with the desired progress profile over time, defined a priori, and to place it as a child of an absolute synchronized parallel node together with the actions whose progress is to be constrained.
However, since we can only stop actions (i.e., BTs have no means to speed up actions), such an artificial action can only act as a progress upper bound. This type of progress profile imposition may become very useful at the development stage, since the actions may run at different speeds in the real world and in a simulation environment. Improving predictability reduces the difference between simulated and real-world robot execution.
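The mechanism above can be illustrated with a small Python sketch (our assumption of the semantics, not the paper's code): the constrained action is ticked only while its progress does not exceed the artificial profile's progress. The sigmoid parameters below are arbitrary choices for illustration.

```python
# Sketch: impose a sigmoid progress profile as an upper bound on a
# faster, linear action (assumed semantics of the synchronization).
import math

def sigmoid_profile(k, steepness=0.5, midpoint=10):
    """Artificial 'Profile' action: slow, then fast, then slow again."""
    return 1.0 / (1.0 + math.exp(-steepness * (k - midpoint)))

def run_profiled(action_rate=0.2, max_ticks=40):
    p_action, trace = 0.0, []
    for k in range(max_ticks):
        p_profile = sigmoid_profile(k)
        # Tick the action only while it does not exceed the profile.
        if p_action <= p_profile and p_action < 1.0:
            p_action = min(1.0, p_action + action_rate)
        trace.append((p_profile, p_action))
    return trace

trace = run_profiled()
overshoot = max(pa - pp for pp, pa in trace)
print(f"max overshoot above the profile: {overshoot:.2f}")
```

The action never overshoots the profile by more than one step ($0.2$ here), matching the upper-bound interpretation: the profile can slow the action down but never speed it up.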
\begin{example}
\label{ex:predictability}
\begin{figure}[h!]
\centering
\includegraphics[width=0.3\columnwidth]{example-profile-after}
\caption{BT encoding the desired behavior of Example~\ref{ex:predictability}}
\label{fig:ex:predictability:bt}
\end{figure}
A robot has to move its arm following a sigmoid profile (i.e., the movement is first slow, then fast, then slow again). However, the manipulation action is designed to follow a linear profile (i.e., the same speed throughout the execution). To impose the desired profile, we create the action \say{Profile} that models the sigmoid and we impose a progress synchronization with the manipulation action. Figure~\ref{fig:ex:predictability:bt} shows the BT that encodes this task.
\end{example}
Figure~\ref{fig:ex:predictability:progress} shows the progress profiles of the action with and without the progress profile imposition. Note how the action's progress profile changes without editing the action itself. However, this is possible only because the action originally has a faster progress profile than the desired one, as we have no non-intrusive means to speed up actions.
\vspace*{-1.5em}
\begin{figure}[h!]
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{profile-predictability-unsync}
\vspace*{-1.5em}
\caption{Without synchronization.}
\label{fig:ex:predictability:progress:unsync}
\end{subfigure}
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{profile-predictability-sync}
\vspace*{-1.5em}
\caption{With synchronization.}
\label{fig:ex:predictability:progress:sync}
\end{subfigure}
\caption{Progress profiles of Example~\ref{ex:predictability} with and without synchronization.}
\label{fig:ex:predictability:progress}
\end{figure}
\newpage
\section{Synchronization Measures}
\label{sec:measures}
This section presents the second contribution of the paper: measures for the concurrent execution of BTs, used to assess execution performance. We define measures for both progress synchronization and predictability. In Section~\ref{sec:experimental} we show how the design choices for relative and absolute parallel nodes affect these measures.
\subsection{Progress Synchronization Distance}
\begin{definition}
\label{PM:def:performance}
Let $N$ be the number of nodes that have as a parent the same instance of a progress decorator node. The progress distance at state $x_k$ is defined as:
\begin{equation}
\pi(x_k) \triangleq \sum_{i = 1} ^N{\sum_{j = 1}^N{\frac{|p_i(x_k) - p_j(x_k)|}{2}}}
\end{equation}
where $p_i\in [0, 1]$ is the progress of the $i$-th child, as in Section~\ref{sec:synchronizaton} above.
\end{definition}
We sum the progress differences over each pair of nodes that have as a parent the same instance of a progress decorator node, and divide by $2$ to avoid double-counting the differences. We use the absolute difference instead of, e.g., a squared difference to assign equal weight to the spread of the progresses.
Intuitively, a small progress distance results in high performance for both relative and absolute progress synchronization.
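For concreteness, the measure can be computed as follows (a direct transcription of Definition~\ref{PM:def:performance} into Python; the function name is ours):

```python
# Progress distance: sum of pairwise absolute progress differences,
# halved to avoid counting each pair twice.

def progress_distance(progresses):
    n = len(progresses)
    return sum(abs(progresses[i] - progresses[j])
               for i in range(n) for j in range(n)) / 2

print(progress_distance([0.2, 0.5]))   # two children, distance ~0.3
print(progress_distance([0.4, 0.4]))   # perfectly synchronized: 0.0
```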
\subsection{Predictability Distance}
\label{pm.subsec:timeline}
A useful method to measure predictability is to fix a desired progress value and compute the deviation between the expected time instant and the actual time instant at which the action's progress is closest to the desired value. We can use this measure to assess the deviation from the ideal execution.
\begin{definition}
\label{def:pred}
Given a progress value $\bar p \in [0,1]$, let $\tilde t_k \triangleq \argmin_{t_k}|p(x(t_k)) - \bar p|$ and let $\hat t_k$ be the time instant at which $p(x(t_k))$ is expected to be equal to $\bar p$. The time predictability distance relative to progress $\bar p$ is defined as:
\begin{equation}
P(\bar p) \triangleq |\tilde t_k - \hat t_k|
\end{equation}
\end{definition}
\begin{remark}
A node may not attain the exact desired progress value, as the progress may be defined only at discrete points of the execution. Hence, in Definition~\ref{def:pred} we take the time step at which the obtained progress is closest to the desired value.
\end{remark}
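A direct computation of the measure, with the remark above taken into account (the function operates on a discrete progress trace; names are ours):

```python
# Predictability distance: gap between the expected time step and the
# step whose progress is closest to the desired value.

def predictability_distance(progress_trace, expected_step, p_bar):
    closest = min(range(len(progress_trace)),
                  key=lambda k: abs(progress_trace[k] - p_bar))
    return abs(closest - expected_step)

trace = [0.0, 0.12, 0.27, 0.55, 0.81, 1.0]
# Progress 0.5 was expected at step 2 but is attained closest at step 3.
print(predictability_distance(trace, expected_step=2, p_bar=0.5))  # 1
```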
\newpage
\section{Experimental Validation}
\label{sec:experimental}
We conducted numerical experiments
that allow us to collect statistically significant data to study how the design choices affect the performance measures defined in Section~\ref{sec:measures} and to compare our approach against other solutions. We made the source code available online for reproducibility.\footnote{\url{https://github.com/miccol/tro2021-code}} We also conducted
experiments on real robots to show the applicability of our
approach in the real world. We made available online a
video of these experiments.\footnote{\url{https://youtu.be/zCBuTYogb_U}}
\subsection{Numerical Experiments}
We are ready to show how the number of barriers in $\mathcal{B}$ (for absolute synchronization) and the threshold value $\Delta$ (for relative synchronization) affect the performance, computed using the measures defined in Section~\ref{sec:measures}. For illustrative purposes, we define custom-made actions with different progress profiles. To collect statistically significant data, we ran the BT of each experiment 10000 times; we use boxplots to compactly show the minimum, the maximum, the median, and the interquartile range of the proposed measures. Each experiment starts with the progress of all actions equal to $0$ and ends when the progress of all actions reaches $1$.
\paragraph*{How the number of progress barriers affects the performance of absolute synchronization}
We now present an experiment that highlights how the number of progress barriers in $\mathcal{B}$ affects the performance of absolute synchronization.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth]{experiment-progress.pdf}
\caption{Experiments~\ref{PM.ex.dummy}}
\label{fig:ne:progress:bt:absolute}
\end{subfigure}
\begin{subfigure}{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth]{experiment-progress-relative.pdf}
\caption{Experiments~\ref{PM.ex.dummy.delta}}
\label{fig:ne:progress:bt:relative}
\end{subfigure}
\caption{BTs used in Experiments~\ref{PM.ex.dummy} and~\ref{PM.ex.dummy.delta}.}
\label{fig:ne:progress:bt}
\end{figure}
\begin{experiment}
\label{PM.ex.dummy}
Consider the BT in Figure~\ref{fig:ne:progress:bt:absolute}, where the progress decorator implements an absolute synchronization with equidistant barriers (i.e., a barrier every $\frac{1}{|\mathcal{B}|}$ of progress) and the sub-BTs are such that the progress profile of each $\bt_i$ satisfies Equation~\eqref{eq:ne:barriers:progress} below:
\begin{equation}
p_i(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_i(x_{k-1}) + a_i + \omega_i(x_k), & \text{otherwise}
\end{cases}
\label{eq:ne:barriers:progress}
\end{equation}
with $a_1 = 0.03$, $a_2= 0.02$, and $\omega_i(x_k)$ a random number sampled from a uniform distribution over the interval $[-\bar \omega, \bar \omega]$.
The model above describes an action whose progress evolves linearly with a fixed value ($a_i$) and with some disturbance ($\omega_i(x_k)$), modeling possible uncertainties in the execution that affect the progress.
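The setting of this experiment can be reproduced with the following sketch. Two aspects are our assumptions rather than the paper's implementation: an action may not pass the first barrier that some sibling has not yet reached, and the noise is clamped so that progress never decreases.

```python
# Sketch of absolute synchronization with |B| equidistant barriers and
# noisy linear progress, as in the equation above.
import random

def run_absolute(rates, n_barriers, omega_bar, max_ticks=10000, seed=0):
    rng = random.Random(seed)
    barriers = [(b + 1) / n_barriers for b in range(n_barriers)]
    progress = [0.0] * len(rates)
    for _ in range(max_ticks):
        if all(p >= 1.0 for p in progress):
            break
        for i, rate in enumerate(rates):
            # First barrier not yet reached by every sibling bounds progress.
            pending = [b for b in barriers if min(progress) < b]
            limit = pending[0] if pending else 1.0
            if progress[i] < limit:
                step = max(0.0, rate + rng.uniform(-omega_bar, omega_bar))
                progress[i] = min(limit, progress[i] + step)
    return progress

print(run_absolute([0.03, 0.02], n_barriers=4, omega_bar=0.015))  # [1.0, 1.0]
```

Collecting the whole trace and averaging the pairwise progress distance over many seeds is how one would study the effect of $|\mathcal{B}|$ and $\bar\omega$ on the measure.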
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment1-boxplot}
\caption{
Boxplot of the progress distances of Experiment~\ref{PM.ex.dummy} with different numbers of barriers $|\mathcal{B}|$ and different values of $\bar \omega$. $|\mathcal{B}|= 0$ corresponds to the unsynchronized execution.}
\label{fig:analysis:absolute}
\end{figure}
Figure~\ref{fig:analysis:absolute} shows the results of running 10000 times the BT in Experiment~\ref{PM.ex.dummy} in different settings and computing the average progress distance throughout the execution. We observe better performance with a larger number of barriers and a smaller $\bar \omega$. This shows that a higher number of barriers prevents the progress of the actions from drifting apart (see Algorithm~\ref{alg:progress}).
Note also that the synchronization yields a reduced variance even for large values of $\bar \omega$.
\end{experiment}
\begin{remark}
In Experiment~\ref{PM.ex.dummy}, we consider equidistant progress barriers. We expect similar results with non-equidistant barriers, except for the corner case in which all the barriers are agglomerated in a specific progress value.
\end{remark}
\paragraph*{How the threshold value affects the performance of relative synchronization}
We now present an experiment that highlights how the value of $\Delta$ affects the performance of relative synchronization.
\begin{experiment}
\label{PM.ex.dummy.delta}
Consider the BT in Figure~\ref{fig:ne:progress:bt:relative}, where the progress decorator implements a relative synchronization and the sub-BTs are such that the progress profile of each $\bt_i$ satisfies Equation~\eqref{eq:ne:barriers:progress:bis} below.
\begin{equation}
p_i(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_i(x_{k-1}) + a_i + \omega_i(x_k), & \text{otherwise}
\end{cases}
\label{eq:ne:barriers:progress:bis}
\end{equation}
with $a_1 = 0.03$, $a_2= 0.02$, and $\omega_i(x_k)$ a random number sampled from a uniform distribution over the interval $[-\bar \omega, \bar \omega]$.
The model above describes an action whose progress evolves linearly with a fixed value ($a_i$) and with some disturbance ($\omega_i(x_k)$), modeling possible uncertainties in the execution that affect the progress.
\vspace*{1em}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment2-boxplot}
\caption{Boxplot of the progress distances of Experiment~\ref{PM.ex.dummy.delta} with different values for $\Delta$ and $\bar \omega$. $\Delta = 1$ corresponds to the unsynchronized execution.}
\label{fig:analysis:relative}
\end{figure}
Figure~\ref{fig:analysis:relative} shows the results of running 10000 times the BT in Experiment~\ref{PM.ex.dummy.delta} in different settings and computing the average progress distance throughout the execution.
We observe that the performance increases with a smaller $\Delta$ and decreases with a larger $\bar \omega$. This shows that a smaller $\Delta$ prevents the progress of the actions from drifting apart (see Algorithm~\ref{alg:progress}). Note also that the synchronization yields a reduced variance even for large values of $\bar \omega$.
\end{experiment}
\begin{remark}
The synchronization may deteriorate other desired qualities. For example, since actions are waiting for one another, the overall execution may be slower than the slowest action. Moreover, a small value for $\Delta$ or a larger number of barriers can result in highly intermittent behaviors.
\end{remark}
\begin{remark}
The decorators can be placed in different parts of the BT and not as direct children of a parallel node, as shown in Section~\ref{sec:related}.
\end{remark}
\begin{remark}
A single action that performs both tasks represents a better-synchronized solution. However, for reusability purposes or for separation of concerns, the designer may want to implement the behavior using two separate actions.
\end{remark}
\paragraph*{Progress synchronization comparison}
We now present an experiment that compares the synchronization performance of our approach against two alternatives: one using elements from the C++11 standard library\footnote{\url{https://en.cppreference.com/w/cpp/thread/barrier}}, as C++ is the programming language used in the BT library; and one using the DLR's RMC Advanced Flow Control (RAFCon)~\cite{brunner2016rafcon}, a tool to develop concurrent robotic tasks using hierarchical state machines with an intuitive graphical user interface, addressing issues similar to those of BTs.\footnote{Both implementations are available at \url{github.com/miccol/TRO2021-code}}
Figure~\ref{fig:comparison:refcon} shows the concurrent state machine developed in RAFCon for Experiments~\ref{PM.ex.comparison.abs} and \ref{PM.ex.comparison.rel}.
\begin{figure}[h!]
\centering
\includegraphics[width=1\columnwidth]{rafcon.jpg}
\caption{Concurrent RAFCon state machine for Experiments~\ref{PM.ex.comparison.abs} and \ref{PM.ex.comparison.rel}. \emph{Action 1} and \emph{Action 2} increase their progress if and only if the activation signal (their input) equals $1$.
The \emph{Concurrent State Execution} is a RAFCon \texttt{concurrency-state} that executes the two sub-states (\emph{Action 1} and \emph{Action 2}) concurrently. The \emph{Barrier Handler} computes the activation signals for the actions: a signal is set to $0$ if the action's progress surpasses the current barrier (for absolute synchronization) or the other action's progress by $\Delta$ (for relative synchronization); it is set to $1$ otherwise.}
\label{fig:comparison:refcon}
\end{figure}
\newpage
\begin{experiment}
\label{PM.ex.comparison.abs}
Consider the BT in Figure~\ref{fig:ne:progress:bt:absolute} where the progress decorator implements an absolute synchronization with equidistant barriers (i.e., a barrier at each $\frac{1}{|B|}$ progress) and the sub-BTs are such that the progress profile of each $\bt_i$ holds Equation~\eqref{eq:ne:barriers:progress} with $\bar{\omega}= 0.015$.
The model above describes the same BT used in Experiment~\ref{PM.ex.dummy} with the given value for $\bar{\omega}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment-comparison-absolute}
\caption{Boxplot of the progress distances of Experiment~\ref{PM.ex.comparison.abs} with different numbers of barriers $|\mathcal{B}|$ for each method. $|\mathcal{B}|= 0$ corresponds to the unsynchronized execution. We compare the performance obtained using C++ primitives (Code), the proposed approach (Our), and RAFCon (Rafcon).}
\label{fig:analysis:absolute:comparison}
\end{figure}
Figure~\ref{fig:analysis:absolute:comparison} shows the results of running 10000 times the BT in Experiment~\ref{PM.ex.comparison.abs} with the different approaches.
Note that the unsynchronized setting (i.e., $|\mathcal{B}|= 0$) yields similar values for the different approaches; hence, the boilerplate code of our approach does not affect the performance.
\end{experiment}
\begin{experiment}
\label{PM.ex.comparison.rel}
Consider the BT in Figure~\ref{fig:ne:progress:bt:relative}, where the progress decorator implements a relative synchronization and the sub-BTs are such that the progress profile of each $\bt_i$ satisfies Equation~\eqref{eq:ne:barriers:progress} with $\bar{\omega}= 0.015$.
The model above describes the same BT used in Experiment~\ref{PM.ex.dummy.delta} with the given value for $\bar{\omega}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment-comparison-relative}
\caption{Boxplot of the progress distances of Experiment~\ref{PM.ex.comparison.rel} with different values of $\Delta$ for each method. $\Delta = 1$ corresponds to the unsynchronized execution.}
\label{fig:comparison:relative}
\end{figure}
Figure~\ref{fig:comparison:relative} shows the results of running 10000 times the BT in Experiment~\ref{PM.ex.comparison.rel} with the different approaches. We make the same observation as in the previous experiment: the unsynchronized setting (i.e., $\Delta= 1$) yields similar values for the different approaches.
\end{experiment}
\begin{remark}
Our solution stays within the same order of magnitude as the pure C++ implementation (the most efficient from a computational point of view) and outperforms the RAFCon one, while keeping the advantages of BTs over state machines described in the literature~\cite{BTBook}.
\end{remark}
\paragraph*{How the number of children to synchronize affects the performance}
We now present two experiments that show how the approach scales with the number of children.
\begin{experiment}
\label{ex:number:absolute}
Consider a set of BTs that describe the absolute progress synchronization of different numbers of actions (Figure~\ref{fig:ne:progress:bt:absolute} shows an example of such a BT with two actions). In each BT, the actions' progress satisfies Equation~\eqref{eq:ne:barriers:progress}, with $a_i = 0.03$ and $\bar \omega = 0.015$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment1-bis-boxplot}
\caption{Boxplot of the progress distances of Experiment~\ref{ex:number:absolute} with different numbers of children.}
\label{fig:analysis:number}
\end{figure}
Figure~\ref{fig:analysis:number} shows the results of running 10000 times the BTs of Experiment~\ref{ex:number:absolute} with different numbers of children. We note how the performance decays linearly with the number of children (note that the number of children increases exponentially along the horizontal axis).
\end{experiment}
\begin{experiment}
\label{ex:number:relative}
Consider a set of BTs that describe the relative progress synchronization of different numbers of actions (Figure~\ref{fig:ne:progress:bt:relative} shows an example of such a BT with two actions). In each BT, the actions' progress satisfies Equation~\eqref{eq:ne:barriers:progress}, with $a_i = 0.03$ and $\bar \omega = 0.015$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment2-bis-boxplot}
\caption{Boxplot of the progress distances of Experiment~\ref{ex:number:relative} with different numbers of children.}
\label{fig:analysis:number:relative}
\end{figure}
Figure~\ref{fig:analysis:number:relative} shows the results of running 10000 times the BTs of Experiment~\ref{ex:number:relative} with different numbers of children. Similarly to the previous experiment, the performance decays linearly with the number of children.
\end{experiment}
As expected, in both absolute and relative synchronization settings, increasing the number of children deteriorates the progress synchronization performance. This is due to the fact that the children's progresses surpass one another, increasing the progress distance.
\newpage
\paragraph*{How the threshold value affects the predictability}
We now present an experiment that highlights how the threshold value of a relative synchronized parallel node affects the predictability of an execution.
\begin{experiment}
\label{PM.ex.dummy.timeline}
Consider the BT of Figure~\ref{fig:ex:predictability:bt}, where the progress decorator implements a relative synchronization and the action \say{Arm Movement}, whose progress is to be bounded, has a progress that satisfies Equation~\eqref{SA:ex:jitter:action} below:
\begin{equation}
p_2(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_2(x_{k-1}) + 0.2 + \omega_2(x_k), & \text{otherwise}
\end{cases}
\label{SA:ex:jitter:action}
\end{equation}
whereas the progress of the action \say{Profile} satisfies Equation~\eqref{SA:ex:jitter:model} below:
\begin{equation}
p_1(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_1(x_{k-1}) + 0.1, & \text{otherwise}
\end{cases}
\label{SA:ex:jitter:model}
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment3-boxplot}
\caption{Boxplot for predictability distance of Experiment~\ref{PM.ex.dummy.timeline} with different values for $\Delta$ and $\bar \omega$. $\Delta= 1$ corresponds to the unsynchronized execution.}
\label{fig:analysis:predictability}
\end{figure}
Figure~\ref{fig:analysis:predictability} reports the results of Experiment~\ref{PM.ex.dummy.timeline}. We observe worse performance with larger $\bar \omega$ and $\Delta$.
\end{experiment}
\begin{remark}
In the experiments above, we showed how a designer can synchronize the progress of several subtrees in a non-invasive fashion. The designer can tune the number of barriers for the absolute synchronization and the threshold value for the relative synchronization.
However, as mentioned above, synchronization between actions may deteriorate other performance measures. Figure~\ref{ex:remark:time} shows the average times to complete the execution of Experiment~\ref{PM.ex.comparison.rel}. We found similar results for absolute progress synchronization.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{comparison-average-times}
\caption{Boxplot of the times to complete the execution of Experiment~\ref{PM.ex.comparison.rel}.}
\label{ex:remark:time}
\end{figure}
\end{remark}
\newpage
\paragraph*{How the priority increment function $g$ affects the execution in resource synchronization}
We now present two experiments showing resource synchronization and how the shape of the increment function leads to different behaviors. As mentioned above, this synchronization is performed among subtrees that have equal priority in accessing the resources. The function $g$ provides the designer with a way to shape the resource allocation strategy.
Experiments~\ref{ex:greedy} and~\ref{ex:fair} show the use of the resource synchronization decorator with different settings for the function $g$.
\begin{experiment}[Greedy Dining Robots]
\label{ex:greedy}
This experiment is the Dining Philosophers Problem~\cite{tanenbaum2015modern} with a twist.
Consider three robots that sit at a round table with three cables: Cable $A$, Cable $B$, and Cable $C$. Each cable sits between two robots, such that Robot $1$ can grab Cables $A$ and $B$, Robot $2$ can grab Cables $B$ and $C$, and Robot $3$ can grab Cables $C$ and $A$.
Each robot needs two cables to charge its battery.
This example represents those cases in which several software components (controlling different robots or different parts on the same robot) need to access a shared resource.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\columnwidth]{dining-philosophers}
\caption{BT encoding the desired behavior of Experiment~\ref{ex:greedy}}
\label{fig:ex:greedy:bt}
\end{figure}
The BT in Figure~\ref{fig:ex:greedy:bt} encodes a behavior of these robots that ensures resource synchronization. At each tick, the action \say{Robot $i$ Recharges} increases the battery level by 10\% of its full capacity.
The progress profile follows the battery level as follows:
\begin{equation}
p_i(x_k)=
\begin{cases}
0 &\text{ if }k = 0\\
p_i(x_{k-1}) + 0.1, & \text{otherwise}
\end{cases}
\end{equation}
$\bt_1$, $\bt_2$, and $\bt_3$ are such that
\begin{equation}
Q_1(x_k) =
\begin{cases}
\{A, B\} &\text{ if } p_1(x_k) < 1 \\
\emptyset &\text{ otherwise }
\end{cases}
\end{equation}
\begin{equation}
Q_2(x_k) =
\begin{cases}
\{B, C\} &\text{ if } p_2(x_k) < 1 \\
\emptyset &\text{ otherwise }
\end{cases}
\end{equation}
\begin{equation}
Q_3(x_k) =
\begin{cases}
\{C, A\} &\text{ if } p_3(x_k) < 1 \\
\emptyset &\text{ otherwise }
\end{cases}
\end{equation}
The $g$ functions are defined as follows:
\begin{equation}
g_i(x_k) = 0
\end{equation}
That is, the priority does not change when the action does not receive ticks.
Figure~\ref{fig:ex:resource:greedy} shows the progress profiles of the three BTs. We see how, once a robot acquires the two cables, the cables remain assigned to that robot until it no longer requires them (i.e., its battery is fully charged).
\end{experiment}
\newpage
\begin{experiment}[Fair Dining Robots]
\label{ex:fair}
Consider the three robots of Experiment~\ref{ex:greedy} above, with a different definition of the $g$ functions:
\begin{equation}
g_i(x_k) = 1
\end{equation}
That is, the priority increases when the action does not receive ticks.
Figure~\ref{fig:ex:resource:fair} shows the progress profiles of the three BTs. We see how the cables are allocated in a \say{fair} fashion.
\end{experiment}
The two experiments above show that the choice of the function $g$ becomes crucial to avoid starvation. In Section~\ref{sec:analysis} we will prove under which circumstances the BT execution avoids starvation. Note that by tuning $g_i(x_k)$, we can achieve different profiles of execution, equivalent to the assignment of a quantum of time received by each robot when they get access to the shared resource.
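The two strategies can be contrasted with the following sketch. The arbitration rule (grant cables in decreasing priority, breaking ties by robot index) is our assumption for illustration; the paper's decorator may differ in details.

```python
# Sketch of the dining-robots resource arbitration: each robot needs
# two cables; conflicting requests are granted by decreasing priority,
# and g is the priority increment of a robot that is held.

NEEDS = {1: {"A", "B"}, 2: {"B", "C"}, 3: {"C", "A"}}

def run_dining(g_increment, max_ticks=200):
    progress = {r: 0.0 for r in NEEDS}
    priority = {r: 0.0 for r in NEEDS}
    schedule = []  # robots that recharge at each tick
    for _ in range(max_ticks):
        active = [r for r in NEEDS if progress[r] < 1.0]
        if not active:
            break
        granted, taken = [], set()
        for r in sorted(active, key=lambda r: (-priority[r], r)):
            if not NEEDS[r] & taken:  # all required cables are free
                granted.append(r)
                taken |= NEEDS[r]
        for r in active:
            if r in granted:
                progress[r] = min(1.0, progress[r] + 0.1)
            else:
                priority[r] += g_increment  # the function g
        schedule.append(tuple(granted))
    return schedule

print(run_dining(0)[:4])  # greedy: [(1,), (1,), (1,), (1,)]
print(run_dining(1)[:4])  # fair:   [(1,), (2,), (3,), (1,)]
```

With $g_i(x_k) = 0$ the first robot keeps the cables until fully charged; with $g_i(x_k) = 1$ the priority of held robots grows, so the cables rotate among the three robots.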
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth] {profile-greedy}
\caption{Experiment~\ref{ex:greedy}.}
\label{fig:ex:resource:greedy}
\end{subfigure}
\begin{subfigure}[b]{0.49\columnwidth}
\includegraphics[width=\columnwidth] {profile-fair}
\caption{Experiment~\ref{ex:fair}.}
\label{fig:ex:resource:fair}
\end{subfigure}
\caption{Progress profiles of the BTs of Experiments~\ref{ex:greedy} and~\ref{ex:fair}.}
\label{fig:ex:resource}
\end{figure}
\subsection{Real World Validation}
This section presents the experimental validation performed on real robots. Our experiments are inspired by the literature.
\paragraph*{Progress Synchronization}
In Experiment~\ref{ex:icub} below, we present an implementation of Example~\ref{ex:relative} above, motivated by the impact of contingent behaviors on the quality of verbal human-robot interaction~\cite{fischer2013impact}.
\begin{experiment}[iCub Robot]
\label{ex:icub}
An iCub robot~\cite{metta2008icub} has to look and point to a given direction.
Figure~\ref{fig:ex:icub:plot} shows the progress plots for the two actions in the synchronized and unsynchronized cases.
Figure~\ref{fig:ex:icub:exec} shows the execution steps in both cases: without synchronization, using the iCub Action Rendering Engine\footnote{\url{https://robotology.github.io/robotology-documentation/doc/html/group__actionsRenderingEngine.html}}, we concurrently send the commands to look and point at the same coordinate; with synchronization, we use the BT in Figure~\ref{fig:ex:bt:icub} with a relative synchronization and a threshold value of $\Delta= 0.1$, where the look and point actions perform small steps towards the desired coordinate.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{experiment-icub-bt}
\caption{BT encoding the behavior of Experiment~\ref{ex:icub}}
\label{fig:ex:bt:icub}
\end{figure}
\begin{figure}[h]
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-unsync-0.jpg}
\caption{[Unsync] Initial State.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-sync-0.jpg}
\caption{[Sync] Initial State.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-unsync-1.jpg}
\caption{[Unsync] The robot starts moving the head.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-sync-1.jpg}
\caption{[Sync] The robot moves head and arm slowly.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-unsync-2.jpg}
\caption{[Unsync] The robot finishes moving the head while the arm is still moving.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-sync-2.jpg}
\caption{[Sync] The robot keeps moving head and arm slowly.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-unsync-3.jpg}
\caption{[Unsync] The robot finishes moving the arm.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth,trim={15cm 5cm 10cm 9cm},clip]{icub-sync-3.jpg}
\caption{[Sync] The robot finishes moving the head and arm.}
\end{subfigure}
\caption{Execution steps of Experiment~\ref{ex:icub} with (right) and without (left) synchronization.}
\label{fig:ex:icub:exec}
\end{figure}
\vspace*{-0.5em}
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{icub-plot-unsync}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{icub-plot-sync}
\end{subfigure}
\caption{Progress values for Experiment~\ref{ex:icub} with (right) and without (left) synchronization.}
\label{fig:ex:icub:plot}
\end{figure}
\end{experiment}
The unsynchronized execution looks unnatural, as the head moves much faster than the arm. The synchronization reduced the average progress distance from $0.4176$ to $0.0964$. From the plot in Figure~\ref{fig:ex:icub:plot} we note how, in the synchronized execution, the head stops as soon as its progress exceeds that of the arm by $0.1$, and then moves more slowly.
Moreover, the synchronized execution completes the task in about double the time. This is because small movements are performed slowly on the iCub, and the synchronized execution breaks the actions down into small steps.
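The gating rule behind these profiles can be sketched in a few lines of C++. This is an illustrative sketch, not the controller running on the robot: the step sizes, the threshold, and the function name are our own assumptions. Each tick, an action advances only while its progress does not exceed the slowest action's progress by more than the threshold $\Delta$.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Relative progress synchronization (sketch): an action is ticked only while
// its progress does not exceed the slowest action's progress by more than
// delta. Returns the largest progress gap observed during the run.
double runSynchronized(std::vector<double> step, double delta) {
    std::vector<double> p(step.size(), 0.0);  // progress values in [0, 1]
    double maxGap = 0.0;
    auto allDone = [&p] {
        return std::all_of(p.begin(), p.end(),
                           [](double v) { return v >= 1.0; });
    };
    while (!allDone()) {
        double slowest = *std::min_element(p.begin(), p.end());
        for (std::size_t i = 0; i < p.size(); ++i) {
            if (p[i] < slowest + delta)  // barrier: min progress + delta
                p[i] = std::min(1.0, p[i] + step[i]);
        }
        double fastest = *std::max_element(p.begin(), p.end());
        maxGap = std::max(maxGap, fastest - slowest);
    }
    return maxGap;
}
```

With a fast and a slow action (steps $0.1$ and $0.01$) and $\Delta = 0.1$, the observed gap stays below $\Delta$ plus one step; with a threshold so large that the gate never triggers, the fast action runs far ahead, mirroring the unsynchronized profiles.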
\clearpage
\paragraph*{Resource Synchronization}
We now present a use case of the resource synchronization mechanism.
We took a BT used in an Integrated Technical Partner use case of the
European Horizon 2020 project RobMosys\footnote{ \url{https://scope-robmosys.github.io/}} and used the resource synchronization mechanism to parallelize tasks that are executed sequentially under the classical formulation of BTs.
As noted in the BT literature~\cite{colledanchise2016advantages, brunner2019autonomous}, turning a sequential behavior execution into a concurrent one is much simpler in BTs than in FSMs.
\begin{experiment}[R1 Robot]
\label{ex:r1}
An R1 robot~\cite{parmiggiani2017design} has to pick up an object from the user's hand and then navigate towards a predefined destination.
To grasp the object from the user's hand, the robot has to put the arm in a pre-grasp position, extend its hand\footnote{The arm has a prismatic joint in the wrist.}, grasp the object, and finally, retract the hand.
The BT in Figure~\ref{fig:ex:r1:unsync:bt} encodes the behavior of the robot designed with the classical BT nodes, and Figure~\ref{fig:ex:r1:unsync:exec} shows some execution steps, as in the original project.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{experiment-r1-bt-before}
\caption{Original BT encoding the behavior of Experiment~\ref{ex:r1}. The node labeled with $\rightarrow^*$ represents a sequence with memory.}
\label{fig:ex:r1:unsync:bt}
\end{figure}
However, the robot can execute the pre-grasp motion and the extension of the hand while it approaches the user, and the retraction of the hand while it navigates towards the destination, whereas the grasping action requires the robot to be still. Hence we can execute the pre-grasp and post-grasp actions while the robot moves, speeding up the execution. The BT in Figure~\ref{fig:ex:r1:sync:bt} models such behavior, taking advantage of the resource synchronization, where the actions \say{Goto} and \say{Close Hand} allocate the resource \emph{Mobile Base} as long as they are running.
Figure~\ref{fig:ex:r1:sync:exec} shows some execution steps. The concurrent execution of some actions allows a faster overall behavior as in \cite{colledanchise2016advantages, brunner2019autonomous}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\columnwidth]{experiment-r1-bt-after}
\caption{BT encoding the behavior of Experiment~\ref{ex:r1} using the resource synchronization mechanism. This BT is significantly simpler than the one in Figure~\ref{fig:ex:r1:unsync:bt}.}
\label{fig:ex:r1:sync:bt}
\end{figure}
\end{experiment}
\begin{figure}[t!]
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-0}
\caption{The robot approaches the user. \\ Action Executed: \say{Goto Object}.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-1.jpg}
\caption{The robot reaches the user. \\ Action Executed: \say{Goto Object}.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-2.jpg}
\caption{The robot moves the arm in pre-grasp position. Action Executed: \say{Move Arm}.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-extra.jpg}
\caption{The robot extracts the hand.\\ Action Executed: \say{Extract Hand}.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-3.jpg}
\caption{The robot closes the hand. \\ Action Executed: \say{Close Hand}.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-4.jpg}
\caption{The robot retracts the hand. \\ Action Executed: \say{Retract Hand}.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-5.jpg}
\caption{The robot moves towards the destination. Action Executed: \say{Goto Destination}.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-unsync-6.jpg}
\caption{The robot moves towards the destination. Action Executed: \say{Goto Destination}.}
\end{subfigure}
\caption{Execution screenshots of Experiment~\ref{ex:r1} running the BT in Figure~\ref{fig:ex:r1:unsync:bt}. }
\label{fig:ex:r1:unsync:exec}
\end{figure}
\begin{figure}[h!]
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-sync-1.jpg}
\caption{The robot moves towards the user while positioning the arm and hand. Actions Executed: \say{Goto Object}, \say{Move Arm In Pregrasp}, and \say{Extract Hand}.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-sync-3.jpg}
\caption{The robot reaches the user and then closes the hand. \\ Action Executed: \say{Close Hand}.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-sync-4.jpg}
\caption{The robot moves towards the destination while retracting the hand. \\ Actions Executed: \say{Goto Destination} and \say{Retract Hand}.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{r1-sync-5.jpg}
\caption{The hand gets retracted. \\ Action Executed: \say{Goto Destination}.}
\end{subfigure}
\caption{Execution screenshots of Experiment~\ref{ex:r1} running the BT in Figure~\ref{fig:ex:r1:sync:bt}.}
\label{fig:ex:r1:sync:exec}
\end{figure}
\paragraph*{Perpetual Actions}
We now present an experiment where we show the applicability of our approach with perpetual actions. Experiment~\ref{ex:panda} below presents an implementation of Example~\ref{ex:perpetual}, inspired by the literature~\cite{rovida2018motion}.
\begin{experiment}[Panda Robot]
\label{ex:panda}
An industrial manipulator has to insert a piston into a hollow cylinder. The piston's rod and the piston's head are attached via a revolute joint. To correctly insert the rod, the robot must keep it aligned during the insertion into the cylinder. During the execution, the end-effector gets misaligned, requiring the robot to realign the rod. Figure~\ref{fig:ex:perpetual:bt:sync} depicts the BT that encodes this task. The action progress functions are the same as those of Example~\ref{ex:perpetual}. Figure~\ref{fig:ex:panda:exec} shows the execution steps of this experiment with and without synchronization. We see how the unsynchronized execution fails, since the robot inserts the piston too fast for the alignment sub-behavior to have an effect.
\vspace*{-0.5em}
\begin{figure}[h!]
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth,angle=-90,trim={7cm 2cm 15cm 2cm},clip]{panda-unsync-1.jpg}
\caption{[Unsync] The rod gets misaligned. The robot keeps inserting the rod.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth,angle=-90,trim={7cm 2cm 15cm 2cm},clip]{panda-unsync-1.jpg}
\caption{[Sync] The rod gets misaligned. The robot stops inserting the rod.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth,angle=-90,trim={7cm 2cm 15cm 2cm},clip]{panda-unsync-2.jpg}
\caption{[Unsync] The robot aligns the rod while this moves downwards. The rod hits the cylinder.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth,angle=-90,trim={7cm 2cm 15cm 2cm},clip]{panda-sync-2.jpg}
\caption{[Sync] The robot aligns the rod.}
\end{subfigure}
\vspace*{0.5em}
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth,angle=-90,trim={7cm 2cm 15cm 2cm},clip]{panda-unsync-2.jpg}
\caption{[Unsync] A safety fault stops the execution.}
\end{subfigure}
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=\columnwidth,angle=-90,trim={7cm 2cm 15cm 2cm},clip]{panda-sync-3.jpg}
\caption{[Sync] The insertion task resumes.}
\end{subfigure}
\caption{Execution screenshots of Experiment~\ref{ex:panda} with (right) and without (left) synchronization. }
\label{fig:ex:panda:exec}
\end{figure}
\end{experiment}
\newpage
\section{Software Library}
\label{sec:library}
This section presents the third contribution of the paper. We made publicly available an implementation of the nodes presented in this paper. The decorators work with the BehaviorTree.CPP engine~\cite{BTCpp} and the Groot GUI~\cite{Groot}. The user can define the barrier values $B$ (for absolute progress synchronization), the threshold $\Delta$ (for relative progress synchronization), or the priority increment function $g$ of Definition~\ref{def:priorityincrement}. The BT can also have independent synchronizations, as shown in the BT of Figure~\ref{fig:sw}. We made the details available in the library's repository.\footnote{\url{https://github.com/miccol/TRO2021-code}}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{groot.jpg}
\caption{Concurrent BT of example using the Groot GUI.}
\label{fig:sw}
\end{figure}
\subsection{Implement concurrent BTs with BehaviorTree.CPP}
Listing~\ref{btcpp} below shows the code implementing Example~\ref{ex:absolute} above with the BehaviorTree.CPP engine. The user instantiates the root and the action nodes (Lines 1-4), defines the barrier (Lines 6-7), instantiates the decorators (Lines 9-10), and finally constructs the BT (Lines 12-16).
\begin{lstlisting}[language=C++, caption={Implementation code for the BT in Example~\ref{ex:absolute} }, label=btcpp]
BT::ParallelNode parallel("root",2);
SyncSmoothAction action1("Arm Movement",0,0.015);
SyncSmoothAction action2("Base Movement",0,0.01);
AbsoluteBarrier barrier({0.1,0.2,0.3,0.4,0.5,0.6,
0.7,0.8,0.9,1.0});
DecoratorProgressSync dec1("dec1", &barrier);
DecoratorProgressSync dec2("dec2", &barrier);
dec1.addChild(&action1);
dec2.addChild(&action2);
parallel.addChild(&dec1);
parallel.addChild(&dec2);
\end{lstlisting}
\subsection{Implement concurrent BTs with Groot}
We provide a palette of nodes that the user can instantiate in the Groot GUI in a drag-and-drop fashion. Instructions on how to load the palette are available in the library's documentation. Details on how to instantiate and run a generic BT are available in the Groot library's documentation.
\newpage
\section{Theoretical Analysis}
\label{sec:analysis}
This section presents the fourth contribution of the paper. We first give a set of formal definitions, and then we provide a mathematical analysis of the BT synchronization.
As the decorators defined in this paper may disable the execution of some sub-trees, we need to identify the circumstances under which the properties of the entire BT are preserved.
\subsection{State-space Formulation of Behavior Trees}
\label{sec:background.ss}
The state-space formulation of BTs~\cite{BTBook} allows us to study them from a mathematical standpoint. A recursive function call represents the tick. We will use this formulation in the proofs below.
\begin{definition}[Behavior Tree \cite{BTBook}]
\label{bg.def:BT}
A BT is a three-tuple
\begin{equation}
\bt_i\triangleq\{f_i,r_i, \Delta t\},
\end{equation}
where $i\in \mathbb{N}$ is the index of the tree, $f_i: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is the right-hand side of a difference equation, $\Delta t$ is a time step and
$r_i$ is the return status that can be equal to either \emph{Running}, \emph{Success}, or \emph{Failure}. Finally, let $x_k\triangleq x(t_k)$ be the system state at time $t_k$, then the execution of a BT $\bt_i$ is described by:
\begin{eqnarray}
x_{k+1}&=&f_i( x_{k}), \label{bts:eq:executionOfBT}\\
t_{k+1}&=&t_{k}+\Delta t.
\end{eqnarray}
\end{definition}
\begin{definition}[Sequence compositions of BTs~\cite{BTBook}]
\label{bts:def.seq}
Two or more BTs can be composed into a more complex BT using a Sequence operator,
$$\bt_0=\mbox{Sequence}(\bt_1,\bt_2).$$
Then $r_0,f_0$ are defined as follows
\begin{eqnarray}
\mbox{If }x_k\in S_1&& \\
r_0(x_k) &=& r_2(x_k) \\
f_0(x_k) &=& f_2(x_k) \label{eq:seq1}\\
\mbox{ else }&& \nonumber \\
r_0(x_k) &=& r_1(x_k) \\
f_0(x_k) &=& f_1(x_k). \label{eq:seq2}
\end{eqnarray}
\end{definition}
$\bt_1$ and $\bt_2$ are called children of $\bt_0$.
\begin{remark}
When executing the new BT, $\bt_0$ first keeps executing its first child $\bt_1$ as long as it returns Running or Failure.
\end{remark}
For notational convenience, we write:
\begin{equation}
\mbox{Sequence}(\bt_1, \mbox{Sequence}(\bt_2,\bt_3))= \mbox{Sequence}(\bt_1,\bt_2, \bt_3)
\end{equation}
and similarly for arbitrarily long compositions.
\begin{definition}[Fallback compositions of BTs~\cite{BTBook}]
\label{bts:def.fal}
Two or more BTs can be composed into a more complex BT using a Fallback operator,
$$\bt_0=\mbox{Fallback}(\bt_1,\bt_2).$$
Then $r_0,f_0$ are defined as follows
\begin{eqnarray}
\mbox{If }x_k\in \mathcal{F}_1&& \\
r_0(x_k) &=& r_2(x_k) \\
f_0(x_k) &=& f_2(x_k) \\
\mbox{ else }&&\nonumber \\
r_0(x_k) &=& r_1(x_k) \\
f_0(x_k) &=& f_1(x_k).
\end{eqnarray}
\end{definition}
For notational convenience, we write:
\begin{equation}
\mbox{Fallback}(\bt_1, \mbox{Fallback}(\bt_2,\bt_3))= \mbox{Fallback}(\bt_1,\bt_2, \bt_3)
\end{equation}
and similarly for arbitrarily long compositions.
\begin{definition}[Parallel compositions of BTs~\cite{BTBook}]
\label{bts:def:parallel}
Two or more BTs can be composed into a more complex BT using a Parallel operator,
$$\bt_0=\mbox{Parallel}(\bt_1,\bt_2, M).$$
where $f_0(x) \triangleq (f_{1}(x),f_{2}(x))$ and $r_0$ is defined as follows
\begin{eqnarray}
\mbox{If } M=1&&\nonumber \\
r_0(x) &=& \mathcal{S} \mbox{ If } r_1(x)=\mathcal{S} \vee r_2(x)=\mathcal{S}\\
r_0(x) &=& \mathcal{F} \mbox{ If } r_1(x)=\mathcal{F} \wedge r_2(x)=\mathcal{F}\\
r_0(x) &=& \mathcal{R} \mbox{ else } \\
\mbox{If } M=2&&\nonumber \\
r_0(x) &=& \mathcal{S} \mbox{ If } r_1(x)=\mathcal{S} \wedge r_2(x)=\mathcal{S}\\
r_0(x) &=& \mathcal{F} \mbox{ If } r_1(x)=\mathcal{F} \vee r_2(x)=\mathcal{F}\\
r_0(x) &=& \mathcal{R} \mbox{ else }
\end{eqnarray}
\end{definition}
For notational convenience, we write:
\begin{equation}
\mbox{Parallel}(\bt_1, \mbox{Parallel}(\bt_2,\bt_3,2), 2)= \mbox{Parallel}(\bt_1,\bt_2, \bt_3 ,3)
\end{equation}
as well as:
\begin{equation}
\mbox{Parallel}(\bt_1, \mbox{Parallel}(\bt_2,\bt_3,1), 1)= \mbox{Parallel}(\bt_1,\bt_2, \bt_3 ,1)
\end{equation}
and similarly for arbitrarily long compositions.
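The return-status policy of the Parallel composition above can be transcribed directly into code. The enum and function names below are our own illustrative choices, not those of any particular BT engine.

```cpp
#include <cassert>

enum class Status { Success, Failure, Running };

// Return status of Parallel(T1, T2, M):
// M = 1: succeed if either child succeeds, fail only if both fail;
// M = 2: succeed only if both succeed, fail if either fails;
// return Running otherwise.
Status parallelStatus(Status r1, Status r2, int M) {
    if (M == 1) {
        if (r1 == Status::Success || r2 == Status::Success) return Status::Success;
        if (r1 == Status::Failure && r2 == Status::Failure) return Status::Failure;
        return Status::Running;
    }
    // M == 2
    if (r1 == Status::Success && r2 == Status::Success) return Status::Success;
    if (r1 == Status::Failure || r2 == Status::Failure) return Status::Failure;
    return Status::Running;
}
```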
\begin{definition}[Finite Time Successful~\cite{BTBook}]
\label{properties:def:FTS}
A BT is Finite Time Successful (FTS) with region of attraction $R'$, if for all starting points $x(0)\in R'\subset R$, there is a time $\tau$, and a time $\tau'(x(0))$ such that $\tau'(x)\leq \tau$ for all starting points, and
$x(t)\in R' $ for
all $t\in [0,\tau')$
and $x(t)\in S$ for
$t = \tau'$
\end{definition}
As noted in the following lemma, exponential stability implies FTS, given the right choices of the sets $S,F,R$.
\begin{lemma}[Exponential stability and FTS~\cite{BTBook}]
A BT for which $x_s$ is a globally exponentially stable equilibrium of the execution,
and $S \supset \{x: ||x-x_s||\leq \epsilon\}$, $\epsilon>0$, $F=\emptyset$, $R=\mathbb{R}^n \setminus S$, is FTS.
\end{lemma}
\emph{Safety} is the ability to avoid a particular portion of the state-space, which we denote as the \emph{Obstacle Region}. To make statements about the safety of composite BTs, we need the following definition. Details on safe BTs can be found in the literature~\cite{BTBook}.
\begin{definition}[Safeguarding~\cite{BTBook}]
\label{def:Safeguarding}
A BT is safeguarding, with respect to the step length $d$, the obstacle region $O \subset \mathbb{R}^n$, and the initialization region $I \subset R$, if it is safe, and FTS with region of attraction $R' \supset I$ and a success region $S$, such that $I$ surrounds $S$ in the following sense:
\begin{equation}
\{x\in X \subset \mathbb{R}^n: \inf_{s\in S} || x-s || \leq d \} \subset I,
\end{equation}
where $X$ is the reachable part of the state space $\mathbb{R}^n$.
\end{definition}
This implies that the system, under the control of another BT with maximal statespace steplength $d$, cannot leave $S$ without entering $I$, and thus avoiding $O$~\cite{BTBook}.
\begin{definition}[Safe~\cite{BTBook}]
\label{properties:def:Safe}
A BT is safe, with respect to the obstacle region $O \subset \mathbb{R}^n$, and the initialization region $I \subset R$,
if for all starting points $x(0)\in I$, we have that $x(t) \not \in O$, for all $t \geq 0$.
\end{definition}
\subsection{CBT's Definition}
We now formulate additional definitions. We use these definitions to provide a state-space formulation for CBTs (Definition~\ref{bts.def:CBT} below) and to prove system properties.
\begin{definition}[Progress Function]
\label{def:progress}
The function $p: \mathbb{R}^n \to [0,1]$ is the progress function. It indicates the progress of the BT's execution at each state.
\end{definition}
\begin{definition}[Resources]
\label{ps.def.L}
$R$ is a collection of symbols that represents the resources available in the system.
\end{definition}
\begin{definition}[Allocated Resource]
\label{ps.def.allocatedresources}
Let $\mathcal{N}$ be the set of all the nodes of a BT. The function $\alpha : \mathbb{R}^n \times R \to \mathcal{N}$ is the resource allocation function. It indicates which node is currently using a resource.
\end{definition}
\begin{definition}[Resource Function]
\label{ps.def.resources}
The function $Q : \mathbb{R}^n \to 2 ^ R$ is the resource function. It indicates the set of resources needed for a BT's execution at each state.
\end{definition}
\begin{definition}[Node priority]
\label{def:priority}
The function $\rho : \mathbb{R}^n \to \mathbb{R}$ is the priority function. It indicates the node's priority to access a resource.
\end{definition}
\begin{definition}[Priority Increment Function]
\label{def:priorityincrement}
The function $g : \mathbb{R^n} \to \mathbb{R}$ is the priority increment function. It indicates how the priority changes while a node is waiting for a resource.
\end{definition}
We can now define a CBT as BT with information regarding its progress and the resources needed as follows:
\begin{definition}[Concurrent BTs]
\label{bts.def:CBT}
A CBT is a tuple
\begin{equation}
\bt_i\triangleq\{f_i,r_i, \Delta t, p_i, q_i\},
\end{equation}
where $i$, $f_i$, $\Delta t$,
$r_i$ are defined as in Definition~\ref{bg.def:BT}, $p_i$ is a progress function, and $q_i$ is a resource function.
\end{definition}
A CBT has the functions $p_i$ and $q_i$ in addition to those of Definition~\ref{bg.def:BT}. These functions are user-defined for Actions and Conditions. For the classical operators, the functions are defined below.
\newpage
\begin{definition}[Sequence compositions of CBTs]
\label{bts:def.smoothseq}
Two CBTs can be composed into a more complex CBT using a Sequence operator,
$$\bt_0=\mbox{Sequence}(\bt_1,\bt_2).$$
The functions $r_0,f_0$ match those introduced in Definition~\ref{bts:def.seq}, while the functions $p_0,q_0$ are defined as follows
\begin{eqnarray}
\mbox{If }x_k\in S_1&& \\
p_0(x_k) &=& \frac{p_1(x_k) + p_2(x_k)}{2} \\
q_0(x_k) &=& q_2(x_k)\\
\mbox{ else }&& \nonumber \\
p_0(x_k) &=& \frac{p_1(x_k)}{2}\\
q_0(x_k) &=& q_1(x_k).
\end{eqnarray}
\end{definition}
\begin{definition}[Fallback compositions of CBTs]
\label{bts:def.smoothfal}
Two CBTs can be composed into a more complex CBT using a Fallback operator,
$$\bt_0=\mbox{Fallback}(\bt_1,\bt_2).$$
The functions $r_0,f_0$ are defined as in Definition~\ref{bts:def.fal}, while the functions $p_0,q_0$ are defined as follows
\begin{eqnarray}
\mbox{If }x_k\in {F}_1&& \\
p_0(x_k) &=& p_2(x_k)\\
q_0(x_k) &=& q_2(x_k) \\
\mbox{ else }&&\nonumber \\
p_0(x_k) &=& p_1(x_k)\\
q_0(x_k) &=& q_1(x_k).
\end{eqnarray}
\end{definition}
\begin{definition}[Parallel compositions of CBTs]
\label{bts:def:smoothpar}
Two CBTs can be composed into a more complex CBT using a Parallel operator,
$$\bt_0=\mbox{Parallel}(\bt_1,\bt_2).$$
The functions $r_0,f_0$ are defined as in Definition~\ref{bts:def:parallel}, while the functions $p_0$ and $q_0$ are defined as follows
\begin{eqnarray}
p_0(x_k) &=& \mbox{min}(p_1(x_k), p_2(x_k)) \\
q_0(x_k) &=& q_1(x_k) \cup q_2(x_k)
\end{eqnarray}
\end{definition}
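As an illustrative sketch (the names are ours, and the code simply mirrors the three definitions above rather than any particular BT engine), the composed progress and resource functions can be computed as:

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <string>

using ResourceSet = std::set<std::string>;

// Sequence(T1, T2): the first child contributes half of the total progress;
// once it has succeeded (inS1 == true) the second child fills the other half.
double sequenceProgress(double p1, double p2, bool inS1) {
    return inS1 ? (p1 + p2) / 2.0 : p1 / 2.0;
}

// Fallback(T1, T2): the progress of whichever child is currently executing.
double fallbackProgress(double p1, double p2, bool inF1) {
    return inF1 ? p2 : p1;
}

// Parallel(T1, T2): the slowest child dictates the progress, and the
// resource set is the union of the children's resource sets.
double parallelProgress(double p1, double p2) { return std::min(p1, p2); }

ResourceSet parallelResources(const ResourceSet& q1, const ResourceSet& q2) {
    ResourceSet q = q1;
    q.insert(q2.begin(), q2.end());
    return q;
}
```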
\begin{remark}
Condition nodes do not perform any action. Hence their progress function can be defined as $p(x_k) = 0$ and their resource function as $q(x_k) = \emptyset$ $\forall x_k \in \mathbb{R}^n$.
\end{remark}
\begin{definition}[Absolute Barrier]
\label{def:absbarrier}
An absolute barrier is defined as:
\begin{equation}
\begin{split}
b(x_k) \triangleq \min \{b_i \in B : \forall \bt_j \in T,\ p_j(x_k) \geq b_{i-1} \\ \land\ \exists \bt_k \in T : p_k(x_k) \geq b_{i} \}
\end{split}
\end{equation}
with $B$ a finite set of progress values.
\end{definition}
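Read operationally (this is a sketch of our reading of the definition, not necessarily the library implementation), the active absolute barrier is the smallest barrier value that at least one child has not reached yet, and a child whose progress has already reached that value is gated until the others catch up:

```cpp
#include <cassert>
#include <vector>

// Active absolute barrier: the smallest barrier value (barriers sorted
// increasingly) that at least one child has not reached yet.
double activeBarrier(const std::vector<double>& barriers,
                     const std::vector<double>& progress) {
    for (double b : barriers) {
        for (double p : progress) {
            if (p < b) return b;  // some child is still below b
        }
    }
    return 1.0;  // every child has reached every barrier value
}

// A child is gated (its decorator returns Running without ticking it)
// once its own progress has reached the active barrier.
bool isGated(double p, double barrier) { return p >= barrier; }
```

For instance, with barriers $\{0.1, 0.2, 0.3\}$ and progresses $0.25$ and $0.15$, the active barrier is $0.2$: the faster child waits while the slower one keeps running.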
\begin{definition}[Relative Barrier]
\label{def:relbarrier}
A relative barrier is defined as:
\begin{equation}
b(x_k) \triangleq \min_{\bt_i \in T} \{p_i(x_k) \} + \Delta
\end{equation}
with $\Delta \in \left[0, 1\right]$.
\end{definition}
\begin{definition}[Functional Formulation of a Progress Decorator Node]
A CBT $\bt_1$ can be composed into a more complex BT using an Absolute Progress Decorator operator,
$$\bt_0=\mbox{AbsoluteProgress}(\bt_1, b(x_k)).$$
Then $r_0,f_0, p_0, Q_0$ are defined as follows
\begin{align}
&p_0(x_k)=p_1(x_k)\\
&Q_0(x_k) = Q_1(x_k)\\
\mbox{If } &p_1(x_k) < b(x_k) \nonumber \\
&f_0(x_k) = f_1(x_k) \label{eq:form:progress:f}
\\
& r_0(x_k) = r_1(x_k) \label{eq:form:progress:r} \\
\mbox{else }& \nonumber \\
&f_0(x_k) = x_k \\
& r_0(x_k) = \mathcal{R}
\end{align}
With $b(x_k)$ as in Definition~\ref{def:absbarrier} for an absolute synchronization or as in Definition~\ref{def:relbarrier} for a relative synchronization.
\end{definition}
\begin{definition}[Functional Formulation of a Resource Decorator Node]
A Smooth BT $\bt_1$ can be composed into a more complex BT using a Resource Decorator operator,
$$\bt_0=\mbox{ResourceDecorator}(\bt_1, g).$$
With $g$ as in Definition~\ref{def:priorityincrement}. Then $r_0,f_0, p_0, Q_0$ are defined as follows
\begin{align}
&p_0(x_k)=p_1(x_k)\\
\mbox{If } &(\forall q \in Q_1(x_k): \alpha(x_{k}, q) = \bt_1 \land \rho_1(x_k) \geq \rho_{max}) \nonumber \\ &\lor \alpha(x_k, q) = \emptyset \nonumber \\
&r_0(x_k) = r_1(x_k) \label{eq:form:resource:r}\\
&f_0(x_k) = f_1(x_k) \label{eq:form:resource:f} \\
&Q_0(x_k) = Q_1(x_k) \\
&\alpha(x_{k}, q) = \begin{cases}
\bt_1 & \mbox{if } q \in Q_1(x_k) \\
\emptyset & \mbox{if } q \not\in Q_1(x_k) :\\ & \hspace{0.5em} \alpha(x_{k-1}, q) = \bt_1\\
\alpha(x_{k-1}, q) & \mbox{otherwise}
\end{cases}\\
& \rho_1(x_k) = \rho_1(x_{k-1}) \\
\mbox{els}&\mbox{e } \nonumber \\
&r_0(x_k) = \mathcal{R} \\
&f_0(x_k) = x_k \\
&Q_0(x_k) = \emptyset\\
& \rho_1(x_k) = \rho_1(x_{k-1}) + g_1(x_k) \label{eq:form:resource:g}
\end{align}
With $\alpha$ from Definition~\ref{ps.def.allocatedresources}.
\end{definition}
\begin{definition}[Active node]
A BT node $\bt_i$ is said to be active in a given BT if $\bt_i$ is either the root node or, whenever $r_i(x_k) = \mathcal{R}$, $\bt_i$ will eventually receive a tick.
\end{definition}
An \emph{active} node is thus a node that will eventually receive a tick whenever its return status is Running.
\newpage
\subsection{Lemmas}
The synchronization mechanism proposed in this paper may jeopardize the FTS property (Definition~\ref{properties:def:FTS}) of a BT. In particular, an FTS BT may no longer receive ticks from the decorators proposed in this paper, as it may wait indefinitely for another action. This relates to the problem of
\emph{starvation}, where a process waits for a critical resource while other processes with a higher priority prevent access to that resource~\cite{tanenbaum2015modern}.
\begin{lemma}[ProgressSync FTS BTs]
\label{th.APSFTS}
Let $\bt_1$ and $\bt_2$ be two FTS, active sub-BTs of the BT $\bt$, with regions of attraction $R_1$ and $R_2$ respectively. The sub-BTs $\tilde{\bt_1} = DecoratorSync(\bt_1, b(x_k))$ and $\tilde{\bt_2} = DecoratorSync(\bt_2, b(x_k))$ of the BT $\tilde \bt$, obtained by replacing $\bt_1$ and $\bt_2$ in $\bt$ with $\tilde{\bt_1}$ and $\tilde{\bt_2}$ respectively, are FTS.
\begin{proof}
Since $\bt_1$ and $\bt_2$ are active, they receive ticks as long as their return status is Running. From Definition~\ref{bts.def:CBT} and Equation~\eqref{eq:form:progress:r}, $\tilde{\bt_1}$ and $\tilde{\bt_2}$ have the same return statuses as $\bt_1$ and $\bt_2$ respectively, hence they too receive ticks as long as their return status is Running. Since $\bt_1$ and $\bt_2$ are FTS with regions of attraction $R_1$ and $R_2$, each $\bt_i$ eventually reaches a state $x_{\bar k} \in S_i$, which implies $r_i(x_{\bar k}) = \mathcal{S}$ and hence eventually $p_i(x_{\bar k}) = 1$. In this case the decorator propagates every tick it receives.
\end{proof}
\end{lemma}
\begin{corollary}[of Lemma~\ref{th.APSFTS}]
Let $\bt_1$, $\bt_2$, $\cdots$, and $\bt_N$ be $N$ FTS, active sub-BTs with regions of attraction $R_i$. Each sub-BT $\tilde{\bt_i} = DecoratorSync(\bt_i, B)$ is FTS if $r_i(x_k) = \mathcal{S} \implies p_i(x_k) = 1$ holds.
\begin{proof}
The proof is similar to the one of Lemma~\ref{th.APSFTS}.
\end{proof}
\end{corollary}
\begin{lemma}
Let $\bt_1$ be a safeguarding BT with respect to the step length $d$, the obstacle region $O \subset \mathbb{R}^n$, and the initialization region $I \subset R$. Then $\bt_0 = ProgressSync(\bt, b(x_k))$ is also safeguarding with respect to the step length $d$, the obstacle region $O \subset \mathbb{R}^n$, and the initialization region $I \subset R$, for any value of $b(x_k)$.
\begin{proof}
From Definition~\ref{def:Safeguarding}, $\bt_1$ satisfies $\{x: \inf_{s\in S_1} || x-s || \leq d \} \subset I$, hence $|f_1(x_k) - x_k| \leq d$ holds. From the functional formulation of the progress decorator, $f_0(x_k)$ is either $f_1(x_k)$ or $x_{k}$, hence $|f_0(x_k) - x_k| \leq d$ also holds.
\end{proof}
\end{lemma}
\begin{lemma}
Let $\bt_0 = ResourceDecorator(\bt_1, g)$. If $g(x_k)>0$ $\forall x_k \in \mathbb{R}^n$, then the execution of $\bt_0$ is starvation-free regardless of the resources allocated.
\begin{proof}
According to Equation~\eqref{eq:form:resource:g}, since $g(x_k) > 0$, whenever $\bt_0$ does not propagate ticks to $\bt_1$
it gradually increases the priority of $\bt_1$. The priority of $\bt_1$ thus grows without bound, so $\bt_1$ eventually attains the highest priority and obtains access to the resources.
\end{proof}
Setting $g(x_k) > 0$ implements aging, a technique to avoid starvation~\cite{tanenbaum2015modern}. We could also shape the function $g$ to implement different scheduling policies; however, that falls beyond the scope of this paper.
\end{lemma}
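The aging argument can be made concrete with a toy loop; the priority values and the constant increment below are illustrative assumptions, not parameters of the library:

```cpp
#include <cassert>

// Aging (sketch): a node waiting for a resource has its priority increased
// by g > 0 at every tick, so it eventually exceeds any bounded competing
// priority. Returns the number of ticks the node waits before acquiring
// the resource.
int ticksUntilAcquired(double initialPriority, double holderPriority, double g) {
    double rho = initialPriority;
    int ticks = 0;
    while (rho < holderPriority) {  // resource held by a higher-priority node
        rho += g;                   // priority grows while waiting
        ++ticks;
    }
    return ticks;
}
```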
\section{Conclusions}
\label{sec:conclusions}
This paper proposed two new BTs control flow nodes for resource and progress synchronization with different synchronization policies, absolute and relative. We proposed measures to assess the synchronization between different sub-BTs and the predictability of robot execution. Moreover, we observed how design choices for synchronization might affect the performance. The experimental validation supports such observations.
We showed our approach's applicability in a simulation system that allowed us to run the experiments several times in different settings to collect statistically significant data. We also showed the applicability of our approach in real robot scenarios taken from the literature. We provided the source code of our experimental validation and the code for the control flow nodes aforementioned. Finally, we studied the proposed node from a theoretical standpoint, which allowed us to identify the assumptions under which the synchronization does not jeopardize some BT properties.
\section*{Acknowledgment}
This work was carried out in the context of the SCOPE project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732410, in the form of financial support to third parties of the RobMoSys project. We also thank, in alphabetical order, Fabrizio Bottarel, Marco Monforte, Luca Nobile, Nicola Piga, and Elena Rampone for the support in the experimental validation.
\balance
\bibliographystyle{IEEEtran}
\section*{ACKNOWLEDGEMENTS}
The authors are thankful to Myongtak Choi for
helpful discussions.
This work was supported in part by the Korean Science and Engineering Foundation
and Korea Research
Foundation (BSRI-98-2441).
\section{Introduction}
For any $0\leq \alpha<1, \theta+\alpha>0$, let $U_1(\alpha,\theta),U_2(\alpha,\theta),\ldots$ be a sequence of independent random variables with $U_i(\alpha,\theta)$ having distribution $Beta(1-\alpha,\theta+i\alpha)$ for $i \geq 1$. If we define
\[
V_1(\alpha,\theta) =U_1(\alpha,\theta), V_n(\alpha,\theta)=(1-U_1(\alpha,\theta))\cdots(1-U_{n-1}(\alpha,\theta))U_n(\alpha,\theta), \ \ n\geq 2,
\]
then the law of the decreasing order statistic $${\bf P}(\alpha,\theta)=(P_1(\alpha,\theta),P_2(\alpha,\theta),\ldots)$$ of $(V_1(\alpha,\theta),V_2(\alpha,\theta),\ldots)$ is the two-parameter Poisson-Dirichlet distribution $PD(\alpha,\theta)$. It is a probability on the infinite-dimensional simplex
\[
\nabla_{\infty}=\{{\bf p}=(p_1,p_2,\ldots): p_1\geq p_2\geq \cdots \geq 0, \sum_{i=1}^{\infty}p_i \leq 1\}.
\]
Let $S$ be a Polish space and $\nu$ a probability on $S$ satisfying $\nu(\{x\})=0$ for all $x$ in $S$. In this case we say that $\nu$ is diffuse. The Pitman-Yor process with parameters $\alpha,\theta$ and $\nu$ is the random measure
\[
\Xi_{\alpha,\theta,\nu}=\sum_{i=1}^{\infty}P_i(\alpha,\theta)\delta_{\xi_i}.
\]
where $\xi_1,\xi_2,\ldots$ are i.i.d. with common distribution $\nu$ and is independent of ${\bf P}(\alpha,\theta)$. The case $\alpha=0$
corresponds to the Dirichlet process constructed in \cite{Fer73}.
The distribution $PD(0,\theta)$ was introduced by Kingman in \cite{Kingman75} as the law of relative jump sizes of a gamma subordinator over the interval $[0,\theta]$. It also arises in other contexts, most notably population genetics. The distribution $PD(\alpha,0)$ was also introduced in \cite{Kingman75}, through the stable subordinator.
In \cite{PPY92} and \cite{PY92}, $PD(\alpha,0)$ was constructed from the ranked length of excursion intervals between zeros of a Brownian motion ($\alpha=1/2$) or a recurrent Bessel process of order $2(1-\alpha)$ for general $\alpha$.
In this paper we focus on the case $\theta=0$. Without loss of generality, we choose the space $S$ to be $[0,1]$ and the probability $\nu$ to be the uniform distribution on $[0,1]$. This implies that the parameter $\alpha$ is in $(0,1)$. Our main objective is to study the asymptotic behaviour of $PD(\alpha,0)$ when $\alpha$ converges to zero, and the behaviour of both $PD(\alpha,0)$ and $\Xi_{\alpha,0,\nu}$ when $\alpha$ converges to one. There are many scenarios where the limiting procedure of $\alpha$ approaching one or zero arises naturally. We consider two examples below.
The first example is Derrida's random energy model (REM) introduced in \cite{Derrida80} and \cite{Derrida81}. This is a toy model for disordered system such as spin glasses. For any $N\geq 1$, let $S_N=\{-1,1\}^N$ denote the configuration space. Then the REM is a family of i.i.d. random variables $\{H_N(\sigma): \sigma \in S_N\}$ with common normal distribution of mean zero and variance $N$. Here $H_N(\sigma)$ is the Hamiltonian. Given the temperature $T$ and $\beta=T^{-1}$, the Gibbs measure is a probability on $S_N$ given by
\[
Z_N^{-1}\exp\{-\beta H_N(\sigma)\}
\]
where
\[
Z_N =\sum_{\sigma \in S_N}\exp\{-\beta H_N(\sigma)\}
\]
is the partition function. Let $T_c = \frac{1}{\sqrt{2\ln 2}}$ and $\alpha =\frac{T}{T_c}$. Then for $T <T_c$ or equivalently $\beta >\sqrt{2\ln 2}$, the decreasing order statistic of the Gibbs measure is known (cf. \cite{Tala03}) to converge to the Poisson-Dirichlet distribution $PD(\alpha,0)$ as $N$ tends to infinity. Thus $\alpha$ converging to zero corresponds to temperature going to zero while $\alpha$ converging to one corresponds to temperature rising to the critical value. To account for correlations, the generalized random energy model (GREM) involving hierarchical levels was introduced and studied in \cite{Derrida85} and \cite{DerGar86}. The generalization to continuum levels was done in \cite{BoSz98} and the genealogy of the hierarchical systems is described by the Bolthausen-Sznitman coalescent. In deriving the infinitesimal rate of the coalescent (Proposition 4.11 in \cite{Ber06} ), one needs to consider the limit of $PD(e^{-t},0)$ as $t$ converges to zero or equivalently $\alpha =e^{-t}$ converging to one.
The second example is concerned with the coalescence time for an explosive branching process. Consider a Galton-Watson branching process with offspring distribution in the domain of attraction of a stable law of index $0<\ga<1$. Let $X_n$ denote the coalescence time of any two individuals chosen at random at generation $n$. Then it is shown in \cite{Athreya12} that $\lim_{n\ra \infty}P\{n-X_n \leq k\}$ exists and can be calculated explicitly through $PD(\ga^k,0)$. In this case, $\alpha=\ga^k$ converging to zero corresponds to $k$ converging to infinity.
There have been intensive studies of the asymptotic behaviour for the Poisson-Dirichlet distribution and the Pitman-Yor process in recent years with motivations from probability theory, population genetics, and Bayesian statistics (see \cite{Feng10} and the references therein).
The results in this paper not only generalize some earlier results but, more importantly, reveal some surprising new structures.
The paper is organized as follows. In Section 2, we review the subordinator representation for $PD(\alpha,0)$. Section 3 contains the law of large numbers, fluctuation, and moderate deviations associated with $PD(\alpha,0)$ as $\alpha$ converges to zero or one. In Section 4, we establish the large deviation principle for $\Xi_{\alpha,0,\nu}$ under the limit of $\alpha$ converging to one. We finish the paper in Section 5 with some concluding remarks.
\section{Subordinator Representation}
For any $0<\alpha<1$, let $\rho_t$ be the stable subordinator with index $\alpha$ and L\'evy measure
\[
\Lambda_{\alpha}(d\,x)= \frac{\alpha}{\Gamma(1-\alpha)}x^{-(1+\alpha)}d\, x, \ \ x >0.
\]
The boundary case $\alpha=1$ corresponds to the straight line $\rho_t=t$. When $\alpha$ converges to zero, $\rho_t$ becomes a killed subordinator with killing rate one (\cite{Ber96}).
For any $t >0$, let $J_1(\rho_t)\geq J_2(\rho_t)\geq \cdots$ denote the jump sizes of $\rho_t$ over the interval $[0,t]$. Then the following representation holds.
\begin{theorem}\label{pre-t1}{\rm (Perman, Pitman, and Yor \cite{PPY92})}
For any $t >0$, the law of
\begin{equation}\label{pre-q1}
(\frac{J_1(\rho_t)}{\rho_t},\frac{J_2(\rho_t)}{\rho_t},\ldots)
\end{equation}
is $PD(\alpha,0)$.
\end{theorem}
For any $n \geq 1$, let $Z_n =\Lambda_{\alpha}(J_n(\rho_1), \infty)$. Then $Z_1 <Z_2<\ldots$ and $Z_1, Z_2-Z_1, Z_3-Z_2,\ldots$ are i.i.d. exponential random variables with parameter $1$.
Noting that $\Lambda_{\alpha}(x,\infty)= \frac{x^{-\alpha}}{\Gamma(1-\alpha)}$, it follows that
\begin{equation}\label{q1'}
\frac{J_n(\rho_1)}{\rho_1}= \frac{Z_n^{-1/\alpha}}{\sum_{i=1}^\infty Z_i^{-1/\alpha}}
\end{equation}
and
\begin{equation}\label{q2'}
\rho_1= \Gamma(1-\alpha)^{-1/\alpha} \sum_{i=1}^\infty Z_i^{-1/\alpha}.
\end{equation}
Thus by Theorem~\ref{pre-t1} the law of
\begin{equation}\label{pre-q2}
(\frac{Z_1^{-1/\alpha}}{\sum_{i=1}^{\infty}Z_i^{-1/\alpha}}, \frac{Z_2^{-1/\alpha}}{\sum_{i=1}^{\infty}Z_i^{-1/\alpha}}, \ldots)
\end{equation}
is $PD(\alpha,0)$.
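The last display also yields a convenient sampler for $PD(\alpha,0)$: normalize $Z_n^{-1/\alpha}$ over the arrival times of a rate-one Poisson process. The sketch below is our own illustration (seed, truncation of the series, and sample sizes are ours); it also checks by Monte Carlo the identity $\mathbb{E}[\sum_i P_i^2(\alpha,0)]=1-\alpha$ used in the next section.

```python
import numpy as np

def sample_pd_alpha0(alpha, n_terms, rng):
    # Z_1 < Z_2 < ... are arrival times of a rate-one Poisson process;
    # the normalized Z_n^{-1/alpha} are already in decreasing order.
    z = np.cumsum(rng.exponential(1.0, size=n_terms))
    w = z ** (-1.0 / alpha)
    return w / w.sum()

rng = np.random.default_rng(1)
alpha = 0.4
phi2 = [np.sum(sample_pd_alpha0(alpha, 5000, rng) ** 2) for _ in range(400)]
mc_mean = float(np.mean(phi2))   # should be close to 1 - alpha = 0.6
```

The truncation after 5000 jumps is harmless here because $Z_n^{-1/\alpha}$ decays like $n^{-1/\alpha}$.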
\section{Limit Theorems for $PD(\alpha,0)$}
Let $${\bf P}(\alpha,0)=(P_1(\alpha,0), P_2(\alpha,0), \ldots)$$ and
$$\varphi_2({\bf P}(\alpha,0)):=\sum_{i=1}^{\infty}P_i^2(\alpha,0).$$
A direct application of Pitman's sampling formula (\cite{Pitman1992}, \cite{Pitman06}) leads to
\[
\mathbb{E}_{\alpha,0}[\varphi_2({\bf P}(\alpha,0))] = 1-\alpha.
\]
This implies that ${\bf P}(\alpha,0)$ converges in probability to $(1,0,\ldots)$ and $(0,0,\ldots)$ as $\alpha$ converges to $0$ and $1$, respectively. The objective of this section is to obtain more detailed information associated with these limits including fluctuation and large deviations.
\subsection{Convergence and Limit}
For any $n \geq 1$, set
\[
P_n(\alpha,0)=\frac{Z_n^{-1/\alpha}}{\sum_{i=1}^{\infty}Z_i^{-1/\alpha}}
\]
Let $0<\ga(\alpha) \leq 1$ and $\iota(\alpha)>0$ be such that
\begin{equation}\label{scale-e1}
\lim_{\alpha \ra 0}\frac{\ga(\alpha)}{\alpha}=c_1 \in [0,+\infty]
\end{equation}
and
\begin{equation}\label{scale-e2}
\lim_{\alpha \ra 1}\frac{\iota(\alpha)}{\Gamma(1-\alpha)} =c_2 \in [0,\infty).
\end{equation}
\begin{theorem}\label{sec3-t1}
Let
\[
{\bf P}^{\ga(\alpha)}(\alpha,0)=(P_1^{\ga(\alpha)}(\alpha,0), P_2^{\ga(\alpha)}(\alpha,0), \ldots).
\]
If $c_1$ is finite, then
${\bf P}^{\ga(\alpha)}(\alpha,0)$ converges almost surely to $(1, (\frac{Z_1}{Z_2})^{c_1},(\frac{Z_1}{Z_3})^{c_1},\ldots)$ as $\alpha$ converges to $0$. If $c_1=\infty$, then ${\bf P}^{\ga(\alpha)}(\alpha,0)$ converges to $(1,0,\ldots)$ in probability as $\alpha$ converges to $0$.
\end{theorem}
{\bf Proof:}\ Set
\[
\tilde{\bf Z}= (Z_1^{-1}, Z_2^{-1}, \ldots).
\]
Then we have
\[
(\sum_{i=1}^{\infty}Z_i^{-1/\alpha})^{\alpha}= ||\tilde{\bf Z}||_{1/\alpha}.
\]
As $\alpha$ approaches zero, $||\tilde{\bf Z}||_{1/\alpha}$ converges almost surely to $||\tilde{\bf Z}||_{\infty}=Z^{-1}_1$. This implies that
\[
{\bf P}^{\alpha}(\alpha,0)=(\frac{Z^{-1}_1}{||\tilde{\bf Z}||_{1/\alpha}}, \frac{Z_2^{-1}}{||\tilde{\bf Z}||_{1/\alpha}}, \ldots)
\]
converges almost surely to $(1, \frac{Z_1}{Z_2}, \frac{Z_1}{Z_3}, \ldots)$ as $\alpha$ converges to zero. Write ${\bf P}^{\ga(\alpha)}(\alpha,0)$ as
\[
((\frac{Z^{-1}_1}{||\tilde{\bf Z}||_{1/\alpha}})^{\ga(\alpha)/\alpha},(\frac{Z_2^{-1}}{||\tilde{\bf Z}||_{1/\alpha}})^{\ga(\alpha)/\alpha}, \ldots).
\]
Then by continuity we obtain that ${\bf P}^{\ga(\alpha)}(\alpha,0)$ converges almost surely to $(1, (\frac{Z_1}{Z_2})^{c_1},(\frac{Z_1}{Z_3})^{c_1},\ldots)$ as $\alpha$ converges to zero. If $c_1=\infty$, then for any $M\geq 1$ one has $\frac{\ga(\alpha)}{\alpha} > M$ for small enough $\alpha$. Thus for any $n > 1$
\[
\lim_{\alpha \ra 0}P_n^{\ga(\alpha)}(\alpha,0) \leq \lim_{\alpha \ra 0}P_n^M(\alpha,0) =(\frac{Z_1}{Z_n})^M.
\]
Since $M$ is arbitrary, we obtain
\[
\lim_{\alpha \ra 0}P_n^{\ga(\alpha)}(\alpha,0)=0, \ a.s., \ n>1.
\]
Finally for $n=1$, we have
$$P_1(\alpha,0)\leq P_1^{\ga(\alpha)}(\alpha,0)\leq 1.$$
Noting that $\varphi_2({\bf P}(\alpha,0))\leq P_1(\alpha,0)$ and hence
\[
\mathbb{E}[P_1(\alpha,0)] \geq \mathbb{E}[\varphi_2({\bf P}(\alpha,0))]=1-\alpha,
\]
it follows that $P_1(\alpha,0)$ converges to $1$ in probability, which implies that $P_1^{\ga(\alpha)}(\alpha,0)$ converges to one in probability.
\hfill $\Box$
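Theorem~\ref{sec3-t1} with $\ga(\alpha)=\alpha$ (so $c_1=1$) can be checked numerically for a fixed realization of the arrival times $Z_n$. The sketch below is ours: the exponential gaps are hard-coded for reproducibility, the series is truncated, and the computation is done in logarithms to avoid overflow in $Z_n^{-1/\alpha}$.

```python
import numpy as np

def p_pow_alpha(z, alpha, n):
    """Compute P_n(alpha,0)^alpha for arrival times z, working in log space."""
    lw = -np.log(z) / alpha                        # log Z_k^{-1/alpha}
    lse = lw.max() + np.log(np.sum(np.exp(lw - lw.max())))
    return float(np.exp(alpha * (lw[n - 1] - lse)))

# a fixed realization of the exponential gaps (hard-coded, illustrative)
z = np.cumsum([0.7, 1.2, 0.5, 1.7, 0.9, 1.1, 0.8, 1.3])
# as alpha -> 0, P_n(alpha,0)^alpha -> Z_1/Z_n
err2 = abs(p_pow_alpha(z, 0.01, 2) - z[0] / z[1])
err3 = abs(p_pow_alpha(z, 0.01, 3) - z[0] / z[2])
```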
\begin{theorem}\label{sec3-t2}
As $\alpha$ converges to $1$, $\iota(\alpha){\bf P}(\alpha,0)$ converges in probability to $c_2 \, (Z_1^{-1}, Z_2^{-1}, \ldots)$.
\end{theorem}
{\bf Proof:}\ Let $S_{\alpha}= \rho_1^{-\alpha}$. Then the law of $S_{\alpha}$ is the Mittag-Leffler distribution with density function
\[
g_{\alpha}(s)=\sum_{k=0}^{\infty}\frac{(-s)^k}{k!}\Gamma(\alpha k+\alpha+1)\frac{\sin(\alpha k\pi )}{\alpha k \pi}
\]
and
\begin{eqnarray}
&&\sum_{i=1}^\infty Z_i^{-1/\alpha}= (\frac{S_{\alpha}}{\Gamma(1-\alpha)})^{-1/\alpha}\label{pre-q3}\\
&& \mathbb{E}[S_{\alpha}^r]= \frac{\Gamma(r+1)}{\Gamma(\alpha r+1)}, \ r>-1\label{pre-q4}.
\end{eqnarray}
This implies that
\begin{eqnarray*}
\mathbb{E}[(S_{\alpha}-1)^2]&=& \frac{2}{\Gamma(2\alpha+1)}- \frac{2}{\Gamma(\alpha +1)} +1\\
&\ra& 0,\ \ \alpha \ra 1.
\end{eqnarray*}
Hence $S_{\alpha}$ converges to $1$ in probability as $\alpha$ converges to $1$.
By \rf{q2'}, one has
\begin{eqnarray*}
\iota(\alpha){\bf P}(\alpha,0)&=& \frac{\iota(\alpha)}{\Gamma(1-\alpha)}\Gamma(1-\alpha)^{1-\frac{1}{\alpha}}((\frac{Z_1}{S_{\alpha}})^{-1/\alpha}, (\frac{Z_2}{S_{\alpha}})^{-1/\alpha},\ldots)\\
&=&\frac{\iota(\alpha)}{\Gamma(1-\alpha)}\Gamma(1-\alpha)^{1-\frac{1}{\alpha}}\exp\{\frac{1}{\alpha}\log S_{\alpha}\}(Z_1^{-1/\alpha},Z_2^{-1/\alpha}, \ldots).\end{eqnarray*}
Since $S_{\alpha}$ converges to one in probability and $(Z_1^{-1/\alpha}, Z_2^{-1/\alpha}, \ldots)$ converges to $(Z_1^{-1}, Z_2^{-1}, \ldots)$ almost surely as $\alpha$ converges to one, we conclude that $\iota(\alpha){\bf P}(\alpha,0)$ converges to $c_2\, (Z_1^{-1}, Z_2^{-1}, \ldots)$ in probability.
\hfill $\Box$
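The convergence $\mathbb{E}[(S_{\alpha}-1)^2]\to 0$ used in the proof is elementary to verify numerically from the moment formula. The short check below is ours and uses only the Gamma function.

```python
import math

def ms_dist(a):
    # E[(S_a - 1)^2] = E[S_a^2] - 2 E[S_a] + 1 with E[S_a^r] = Gamma(r+1)/Gamma(a*r+1)
    return 2.0 / math.gamma(2.0 * a + 1.0) - 2.0 / math.gamma(a + 1.0) + 1.0

vals = [ms_dist(a) for a in (0.9, 0.99, 0.999)]
```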
\subsection{Large Deviations}
In this section we consider the large deviations associated with the deterministic limits obtained in Theorem~\ref{sec3-t1}. In comparison with the large deviations associated with ${\bf P}(\alpha,0)$ these results can be viewed as moderate deviations for ${\bf P}(\alpha,0)$. We prove these results through a series of lemmas.
For any $n \geq 1$ let
\[
R_n =\frac{P_{n+1}(\alpha,0)}{P_n(\alpha,0)}.
\]
Then $\{R_n:n \geq1\}$ is a sequence of independent beta random variables, with $R_n$ having the $Beta(n\alpha,1)$ distribution, i.e. $\mathbb{P}\{R_n\leq x\}=x^{n\alpha}$ (Proposition 8 in \cite{PitmanYor97}).
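This beta law can be checked against the subordinator representation: there $R_n=(Z_n/Z_{n+1})^{1/\alpha}$, so that $\mathbb{E}[R_n]=n\alpha/(n\alpha+1)$ and $\mathbb{P}\{R_n\leq x\}=x^{n\alpha}$. The Monte Carlo sketch below (seed and sample sizes ours) compares both quantities.

```python
import numpy as np

alpha, n, reps = 0.6, 3, 200_000
rng = np.random.default_rng(3)
gaps = rng.exponential(1.0, size=(reps, n + 1))
z = np.cumsum(gaps, axis=1)                      # Z_1, ..., Z_{n+1} per row
r_n = (z[:, n - 1] / z[:, n]) ** (1.0 / alpha)   # R_n = (Z_n/Z_{n+1})^{1/alpha}
mean_rn = float(r_n.mean())                      # target n*alpha/(n*alpha + 1)
cdf_half = float(np.mean(r_n <= 0.5))            # target 0.5**(n*alpha)
```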
\begin{lemma}\label{ldp-l1}
Let ${\bf R}^{\ga(\alpha)}=(R_1^{\ga(\alpha)}, R_2^{\ga(\alpha)}, \ldots)$. As $\alpha$ converges to $0$,
large deviation principles hold for ${\bf R}^{\ga(\alpha)}$ on space $[0,1]^{\infty}$ with respective speeds and rate functions
$(\frac{\alpha}{\ga(\alpha)}, J_1(\cdot))$ and $(\log \frac{\ga(\alpha)}{\alpha}, J_2(\cdot))$ depending on whether $c_1=0$ or $c_1=\infty$, where
\[
J_1({\bf x})=\left\{ \begin{array}{ll}
\sum_{n=1}^\infty n\log \frac{1}{x_{n}},& x_n>0 \ \mb{for all}\ n\geq 1,\\
+\infty,& otherwise.
\end{array}
\right.
\]
and
\[
J_2({\bf x})=\#\{n\geq 1: x_n >0\}.
\]
\end{lemma}
{\bf Proof:}\ Assume that $c_1=0$. For any $n\geq 1$ and any $x$ in $[0,1]$, one has
\begin{eqnarray*}
n\log x &\leq& \lim_{\delta\ra 0}\liminf_{\alpha \ra 0}\frac{\ga(\alpha)}{\alpha}\log \mathbb{P}\{|R_n^{\ga(\alpha)}-x|<\delta\}\\
n\log x &\geq& \lim_{\delta\ra 0}\limsup_{\alpha \ra 0}\frac{\ga(\alpha)}{\alpha}\log \mathbb{P}\{|R_n^{\ga(\alpha)}-x|\leq\delta\}.
\end{eqnarray*}
This combined with the compactness of $[0,1]$ implies that $R_n^{\ga(\alpha)}$ satisfies a large deviation principle on $[0,1]$ with speed $\frac{\alpha}{\ga(\alpha)}$ and rate function $n\log \frac{1}{x}$. Similarly for $c_1=\infty$, we have
\begin{eqnarray*}
-\chi_{\{x>0\}} &\leq& \lim_{\delta\ra 0}\liminf_{\alpha \ra 0}(\log\frac{\ga(\alpha)}{\alpha})^{-1}\log \mathbb{P}\{|R_n^{\ga(\alpha)}-x|<\delta\}\\
-\chi_{\{x>0\}} &\geq& \lim_{\delta\ra 0}\limsup_{\alpha \ra 0}(\log\frac{\ga(\alpha)}{\alpha})^{-1}\log \mathbb{P}\{|R_n^{\ga(\alpha)}-x|\leq\delta\}.
\end{eqnarray*}
These combined with the independence of $R_1, R_2, \dots$ imply the large deviations for ${\bf R}^{\ga(\alpha)}$.\hfill $\Box$
\begin{lemma}\label{ldp-l2}
There exists $\delta \geq 1$ such that for any $\lambda <\delta$
\begin{equation}\label{ldp-e5}
\mathbb{E}[\exp\{\lambda(1-\alpha)(P^{-1}_1(\alpha,0)-1)\}]= (1+A_{\lambda,\alpha})^{-1}<\infty
\end{equation}
where
\[
A_{\lambda,\alpha}=\alpha \int_0^1(1-e^{\lambda(1-\alpha)z})z^{-(1+\alpha)}d\,z.
\]
\end{lemma}
{\bf Proof:}\ Clearly $A_{\lambda,\alpha}$ is nonnegative for $\lambda \leq 0$, and converges to negative infinity as $\lambda$ tends to positive infinity. It is known (equation (77) in \cite{Kingman75}) that
\begin{equation}\label{ldp-e3}
\mathbb{E}[\exp\{\lambda(1-\alpha)(P^{-1}_1(\alpha,0)-1)\}]= (1+A_{\lambda,\alpha})^{-1}<\infty
\end{equation}
for $\lambda \leq 0$. For $\lambda>0$, we have
\begin{eqnarray}\label{ldp-e4}
A_{\lambda, \alpha}&=& (1-\lambda)e^{\lambda(1-\alpha)}-1 +\lambda^2(1-\alpha)\int_0^1 z^{1-\alpha}e^{\lambda(1-\alpha)z}d\,z\\
&\geq& (1-\lambda) e^{\lambda(1-\alpha)} -1 +\lambda^2(1-\alpha)\int_0^1z^{1-\alpha}e^{\lambda(1-\alpha)z}d\,z.\nonumber
\end{eqnarray}
If we define
$$\lambda_{\alpha}= \sup\{\lambda>0:A_{\lambda,\alpha} +1>0\},$$
then $\lambda_{\alpha}\geq 1$ by \rf{ldp-e4} and
\[
\delta=\inf\{\lambda_{\alpha}: 0<\alpha<1\}\geq 1.
\]
By Campbell's theorem \rf{ldp-e5} holds for any $\lambda<\delta$.
\hfill $\Box$
\begin{lemma}\label{ldp-l3}
Let $\epsilon>0$ be arbitrarily given. If $c_1=0$, then
\begin{equation}\label{ldp-e7}
\limsup_{\alpha \ra 0}\frac{\ga(\alpha)}{\alpha}\log \mathbb{P}\{|P_1^{\ga(\alpha)}(\alpha,0)-1|>\epsilon\}=-\infty.
\end{equation}
If $c_1=\infty$ and
\begin{equation}\label{ldp-e11}
\lim_{\alpha \ra 0}\ga(\alpha)=0,
\end{equation}
then
\begin{equation}\label{ldp-e6}
\limsup_{\alpha \ra 0}\frac{1}{\log \frac{\ga(\alpha)}{\alpha}}\log \mathbb{P}\{|P_1^{\ga(\alpha)}(\alpha,0)-1|>\epsilon\}=-\infty.
\end{equation}
\end{lemma}
{\bf Proof:}\ Since the limit involves only small $\alpha$, we may assume that $0<\alpha<1/2$ and $0< \epsilon <1/2$. Let $\delta$ be as in Lemma~\ref{ldp-l2} and set $\delta_1=\delta/4$. By direct calculation we obtain that
\begin{eqnarray}\label{ldp-e8}
\mathbb{P}\{|P^{\ga(\alpha)}_1(\alpha,0)-1|>\epsilon\}&=&\mathbb{P}\{P^{-1}_1(\alpha,0)-1\geq (1-\epsilon)^{-1/\ga(\alpha)}-1\}\nonumber\\
&\leq & \mathbb{E}[e^{\delta_1(P_1^{-1}(\alpha,0)-1)}]e^{-\delta_1[(1-\epsilon)^{-1/\ga(\alpha)}-1]}\\
&\leq & (1+A_{\delta_1, \alpha})^{-1}e^{-\delta_1[(1-\epsilon)^{-1/\ga(\alpha)}-1]}.\nonumber
\end{eqnarray}
It follows from \rf{ldp-e4} that
\begin{equation}\label{ldp-e9}
\lim_{\alpha\ra 0} (1+A_{\delta_1, \alpha}) =1.
\end{equation}
If $c_1=0$, then
\begin{eqnarray}\label{ldp-e10}
&&\limsup_{\alpha\ra 0}\frac{(1-\epsilon)^{-1/\ga(\alpha)}-1}{\frac{\alpha}{\ga(\alpha)}}= \limsup_{\alpha\ra 0}\frac{(1-\epsilon)^{-1/\ga(\alpha)}}{\frac{\alpha}{\ga(\alpha)}}\nonumber\\
&&\hspace{1cm} = \limsup_{\alpha \ra 0}\exp{\{\frac{1}{\ga(\alpha)}[\log\frac{1}{(1-\epsilon)}+\ga(\alpha)\log \ga(\alpha)-\frac{\ga(\alpha)}{\alpha} \alpha \log \alpha] \}}\\
&& \hspace{1cm}= \infty. \nonumber
\end{eqnarray}
Next assume that $c_1=\infty$ and \rf{ldp-e11} hold. For any $0<\epsilon<1/2$, $(1-\epsilon)^{1/\ga(\alpha)}$ converges to zero as $\alpha$ tends to zero. Hence for any $k\geq 1$, one can find $\alpha_k>0$ such that for all $0<\alpha<\alpha_k$
\[
\mathbb{P}\{|P_1^{\ga(\alpha)}(\alpha,0)-1|>\epsilon\}\leq \mathbb{P}\{P_1(\alpha,0)< \frac{1}{k}\}.
\]
\]
By the large deviation principle for $P_1(\alpha,0)$ in \cite{Feng09}, we obtain that
\begin{eqnarray*}
\limsup_{\alpha \ra 0}\frac{1}{\log\frac{1}{\alpha}}\log \mathbb{P}\{|P_1^{\ga(\alpha)}(\alpha,0)-1|>\epsilon\}&\leq& \limsup_{\alpha \ra 0}\frac{1}{\log\frac{1}{\alpha}}\log \mathbb{P}\{P_1(\alpha,0)< \frac{1}{k}\}\\
&\leq& -(k-1).
\end{eqnarray*}
Noting that $\ga(\alpha)<1$ and $k$ is arbitrary it follows that
\begin{eqnarray}\label{ldp-e12}
&&\limsup_{\alpha\ra 0}\frac{1}{\log\frac{\ga(\alpha)}{\alpha}}\log \mathbb{P}\{|P_1^{\ga(\alpha)}(\alpha,0)-1|>\epsilon\}\hspace{6cm}\nonumber\\
&&\hspace{3.5cm}\leq \limsup_{\alpha\ra 0}\frac{1}{\log\frac{1}{\alpha}}\log \mathbb{P}\{|P_1^{\ga(\alpha)}(\alpha,0)-1|>\epsilon\} \\
&&\hspace{3.5cm}\leq\lim_{k\ra \infty }\limsup_{\alpha\ra 0}\frac{1}{\log\frac{1}{\alpha}}\log \mathbb{P}\{P_1(\alpha,0)\leq \frac{1}{k}\} \nonumber\\
&&\hspace{3.5cm} = -\infty.\nonumber
\end{eqnarray}
Putting together \rf{ldp-e8}-\rf{ldp-e12}, we get \rf{ldp-e7} and \rf{ldp-e6}.
\hfill $\Box$
\begin{theorem}\label{ldp-t1}
Let $\ga(\alpha)$ satisfy \rf{scale-e1}, and set
\[
\nabla=\{{\bf x}=(x_1,x_2,\ldots): 1\geq x_1\geq x_2\geq\cdots\geq 0\}.
\]
Then the following hold as $\alpha$ converges to $0$.
{\rm (i)} If $c_1 =0$, then the family $\{{\bf P}^{\ga(\alpha)}(\alpha,0): 0<\alpha <1\}$ satisfies a large deviation principle on space $\nabla$ with speed $\frac{\alpha}{\ga(\alpha)}$ and rate function
\begin{equation}\label{ldp-e2}
I_1({\bf x})=\left\{ \begin{array}{ll}
\sum_{n=1}^\infty n\log \frac{x_{n}}{x_{n+1}},& x_1=1, x_n>0 \ \mb{for all}\ n>1,\\
+\infty,&otherwise.
\end{array}
\right.
\end{equation}
{\rm (ii)} If $c_1 =\infty$ and \rf{ldp-e11} holds, then the family $\{{\bf P}^{\ga(\alpha)}(\alpha,0): 0<\alpha <1\}$ satisfies a large deviation principle on space $\nabla$ with speed $\log\frac{\ga(\alpha)}{\alpha}$ and the rate function
\begin{equation}\label{ldp-e1}
I_2({\bf x})=\left\{ \begin{array}{ll}
n-1,& x_1=1, x_n>0, x_k=0, k> n,\\
+\infty,& otherwise.
\end{array}
\right.
\end{equation}
\end{theorem}
{\bf Proof:}\ Writing ${\bf P}^{\ga(\alpha)}$ in terms of ${\bf R}^{\ga(\alpha)}$ we have
\[
{\bf P}^{\ga(\alpha)}= P_1^{\ga(\alpha)}(\alpha,0)(1, R^{\ga(\alpha)}_1,R^{\ga(\alpha)}_1 R^{\ga(\alpha)}_2, \ldots).
\]
By Lemma~\ref{ldp-l3}, $P_1^{\ga(\alpha)}(\alpha,0)$ is exponentially equivalent to one. Hence by Lemma 2.1 in \cite{FengGao08}
$(1, R^{\ga(\alpha)}_1,R^{\ga(\alpha)}_1 R^{\ga(\alpha)}_2, \ldots)$ and $ {\bf P}^{\ga(\alpha)}$ have the same large deviation principle. Define
\[
\psi: [0,1]^{\infty}\lra \nabla, \ \ (x_1,x_2,\ldots)\ra (1, x_1,x_1x_2, \ldots).
\]
Then $\psi$ is clearly continuous and $(1, R^{\ga(\alpha)}_1,R^{\ga(\alpha)}_1 R^{\ga(\alpha)}_2, \ldots)= \psi({\bf R}^{\ga(\alpha)})$. Noting that
\[
I_i({\bf x})= \inf\{J_i({\bf y}): \psi({\bf y})={\bf x}\}, i=1,2,
\]
the theorem follows from Lemma~\ref{ldp-l1} and the contraction principle.
\hfill $\Box$
\section{Asymptotic Behaviour of $\Xi_{\alpha,0,\nu}$}
Recall that the REM has configuration space $S_{N}=\{-1,1\}^{N}$ and Hamiltonian given by a family of i.i.d. normal random variables with mean $0$ and variance $N$
$$\{H_{N}(\sigma)\mid \sigma\in S_{N}\}.$$
The Gibbs measure $G_{N}(\sigma)$ at temperature $T$ is given by
$$
Z_{N}^{-1}\exp\{-\beta H_{N}(\sigma)\},
$$
where $\beta=1/T$ and $Z_{N}=\sum_{\sigma\in S_{N}}\exp\{-\beta H_{N}(\sigma)\}.$ By making the change of variable
$$
r_{N}(\sigma)=1-\sum_{i=1}^{N}(1-\sigma_{i})2^{-i-1},
$$
we can regard $[0,1]$ as the new configuration space. The corresponding Gibbs measure has the form
$$
\mu_{N}^{T}(d\,x)=\sum_{\sigma\in S_{N}}G_{N}(\sigma)\delta_{r_{N}(\sigma)}(d\,x).
$$
As $N\to\infty$, the limiting Gibbs measure $\mu^{T}=\lim_{N\to\infty}\mu_{N}^{T}$ exhibits a phase transition at the critical temperature $T_{c}=\frac{1}{\sqrt{2\log2}}$. More specifically, by Theorems 9.3.1 and 9.3.4 in \cite{Bov06}, we have
$$
\mu^{T}=\begin{cases}
\nu,& \mbox{ if}\ T\geq T_{c}\\
\Xi_{\alpha,0,\nu}, & \mbox{ if } T<T_{c}.
\end{cases}
$$
Thus a phase transition occurs when the temperature crosses the critical value between high temperature and low temperature regimes. The low temperature regime has a rich structure. The transition from the low temperature regime to the critical temperature regime corresponds to $\alpha$ tending to one from below. The goal of this section is to understand the microscopic behaviour of this transition through the establishment of a large deviation principle for $\Xi_{\alpha,0,\nu}$.
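For a concrete finite-$N$ picture of the low temperature regime, one can simulate the REM Gibbs weights directly. The sketch below is purely illustrative and ours: $N=16$, $\alpha=T/T_c=0.5$, a fixed seed, and a log-space normalization to avoid overflow.

```python
import numpy as np

N = 16                                         # 2^16 = 65536 configurations
T_c = 1.0 / np.sqrt(2.0 * np.log(2.0))
alpha = 0.5                                    # T = alpha * T_c < T_c
beta = 1.0 / (alpha * T_c)
rng = np.random.default_rng(4)
H = rng.normal(0.0, np.sqrt(N), size=2 ** N)   # Hamiltonian values
logw = -beta * H
logw -= logw.max()                             # stabilize exp before normalizing
G = np.exp(logw)
G /= G.sum()                                   # Gibbs weights
G = np.sort(G)[::-1]                           # decreasing order statistic
```

As $N\to\infty$ the decreasing weights converge in law to $PD(\alpha,0)$, so for instance $\sum_i G_i^2$ concentrates around $1-\alpha$; at $N=16$ this is only roughly visible.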
\subsection{Estimates for Stable Subordinator}
Recall that $\rho_t$ is the stable subordinator with index $0<\alpha<1$. For $t=1$, the following holds.
\begin{lemma}{\rm (\cite{Pollard46}, \cite{Kan75})}
The distribution function of $\rho_{1}^{\frac{\alpha}{1-\alpha}}$ admits the integral representation
\begin{equation}
F(x)=\mathbb{P}\{\rho_{1}^{\frac{\alpha}{1-\alpha}}\leq x\}=\frac{1}{\pi}\int_{0}^{\pi}e^{-\frac{A(u)}{x}}du,\label{first}
\end{equation}
where $A(u)$ is Zolotarev's function, defined as
$$
A(u)=\left\{\frac{\sin^{\alpha}(\alpha u)\sin^{1-\alpha}((1-\alpha)u)}{\sin u}\right\}^{\frac{1}{1-\alpha}}.
$$
The distribution function of $\rho_{1}$ is thus $F(x^{\frac{\alpha}{1-\alpha}})$.
The density function of $\rho_{1}$ has the following representation
\begin{equation}
\phi_{\alpha}(t)=\frac{1}{\pi}\int_{0}^{\infty}e^{-tu}e^{-u^{\alpha}\cos \pi\alpha}\sin (u^{\alpha}\sin \pi\alpha)du\label{second}
\end{equation}
\end{lemma}
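The density representation (\ref{second}) can be spot-checked numerically: at $\alpha=1/2$ one has $\cos\pi\alpha=0$ and $\sin\pi\alpha=1$, and $\rho_1$ is a L\'evy random variable with the closed-form density $t^{-3/2}e^{-1/(4t)}/(2\sqrt{\pi})$. The comparison below is our own sketch (midpoint rule, truncation at $u=60$).

```python
import numpy as np

def phi_half(t, u_max=60.0, m=600_000):
    # (1/pi) * int_0^infty e^{-t u} sin(sqrt(u)) du, midpoint rule on [0, u_max]
    h = u_max / m
    u = (np.arange(m) + 0.5) * h
    return h * float(np.sum(np.exp(-t * u) * np.sin(np.sqrt(u)))) / np.pi

t = 1.0
approx = phi_half(t)
exact = t ** (-1.5) * np.exp(-1.0 / (4.0 * t)) / (2.0 * np.sqrt(np.pi))
```

The truncation error is of order $e^{-u_{\max}}$ and thus negligible for moderate $t$.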
Applying these representations, we obtain the following estimates.
\begin{theorem}\label{fgz-t1}
For any given $1>\delta>0$, we have
\begin{equation}\label{fgz-1}
\lim_{\alpha\to1}(1-\alpha)\log\log\frac{1}{\mathbb{P}\{\rho_{1}<1-\delta\}}=\lim_{\alpha\to1}(1-\alpha)\log\log\frac{1}{\mathbb{P}\{\rho_{1}\leq 1-\delta\}}=\log\frac{1}{1-\delta}
\end{equation}
and
\begin{equation}\label{fgz-2}
\lim_{\alpha\to1}\frac{1}{\log\frac{1}{1-\alpha}}\log \mathbb{P}\{\rho_{1}>1+\delta\}=\lim_{\alpha\to1}\frac{1}{\log\frac{1}{1-\alpha}}\log \mathbb{P}\{\rho_{1}\geq1+\delta\}=-1.
\end{equation}
\end{theorem}
\begin{proof}
For any $u\in (0,\pi), v\in (0,1)$, one has
\begin{eqnarray*}
\frac{d [v \cot(v u)-\cot u]}{dv}&=&\frac{1}{2\sin^{2}(v u)}(\sin(2v u)-2v u)\\
&\leq& \frac{1}{2\sin^{2}(v u)}(\sin(v u)-v u)\leq 0
\end{eqnarray*}
which implies that
$$
\frac{d\log\frac{\sin(v u)}{\sin u}}{du}=v\cot(v u)-\cot u\geq 0.
$$
Hence
$$
A(u)=\exp\{\alpha\log\frac{\sin(\alpha u)}{\sin u}+(1-\alpha)\log\frac{\sin ((1-\alpha)u)}{\sin u}\}
$$
\noindent is nondecreasing in $u$. Furthermore, it follows from direct calculation that
\[
\lim_{u\to0}A(u)=(1-\alpha)\alpha^{\frac{\alpha}{1-\alpha}},\quad \lim_{u\to\pi}A(u)=\infty.
\]
\noindent Therefore, applying the representation \rf{first} we get that for any $\epsilon>0$
\begin{eqnarray*}
&&\frac{\pi-\epsilon}{\pi}\exp\left\{-\frac{A(\pi-\epsilon)}{(1-\delta)^{\frac{\alpha}{1-\alpha}}}\right\}\\
&&\leq\frac{1}{\pi}\int_{0}^{\pi-\epsilon}e^{-\frac{A(u)}{(1-\delta)^{\frac{\alpha}{1-\alpha}}}}du
=\mathbb{P}\{\rho_1 \leq 1-\delta\}\\
&&\leq \exp\left\{-\frac{A(0)}{(1-\delta)^{\frac{\alpha}{1-\alpha}}}\right\}.\end{eqnarray*}
This implies that
\begin{eqnarray*}
\log\frac{1}{1-\delta}&\leq&\liminf_{\alpha\to1}(1-\alpha)\log\log\frac{1}{\mathbb{P}\{\rho_{1}\leq 1-\delta\}}\\
&=& \liminf_{\alpha\to1}(1-\alpha)\log\log\frac{1}{\mathbb{P}\{\rho_{1}<1-\delta\}}
\end{eqnarray*}
\begin{eqnarray*}
&&\limsup_{\alpha\to1}(1-\alpha)\log\log\frac{1}{\mathbb{P}\{\rho_{1}<1-\delta\}}\\
&&=\limsup_{\alpha\to1}(1-\alpha)\log\log\frac{1}{\mathbb{P}\{\rho_{1}\leq 1-\delta\}}\\
&& \leq \log(\frac{1}{1-\delta})
\end{eqnarray*}
and thus \rf{fgz-1} holds.
\noindent To prove \rf{fgz-2}, we apply (\ref{second}) and get
\begin{align*}
\mathbb{P}\{\rho_{1}>1+\delta\}=&\mathbb{P}\{\rho_{1}\geq1+\delta\}\\
=&\frac{1}{\pi}\int_{1+\delta}^{\infty}\int_{0}^{\infty}e^{-tu}e^{-u^{\alpha}\cos\pi\alpha}\sin(u^{\alpha}\sin\pi\alpha)du dt\\
=&\frac{\sin\pi\alpha}{\pi}\int_{0}^{\infty}u^{-(1-\alpha)}e^{-\delta u} \left[e^{-u-u^{\alpha}\cos\pi\alpha}\frac{\sin(u^{\alpha}\sin\pi\alpha)}{u^{\alpha}\sin\pi\alpha}\right]du.
\end{align*}
\noindent Noting that
$
\frac{\sin(u^{\alpha}\sin\pi\alpha)}{u^{\alpha}\sin\pi\alpha}$ is bounded and
$$
\lim_{\alpha \ra 1}\frac{\sin \pi\alpha}{\pi(1-\alpha)}
=1,$$
it follows that \rf{fgz-2} holds.
\end{proof}
\begin{theorem}\label{fgz-t2}
The family $\{\rho_1: 0<\alpha<1\}$ satisfies a large deviation principle on $(0,\infty)$ as $\alpha$ tends to one, with speed $-\log(1-\alpha)$ and rate function (which is not good in this case)
\begin{equation}\label{fgz-3}
J(x)=\left\{ \begin{array}{ll}
1,& x>1,\\
0,& x=1,\\
+\infty,& otherwise.
\end{array}
\right.
\end{equation}
\end{theorem}
{\bf Proof:}\ Let $A$ be a closed set in $(0,\infty)$. If $A$ contains $1$, then $\inf_{x\in A}J(x)=0$ and the upper estimate holds. If $A$ does not contain $1$, then one can find $0<a<1<b$ such that $A$ is either a subset of $(0,a]$, a subset of $[b,\infty)$, or a subset of $(0,a]\cup [b,\infty)$. In each case we can apply Theorem~\ref{fgz-t1} to obtain the upper estimate.
\noindent The proof for lower estimates goes as follows. Let $B$ be any open set. If $B$ intersects with $[0,1)$, then the lower estimates are trivial. If $B$ does not intersect with $[0,1)$, then $B$ can not contain $1$. Hence one can find $1<a<b<\infty$ such that $(a,b)\subset B$ and
\begin{eqnarray*}
\mathbb{P}\{\rho_{1}\in B\}&\geq &\mathbb{P}\{\rho_{1}\in (a,b)\}\\
&\geq & \frac{b-a}{\pi}\int_{0}^{\infty}u^{-1}e^{-bu}e^{-u^{\alpha}\cos\pi\alpha}\sin(u^{\alpha}\sin\pi\alpha)du
\end{eqnarray*}
which implies that
\[
\liminf_{\alpha\ra 1}\frac{1}{-\log(1-\alpha)}\log \mathbb{P}\{\rho_{1}\in B\}\geq-1 =-\inf_{x \in B}J(x).
\]
\hfill $\Box$
For any $n \geq 1$, let $\tau_1, \ldots, \tau_{n+1}$ be independent copies of $\rho_1$. Set
\[
\sigma_i=\frac{\tau_i}{\tau_1}, \ i=2, \ldots,n+1.
\]
For $(u_1,\ldots,u_n)$ in $(0,\infty)^n$, set
\[
\tilde{u}_n =\min\{u_i:1\leq i\leq n\}
\]
and let $r_n$ denote the frequency of $\tilde{u}_n$ among $\{u_i\}_{i=1,\ldots,n}$. Define
\begin{equation}\label{fgz-4}
J_n(u_1,\ldots, u_n)=\left\{ \begin{array}{ll}
n+1-r_n,& \tilde{u}_n <1,\\
n-r_n,& \tilde{u}_n=1,\\
n, &\tilde{u}_n >1.
\end{array}
\right.
\end{equation}
Clearly $J_n(\cdot)$ is a rate function on $(0,\infty)^n$.
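The case analysis in (\ref{fgz-4}) is mechanical; the tiny implementation below (our own) evaluates $J_n$ by reading off the minimum entry of the argument $(u_1,\ldots,u_n)$ and its multiplicity $r_n$.

```python
def rate_J_n(u):
    # J_n from (fgz-4): m is the minimum entry of u, r its multiplicity
    n = len(u)
    m = min(u)
    r = sum(1 for x in u if x == m)
    if m < 1:
        return n + 1 - r
    if m == 1:
        return n - r
    return n          # m > 1
```

For instance `rate_J_n([0.5, 2.0, 0.5])` gives $3+1-2=2$.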
\begin{theorem}\label{fgz-t3}
The family $\{(\sigma_2,\ldots,\sigma_{n+1}): 0<\alpha<1\}$ satisfies a large deviation principle on $(0,\infty)^n$ with speed $-\log(1-\alpha)$ and rate function $J_n(\cdot)$ as $\alpha$ tends to one.
\end{theorem}
{\bf Proof:}\ Note that the map
\[
\Phi: (0,\infty)^{n+1} \ra (0,\infty)^n, \ (x_1,\ldots,x_{n+1})\ra (\frac{x_2}{x_1},\ldots, \frac{x_{n+1}}{x_1})
\]
is clearly continuous. It follows from the contraction principle that large deviation upper and lower estimates hold for the family $\{(\sigma_2,\ldots,\sigma_{n+1}): 0<\alpha<1\}$ with the bounds given by the function
\[
\tilde{J}_n(u_1,\ldots,u_n)=\inf\{\sum_{i=1}^{n+1} J(x_i): x_{j+1}=u_{j}x_1, j=1,\ldots, n\}.
\]
Since $J(x)=\infty$ for $x$ in $(0,1)$, it follows that
\[
\tilde{J}_n(u_1,\ldots,u_n)=\inf\{\sum_{i=1}^{n+1} J(x_i): x_1\geq 1, \ x_{j+1}=u_{j}x_1\geq 1,\ j=1,\ldots, n\}=J_n(u_1,\ldots,u_n)
\]
and the theorem follows.
\hfill $\Box$
\noindent {\bf Remark}. The contraction principle used in Theorem~\ref{fgz-t3} does not lead to a large deviation principle in general due to the fact that the starting rate function is not good. But here and later on, direct calculations show that the upper and lower bounds are all given by rate functions.
\subsection{Large Deviations for $\Xi_{\alpha,0,\nu}$}
Let $M_1([0,1])$ denote the space of probabilities on $[0,1]$ equipped with the weak topology. For any $\mu$ in $M_1([0,1])$ define
\[
{\cal I}(\mu)=\begin{cases}
0, & \mu=\nu\\
n,&\mu=\sum_{i=1}^np_i\delta_{x_i}+(1-\sum_{i=1}^n p_i)\nu, \ p_i>0,\ x_i \mbox{ distinct}\\
\infty,& \mbox{ otherwise}.
\end{cases}
\]
\noindent The main result of this subsection is
\begin{theorem}\label{fgz-t4}
The family $\{\Xi_{\alpha,0,\nu}: 0<\alpha<1\}$ satisfies a large deviation principle on $M_1([0,1])$ with speed $-\log(1-\alpha)$ and good rate function ${\cal I}(\cdot)$ as $\alpha$ tends to one.
\end{theorem}
\noindent We prove this theorem through a series of lemmas.
\begin{lemma}\label{fgz-l1}
For any $n \geq 1$, let $0=t_{0}<t_{1}<\cdots<t_{n}<t_{n+1}=1$ and $B_{1},\cdots,B_{n+1}$ be a measurable partition of $[0,1]$ such that $\nu(B_{i})=t_{i}-t_{i-1}$.
Then
\begin{eqnarray*}
&&(\Xi_{\alpha,0,\nu}(B_{1}),\cdots,\Xi_{\alpha,0, \nu}(B_{n+1}))\\
&&\ \stackrel{\text{\upshape d}}{=}
\rho_1^{-1}(\rho_{t_{1}}, \rho_{t_{2}}-\rho_{t_{1}},\cdots,\rho_{t_{n}}-\rho_{t_{n-1}}, \rho_1-\rho_{t_{n}})\\
&& \ \stackrel{\text{\upshape d}}{=} (t_1^{1/\alpha}+\sum_{k=2}^{n+1}(t_k-t_{k-1})^{1/\alpha}\sigma_k)^{-1}(t_1^{1/\alpha},(t_2-t_1)^{1/\alpha}\sigma_2,\ldots, (1-t_n)^{1/\alpha}\sigma_{n+1})
\end{eqnarray*}
where $\stackrel{\text{\upshape d}}{=}$ denotes equality in distribution.
\end{lemma}
{\bf Proof:}\ The first equality is from \cite{Pitman96} and the second equality follows from the independent increments of the stable subordinator and the equality
$
\rho_t \stackrel{\text{\upshape d}}{=} t^{1/\alpha}\rho_1.
$
\hfill $\Box$
\begin{lemma}\label{fgz-l2}
Let
\[
\triangle_{n+1} :=\{(y_1,\ldots,y_{n+1}): y_i\geq 0, \sum_{k=1}^{n+1} y_k =1\}.
\]
\noindent Then the family $\{(\Xi_{\alpha,0,\nu}(B_{1}),\cdots,\Xi_{\alpha,0, \nu}(B_{n+1})): 0<\alpha<1\}$ satisfies a large deviation principle on $\triangle_{n+1}$ with speed $-\log(1-\alpha)$ and good rate function ${\cal I}_n(\cdot)$ as $\alpha$ tends to one, where
\[
{\cal I}_n(y_1,\ldots,y_{n+1})=(n+1)-\gamma(y_1,\ldots,y_{n+1})
\]
with
\[
\gamma(y_1,\ldots,y_{n+1})=\#\{1\leq i \leq n+1: \frac{y_i}{t_i-t_{i-1}}=\min\{\frac{y_k}{t_k-t_{k-1}}: 1\leq k \leq n+1\}\}.
\]
\end{lemma}
{\bf Proof:}\ First note that the map
\begin{eqnarray*}
&& H: [0,1]^{n+1}\times (0,\infty)^n \ra \triangle_{n+1}, \\
&& (v_1,\ldots,v_{n+1}; u_1, \ldots,u_n)\ra (v_1+\sum_{k=2}^{n+1}v_k u_{k-1})^{-1}(v_1, v_2 u_1,\ldots,v_{n+1}u_n)
\end{eqnarray*}
is continuous and $(\Xi_{\alpha,0,\nu}(B_{1}),\cdots,\Xi_{\alpha,0, \nu}(B_{n+1}))$ has the same distribution as
\[
H(t_1^{1/\alpha}, \ldots, (1-t_n)^{1/\alpha}; \sigma_2,\ldots, \sigma_{n+1}).
\]
Note that the deterministic vector $(t_1^{1/\alpha}, \ldots, (1-t_n)^{1/\alpha})$ satisfies a full large deviation principle whose rate function vanishes at the single point $(t_1, \ldots, 1-t_n)$ and is infinite elsewhere. It follows from Theorem~\ref{fgz-t3}, the independence between $(t_1^{1/\alpha}, \ldots, (1-t_n)^{1/\alpha})$ and $(\sigma_2, \ldots, \sigma_{n+1})$, and the contraction principle that large deviation estimates hold for $(\Xi_{\alpha,0,\nu}(B_{1}),\cdots,\Xi_{\alpha,0, \nu}(B_{n+1}))$ with upper and lower bounds given by the function
\begin{eqnarray*}
\tilde{\cal I}_n(y_1, \ldots, y_{n+1})&=&\inf\{J_n(u_1,\ldots,u_n): u_{i}\in (0,\infty), u_i=\frac{t_1}{y_1}\frac{y_{i+1}}{t_{i+1}-t_{i}}, i=1, \ldots, n\}\\
&=&\left\{ \begin{array}{ll}
n+1-\tilde{r}_n,& \min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\} <\frac{y_1}{t_1},\\
n-\tilde{r}_n,& \min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\} =\frac{y_1}{t_1},\\
n, &\min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\} >\frac{y_1}{t_1}.
\end{array}
\right.
\end{eqnarray*}
where $\tilde{r}_n$ is the frequency of $\min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\}$ among $\frac{y_2}{t_2-t_1},\ldots, \frac{y_{n+1}}{1-t_n}$. On the other hand,
\[
\gamma(y_1,\ldots, y_{n+1})=\left\{ \begin{array}{ll}
\tilde{r}_n,& \min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\} <\frac{y_1}{t_1},\\
\tilde{r}_n+1,& \min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\} =\frac{y_1}{t_1},\\
1, &\min_{2\leq i \leq n+1}\{\frac{y_i}{t_i-t_{i-1}}\} >\frac{y_1}{t_1}.
\end{array}
\right.
\]
Hence we obtain that $\tilde{\cal I}_n(\cdot)={\cal I}_n(\cdot)$. It remains to show that ${\cal I}_n(\cdot)$ is a good rate function. Since $\triangle_{n+1}$ is compact, it suffices to verify the lower semicontinuity of ${\cal I}_n(\cdot)$. For any point $(y_1,\ldots,y_{n+1})$ in $\triangle_{n+1}$, let $\gamma(y_1, \ldots, y_{n+1})=m$. If a neighbourhood of $(y_1,\ldots,y_{n+1})$ is small enough, then at each point inside the neighbourhood the minimum can only be attained at indices where it is attained for $(y_1,\ldots,y_{n+1})$, so the multiplicity of the minimum there is at most $m$. Hence ${\cal I}_n(\cdot)$ is lower semicontinuous.
\hfill $\Box$
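Since ${\cal I}_n$ only involves counting how many of the ratios $y_i/(t_i-t_{i-1})$ attain the minimum, it is straightforward to evaluate. The following Python sketch (an illustration only, not part of the argument; the tolerance used to detect ties is our own choice) computes ${\cal I}_n=(n+1)-\gamma$:

```python
def rate_I_n(y, t, tol=1e-12):
    """I_n(y_1,...,y_{n+1}) = (n+1) - gamma for the partition
    0 = t_0 < t_1 < ... < t_n < t_{n+1} = 1; `t` lists t_0,...,t_{n+1}."""
    ratios = [y[i] / (t[i + 1] - t[i]) for i in range(len(y))]
    m = min(ratios)
    gamma = sum(1 for r in ratios if abs(r - m) <= tol)  # multiplicity of the minimum
    return len(y) - gamma

# y proportional to the interval lengths gives gamma = n+1, hence I_n = 0,
# consistent with mu = nu carrying zero cost:
print(rate_I_n([0.3, 0.5, 0.2], [0.0, 0.3, 0.8, 1.0]))  # -> 0
```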
\begin{lemma}\label{fgz-l3}
\begin{eqnarray}\label{fgz-5}
{\cal I}(\mu)&=&\sup\{{\cal I}_n(\mu([0,t_1]),\mu((t_1,t_2]),\ldots, \mu((t_n,1])):\\
&& 0=t_0<t_1<\cdots<t_n<t_{n+1}=1, n =1,2,\ldots\}.\nonumber
\end{eqnarray}
The supremum can be taken over all continuity points $t_1, \ldots, t_n$ of $\mu$.
\end{lemma}
{\bf Proof:}\ We divide the proof into several cases. Let $\mu$ be any probability in $M_1([0,1])$. By Lebesgue's Decomposition Theorem, one can write
\[
\mu=\lambda_1\mu_a+\lambda_2\mu_s+\lambda_3\mu_{ac}
\]
where $\mu_{a}$ is atomic, $\mu_s$ is singular with respect to $\nu$, $\mu_{ac}$ is absolutely continuous with respect to $\nu$, and
\[
\lambda_1+\lambda_2+\lambda_3 =1,\ \lambda_i \geq 0,\ i=1,2,3.
\]
Set
$$F_s(x)=\mu_s([0,x]), \ \ f(x)= \frac{d\,\mu_{ac}}{d\,\nu}(x).$$
\vspace{0.4cm}
\noindent {\bf Case 1:} The probability $\mu$ has a countably infinite number of atoms.
\vspace{0.2cm}
Since the total mass of $\mu_a$ is equal to one, there are countably infinitely many atoms, and since their masses are summable one can select countably many atoms with pairwise distinct masses. Rank these masses in descending order and let the corresponding atoms be $x_1,x_2,\ldots$. Clearly $\mu_s(\{x_i\})=\mu_{ac}(\{x_i\})=0$ for all $i\geq 1$. For any $m\geq 2$, by the continuity of probabilities, one can choose
small positive numbers $\epsilon_1,\epsilon_2, \ldots,\epsilon_m$ such that $x_i\pm \epsilon_i, 1\leq i \leq m$, are continuity points of $\mu$, the intervals $(x_i-\epsilon_i,x_i+\epsilon_i]\subset [0,1], 1\leq i\leq m$, are disjoint, and $$\mu((x_1-\epsilon_1, x_1+\epsilon_1])>\mu((x_2-\epsilon_2, x_2+\epsilon_2]) >\cdots>\mu((x_m-\epsilon_m, x_m+\epsilon_m]) .$$ The partition based on the points $\{x_i\pm \epsilon_i: i=1,2,\ldots,m\}$ clearly gives a lower bound $m-1$ for the supremum in \rf{fgz-5}. Since $m$ is arbitrary, the supremum taken over continuity points of $\mu$ is infinite, which is the same as ${\cal I}(\mu)$.
\vspace{0.2cm}
\noindent {\bf Case 2:} The probability $\mu$ has at most finitely many atoms and $\nu(\{f(x)\neq 1\})>0$.
\vspace{0.2cm}
Let $A =\{x\in [0,1]: f(x) <1\}, B=\{x\in [0,1]:f(x)>1\}$, and $C=\{x\in [0,1]: f(x)=1\}$. Then we have
\[
\mu_{ac}(A)< \nu(A), \mu_{ac}(B)>\nu(B), \mu_{ac}(C)=\nu(C)\]
and
\[
\nu(A)-\mu_{ac}(A)=\mu_{ac}(B)-\nu(B).
\]
The fact that $\nu(C)<1$ thus implies that $\nu(A)>0, \nu(B)>0.$ For any $m\geq 1$ we can find
$0<s_1<\cdots<s_m<1, 0<t_1<\cdots<t_m<1$ such that
\begin{eqnarray*}
&&\{s_i\}_{1\leq i\leq m}\subset A, \{t_i\}_{1\leq i\leq m}\subset B\\
&& \{s_i, t_i\}_{i\geq 1}\ \mbox{does not contain atoms of }\ \mu\\
&& \mbox{when}\ \lambda_2 >0,\ F'_s(x)=0 \ \mbox {for}\ x=s_i \ \mbox{or}\ t_i,\ i\geq 1.
\end{eqnarray*}
For any $i,j \geq 1$, we then have
\begin{eqnarray*}
\lim_{\epsilon \ra 0}\frac{\mu((s_i-\epsilon, s_i+\epsilon])}{2\epsilon}&=& \lambda_3\lim_{\epsilon \ra 0}\frac{\mu_{ac}((s_i-\epsilon, s_i+\epsilon])}{2\epsilon}=\lambda_3 f(s_i)\\
&<& \lambda_3 f(t_j)=\lambda_3\lim_{\epsilon \ra 0}\frac{\mu_{ac}((t_j-\epsilon, t_j+\epsilon])}{2\epsilon}\\
&=&\lim_{\epsilon \ra 0}\frac{\mu((t_j-\epsilon, t_j+\epsilon])}{2\epsilon}.
\end{eqnarray*}
This makes it possible to choose $\epsilon_i>0$ such that $s_i\pm \epsilon_i, t_j\pm \epsilon_j$ are all continuity points of $\mu$ and
\[
\frac{\mu((s_i-\epsilon_i, s_i+\epsilon_i])}{\nu((s_i-\epsilon_i, s_i+\epsilon_i])} < \frac{\mu((t_j-\epsilon_j, t_j+\epsilon_j])}{\nu((t_j-\epsilon_j, t_j+\epsilon_j])}.\]
This provides the lower bound $m$ for the supremum in \rf{fgz-5}. Since $m$ is arbitrary, the supremum is infinite, which coincides with ${\cal I}(\mu)$, and \rf{fgz-5} is established in this case.
\vspace{0.2cm}
\noindent {\bf Case 3:} The probability $\mu$ has at most finitely many atoms, $\lambda_2>0$ and $\nu(\{f(x)\neq 1\})=0$.
\vspace{0.2cm}
It is clear that we have $\mu_{ac}=\nu$ in this case. For any $m \geq 1$, the singularity guarantees the existence of $0<s_1<\cdots<s_m<1, 0<t_1<\cdots<t_m<1$
such that the derivative of $F_{s}(x)$ is zero for $x=t_i$ while the derivative at $s_i$ is either infinite or does not exist. Additionally we can choose $s_i,t_i$ so that none of them are atoms of $\mu_a$. Let $\epsilon$ be small enough so that all intervals
$(s_i-\epsilon,s_i+\epsilon]$ and $(t_i-\epsilon,t_i+\epsilon]$, $i=1, \ldots, m$, are disjoint. Let ${\cal J}$ denote the partition of $[0,1]$ using $\{t_i\pm \epsilon,s_i\pm\epsilon: i=1,\ldots,m\}$.
One can then find a refined partition $\tilde{\cal J}$ of ${\cal J}$, passing to a subsequence if necessary, and positive numbers $\epsilon_0, \delta_0$ such that $s_i\pm \epsilon_0, t_i\pm \epsilon_0$ are continuity points of $\mu$ and the value of $(2\epsilon_0)^{-1} \mu_{s}$ on each interval containing one of the $t_i$'s is less than $\delta_0$ while its value on each interval containing one of the $s_i$'s is greater than $\delta_0$. In other words, for any $1\leq i,j \leq m$ we have
\[
\frac{\mu((s_i-\epsilon_0, s_i+\epsilon_0])}{\nu((s_i-\epsilon_0, s_i+\epsilon_0])}\neq \frac{\mu((t_j-\epsilon_0, t_j+\epsilon_0])}{\nu((t_j-\epsilon_0, t_j+\epsilon_0])}. \]
This implies that
\[
\sup\{{\cal I}_n(\mu([0,t_1]),\mu((t_1,t_2]),\ldots, \mu((t_n,1])):
0=t_0<t_1<\cdots<t_n<t_{n+1}=1, n \geq 1 \}\geq m. \]
The arbitrary selection of $m$ leads to \rf{fgz-5} in this case.
\vspace{0.2cm}
\noindent {\bf Case 4:} The probability $\mu$ has at most finitely many atoms, $\lambda_2=0$ and $\nu(\{f(x)\neq 1\})=0$.
\vspace{0.2cm}
In this case we have $\mu=\lambda_1\mu_a +\lambda_3\nu$. If $\lambda_1 =0$, then $\mu=\nu$ and ${\cal I}(\mu)$ is clearly zero. Assume that $\lambda_1>0$ and the number of atoms is $r$. Let $F(x)=\mu([0,x])$. Since $r$ is finite, any partition ${\cal J}$ of $[0,1]$ will have at most $r$ disjoint intervals covering these atoms. The maximum
\[
\sup\{{\cal I}_n(\mu([0,t_1]),\mu((t_1,t_2]),\ldots, \mu((t_n,1])):
0=t_0<t_1<\cdots<t_n<t_{n+1}=1, n \geq 1\} \]
is achieved at any partition with exactly $r$ disjoint intervals covering the $r$ atoms, and equals $r={\cal I}(\mu)$.
\hfill $\Box$
{\bf Proof of Theorem~\ref{fgz-t4}:} Let $C([0,1])$ be the space of all continuous functions on $[0,1]$ equipped with the supremum norm, and $\{g_j(x): j=1,2,... \}$ be a countable dense subset of $C([0,1])$. The set $\{g_j(x): j=1,2,... \}$ is clearly convergence determining
on $M_1([0,1])$. Let $|g_j|=\sup_{x \in [0,1]}|g_j(x)|$. Then $\{h_j(x)=\frac{g_j(x)}{|g_j|\vee 1}: j=1,...\}$
is also convergence determining.
For any $ \mu, \upsilon$ in $M_1([0,1])$, define
\begin{equation}\label{twodir20}
d(\mu,\upsilon)=\sum_{j=1}^{\infty}\frac{1}{2^j}|\langle \mu, h_j\rangle-\langle\upsilon,h_j\rangle|.
\end{equation}
Then $d$ is a metric generating the weak topology on $M_1([0,1])$.
For any $\delta >0, \mu \in M_1([0,1])$, let
\[
B(\mu,\delta)=\{\upsilon \in M_1([0,1]): d(\upsilon,\mu)< \delta\}, \ \ \overline{B}(\mu,\delta)=\{\upsilon \in M_1([0,1]): d(\upsilon,\mu)\leq \delta\}.
\]
Since $M_1([0,1])$ is compact, the family of the laws of $\Xi_{\alpha, 0, \nu}$ is exponentially tight. By theorem (P) in \cite{Pu91}, to prove the theorem it suffices to verify that
\begin{eqnarray}\label{fgz-6}
&&\lim_{\delta \ra
0}\liminf_{\alpha \ra 1 }\frac{1}{-\log(1-\alpha)}\log \mathbb{P}\{B(\mu,\delta)\} \\
&&= \lim_{\delta \ra
0}\limsup_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)} \log \mathbb{P}\{\overline{B}(\mu,\delta)\} =- {\cal I}(\mu).\nonumber
\end{eqnarray}
Let $m$ be large enough so that
\begin{equation}\label{addition1}
\{\upsilon \in M_1([0,1]):|\langle \mu, h_j \rangle -\langle \upsilon, h_j \rangle|< \delta/2:j=1,\cdots,m \} \subset B(\mu,\delta).
\end{equation}
Consider $0=t_0<t_1< \cdots< t_n <t_{n+1}=1$ with $A_i=(t_{i-1},t_i], i=1, \ldots, n+1$ such that
\[
\sup\{|h_j(x)-h_j(y)|: x,y \in A_i,\ i=1,\cdots,n+1;\ j=1,\cdots,m \}< \delta/8.
\]
Choose $0< \delta_1 < \frac{\delta}{4n}$ and define
\[
V_{t_1, \cdots, t_n}(\mu, \delta_1)= \{(y_1,\ldots,y_{n+1}) \in \triangle_{n+1}: |y_i-\mu(A_i)|< \delta_1 ,\ i=1,\cdots,n\}.
\]
For any $\upsilon$ in $M_1([0,1])$, let
\[
\Psi(\upsilon)=(\upsilon(A_1),...,\upsilon(A_{n+1})).
\]
If $\Psi(\upsilon)$ belongs to $ V_{t_1, \cdots, t_n}(\mu, \delta_1)$, then for $j=1,...,m$
\begin{eqnarray*}
|\langle \upsilon, h_j \rangle -\langle \mu, h_j \rangle|&=&|\sum_{i=1}^{n+1}\int_{A_i}h_j(x)(\upsilon(dx)-\mu(dx))|\\
&<& \frac{\delta}{4} + n\delta_1 < \delta/2,
\end{eqnarray*}
which implies that
\[
\Psi^{-1}( V_{t_1, \cdots, t_n}(\mu, \delta_1))\subset
\{\upsilon\in M_1([0,1]):|\langle \upsilon, h_j \rangle -\langle \mu, h_j \rangle|< \delta/2:j=1,\cdots,m \}.
\]
This combined with \rf{addition1} implies that
\[
\Psi^{-1}(V_{t_1, \cdots, t_n}(\mu, \delta_1))\subset B(\mu,\delta).
\]
Since $V_{t_1, \cdots, t_n}(\mu, \delta_1)$ is open in $\triangle_{n+1}$, it follows from Lemma~\ref{fgz-l2} that
\begin{eqnarray}
&&\lim_{\delta \ra
0}\liminf_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)}\log \mathbb{P}\{B(\mu,\delta)\}\label{fgz-7}\\
&&\hspace{0.5cm}\geq
\lim_{\delta \ra
0}\liminf_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)}\log \mathbb{P}\{\Psi^{-1}(V_{t_1, \cdots, t_n}(\mu, \delta_1))\}\nonumber\\
&&\hspace{0.5cm}= \lim_{\delta \ra
0}\liminf_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)}\log \mathbb{P}\{(\Xi_{\alpha,0,\nu}(A_1),...,\Xi_{\alpha,0,\nu}(A_{n+1}))\in V_{t_1, \cdots, t_n}(\mu, \delta_1)\}\nonumber\\
&&\hspace{0.5cm}\geq -{\cal I}_{n}(\mu(A_1),...,\mu(A_{n+1}))\geq -{\cal I}(\mu).\nonumber
\end{eqnarray}
Next we assume that $t_1,...,t_n$ are continuity points of $\mu$, and we denote the collection of all partitions formed from such points by ${\cal J}_{\mu}$. The continuity of $\mu$ at these points implies that $\Psi$ is continuous
at $\mu$. Hence for any $\delta_2 >0$, one can choose $\delta >0$ small enough such that
\[
\overline{B}(\mu,\delta) \subset \Psi^{-1}(V_{t_1, \cdots, t_n}(\mu, \delta_2)).
\]
Let
\[
\overline{V}_{t_1, \cdots, t_n}(\mu, \delta_2)=\{(y_1,\ldots,y_{n+1})\in \triangle_{n+1}: |y_i-\mu(A_i)|\leq \delta_2 ,\ i=1,\cdots,n\}.
\]
Then we have
\begin{eqnarray}
&& \lim_{\delta\ra 0}\limsup_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)} \log \mathbb{P}\{\overline{B}(\mu,\delta)\} \label{fgz-8}\\
&& \hspace{0.5cm}\leq \limsup_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)} \log \mathbb{P}\{(\Xi_{\alpha,0,\nu}(A_1),...,\Xi_{\alpha,0,\nu}(A_{n+1}))\in
\overline{V}_{t_1, \cdots, t_n}(\mu, \delta_2)\}.\nonumber
\end{eqnarray}
Letting $\delta_2$ go to zero and applying Lemma~\ref{fgz-l2} again, one gets
\[
\lim_{\delta\ra 0}\limsup_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)} \log \mathbb{P}\{\overline{B}(\mu,\delta)\} \leq -
{\cal I}_{n}(\mu(A_1),...,\mu(A_{n+1})).
\]
Finally, taking supremum over ${\cal J}_{\mu}$ and applying Lemma~\ref{fgz-l3}, one gets
\[
\lim_{\delta\ra 0}\limsup_{\alpha \ra 1}\frac{1}{-\log(1-\alpha)} \log \mathbb{P}\{\overline{B}(\mu,\delta)\} \leq -{\cal I}(\mu),
\]
which combined with \rf{fgz-7} leads to the theorem.
\hfill $\Box$
\section{Concluding Remarks}
The limiting procedure $\alpha$ going to zero arises naturally in the branching model considered in \cite{Athreya12}. This is a Galton-Watson branching process with offspring distribution $\{p_j: j\geq 0\}$ in the domain of attraction of a stable law of order $0<\alpha<1$ and $p_0=0$. For any $n \geq 1$, let $T_n$ be the coalescence time of any two randomly selected individuals from the $n$th
generation. Then it is shown in \cite{Athreya12} that
\[
\mathbb{P}\{n-T_n\leq k\} \ra \pi(k) \ \mbox{as}\ n \ra \infty
\]
where $\pi(k)$ is identified as the expectation of a random variable. It turns out that the random variable is just $\varphi_2({\bf P}(\alpha^k,0))$ and $\pi(k)$ has the following more explicit expression
\[
\pi(k)=1-\alpha^k.
\]
In this context, $\varphi_2({\bf P}(\alpha^k,0))$ gives the random probability distribution of the coalescence time and its asymptotic behaviour for large $k$ or equivalently $\alpha^k$ going to zero is described in Theorem~\ref{sec3-t1} and Theorem~\ref{ldp-t1}.
A comparison between $\alpha$ converging to one and $\theta$ converging to infinity reveals fundamental differences. Under both limiting procedures ${\bf P}(\alpha,0)$ and ${\bf P}(0,\theta)$ converge to $(0,0,\ldots)$, but the fluctuations differ, as can be seen from the distributions of $\varphi_2({\bf P}(\alpha,0))$ and $\varphi_2({\bf P}(0,\theta))$.
It is shown in \cite{Gri79a} and \cite{JKK02} that
\[
\sqrt{\theta/2} [\theta \varphi_2({\bf P}(0,\theta))-1] \Longrightarrow Z, \ \theta \ra \infty,
\]
where $Z$ is a standard normal random variable. By the Ewens sampling formula, we have
\begin{eqnarray*}
\mathbb{E}[\varphi_2({\bf P}(0,\theta))]&=& \frac{1}{\theta+1}\\
\mathbb{E}[\varphi^2_2({\bf P}(0,\theta))]&=& \frac{3!+ \theta}{(\theta+1)(\theta+2)(\theta+3)}
\end{eqnarray*}
and
\[
\mathbb{E}[\varphi^3_2({\bf P}(0,\theta))]= \frac{1}{(\theta+1)_{(5)}}(5!+ 3\cdot 3!\theta+\theta^2).
\]
The skewness of $\varphi_2({\bf P}(0,\theta))$ is given by
\begin{eqnarray*}
&&\frac{\mathbb{E}[\varphi_2^3({\bf P}(0,\theta))]- 3\mathbb{E}[\varphi_2({\bf P}(0,\theta))]\mathbb{E}[\varphi_2^2({\bf P}(0,\theta))]+2(\mathbb{E}[\varphi_2({\bf P}(0,\theta))])^3}{(\mathbb{E}[\varphi_2^2({\bf P}(0,\theta))]-(\mathbb{E}[\varphi_2({\bf P}(0,\theta))])^2)^{3/2}}\\
&& =\frac{O(\theta^{-5})}{O(\theta^{-4.5})} \ra 0, \ \ \theta\ra \infty
\end{eqnarray*}
which is consistent with the Gaussian limit.
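As a numerical sanity check, the skewness can be evaluated directly from the three closed-form moments displayed above; the Python sketch below (our own illustration, not from the original text) confirms that it decreases toward zero as $\theta$ grows, in line with the $O(\theta^{-5})/O(\theta^{-4.5})$ order of the ratio.

```python
def skewness_phi2_pd0(theta):
    """Skewness of phi_2(P(0,theta)), computed from the closed-form
    moments obtained above via the Ewens sampling formula."""
    m1 = 1.0 / (theta + 1)
    m2 = (6.0 + theta) / ((theta + 1) * (theta + 2) * (theta + 3))
    rising = 1.0
    for j in range(1, 6):                      # (theta+1)_(5) = (theta+1)...(theta+5)
        rising *= theta + j
    m3 = (120.0 + 18.0 * theta + theta ** 2) / rising
    var = m2 - m1 ** 2
    return (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / var ** 1.5
```

Evaluating at $\theta=10,10^2,10^4$ gives a monotonically decreasing sequence, consistent with the Gaussian limit.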
On the other hand, for $\varphi_2({\bf P}(\alpha,0))$ one has
\begin{eqnarray*}
\mathbb{E}[\varphi_2({\bf P}(\alpha,0))] &=& 1-\alpha\\
\mathbb{E}[\varphi_2^2({\bf P}(\alpha,0))] &= &\frac{(1-\alpha)(2-\alpha)(3-\alpha)+\alpha(1-\alpha)^2}{6},
\end{eqnarray*}
and
\begin{eqnarray*}
Var(\varphi_2({\bf P}(\alpha,0)))&=& \frac{\alpha(1-\alpha)}{3}\\
\mathbb{E}[\varphi_2^3({\bf P}(\alpha,0))]&=& \frac{1}{5!}[(1-\alpha)_{(5)}+ 3\alpha (1-\alpha)^2 (2-\alpha)(3-\alpha)+ \alpha^2(1-\alpha)^3].
\end{eqnarray*}
This means that the skewness of $\varphi_2({\bf P}(\alpha,0))$ is of order $O(1-\alpha)/O((1-\alpha)^{3/2})=O((1-\alpha)^{-1/2})$, which goes to infinity as $\alpha$ converges to one. Thus the distribution of
$\varphi_2({\bf P}(\alpha,0))$ is strongly skewed to the right and a Gaussian limit is unlikely.
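As a numerical sanity check, the skewness on the $PD(\alpha,0)$ side can be evaluated directly from the moments listed above; the Python sketch below (our own illustration, not from the original text) shows it growing without bound as $\alpha$ approaches one.

```python
def skewness_phi2_pda(alpha):
    """Skewness of phi_2(P(alpha,0)) from the closed-form moments above;
    Var = alpha*(1-alpha)/3 is used for the denominator."""
    x = 1.0 - alpha
    m1 = x
    m2 = (x * (2 - alpha) * (3 - alpha) + alpha * x ** 2) / 6.0
    m3 = (x * (2 - alpha) * (3 - alpha) * (4 - alpha) * (5 - alpha)
          + 3 * alpha * x ** 2 * (2 - alpha) * (3 - alpha)
          + alpha ** 2 * x ** 3) / 120.0
    var = alpha * x / 3.0
    return (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / var ** 1.5
```

Evaluating at $\alpha=0.9, 0.99, 0.9999$ gives a rapidly increasing sequence, consistent with the $O((1-\alpha)^{-1/2})$ divergence.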
Another difference is reflected in the large deviation behaviour of the Pitman sampling formula. For any $n \geq 1$ and any partition {\boldmath$\eta$} of $n$ with length $l$, the conditional Pitman sampling formula given ${\bf P}(\alpha,\theta)={\bf p}$ is
\[
F_{\mbox{\boldmath$\eta$}}({\bf p})= C(n,\mbox{\boldmath$\eta$})\sum_{\mbox{distinct}\ i_1,\ldots,i_l }p_{i_1}^{\eta_1}\cdots p_{i_l}^{\eta_l}
\]
where
\[
C(n,\mbox{\boldmath$\eta$})= \frac{n!}{\prod_{k=1}^l \eta_k!\prod_{j=1}^na_j(\mbox{\boldmath$\eta$})!}.
\]
Assume that $\eta_i\geq 2$ for all $i$. Then $ F_{\mbox{\boldmath$\eta$}}({\bf p})$ is continuous on $\nabla_{\infty}$. By the contraction principle, large deviation principles hold for the image laws of $PD(0,\theta)$ and $PD(\alpha,0)$ under $F_{\mbox{\boldmath$\eta$}}({\bf p})$ with respective speeds $\theta$ and $-\log(1-\alpha)$.
Integrating $F_{\mbox{\boldmath$\eta$}}({\bf p})$ with respect to $PD(\alpha, \theta)$ leads to the unconditional Pitman sampling formula. The large deviation speed is shown in \cite{Feng07} to be
$\log \theta$ under $PD(0,\theta)$. In \cite{FengZhou15}, the large deviation speed under $PD(\alpha,0)$ is shown to be $-\log(1-\alpha)$. In other words, under $PD(0,\theta)$ the conditional and unconditional Pitman sampling formulae have different large deviation speeds due to averaging and finite sample size, while under $PD(\alpha,0)$ the corresponding speeds are the same.
The large deviations for $\Xi_{\alpha,0,\nu}$ provide more information on the microscopic transition structure at the critical temperature of the REM. At the instant when the temperature starts to move below the critical value $T_c$, a portion of the mass of the uniform measure $\nu$ may be lost and replaced by an atomic part with finitely many atoms. This represents the emergence of a finite number of energy valleys, and the energy landscape of the system becomes a mixture of valleys and ``flat'' regions. The energy valleys emerge in an ordered fashion: a small number of valleys is more likely to occur than a large number.
Approximately 200 years after Chladni's work detecting and classifying the nodal lines of acoustic plates \cite{Chl02}, there is a revival of interest due to two pioneering works on the subject of nodal domains. The first, by Blum et~al.~\cite{blu02}, proposed nodal domains as a tool to separate integrable and chaotic systems from each other. They explained their numerical findings for nodal domains in chaotic billiards in terms of the random plane wave approach \cite{ber77a} and found good agreement. Bogomolny and coworkers \cite{bog02b} introduced a percolation model, yielding explicit results, e.\,g., for the linear increase of the number of nodal domains with the quantum number with a slope of 0.0624. These results have been experimentally verified in closed microwave billiards \cite{Sav04b,Hul05b}.
Other studies of nodal domains are concerned with the continuation into the classically forbidden region taking tunneling into account \cite{bie02}, with graphs \cite{gnu04}, with iso-spectrality on graphs \cite{ban06} or billiards, i.\,e., whether the shape of the drum can be determined by nodal domain counting \cite{gnu05,gnu06,lev06}, with quantum maps \cite{kea06}, and with the anharmonic oscillator \cite{aib05}.
Here we will concentrate on real physical systems, which are always open due to the measurement process and absorption. We shall show that nodal domains can still be defined and show the same behavior as in closed systems. In addition it will be illustrated that the variation of the number of nodal domains in dependence on a global phase of the wavefunction can be used as a tool to determine the `openness' of the billiard.
\section{Nodal domains for real and imaginary parts}
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{\label{fig:wavefunction} (color online) Real part $\psi_R$ (left), imaginary part $\psi_I$ (center), and modulus squared $|\psi|^2=\psi_R^2+\psi_I^2$ (right) of the wavefunction $\psi$ at $\nu=5.64$\,GHz, corresponding to a Weyl number of $n_{\rm{Weyl}}$=32.5. Correlations between $\psi_R$ and $\psi_I$ have already been removed by a global phase, see text. Additionally the nodal points of the modulus and the nodal lines of real and imaginary parts are marked. Nodal points occur at crossings of a nodal line of $\psi_R$ with one of $\psi_I$.}
\end{figure}
The billiard used is a quantum-dot-like structure with two attached leads of width 2\,cm (for more details see \cite{kim02}). Additionally we added two insets to two of the sides (see also Fig.~\ref{fig:wavefunction}) to avoid any bouncing ball structures \cite{kim02,kim03a}. The measurements have been performed on a grid of step size 2.5\,mm. A similar billiard has already been used to determine long-range correlations in the wavefunctions \cite{kim05a} and vortex distributions and correlations \cite{kim03b}. For each point the transmission $S_{12}$ from a fixed antenna 1 in one lead to the probing antenna 2 has been measured including the phases. As the transmission depends only on the field distribution at the positions of the antennas, and the position of antenna 1 is fixed, one can obtain the electric field distribution $E_z(x,y)$ at the position of antenna 2 \cite{ste95,stoe99}. We will use the term `wavefunction' for this quantity as it is related to the quantum mechanical problem of a particle in a box, even though it is not the eigenfunction of a single resonance but a superposition of different eigenfunctions. From this measurement the wavefunction $\psi(x,y)=E_z(x,y)$ has been determined including its real and imaginary parts (see \cite{ste95,kuh05b} and chapter 6 of \cite{stoe99}). In Fig.~\ref{fig:wavefunction} real part $\psi_R$, imaginary part $\psi_I$ and the modulus of a typical wavefunction is shown.
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{\label{fig:ReIm} Imaginary versus real part of the wave
function at $\nu=13.84$\,GHz, for the directly measured wave
function (left), and after uncorrelating them (right) by
$\psi_R+i\psi_I=e^{-i\varphi_{g,0}} \left(\psi_R'+i\psi_I' \right)$. $\varphi_{g,0}$ is a global phase acquired in the experiment due to the antenna and leads.}
\end{figure}
Fig.~\ref{fig:ReIm}(left) shows a plot of the imaginary part $\psi_I'$ versus the real part $\psi_R'$ for a typical wavefunction prior to any correction. Each dot corresponds to a measured grid point. Usually the main axes of the cloud of dots are not aligned with the coordinate axes, leading to a correlation of real and imaginary part. This correlation has its origin in unwanted global phase shifts $\varphi_{g,0}$, mainly from the leads and the antennas. This global phase has been removed together with the correlations between $\psi_R$ and $\psi_I$ by means of a phase rotation,
\begin{equation}\label{eq:ExpGlobalPhase}
\psi_R+i\psi_I=e^{-i\varphi_{g,0}} \left(\psi_R'+i\psi_I' \right)
\end{equation}
where $\varphi_{g,0}$ is chosen such that $\langle\psi_R^2\rangle > \langle\psi_I^2\rangle$, thus adjusting the main axes of the cloud of dots to the coordinate axes (Fig.~\ref{fig:ReIm} (right)). For a completely open system a circular cloud would have been expected. The eccentricity thus reflects the lack of openness (for this wavefunction). For a quantitative description of the openness the phase rigidity $|\rho|^2$ has been introduced \cite{lan97}, where $\rho$ is defined by
\begin{equation}\label{eq:rigidity}
\rho=\int d\mathbf{r}\,\psi(\mathbf{r})^2 =
\frac{\langle\psi_R^2\rangle-\langle\psi_I^2\rangle}
{\langle\psi_R^2\rangle+\langle\psi_I^2\rangle}.
\end{equation}
This quantity is immediately available from the cloud of dots as shown in Fig.~\ref{fig:ReIm}(right). It is worth mentioning that the phase rigidity is a strongly fluctuating quantity and is different for each wavefunction.
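The phase rotation of Eq.~(\ref{eq:ExpGlobalPhase}) and the resulting phase rigidity are easy to sketch in code. The Python helpers below are hypothetical illustrations (not the analysis code used for the experiment); they use the elementary fact that choosing $2\varphi_{g,0}=\arg\langle\psi'^2\rangle$ makes $\langle\psi_R\psi_I\rangle=0$ with $\langle\psi_R^2\rangle\geq\langle\psi_I^2\rangle$, and that $|\rho|^2$ can be computed from the rotation-invariant ratio $|\langle\psi^2\rangle/\langle|\psi|^2\rangle|^2$, which for the rotated field reduces to Eq.~(\ref{eq:rigidity}). Here `psi` stands for a flat list of measured complex field values, the sums replacing the spatial average.

```python
import cmath

def uncorrelate(psi):
    """Remove the global phase phi_g0 as in Eq. (1): choosing
    2*phi_g0 = arg<psi'^2> makes <psi_R psi_I> = 0 and
    <psi_R^2> >= <psi_I^2> for the rotated field."""
    s2 = sum(z * z for z in psi) / len(psi)            # <psi'^2>
    phi_g0 = 0.5 * cmath.phase(s2)
    return [z * cmath.exp(-1j * phi_g0) for z in psi]

def phase_rigidity(psi):
    """|rho|^2 with rho = <psi^2>/<|psi|^2>; invariant under global
    phase rotations, so it can be applied before or after uncorrelate."""
    s2 = sum(z * z for z in psi) / len(psi)
    n2 = sum(abs(z) ** 2 for z in psi) / len(psi)
    return abs(s2 / n2) ** 2
```

A purely real (closed-system-like) field gives $|\rho|^2=1$, while a field with equal, uncorrelated real and imaginary parts gives $|\rho|^2=0$.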
Because of the openness of the billiard there is a strong overlap of the resonances at each frequency, making a direct determination of the mode number $n$ impossible. Therefore the Weyl number \cite{wey13} has been used to determine this quantity, which in our billiard is given by
\begin{equation}\label{eq:Weyl}
n_{\rm{Weyl}}=\frac{A}{4\pi} k^2 - \frac{S}{4\pi} k + C
\end{equation}
where $k=2\pi\nu/c$ is the wavenumber, $\nu$ the frequency, $c$ the speed of light, $A$ and $S$ are the area and the circumference of the billiard and $C$ is a constant determined by the corners and the curvature of the billiard. The constant is of order 1, and in our case we used $A \approx$\,0.0361\,m$^2$, $S\approx$\,0.807\,m and $C$=0. Strictly speaking the Weyl formula is not applicable for open systems, since the resonances are shifted from the real axis into the complex plane. Occasionally resonances may even be removed from the spectrum due to a strong coupling to the surroundings \cite{leh95}, and fractal Weyl laws may occur \cite{lu03,non}. But still the Weyl formula is the best approximation known in this case to obtain the mode number $n$.
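With the quoted values $A\approx 0.0361$\,m$^2$, $S\approx 0.807$\,m and $C=0$, Eq.~(\ref{eq:Weyl}) is a one-line computation; the minimal Python sketch below (illustrative only) reproduces the Weyl number $n_{\rm{Weyl}}\approx 32.5$ quoted in Fig.~\ref{fig:wavefunction} for $\nu=5.64$\,GHz.

```python
from math import pi

def weyl_number(nu_hz, area=0.0361, circumference=0.807, c_weyl=0.0):
    """Two-term Weyl estimate of the mode number at frequency nu_hz (in Hz);
    the default geometry values are the ones quoted in the text."""
    k = 2.0 * pi * nu_hz / 299792458.0        # wavenumber k = 2*pi*nu/c in 1/m
    return area * k ** 2 / (4 * pi) - circumference * k / (4 * pi) + c_weyl

print(round(weyl_number(5.64e9), 1))  # -> 32.5
```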
From the measured real and imaginary parts we determined the nodal lines by means of a bilinear interpolation, which yields the lines within the grid cells. Using this information and the Hoshen-Kopelman method \cite{hos76} we obtain the number of nodal domains $\nu_n$ and their areas.
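The cluster-counting step can be sketched with a union-find labeling in the spirit of the Hoshen-Kopelman method. The simplified Python version below (an illustration, not the code used for the analysis) works directly on the sign pattern of a grid with nearest-neighbour connectivity and omits the sub-grid bilinear interpolation described above.

```python
def count_nodal_domains(field):
    """Count connected same-sign clusters (nodal domains) of a 2D array
    of real field values, using 4-neighbour connectivity and union-find."""
    rows, cols = len(field), len(field[0])
    parent = {(i, j): (i, j) for i in range(rows) for j in range(cols)}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(rows):
        for j in range(cols):
            s = field[i][j] > 0
            if i + 1 < rows and (field[i + 1][j] > 0) == s:
                union((i, j), (i + 1, j))
            if j + 1 < cols and (field[i][j + 1] > 0) == s:
                union((i, j), (i, j + 1))
    return len({find(p) for p in parent})

# A checkerboard of alternating single-cell signs has one domain per cell:
chk = [[1 if (i + j) % 2 == 0 else -1 for j in range(4)] for i in range(4)]
print(count_nodal_domains(chk))  # -> 16
```

With 4-neighbour connectivity, diagonally adjacent cells of equal sign belong to different domains, which is why the checkerboard yields one domain per cell.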
\begin{figure}
\includegraphics[width=8cm]{fig3.eps}\\
\caption{\label{fig:nnodal} (color online) The number of nodal
domains $\nu_n$ versus the Weyl number $n_{\rm{Weyl}}$ for the real
(crosses) and imaginary part (diamonds). The red dashed line
correspond to the theoretical prediction $\nu_n=0.0624 n$
of the percolation model
\cite{bog02b}. The blue solid lines are fits including boundary
effects \cite{blu02}
with Eq.~(\ref{eq:NodalDomainNumber}) with
a linear slope of 0.059 and 0.060 for real and imaginary
part respectively.}
\end{figure}
In the case of open systems the distribution of intensities $P(|\psi|^2)$, e.\,g., can be obtained from the random superposition of plane waves (RSPW) approach \cite{ber77a}, expressing $\psi$ as
\begin{equation}
\psi(\mathbf{r}) = \sum_{\mathbf{k}} a(\mathbf{k})
e^{i \mathbf{k} \cdot \mathbf{r}}.
\label{eq:sum}
\end{equation}
In open systems the plane wave amplitudes $a(\mathbf{k})$ have a
Gaussian distribution with zero mean and correlations
\begin{equation}
\langle a(\mathbf{k}) a(-\mathbf{k}) \rangle = \rho \langle a(\mathbf{k}) a^*(\mathbf{k}) \rangle,
\label{eq:avar}
\end{equation}
where $|\rho|^2$ is the phase rigidity as given by Eq.~(\ref{eq:rigidity}). In this model the phase rigidity is only defined by an average and individual realizations therefore have fluctuating phase rigidities, depending on the number of plane waves used.
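One convenient (though by no means unique) way to realize the correlation of Eq.~(\ref{eq:avar}) with a prescribed average phase rigidity is to superpose two independent real random-wave fields with variances $(1+\rho)/2$ and $(1-\rho)/2$ for the real and imaginary parts. The Python sketch below (our own illustration; the construction and all parameter values are assumptions) checks that the empirical phase rigidity of such an ensemble, evaluated at a single observation point, is close to $|\rho|^2$.

```python
import math
import random

def rspw_sample(rho, n_waves, rng):
    """One realization of psi at a fixed observation point: each plane wave
    contributes a cosine of a random phase; u (real part) and v (imaginary
    part) are independent with variances (1+rho)/2 and (1-rho)/2 -- one
    possible way to realize the amplitude correlations, assumed here."""
    norm = math.sqrt(2.0 / n_waves)            # makes Var(u) = Var(v) = 1
    u = norm * sum(math.cos(2 * math.pi * rng.random()) for _ in range(n_waves))
    v = norm * sum(math.cos(2 * math.pi * rng.random()) for _ in range(n_waves))
    return math.sqrt((1 + rho) / 2) * u + 1j * math.sqrt((1 - rho) / 2) * v

def empirical_rigidity(rho, n_realizations=4000, n_waves=64, seed=1):
    """Estimate |rho|^2 from an ensemble of independent realizations."""
    rng = random.Random(seed)
    zs = [rspw_sample(rho, n_waves, rng) for _ in range(n_realizations)]
    r2 = sum(z.real ** 2 for z in zs) / n_realizations
    i2 = sum(z.imag ** 2 for z in zs) / n_realizations
    return ((r2 - i2) / (r2 + i2)) ** 2
```

The limiting cases behave as expected: $\rho=1$ gives a purely real field with rigidity one, while $\rho=0$ gives a fully open, circular-Gaussian field with rigidity near zero.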
First we investigate the number of nodal domains $\nu_n$ for the real and imaginary part. Bogomolny and Schmidt \cite{bog02b} predicted from the percolation model a linear increase with a slope of $a$=0.0624. Blum et~al.\ \cite{blu02} introduced an additional term to take boundary effects into account,
\begin{equation}
\label{eq:NodalDomainNumber}
\nu_n= a n + b \sqrt{n}
\end{equation}
where $n$ is the quantum number, which in our case is given by the Weyl number $n_{\rm{Weyl}}$ (see Eq.~(\ref{eq:Weyl})). In Fig.~\ref{fig:nnodal} the number of nodal domains is shown versus the Weyl number for the real (crosses) and imaginary (diamonds) part of the wavefunction. The dashed line corresponds to the slope given by Ref.~\onlinecite{bog02b}. The data were fitted with Eq.~(\ref{eq:NodalDomainNumber}) and we found $a$=0.0594 and $b$=1.231 for the real and $a$=0.0599 and $b$=1.300 for the imaginary part. The value for $a$ is in accordance with the expected slope of 0.0624.
The boundary correction parameter $b$ should scale with the ratio $S/\sqrt{A}$, where $S$ and $A$ are the circumference and area of the billiard, respectively. The rescaled quantity $b_c=b\sqrt{A}/S$ should be universal. In the present case we find values of 0.29 and 0.31 for $b_c$ for the real and the imaginary part, respectively. For two other billiards investigated experimentally, namely the half rough circle billiard with \cite{Hul05b} and without \cite{Sav04b} teflon insert, values of 0.27 and 0.20 have been obtained for $b_c$. The $b_c$ obtained for the billiard with a half circle teflon insert has to be taken with care, as the internal boundary between teflon and air and the integrable shape of the teflon insert may give rise to deviations. In the case of the numerical calculations presented in Ref.~\onlinecite{blu02} values of 0.25 and 0.21 have been obtained for the Sinai and stadium billiard, respectively. In all cases the estimated error is about 10\%. Thus $b_c$ is comparable for all billiards.
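Since Eq.~(\ref{eq:NodalDomainNumber}) is linear in $a$ and $b$, the fit reduces to ordinary least squares; the following sketch illustrates the procedure on synthetic counts (not our data):

```python
import numpy as np

def fit_nodal_counts(n, nu):
    """Least-squares fit of nu_n = a*n + b*sqrt(n), Eq. (NodalDomainNumber)."""
    design = np.column_stack([n, np.sqrt(n)])
    (a, b), *_ = np.linalg.lstsq(design, nu, rcond=None)
    return a, b

# synthetic counts with the percolation slope and an illustrative boundary term
rng = np.random.default_rng(1)
n = np.arange(10, 400, dtype=float)
nu = 0.0624 * n + 1.2 * np.sqrt(n) + rng.normal(0.0, 1.0, n.size)
a, b = fit_nodal_counts(n, nu)
```

The recovered `a` and `b` reproduce the input values within the noise level, which is exactly the procedure used for the fits in Fig.~\ref{fig:nnodal}.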
\begin{figure}
\includegraphics[width=8cm]{fig4.eps}\\
\caption{\label{fig:variance} (color online) The variance of the
number of nodal domains $\nu_n$ versus the Weyl number
$n_{\rm{Weyl}}$ for the real (blue boxes) and imaginary part (red
crosses). The blue dashed line corresponds to the theoretical
prediction.}
\end{figure}
For the variance $\sigma^2$ of the number of nodal domains the percolation model predicts $\sigma^2\approx 0.05 n$ \cite{bog02b}. In Fig.~\ref{fig:variance} the scaled variance $\sigma^2/n_{\rm{Weyl}}$ is plotted versus the Weyl number $n_{\rm{Weyl}}$ for the real (blue boxes) and imaginary (red crosses) part. The variances have been calculated from the corresponding fits mentioned before and the average has been performed over 30 consecutive wavefunctions. A good agreement is found. The larger variations at small Weyl numbers for the imaginary part are due to experimental noise: As in this regime the phase rigidity is often small, i.\,e., the imaginary part is small, noise in the experimental data can create small nodal domains particularly close to the billiard boundary, leading to higher fluctuations.
\begin{figure}
\includegraphics[width=8cm]{fig5.eps}\\
\caption{\label{fig:PAreas} (color online) The distribution of normalized nodal domain areas $P_A(s/s_{\rm{min}})$ is shown for the real (solid histogram) and the imaginary part (red dotted histogram). Additionally the expected algebraic decay with an exponent of $\tau=187/91$ is shown as blue dashed line.}
\end{figure}
Another quantity of interest is the distribution $P_A$ of nodal domain areas $s$. The area distribution has also been calculated in Ref.~\onlinecite{bog02b} and is given asymptotically by
\begin{equation}\label{eq:PArea}
P_A(s)\propto \left(\frac{s}{s_{\rm{min}}}\right)^{-\tau}
\end{equation}
where $\tau=187/91$ and $s_{\rm{min}}=\pi (j_1/k)^2$ is the smallest possible area for fixed wavenumber $k$, with $j_1$ the first zero of the Bessel function $J_0(x)$. In Fig.~\ref{fig:PAreas} the distribution of normalized nodal domain areas is shown for the real and imaginary part. All evaluated wavefunctions have been used to obtain the distribution. Additionally the expected slope of the algebraic decay with $\tau=187/91$ is shown as a dashed line and is in agreement with the experimental histograms. Deviations at large values of $s/s_{\rm{min}} > $ 80 are due to poor statistics. Here only a few entries per bin are available.
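A convenient way to check such an algebraic decay on data is a maximum-likelihood estimate of the exponent rather than a histogram-slope fit; a sketch with synthetic areas (purely illustrative, not our measured data):

```python
import numpy as np

def tau_mle(s, s_min):
    """Maximum-likelihood exponent for P_A(s) ~ (s/s_min)^(-tau), s >= s_min."""
    s = np.asarray(s, dtype=float)
    return 1.0 + s.size / np.sum(np.log(s / s_min))

# synthetic nodal-domain areas drawn from the predicted power law
tau_true = 187.0 / 91.0
rng = np.random.default_rng(2)
u = rng.random(100_000)
s = 1.0 * u ** (-1.0 / (tau_true - 1.0))   # inverse-CDF sampling with s_min = 1
tau_hat = tau_mle(s, s_min=1.0)
```

With $10^5$ samples the estimator recovers $\tau=187/91\approx 2.05$ to a few parts in a thousand, while a histogram fit would be dominated by the sparse tail bins.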
No difference between the expected behavior for closed chaotic systems and the behavior for the real and imaginary part of the wavefunction in open chaotic systems is found. A global phase rotation had been performed to uncorrelate real and imaginary parts, but the same behavior is found for the number of nodal domains (Fig.~\ref{fig:nnodal}), its variance (Fig.~\ref{fig:variance}) and the area distribution (Fig.~\ref{fig:PAreas}), using the original data without any global phase rotation.
\section{Nodal domain dependence on global phase}
\begin{figure*}
\includegraphics[width=2\columnwidth]{fig6.eps.jpg.eps}
\caption{\label{fig:NodalPhi} (color online) (Upper part) Number of nodal domains $\nu_n$ as a function of a global phase $\varphi_g$ for a wavefunction at $n_{\rm{Weyl}} \approx$ 223. Marked are the phases $\varphi_g$, for which the nodal domains are shown below, corresponding to $\varphi_g/\pi$ = 0, 0.225, 0.35, 0.475, 0.5, and 0.5525. Areas where appearances and disappearances of nodal domains can be seen are highlighted. The insets show the corresponding $\psi_I$ versus $\psi_R$ plots; see also Fig.~\ref{fig:ReIm}.}
\end{figure*}
In the preceding section we have mostly neglected the fact that the billiard is open and treated the real and imaginary part just like a real-valued wavefunction in the case of closed systems. But as the billiard is open we have additional parameters like the rigidity $|\rho|^2$ or global phases $\varphi_g$ or even new quantities like currents $j$ or vorticities $\omega$ \cite{bar02,kim03a}. In Fig.~\ref{fig:NodalPhi} we present the nodal domains and the number of nodal domains of the real part of a wavefunction at Weyl number $n_{\rm{Weyl}} \approx$ 223 with a phase rigidity $|\rho|^2 \approx 0.81$. At the top the number of nodal domains for the real part is shown as a function of the global phase $\varphi_g$. For $\varphi_g=0$ real and imaginary part are uncorrelated, and for $\varphi_g=\pi/2$ real and imaginary part have changed their identities and are uncorrelated again. In the lower part a selection of nodal domains is presented for phases indicated by arrows in the upper figure. Additionally the $\psi_I$ versus $\psi_R$ plots are shown, see Fig.~\ref{fig:ReIm}. Highlighted rectangles emphasize regions where changes of the number of nodal domains occur due to rearrangements of nodal lines. While the phase is changing, the nodal lines are shifted and permanently dissolved and reconnected. The only points fixed are the nodal points of the corresponding modulus $|\psi|^2$, corresponding to elliptic points in the flow. Nodal lines can only intersect at the saddle points corresponding to hyperbolic points in the flow. Their position is also independent of the global phase (see Ref.~\cite{hoeh} for a more detailed discussion of these features).
\begin{figure}
\includegraphics[width=8cm]{fig7.eps}\\
\caption{\label{fig:NodalPhi2} Number of nodal
domains $\nu_n$ as a function of a global phase $\varphi_g$ for a wavefunction at $n_{\rm{Weyl}} \approx$ 229.}
\end{figure}
Because the phase rigidity for this wavefunction is quite large ($|\rho|^2$=0.81), the modulus of the imaginary part $|\psi_I|$ is much smaller than the modulus of the real part $|\psi_R|$. Therefore a small change in the global phase $\varphi_g\mapsto \varphi_g +d\varphi$ for $\varphi_g$ close to zero (or close to $\pi$) has only a weak effect on the real part of the wavefunction and on its nodal pattern. Here the number of nodal domains $\nu_n$ is less likely to change as a function of phase. On the other hand, for $\varphi_g$ close to $\pi/2$ there is an increased probability for changes in the nodal pattern. Indeed, if the global phase approaches $\pi/2$, changes of $\nu_n$ become more frequent. In Fig.~\ref{fig:NodalPhi2} the number of nodal domains as a function of the global phase is shown for a wavefunction at Weyl number $n_{\rm{Weyl}}$=229 with $|\rho|^2$=0.04. In this case the phase rigidity is small, i.\,e.\ real and imaginary parts are of the same order. Therefore already for small global phases the number of nodal domains changes, and a suppression of changes in $\nu_n$ for small phases is not observed -- instead the changes occur quite uniformly over the complete range of global phases. This already shows that the phase rigidity can be observed from the number of nodal domains as a function of the global phase, at least qualitatively.
\begin{figure}
\includegraphics[width=8cm]{fig8.eps}\\
\caption{\label{fig:CorrNodal} (color online) Correlation of the
signed areas as a function of the global phase $\varphi_g$ for three different wavefunctions at Weyl numbers $n_{\rm{Weyl}} \approx $ 229, 222, and 223 with rigidities $|\rho|^2$=0.04, 0.48, and 0.81, respectively. The curve with the smallest curvature belongs to $|\rho|^2$=0.04, and the one with the largest curvature belongs to $|\rho|^2$=0.81. The dotted lines show the results of Eq.~(\ref{eq:Corr}). The first and the last correlations correspond to the wavefunctions used to produce Figs.~\ref{fig:NodalPhi} and \ref{fig:NodalPhi2}.}
\end{figure}
It is a non-trivial problem to obtain an expression for the number of nodal domains as a function of the global phase based on the random-wave (or the percolation) model. However, there are other quantities which describe the nodal pattern as a function of the global phase and which are more amenable to a theoretical approach. Any of these is suited to exhibit the interplay between openness and nodal domains. One possibility is the global phase autocorrelation of the signed areas of the wavefunctions, defined by
\begin{equation}
\label{eq:CorrDef} C(\varphi)= \langle
\mathrm{sgn}(\psi_{R,\varphi_1}) \cdot \mathrm{sgn}(\psi_{R,\varphi_2})\rangle, \qquad
\varphi=\varphi_2-\varphi_1,
\end{equation}
where $\mathrm{sgn}(x)$ denotes the sign of $x$, and $\langle \cdot\rangle$ denotes an average over the area. $\psi_{R,\varphi}$ denotes the real part of the wavefunction for the value $\varphi$ of the global phase. $\varphi_1$ is chosen to be the global phase at which the real and imaginary parts are uncorrelated; in our case $\varphi_1$ = 0. Equivalently one can write
\begin{equation}
\label{eq:CorrDef2} C(\varphi)=\frac{A_{\varphi,+}-A_{\varphi,-}}{
A}=
1-2\frac{A_{\varphi,-}}{A}
\end{equation}
where $A$ is the area of the billiard, $A_{\varphi,+}$ is the area where $\psi_{R,\varphi_1}$ and $\psi_{R,\varphi_2}$ have the same sign, and $A_{\varphi,-}=A-A_{\varphi,+}$ is the area where they have opposite signs. Thus the signed area correlator measures the fraction of the billiard area which changes sign as $\varphi$ increases.
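From measured data, $C(\varphi)$ of Eq.~(\ref{eq:CorrDef}) can be evaluated directly on the complex field; a minimal sketch (the sign convention $\psi_{R,\varphi}=\mathrm{Re}(\mathrm{e}^{-i\varphi}\psi)$ is a choice of orientation and does not affect the ensembles considered here):

```python
import numpy as np

def signed_area_correlation(psi, phi):
    """C(phi) of Eq. (CorrDef): mean sign product of Re(psi) before and
    after a global phase rotation by phi."""
    s0 = np.sign(psi.real)
    s1 = np.sign((np.exp(-1j * phi) * psi).real)
    return float(np.mean(s0 * s1))
```

For a purely real field ($|\rho|^2=1$) this gives $C(\varphi)=\mathrm{sgn}(\cos\varphi)$, in line with the $\rho\to 1$ limit of the theoretical expression derived below.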
We can give an expression for the autocorrelation function $C(\varphi)$ based on the random wave model. At a given point we may write the wavefunction at $\varphi=0$ as
\begin{equation}
\psi_0= \psi_R+i \psi_I= x_1 + i \xi x_2
\label{eq:rwm_c}
\end{equation}
where $x_1$ and $x_2$ are independent and identically distributed Gaussian random variables, and $\xi$ is a real constant that is related to the rigidity via
\begin{equation}
\rho=\frac{\langle \psi_R^2\rangle -\langle \psi_I^2\rangle}{
\langle \psi_R^2\rangle
+\langle \psi_I^2\rangle}=\frac{1-\xi^2}{1+\xi^2}\ .
\end{equation}
In general, random wave models of the given type define the rigidity only on average over all realizations (neither the norm of the real part nor the norm of the imaginary part
is fixed). For the present purpose the fluctuations in $\psi_R^2$ and $\psi_I^2$ just reflect the fact that we describe one point in a wavefunction while the rigidity is obtained by an integral (space average) over the billiard. Other applications of random wave models may need to take more care and define a model where the rigidity does not fluctuate from one realization to another.
Within the above variant of the random wave model the autocorrelator can easily be calculated as
\begin{eqnarray}
\label{eq:Corr}
C(\varphi)&=&\frac{1}{\pi}\int\ dx_1dx_2\ e^{-x_1^2-x_2^2}\times\nonumber\\
&&\times \mathrm{sgn}\big( x_1 \big) \mathrm{sgn}\big(x_1\cos(\varphi)+\xi x_2 \sin(\varphi) \big)\nonumber\\
&=&\int_0^{2\pi}\frac{d\alpha}{2\pi} \mathrm{sgn}\big(\cos{\alpha}\big)\mathrm{sgn}\big(\cos{\alpha}\cos{\varphi}+\xi\sin{\alpha}\sin{\varphi}\big)\nonumber\\
&=&\frac{2}{\pi}\arctan\left(\sqrt{\frac{1+\rho}{1-\rho}}\big/ \tan{\varphi}\right)\ .
\end{eqnarray}
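Equation~(\ref{eq:Corr}) can be verified by direct Monte Carlo sampling of the model of Eq.~(\ref{eq:rwm_c}). Note that inserting $\rho=(\langle\psi_R^2\rangle-\langle\psi_I^2\rangle)/(\langle\psi_R^2\rangle+\langle\psi_I^2\rangle)=(1-\xi^2)/(1+\xi^2)$ for this model gives $\sqrt{(1+\rho)/(1-\rho)}=1/\xi$ (sample sizes below are illustrative):

```python
import numpy as np

def C_exact(phi, xi):
    """Eq. (Corr); for psi = x1 + i*xi*x2 one has sqrt((1+rho)/(1-rho)) = 1/xi.
    Written for 0 < phi < pi/2, where no branch handling of arctan is needed."""
    return (2.0 / np.pi) * np.arctan(1.0 / (xi * np.tan(phi)))

def C_monte_carlo(phi, xi, n=200_000, rng=None):
    """Direct sampling of the sign correlator in the first line of Eq. (Corr).
    The signs are scale invariant, so standard normals can be used."""
    if rng is None:
        rng = np.random.default_rng(3)
    x1 = rng.standard_normal(n)
    x2 = rng.standard_normal(n)
    return np.mean(np.sign(x1) * np.sign(x1 * np.cos(phi) + xi * x2 * np.sin(phi)))
```

For $2\times 10^5$ samples the Monte Carlo estimate agrees with the closed form to well within the statistical error of order $n^{-1/2}$.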
In Fig.~\ref{fig:CorrNodal} the global phase autocorrelation of the signed areas is shown for three different wavefunctions at comparable Weyl numbers $n_{\rm{Weyl}} \approx $ 229, 222, and 223 but with very different phase rigidities $|\rho|^2$=0.04, 0.48, and 0.81, respectively. Additionally, the results from Eq.~(\ref{eq:Corr}) are plotted as dotted lines. An excellent agreement between experiment and theory is found, especially as there is no free parameter, since the only parameter, the phase rigidity, has been determined directly from the wavefunctions.
\section{Nodal domains for the vorticity}
\begin{figure}
\includegraphics[width=\columnwidth]{fig9.eps.jpg.eps}
\caption{\label{fig:FlowVorticity} (color online) On the left the
probability current density $\vec{j}$ is shown for the whole billiard at 13.84\,GHz. On the right side the corresponding vorticity $\omega$ is plotted. The lower part of the figure shows a magnification of the region marked by squares in the upper figure. In the zoomed figures vortices and saddles are marked by circles and crosses. Open circles denote clockwise and filled circles counterclockwise vortices.}
\end{figure}
Since the billiard is open there is a flow through the system which, in the electromagnetic case, is described by the Poynting vector \cite{seb99}. Quantum mechanically it corresponds to the probability current density and is given by
\begin{equation}\label{eq:current}
\vec{j} \propto {\rm Im}\left[\psi^*(\mathbf{r})\nabla \psi(\mathbf{r})\right]\ .
\end{equation}
In Fig.~\ref{fig:FlowVorticity} (left) the probability current density $j$ is shown for the same wavefunction as used in Fig.~\ref{fig:NodalPhi}. A complex flow structure is observed with numerous elliptic fixed points (vortices) and hyperbolic points (saddles). The vortices and saddles are marked by circles and crosses in the lower part of the figure.
Another quantity of interest is the vorticity given by
\begin{equation}\label{eq:vorticity}
\omega =(\nabla_x\psi_R)(\nabla_y\psi_I) - (\nabla_y\psi_R)(\nabla_x\psi_I),
\end{equation}
which, up to a constant factor, is just the curl of the flow. It is shown in Fig.~\ref{fig:FlowVorticity} on the right hand side. For more details on distributions and correlations of the probability current or vorticity we refer to Refs.~\onlinecite{bar02,kim03b}. One clearly observes a nodal line pattern for the vorticity. The nodal lines of the vorticity correspond to the unstable manifolds of the dynamics. Accordingly, nodal lines of the vorticity will always cross exactly at the saddle points, and each vortex has a separate nodal domain with an area depending on the vortex strength and the distance to other vortices and saddles. A type of irregular checkerboard pattern of two saddles and two vortices, one clockwise, the other counterclockwise, with a nodal line crossing between the four of them is created and can be seen in Fig.~\ref{fig:FlowVorticity}\,(upper right). Due to measurement errors, but also already due to the discretisation and the bilinear interpolation, these crossings will be turned into avoided crossings, thus connecting the nodal domains more or less randomly. This corresponds to the spirit of percolation theory.
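On discretized fields both quantities follow from finite differences; with $\vec j=\mathrm{Im}(\psi^{*}\nabla\psi)$ one finds $\nabla\times\vec j=2\omega$, which fixes the constant factor mentioned above. A sketch with an artificial two-plane-wave field (all numbers illustrative):

```python
import numpy as np

# artificial open-system test field: superposition of two plane waves
L, N = 10.0, 201
x = np.linspace(0.0, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]
psi = np.exp(1j * (1.0 * X + 0.5 * Y)) + 0.7 * np.exp(1j * (-0.6 * X + 1.1 * Y))

dRx, dRy = np.gradient(psi.real, h)
dIx, dIy = np.gradient(psi.imag, h)

# probability current j = Im(psi* grad psi), Eq. (current)
jx = psi.real * dIx - psi.imag * dRx
jy = psi.real * dIy - psi.imag * dRy

# vorticity, Eq. (vorticity)
omega = dRx * dIy - dRy * dIx

# curl of the flow; should equal 2*omega up to discretization errors
curl_j = np.gradient(jy, h, axis=0) - np.gradient(jx, h, axis=1)
```

Away from the grid boundary (where one-sided differences are used) the identity holds to the accuracy of the central differences.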
\begin{figure}
\includegraphics[width=8cm]{fig10.eps}\\
\caption{\label{fig:wnnodal} (color online) The number of nodal
domains $\nu_n$ versus the Weyl number $n_{\rm{Weyl}}$ for the
vorticity $\omega$ (diamonds). The red dashed line
corresponds to the theoretical prediction $\nu_n=0.0624 n$
of the percolation model
\cite{bog02b}. The blue solid line is a fit including boundary
effects \cite{blu02} with Eq.~(\ref{eq:NodalDomainNumber}) with a linear slope $a$ = 0.161.}
\end{figure}
In Fig.~\ref{fig:wnnodal} we present the number of nodal domains for the vorticity. A fit with Eq.~(\ref{eq:NodalDomainNumber}) yields a linear slope of $a$=0.161 and $b$=1.65, i.\,e., we find again a dependence of the form of Eq.~(\ref{eq:NodalDomainNumber}). The slope is larger than the slope expected in the case of real wavefunctions (0.0602). Predictions for the linear increase of the vorticity in terms of a percolation model are still missing.
\section{Summary}
To summarize, we have shown in this paper that the nodal domains of the real and imaginary part of the wavefunctions in open systems behave like the nodal domains of the wavefunctions in closed systems. Additionally, we have calculated the autocorrelation of the signed areas as a function of the global phase. This quantity was shown to be an indicator of the openness, i.\,e., of the phase rigidity. An effect of the rigidity is also present in the fluctuations of the number of nodal domains as the global phase is varied, though an analytical description is beyond present knowledge. Nodal domains in open systems can also be defined for other quantities like the vorticity, as we have shown here. They also exhibit a linear behavior (plus square-root corrections), but with a different slope in comparison to the number of nodal domains of wavefunctions in closed systems.
\section*{Acknowledgments}
The experiments were supported by the DFG.
SG acknowledges support by a research grant from the GIF (grant I-808-228.14/2003). L.~Sirko, Warszawa, is thanked for making the geometrical data of the billiards used in Refs.~\onlinecite{Sav04b,Hul05b} available to us.
\bibliographystyle{apsrev}
\section{Introduction}
According to sociologists and the XAI literature \cite{jacovimiller21}, a pre-requisite to \textit{extrinsic} human-AI trust establishment is for users to be able to \textit{anticipate} the model's behavior. In NLP, while we expect pre-trained language models (PTLM) to power agents interacting with humans, Transformer-based state-of-the-art architectures (BERT \cite{devlin2019bert} and its variants) often behave unpredictably, showing poor generalization on simpler non-adversarial examples \cite{kaushik2019learning,probinglogical} while achieving state-of-the-art results on complex examples requiring composite reasoning \cite{wang2018glue}. This has motivated researchers to re-think \textit{evaluation} methodologies, a key component of \textit{extrinsic} trust. Recently, inspired by behavioral testing \cite{beizer1995black}, the authors of \cite{ribeiro-etal-2020-beyond} proposed the creation of template-based test-suites, called \textsc{CheckList}, with broad coverage ranging from minimal expected functionality to more complicated tests across a range of capabilities. Moving beyond capability-wise probing and cloze-task formulations, the methodology produces a behavioral summary that aggregates the shortcomings of SOTA models across capabilities in a disentangled manner. In this work, we hypothesize that a behavioral summary obtained with the \textsc{CheckList} method for a natural language understanding task will help humans form a holistic intuition about a model, which in turn forms the basis for quantifying the \textit{predictability} of models through humans.
To test this \textit{central} hypothesis of our work, we choose the Natural Language Inference (NLI) task as it tests reasoning capabilities explicitly, and create a \textsc{CheckList} that enables evaluation of whether NLI systems exhibit such reasoning capabilities. In the process, we extend the list of capabilities in \newcite{ribeiro-etal-2020-beyond} to cover more interesting linguistic and logical reasoning phenomena (such as causal, spatial, pragmatic) required in NLI (or similar tasks). We discuss how we come up with templates for such reasoning capabilities. The evaluation results on the \textsc{CheckList} test-suite provide a fine-grained disentangled view of a model's capabilities, untangling the effects of different phenomena. However, for models such as BERT \cite{devlin2019bert} and RoBERTa \cite{liu2019roberta}, we discover capability-wise and intra-template inconsistencies. Even though the average aggregate accuracies tell a clear story, such inconsistencies are found in most systems we evaluated. As a potential resolution, we design a human study and, through a simulation experiment \cite{DBLP:journals/corr/abs-1902-00006}, show how human judgement can be used to quantify model predictability.
Our contributions are the following. 1) We create a template-based test-suite\footnote{We will make the dataset available for public use.} (194 templates, 184k examples) for the NLI task by extending a recently published reasoning taxonomy for NLI \cite{joshi2020taxinli}, and benchmark SOTA NLI systems, which reveals new interesting facts about them. 2) We observe inconsistencies in the performance of the models within templates as well as across similar templates (perturbations of the same template). 3) Performance inconsistency within templates (across varying lexicons) reveals new biases for BERT. 4) Through a user study using \textit{simulation} experiments, we provide an indication of human judgement about how inconsistencies affect the predictability of models. Collectively, our experiments indicate that RoBERTa is more ``robust" (indeed!) and predictable than BERT.
\section{Related Work}
The conflicting performance of Transformer-based PTLMs on large natural language understanding benchmarks and on targeted phenomena-wise tests has led to a wave of work in probing and attempting to understand these models. Extensive probing tasks have been implemented in order to investigate how and where (within the model) linguistic information has been encoded \cite{tenney2019bert,tenney2019you,hewitt-manning-2019-structural,jawahar-etal-2019-bert,liu2019linguistic,kim-etal-2019-probing}. However, the effectiveness of the leading evaluation methodology, aka probing tasks, has come into question. For example, \newcite{ravichander-etal-2020-systematicity} cautions that BERT may not understand some ``concepts'' even though probing studies may indicate otherwise.
At a semantic level, and more specifically with respect to the NLI task, inference datasets have been curated focusing on testing a range of reasoning capabilities \cite{poliak-etal-2018-collecting,Richardson2019ProbingNL}. Several works \cite{mccoy-etal-2019-right,kaushik2019learning,glockner-etal-2018-breaking} developed targeted evaluation sets to adversarially challenge these large PTLMs and demonstrated shortcomings. However, these methods rely on the aggregate statistic of accuracy to assess performance, which makes it tricky to pinpoint where exactly the model is failing,
and how to remedy the issues \cite{wu-etal-2019-errudite}. The recent work by \citet{ribeiro-etal-2020-beyond} takes a different route. Inspired by software testing, authors propose creation of a set of model-agnostic test cases that capture basic expected functionality from a trained system. The revelation that SOTA systems fail on such minimal functionality tests has motivated the community to look at behavioral testing methodologies more closely, through which we can define capabilities and test them individually in a scalable manner.
Language understanding tasks such as NLI introduces two additional challenges. Understanding requires a set of theoretically well-defined types of reasoning capabilities, as put forward by the theories of semantics and logic \cite{sowa2010role,wittgenstein-1922}. Such types define the necessary capabilities that an NLU (or NLI) system should possess; some of which are missing in the \textsc{CheckList} work \cite{ribeiro-etal-2020-beyond}. A goal of evaluation is also to develop a holistic intuition about model's behavior, and the behavioral summary from \textsc{CheckList} by itself may not be sufficient in achieving such a goal. These central challenges are relevant for all NLU tasks, and constitute the primary focus of our work.
\begin{table*}[!ht]
\setlength\fboxsep{1pt}
\resizebox{\textwidth}{!}{%
\scriptsize
\setlength\tabcolsep{2pt}
\begin{tabular}{p{0.70cm} p{13.5cm} c}
\toprule
\# & \multicolumn{1}{c}{Template \& Examples} & Annotations\\
\arrayrulecolor{black}\midrule
T1 & {\textcolor{premise}P:} \{NAME\} is \{ADJ\}. {\textcolor{hypothesis}H:} \{NAME\} is \{Antonym(ADJ)\}. {\colorbox{pink}{\texttt{contradict}}} & \multirow{2}{*}{C (1)}\\
Lex. & {\textcolor{premise}P:} Jim is responsible. {\textcolor{hypothesis}H:} Jim is irresponsible.\\
\arrayrulecolor{black}\midrule
T2 & {\textcolor{premise}P:} \{NAME1\} is \{a/an\} \{PROF\} and \{NAME2\} is too. {\textcolor{hypothesis}H:} \{NAME2\} is \{a/an\} \{PROF\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (1)}\\
Synt. & {\textcolor{premise}P:} Kevin is a politician and Steve is too. {\textcolor{hypothesis}H:} Steve is a politician.\\
\arrayrulecolor{black}\midrule
T3 & {\textcolor{premise}P:} \{NAME1\} and \{NAME2\} are from \{CTRY1\} and \{CTRY2\} respectively. {\textcolor{hypothesis}H:} \{NAME1\} is from \{CTRY1\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (1)}\\
Bool. & {\textcolor{premise}P:} George and Michael are from Germany and Australia respectively. {\textcolor{hypothesis}H:} George is from Germany.\\
\arrayrulecolor{black}\midrule
T4 & {\textcolor{premise}P:} \{NAME1\} and \{NAME2\} are from \{CTRY1\} and \{CTRY2\} respectively. {\textcolor{hypothesis}H:} \{NAME1\} is from \{CTRY2\}. {\colorbox{pink}{\texttt{contradict}}} &\multirow{2}{*}{C (1)}\\
Bool. & {\textcolor{premise}P:} Helen and Barbara are from Canada and Brazil respectively. {\textcolor{hypothesis}H:} Helen is from Brazil.\\
\arrayrulecolor{black}\midrule
T5 & {\textcolor{premise}P:} \{MALE\_NAME\} and \{FEMALE\_NAME\} are \{friends/colleagues/married\}. He is \{a/an\} \{PROF1\} and she is \{a/an\} \{PROF2\}. {\textcolor{hypothesis}H:} \{MALE\_NAME\} is \{a/an\} \{PROF1\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (.8)}\\
Coref & {\textcolor{premise}P:} Angelique and Ricardo are colleagues. He is a minister and she is a model. {\textcolor{hypothesis}H:} Angelique is a model.\\
\arrayrulecolor{black}\midrule
T6 & {\textcolor{premise}P:} \{CITY1\} is \{N1\} miles from \{CITY2\} and \{N2\} miles from \{CITY3\}. {\textcolor{hypothesis}H:} \{CITY1\} is \{nearer/farther\} to \{CITY2\} than \{CITY3\}. {\colorbox{pink}{\texttt{entail}}}&\multirow{2}{*}{E (1)}\\
Spatial & {\textcolor{premise}P:} Manchester is 67 miles from Pittsburgh and 27 miles from Kansas. {\textcolor{hypothesis}H:} Manchester is nearer to Kansas than Pittsburgh.\\
\arrayrulecolor{black}\midrule
T7 & {\textcolor{premise}P:} \{NAME\} has \{EVT1\} followed by \{EVT2\} followed by \{EVT3\}. {\textcolor{hypothesis}H:} \{EVT1/3\} is the \{first/last\} event. {\colorbox{pink}{\texttt{entail}}}&\multirow{2}{*}{E (1)}\\
Temp. & {\textcolor{premise}P:} Barbara has a history class then a mathematics class then a seminar. {\textcolor{hypothesis}H:} The history class is the first event.\\
\arrayrulecolor{black}\midrule
T8 & {\textcolor{premise}P:} \{NAME1\} \{bought/taught/...\} \{OBJ\} to \{NAME2\}. {\textcolor{hypothesis}H:} \{NAME2\} \{sold/learnt/...\} \{OBJ\} from \{NAME1\} {\colorbox{pink}{\texttt{entail}}}&\multirow{2}{*}{E (1)}\\
Causal & {\textcolor{premise}P:} Katherine taught science to Nancy. {\textcolor{hypothesis}H:} Nancy learnt science from Katherine.\\
\arrayrulecolor{black}\midrule
T9 & {\textcolor{premise}P:} \{NAME1\}, \{NAME2\}, ... \{NAMEn\} are the only children of \{NAME0\}. {\textcolor{hypothesis}H:} \{NAME0\} has \{n\} children. {\colorbox{pink}{\texttt{entail}}}&\multirow{2}{*}{E (1)}\\
Num. & {\textcolor{premise}P:} Bill, Patrick, Thomas, Joseph and Scott are the only children of Mark. {\textcolor{hypothesis}H:} Mark has 5 children.\\
\arrayrulecolor{black}\midrule
T10 & {\textcolor{premise}P:} \{NAME1\} is the child of \{NAME0\}, \{NAME2\} is the child of \{NAME0\}, ..., \{NAMEn\} is the child of \{NAME0\}. {\textcolor{hypothesis}H:} \{NAME0\} has \{n1<n\} children. {\colorbox{pink}{\texttt{contradict}}} &\multirow{2}{*}{C (1)}\\
Num. & {\textcolor{premise}P:} Mark is the child of Ruth. Patricia is the child of Ruth. Helen is the child of Ruth. Dorothy is the child of Ruth. Ann is the child of Ruth. {\textcolor{hypothesis}H:} Ruth has 1 child.\\
\arrayrulecolor{black}\midrule
T11 & {\textcolor{premise}P:} \{NAME\} lives in \{CITY\}. {\textcolor{hypothesis}H:} \{NAME\} lives in \{CTRY\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (.8)}\\
World & {\textcolor{premise}P:} Patrick lives in Lahore. {\textcolor{hypothesis}H:} Patrick lives in Pakistan\\
\arrayrulecolor{black}\midrule
T12 & {\textcolor{premise}P:} \{NAME\}'s \{RELATION/OBJ\} is \{ADJ\}. {\textcolor{hypothesis}H:} \{NAME\} has a \{RELATION/OBJ\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (1)}\\
PreSup& {\textcolor{premise}P:} Sarah's brother is jolly. {\textcolor{hypothesis}H:} Sarah has a brother.\\
\arrayrulecolor{black}\midrule
T13 & {\textcolor{premise}P:} \{NAME\} has stopped \{CONTINUOUS VERB\}. {\textcolor{hypothesis}H:} \{NAME\} used to \{VERB\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (1)}\\
PreSup& {\textcolor{premise}P:} Martin has stopped drinking. {\textcolor{hypothesis}H:} Martin used to drink.\\
\arrayrulecolor{black}\midrule
T14 & {\textcolor{premise}P:} \{NAME\} \{PAST VERB\} \{some/few\} of the \{NOUN\}. {\textcolor{hypothesis}H:} \{NAME\} didn't \{VERB\} all of the \{NOUN\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (1)} \\
Implic.& {\textcolor{premise}P:} Helen hired some of the teachers. {\textcolor{hypothesis}H:} Helen didn't hire all of the teachers. \\
\arrayrulecolor{black}\midrule
T15 & {\textcolor{premise}P:} \{OBJ1\} and \{OBJ2\} lie on the table. \{NAME\} asked for the \{OBJ1\}. {\textcolor{hypothesis}H:} \{NAME\} did not ask for the \{OBJ2\}. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{N (.5) E (.5)}\\
Implic.& {\textcolor{premise}P:} Toothpaste and eyeliner lie on the table. Jane asked for the eyeliner. {\textcolor{hypothesis}H:} Jane did not ask for the toothpaste.\\
\arrayrulecolor{black}\midrule
T16 & {\textcolor{premise}P:} \{NAME1\} asked if \{NAME2\} has \{N1\} dollars. \{NAME2\} had \{N2 < N1\} dollars. {\textcolor{hypothesis}H:} \{NAME2\} didn't have \{N1\} dollars. {\colorbox{pink}{\texttt{entail}}} &\multirow{2}{*}{E (1)}\\
Implic.& {\textcolor{premise}P:} Donald asked if Chris had 200 dollars. Chris had 90 dollars. {\textcolor{hypothesis}H:} Chris didn't have 200 dollars. \\
\arrayrulecolor{black}\bottomrule
\end{tabular}
}
\caption{We show a few representative Templates (examples), top human annotated label, and associated confidence (0-1). Full list in Appendix.}
\label{tab:exampletemplates}
\end{table*}
\section{A \textsc{CheckList} for the NLI task}
The \textsc{CheckList} methodology \cite{ribeiro-etal-2020-beyond} assists users in testing NLP models by creating templates for a variety of linguistic capabilities, coupled with \textit{test types} (Minimal Functionality tests (MFTs), Invariance tests (INVs), and Directed Expectation tests (DIRs)) which make the corresponding capability easy to test. These templates can then be used to generate multiple examples using the \textsc{CheckList} tool\footnote{\url{https://github.com/marcotcr/checklist}}.
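Schematically, an MFT scores predictions against gold labels, while an INV checks stability under a label-preserving perturbation (a DIR additionally checks an expected direction of change). The toy model and perturbation below are purely illustrative, not the actual \textsc{CheckList} API:

```python
def run_mft(predict, examples):
    """Minimal Functionality Test: each example carries a gold label."""
    return sum(predict(x) == y for x, y in examples) / len(examples)

def run_inv(predict, inputs, perturb):
    """Invariance test: the prediction must not change under a
    label-preserving perturbation."""
    return sum(predict(x) == predict(perturb(x)) for x in inputs) / len(inputs)

# toy keyword-based classifier, for illustration only
POS = {"good", "great", "kind"}
predict = lambda s: "pos" if any(w in POS for w in s.split()) else "neg"

mft_acc = run_mft(predict, [("she is good", "pos"), ("he is bad", "neg")])
inv_acc = run_inv(predict, ["she is good", "he is bad"],
                  lambda s: s + " today")   # appending a neutral word
```

Pass rates below 1.0 on either test type flag a capability failure, which is the summary statistic \textsc{CheckList} reports per capability.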
\begin{table}[!ht]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}lc@{}}
\toprule
\multicolumn{1}{c}{Template} & Expected \\ \midrule
Template: P: \textbf{\textcolor{darkblue}{\big\{NAME\big\}}} is \textbf{\textcolor{darkblue}{\big\{ADJ\big\}}}. H: \textbf{\textcolor{darkblue}{\big\{NAME\big\}}} is \textbf{\textcolor{darkblue}{\big\{Synonym(ADJ)\big\}}} & Entail \\
Example: P: \colorbox{mask}{Alexia} is \colorbox{mask}{happy}. H: \colorbox{mask}{Alexia} is \colorbox{mask}{glad}. & Entail \\ \bottomrule
\end{tabular}%
}
\caption{An example NLI template testing synonyms.}
\label{tab:template1}
\end{table}
An example \textsc{CheckList} template for the NLI task is shown in Table~\ref{tab:template1}.
Here \textcolor{darkblue}{NAME}, \textcolor{darkblue}{ADJ} are placeholders. Corresponding lexicons are: \textcolor{darkblue}{\{NAME\}} = \{Alexia, John, Mia, ...\}, \textcolor{darkblue}{\{ADJ\}} = \{good, bad, kind, ...\}. \textcolor{darkblue}{Synonym(.)} stands for a synonym of the word (from WordNet).
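Concretely, expanding such a template amounts to a cross product over the placeholder lexicons, in the spirit of the \textsc{CheckList} tool (the lexicons and helper below are illustrative toys, not our actual lists or the tool's API; the \{a/an\} alternation is omitted for brevity):

```python
from itertools import product

NAMES = ["John", "Mia"]          # illustrative toy lexicons
PROFS = ["doctor", "teacher"]

# T2-style template: P: {n1} is a {p} and {n2} is too.  H: {n2} is a {p}.
def expand_t2(names, profs):
    examples = []
    for n1, n2 in product(names, repeat=2):
        if n1 == n2:                      # skip degenerate name pairs
            continue
        for p in profs:
            premise = f"{n1} is a {p} and {n2} is too."
            hypothesis = f"{n2} is a {p}."
            examples.append((premise, hypothesis, "entail"))
    return examples
```

With these toy lexicons the template already yields $2\times 1\times 2=4$ labeled premise--hypothesis pairs, and the example counts scale multiplicatively with the lexicon sizes.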
The capabilities discussed in \newcite{ribeiro-etal-2020-beyond} are targeted to test the robustness of NLP systems against a minimal set of properties that are necessary yet feasible to check. However in tasks such as NLI, inferencing often requires (one or more) linguistic and logical reasoning capabilities. Our goal is to test the systems against such reasoning types. Even if some reasoning types are deemed not necessary, such tests cumulatively should inform about the systems' abilities in a holistic manner.
This presents us with two challenges for templated
test-suite generation for the NLI task (or tasks that require different types of reasoning): 1) careful selection of capabilities that represent well-known linguistic and logical taxonomy, and are easily extensible, 2) template creation for such capabilities.
\subsection{Selection of Capabilities}
\label{subsec:taxinli}
According to the linguistics literature \cite{wittgenstein-1922,Jurafsky+Martin:2009a}, deciphering \textit{meaning} from natural language \textit{form} often takes both semantic and pragmatic understanding abilities. From the perspective of Logic (Charles Peirce), there are three predominant forms of reasoning: deductive, inductive, and abductive; these can be employed by an agent to \textit{understand} and \textit{interact}. Other than a few recent works \cite{bhagavatula2020abductive,jeretic-etal-2020-natural}, most NLI datasets have widely covered (monotonic) deductive inferences, along with lexical, syntactic, and semantic understanding abilities. Recently, \citet{joshi2020taxinli} proposed an (extensible) categorization of the reasoning tasks involved in NLI. This categorization strikes a balance between the high-level categorizations from Language and Logic, while refining the categories and their granularity based on their relevance to current public NLI datasets. The authors define three broad groups of reasoning: {\sc Linguistic}, {\sc Logical} and {\sc Knowledge}, which aligns with our philosophy. Other categorizations \cite{nie2019adversarial,wang2018glue}, though relevant, are often incomplete as they are tuned towards analyzing errors.
Here, we introduce the categories briefly and show examples (and templates) in Tab.~\ref{tab:exampletemplates}.
{\sc Linguistic} represents NLI examples where the inference process is internal to the provided text; it is further divided into three categories: {\tt lexical}, {\tt syntactic} and {\tt factivity}.
{\sc Logical} denotes examples where inference may involve processes external to text, and grouped under {\it Connectives}, and {\it Deduction}. {\it Connectives} involve categories such as {\tt negation, boolean, quantifiers, conditionals} and {\tt comparatives}. \textit{Deduction} involves different types of reasoning such as {\tt relational, spatial, causal, temporal}, and {\tt coreference}. {\sc Knowledge} indicates examples where external ({\tt world}) or commonly assumed ({\tt commonsense}) knowledge is required for inferencing. For detailed definitions, we refer the readers to \citet{joshi2020taxinli}.
\paragraph{Extending TaxiNLI.} We extend the taxonomy by adding back the (pruned) {\tt Numerical} category. We also add a high-level category {\sc Pragmatic}, with two sub-categories {\tt pre-supposition} and {\tt implicatures}. Templates belonging to {\tt factivity} fall under the more general capability pre-supposition.
\subsection{Template Generation for Reasoning Categories}
We aim to test the minimal expected functionalities along each reasoning type individually. Automatic template creation from public datasets is not straightforward, as examples in public NLI datasets represent multiple capabilities \cite{joshi2020taxinli}. Even targeted datasets such as Winograd Schema Challenge \cite{levesque2012winograd} may require careful re-annotation, as examples may require lexical or boolean reasoning.
Instead, we resort to manually creating templates and use human annotations to verify the correctness of templated instances. As and when required, we extend the list of basic key placeholders (and corresponding lexicons) provided by the \textsc{CheckList} tool, such as \textcolor{darkblue}{\big\{PROFESSION\big\}} = \{doctor, actor, politician, $\ldots$\}, \textcolor{darkblue}{\big\{COM ADJ\big\}} = \{smarter, taller, $\ldots$\}, \textcolor{darkblue}{\big\{CITY\big\}} = \{Paris, New York, $\ldots$\}\footnote{The complete list is in Appendix.}. We share the list of templates (target phenomena and generated data) with the dataset. Here, we discuss some challenges we faced during template creation, and highlight interesting templates (displayed in Tab.~\ref{tab:exampletemplates}).
\\\noindent
\textbf{Linguistic.}~~
For \underline{Syntactic}, our templates test different types of ellipsis (T2) that require syntactic understanding of the premise and the hypothesis. Paraphrasing is hard to test in isolation, as most paraphrases in existing paraphrase corpora \cite{dolan2005automatically,WinNT} are not necessarily entailments, and such paraphrases may involve lexical changes as well.
\\\noindent
\textbf{Logical.}~~~
For \underline{Boolean}, apart from testing logical \textit{and} ($\land$), \textit{or} ($\lor$); we test ordered resolution as well (T3, T4).
\underline{Quantifier} templates test the understanding of \textit{universal} (all, $\forall$), and \textit{existential} (some, none; $\exists,\neg\exists$) quantification, and the effects of interchanging them. For \underline{Coreference}, we come up with representative templates to test gendered (T5), animate vs. inanimate resolution.
For \underline{Spatial} templates, we utilize the list of spatial prepositions and adverbs indicating relative positions (near, far, above, below, left, right). We include a set of templates testing cardinal directions (north, east, south and west); and some requiring comparison of distances (T6, requiring both Spatial and Numerical understanding).
\underline{Temporal} templates cover relative occurrences of events using prepositions such as \textit{before}, \textit{after}, and \textit{until}. Another set of templates tests the understanding of time in the day (8AM comes before 2PM), month, or year. We add templates for temporally ordered events (A happened and then B happened, T7) which require the inference of the earliest or latest event.
\underline{Causal} template creation is tricky, since reasoning beyond the \textit{form} is often required. However, with specific controls in place, we can generate accurate causal premise-hypothesis pairs. We use a set of complementary action-verb pairs (e.g. give-take, give-receive, etc.) to describe corresponding actions between two entities and an object (T8). This still results in limited test cases. Exploring knowledge graphs such as ConceptNet and ATOMIC \cite{speer-conceptnet,DBLP:conf/aaai/SapBABLRRSC19} to retrieve appropriate causal action phrases could be a way to tackle this issue.
Most \textsc{Logical} category templates implicitly test the \underline{Relational} (deductive reasoning with relations in text) capability, and hence we add three representative templates. Templates falling under \underline{Numerical} test a basic understanding of counting (T9, T10), addition, subtraction and numerical comparison.
\begin{figure*}[!ht]
\centering
\subfloat{\includegraphics[width=0.45\textwidth,height=0.24\textheight]{figures/heatmap_new.png}}\hfill
\subfloat{\includegraphics[width=0.49\textwidth, height=0.24\textheight]{figures/histogram_new.png}}
\caption{(a) (Best viewed in color) For each of the 17 reasoning capabilities, we show average accuracy for each model, (b) For 3 models, we show a histogram of number of templates across 5 accuracy bins.
}
\label{fig:histograms}
\end{figure*}
\\\noindent
\textbf{Knowledge. }~~~For \underline{Taxonomic}, we require templates where a taxonomic hierarchy (external to the text) is implicitly needed to infer the hypothesis. Templates either require inferring that A is a type of B (``P: A has some properties. H: A is a type of B.'') or utilise that information (similar to \newcite{NEURIPS2020_e992111e}) (``P: B has property P. H: A has property P.''). We collect a set of properties of common flowers\footnote{\url{https://bit.ly/3hu2Psn}}, birds, fishes, and mammals from Wikipedia to generate the templates. The scope of \underline{World} templates is vast. Here, we create templates that specifically test basic knowledge of geography (city-country pairs, T11) and famous personalities (Nobel prize winners and their contributions\footnote{\url{https://bit.ly/2RkIOdk}}), along with the understanding of some well-known concepts such as \textit{speed} (\textit{speed} decreases when brakes are applied) and popularity on social media. \\\noindent
\textbf{Pragmatic. }
For Pragmatic, we add templates along the lines of \newcite{jeretic-etal-2020-natural}. For \underline{Pre-supposition}, templates test the existence of objects (T12), occurrence of events, aspectual verbs (T13), and quantifiers. \underline{Implicature} templates are constructed by following Grice's cooperative principle (Maxims of quality, quantity, and relevance) \cite{grice1975logic}. Most Implicature templates also require other capabilities such as quantifiers (T14), Boolean (T15), and Numerical (T16).
\section{Benchmarking NLI systems}
We analyze BERT-base (uncased), DistilBERT-base (uncased) and RoBERTa-large (cased) \cite{devlin2019bert,liu2019roberta} fine-tuned on MultiNLI (referred to as RoBERTa). We also observe the effects of adversarial training using RoBERTa-large fine-tuned on the Adversarial NLI dataset (RoBERTa-ANLI; \cite{nie2019adversarial}) and of a larger model, DeBERTa-large \cite{he2020deberta}. For lack of space, we include a short summary of observations from RoBERTa-ANLI and DeBERTa in the Appendix. For easy reproduction, we use the MNLI fine-tuned models publicly available from the Huggingface Transformers repository\footnote{\url{https://github.com/huggingface/transformers}} \cite{Wolf2019HuggingFacesTS}.
\paragraph{CheckListNLI Dataset.} We created a total of 194 templates spanning all 17 capabilities discussed in \cref{subsec:taxinli}. For each template (except {\sc Knowledge}), we generate 1000 examples by careful instantiation of the placeholders present in the template. Since the {\sc Knowledge} category involves collecting facts from other sources, we generate 100 examples per template. The dataset will be released upon acceptance. We ask two independent annotators to annotate the NLI label for 5 random examples from each template (970 examples). The average Fleiss' $\kappa$ of $0.81$ (which coincides with Cohen's $\kappa$ for two annotators) shows very high inter-annotator agreement.
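The two-annotator agreement can be computed with a standard Cohen's $\kappa$. A self-contained sketch follows; the labels and counts here are illustrative toy data, not the study's annotations.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labelled independently
    # according to their empirical label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[l] * counts_b[l] for l in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

ann_1 = ["entail", "entail", "neutral", "contradict", "entail"]
ann_2 = ["entail", "neutral", "neutral", "contradict", "entail"]
kappa = cohens_kappa(ann_1, ann_2)  # 0.6875 on this toy sample
```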
\subsection{Observations and Analysis}
\begin{table}[]
\small
\centering
\begin{tabular}{cccccc}
\toprule
Dataset & BERT & DistilBERT & RoBERTa \\
\midrule
MNLI-test & 84.5 & 82.2 & 90.2 \\
\midrule
CheckListNLI & 59.4 & 54.6 & 68.2 \\
\bottomrule
\end{tabular}
\caption{Average accuracy on MNLI-test set and CheckListNLI.}
\label{tab:acc_aggregate}
\end{table}
Table~\ref{tab:acc_aggregate} shows the accuracy on the MNLI test set and the CheckListNLI dataset for all models. As on MNLI, RoBERTa clearly outperforms BERT and DistilBERT on \textsc{CheckListNLI}. Further, we analyze the capability-wise and intra-template performance of BERT, DistilBERT, and RoBERTa.
\paragraph{Capability-wise Performance.} The capability-wise average accuracies of the models are shown in Figure \ref{fig:histograms}(a). We observe that all models perform well on the Lexical, Syntactic, and Presupposition capabilities. Within the Logical category, the results are comparatively poor and inconsistent across both the capability and model dimensions. The same holds for the Knowledge categories and the Implicature templates.
For further analysis, we mark a template as \textit{passed} if the model's accuracy is above $80\%$, as \textit{unsure} if the accuracy is in the middle bins ($20$-$80\%$), and as \textit{failed} if it is below $20\%$. For \underline{negation}, all models fail on the template containing ``but not'' (P: Janet, but not Stephen, is a dancer. H: Stephen is a dancer.). For \underline{boolean}, we observe that BERT and DistilBERT are \textit{unsure} on ordered resolution, whereas RoBERTa is biased towards the entailment label (\textit{unsure} for contradiction). Moreover, for RoBERTa, the bias shifts towards contradiction with the addition of ``not'' in the hypothesis even if the correct label is entailment (P: Margaret and Robert are from America and Russia respectively. H: Margaret is not from Russia.). For \underline{comparative}, all models fail on a template with insufficient information (P: Philip is more handsome than Frances. Philip is more handsome than Kevin. H: Kevin is more handsome than Frances.). BERT and DistilBERT are \textit{unsure} when reasoning about a hypothesis in the presence of a superlative adjective in the premise (P: Among Emily, Daniel, and Joseph, the bravest is Daniel. H: Emily is braver than Daniel.). DistilBERT fails on a template (P: John is taller than Mia. H: Mia is taller than John.) when the placeholders are reversed. Such perturbations are more common for \underline{Quantifier}: BERT and DistilBERT fail when ``all'' is replaced by ``some'' in the hypothesis.
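The pass/unsure/fail triage used above is just a binning of per-template accuracy. As a sketch (thresholds from the text; the function name is ours):

```python
def triage(template_accuracy, fail_cut=0.20, pass_cut=0.80):
    """Bin a template's accuracy into the three verdicts used in the analysis."""
    if template_accuracy < fail_cut:
        return "failed"
    if template_accuracy > pass_cut:
        return "passed"
    return "unsure"  # middle bins: the model is inconsistent on the phenomenon

verdicts = {acc: triage(acc) for acc in (0.05, 0.50, 0.95)}
# -> {0.05: 'failed', 0.5: 'unsure', 0.95: 'passed'}
```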
Within \underline{numerical}, RoBERTa seems better at counting; however, it fails when the hypothesis refers to an incorrect count. BERT and DistilBERT seem \textit{unsure} on all counting-related templates. All models struggle with templates related to addition and subtraction (often showing label bias). Under \underline{Spatial}, all models struggle with cardinal directions and spatial relations (left, right). Interestingly, RoBERTa fails at spatial distance comparisons (while being able to compare numbers of coins under Numerical). Within \underline{Temporal}, all models are unable to compare year of birth and time of the day. A surprising observation is that BERT is sensitive to the lexical substitution of ``before'' (``after'') with ``earlier than'' (``later than''). RoBERTa is able to accurately reason about ``A happened before/after B'' over two to three events, whereas BERT and DistilBERT results are \textit{unsure}. All models fail to detect the ``first'' or the ``last'' event in a sequence. Within \underline{coreference}, RoBERTa is able to accurately resolve male and female names. Lastly, for \underline{Causal} templates, RoBERTa seems more accurate than both BERT and DistilBERT. For \underline{knowledge} templates, all models consistently suffer. On probing \underline{implicature} templates, we observe that models vary between logical (P: Silverware and plate lie on the table. Barbara asked for the plate. H: Barbara also asked for the silverware.; predicting neutral over contradiction) and implicative (P: Some of the balls are purple in colour. H: All of the balls are purple in colour.; predicting contradiction over neutral) readings, depending on the template.
\paragraph{Intra-Template Performance.} In Figure \ref{fig:histograms}(b), we plot the histogram of the number of templates across different accuracy bins. The middle bins (20-80) indicate that models are \textit{unsure} on the tested phenomena. This happens most often for BERT (58/194) and DistilBERT (52/194), compared to RoBERTa (28/194). We analyzed the bias of BERT across the vocabulary of placeholders. We examine the template P: \{NAME1\}, but not \{NAME2\}, is a \{PROFESSION\}. H: \{NAME2\} is a \{PROFESSION\}. Template accuracy varies highly when the profession is fixed to engineer ($0\%$) or dancer ($9\%$) vs. doctor ($80\%$) and professor ($92\%$). Compared to professions, we see only limited variation when names are restricted to male vs. female names. Similar effects are seen for adjectives in P: \{NAME1\} is \{ADJ\}. \{NAME2\} is \{COM ADJ\}. H: \{NAME2\} is \{COM ADJ\} than \{NAME1\}. Template accuracy varies when the adjective is fixed to bigger ($0\%$) or sweeter ($16\%$), vs. creepier ($100\%$).
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth, height=0.17\textheight]{figures/placeholders.png}
\caption{Figure showing the regression coefficients for common placeholders for all three models.}
\label{fig:placeholders}
\end{figure}
To dive deeper, we first analyze the effect of placeholders on template accuracy using linear regression as a surrogate model for feature importance. The feature vector is a one-hot representation created by concatenating placeholder indicators, the top 20 other words (using bag-of-words), and the template label.
We show the coefficients for placeholders in Fig. \ref{fig:placeholders} (rare placeholders are omitted).
Placeholders \textcolor{darkblue}{COLOR} and \textcolor{darkblue}{ACTION} have high positive coefficients, as they co-occur with quantifier, syntactic, and pre-supposition templates where the models perform well. Similarly, the high negative coefficient for \textcolor{darkblue}{YEAR} is due to the models being unable to compare year of birth. Placeholders \textcolor{darkblue}{MALE NAME} and \textcolor{darkblue}{FEMALE NAME} turn out to be interesting, having negative coefficients for BERT and DistilBERT and near-zero coefficients for RoBERTa. This is intuitive, as RoBERTa is better at resolving gendered coreferences. Interestingly, \textcolor{darkblue}{NAME} is more negative for BERT, showing the hidden effect that varying names affects BERT more than RoBERTa. Similarly, models perform decently on templates involving comparatives (\textcolor{darkblue}{COM ADJ}) while struggling on templates involving superlatives (\textcolor{darkblue}{SUP ADJ}).
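The surrogate regression's input can be sketched as follows: each generated example becomes a binary indicator vector over placeholders (and frequent words), which is then fit against template accuracy. The feature list below is illustrative, not the paper's full feature set.

```python
def one_hot_features(example_terms, vocabulary):
    """Binary indicator vector: 1.0 iff the vocabulary term occurs in the example."""
    present = set(example_terms)
    return [1.0 if term in present else 0.0 for term in vocabulary]

vocabulary = ["COLOR", "ACTION", "YEAR", "MALE NAME", "FEMALE NAME"]
x = one_hot_features(["COLOR", "YEAR"], vocabulary)
# -> [1.0, 0.0, 1.0, 0.0, 0.0]
```

Fitting a linear model on such vectors yields one coefficient per placeholder, like those plotted in Fig.~\ref{fig:placeholders}.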
This behavioral analysis gives an indication that RoBERTa-large may indeed be more robust and accurate, but the inter-template inconsistencies beg for further exploration.
\section{Consulting Humans to Quantify Model Inconsistency}
\label{sec:human}
The inconsistencies observed for both BERT and RoBERTa raise the question of how to quantify progress towards models with more predictable behavior\footnote{Training machine learning models to predict behavior adds more confounders that can affect the analysis.}. Inspired by the recent XAI literature \cite{gilpin2018explaining,lipton2018mythos,DBLP:journals/corr/abs-1902-00006}, we design a human study where humans are shown different types of post-hoc behavioral information and asked to predict the system's behavior on new examples.
This premise presents us with many dimensions of control, which may affect the outcome of the study: 1) the level of abstraction of behavioral information, 2) example-based (local) vs. global summary, 3) interface design (or presentation of such information), 4) baseline explanation, 5) choice of test examples, 6) task (verification, simulation or counterfactual), 7) choice of participants.
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth, height=0.3\textheight]{figures/landingpage4.png}
\caption{Landing Page Instructions to participants. Study in \url{https://bit.ly/3huowZG}}
\label{fig:taskif}
\end{figure}
\paragraph{Local Explanations and Interface Design.} From multiple pilot studies using global behavioral summaries (such as template-wise accuracy scores) and relevant template-level local summaries\footnote{Detailed in Appendix. Current study (Stage B1): \url{checklist-nli.herokuapp.com/A}}, we observe that global template-wise accuracies impose a large cognitive load on participants and are hard to comprehend (and predict) without knowledge of the lexicon that the keywords represent (owing to intra-template inconsistencies). Hence, in this study, we focus on providing local example-based explanations. For the explanation interface design, we follow \citet{DBLP:journals/corr/abs-1902-00006}. Through various pilot studies, the authors found that the use of three intuitive boxes -- namely, the input box, the explanation box, and the question box -- makes the information easier to follow. We observe that this vastly improves over our earlier pilot studies based on a Google Forms questionnaire.
\paragraph{Test Example Selection and Baseline.} We select 25 test templates from CheckListNLI carefully, balancing templates representing different accuracy buckets and ensuring the inclusion of multiple capability templates. For each template, we show 5 random test examples. As a baseline, we consider example-based LIME \cite{ribeiro2016should} explanations. For each test example, five nearest-neighbor examples are chosen, where the top three attended words (from the premise and the hypothesis) are highlighted using the LIME output. The variations are created by varying where the nearest neighbors come from: in Stage 1, we choose them from the MultiNLI validation set, and in Stage 2 from the CheckListNLI dataset (barring the corresponding test template's examples). To calculate nearest neighbors, we use the underlying system's (BERT/RoBERTa) final hidden-layer embedding (corresponding to the \texttt{[CLS]} token) and cosine distances.
\paragraph{Task, Metrics and Participants.} In this study, we restrict ourselves to \textit{simulation} questions, where participants are asked to simulate (\textit{anticipate}) an underlying blackbox system's prediction (instructions in Fig.~\ref{fig:taskif}) given the explanation and the input. Since Transformers often show different types of bias and it is not known whether they follow human reasoning, we track two metrics: 1) prediction accuracy, 2) mutual agreement score. Consuming such examples and generalizing from explanations on nearest neighbors requires a certain amount of analytical reasoning skill. Hence, instead of going to crowd-sourced platforms, we choose a total of 10 International Linguistics Olympiad participants\footnote{\url{https://www.ioling.org/}}. Such participants are trained in analytical text-based puzzle solving, but not in formal Linguistics or Logic.
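The mutual agreement metric is the mean pairwise agreement over participants' answer sheets. A minimal sketch follows; the helper name and the toy answer sheets are ours, not the study's data.

```python
from itertools import combinations

def mutual_agreement(answer_sheets):
    """Mean pairwise agreement rate over all pairs of participants.

    answer_sheets -- one list of predicted labels per participant,
                     all covering the same questions in the same order.
    """
    scores = [
        sum(x == y for x, y in zip(a, b)) / len(a)
        for a, b in combinations(answer_sheets, 2)
    ]
    return sum(scores) / len(scores)

sheets = [
    ["E", "E", "N", "C"],
    ["E", "N", "N", "C"],
    ["E", "E", "N", "N"],
]
agreement = mutual_agreement(sheets)  # (0.75 + 0.75 + 0.5) / 3 = 2/3
```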
\subsection{Findings and Observations}
\begin{figure}[!ht]
\centering
\includegraphics[width=\columnwidth]{figures/humanstudy.png}
\caption{Average Accuracy and Mutual pairwise agreement for participants (out of 125).}
\label{fig:study2}
\end{figure}
In Figure \ref{fig:study2}, we report results from Stages 1 and 2 for BERT (referred to as B1, B2) and Stage 2 for RoBERTa (R2). The average prediction accuracy of the participants increases from $58.7\pm6.8$ (out of 125) to $61.4\pm6.8$ from Stage B1 to B2. Similarly, the average mutual agreement among participants increases from $74.75$ to $79.8$, showing that nearest neighbors from CheckListNLI (representing disentangled phenomena) enable better explanations. Upon interviewing participants, some mentioned that their responses in stage B1 were often ``random''. This is intuitive, as MNLI examples are often quite complex, with long sentences. So, we repeat only Stage 2 for RoBERTa (R2). For R2, we clearly see a $12.5\%$ improvement in prediction accuracy with a $2.62\%$ improvement in agreement score. This indicates that even though inconsistencies exist for both models, participants were able to anticipate RoBERTa's behavior better.
We also recorded responses about the different stages of the study from participants. Most participants found the nearest-neighbor examples most relevant for B2 (5 out of 8) and R2 (4 out of 8). The LIME-based highlights and predicted labels were both useful in B2 and R2. A participant commented that ``The highlighted words and predictions were very helpful. Relied completely on both.'' Interestingly, the participants were only told that the systems in B2 and R2 are different, without any further detail being revealed. Still, most people ($60\%$) found it easier to predict the system's behavior in R2. Participants mentioned that the task interface was easy to navigate and that they had no difficulty understanding the instructions.
\section{Conclusion}
Following the recent XAI literature \cite{jacovimiller21}, we aim to quantify progress towards more predictable natural language understanding models (especially PTLMs). To this end, we select the NLI task, which requires reasoning, and conduct a detailed behavioral analysis of state-of-the-art NLI systems. We adapt and extend a recently proposed taxonomy for NLI (\textsc{TaxiNLI}). Through a templated test-suite (194 templates, 17 reasoning types), we observe that for both BERT and RoBERTa, model inconsistencies can be found both across templates (i.e., logical and lexical perturbations of templates) and within templates while varying only the lexicon. Furthermore, we design a human study where we ask humans to predict model behavior given behavioral information about NLI systems. A $12.5\%$ increase in human prediction accuracy for RoBERTa over BERT provides an indication that, despite fine-grained inconsistencies, RoBERTa is more predictable than BERT.
Our work shows how behavioral information may help quantify progress towards systems with more predictable (therefore trusted; \cite{jacovimiller21}) behavior.
\section*{Ethics Statement}
As our work involves human study of the behavior of a blackbox NLI system, we took an Internal Review Board (IRB) approval from our organization for the study and asked for all necessary consent. A consent form was shared with all participants, and all participants formally agreed to participate by electronically signing the form. To the best of our knowledge we ensured that the study did not expose them to any possible harmful content (in text, images or other forms). We also did not collect or share any personally identifiable information.
\section*{Acknowledgement}
We would like to thank Pratik Joshi for contribution to coreference templates. We would also like to thank Sebastian Santy, Saujas Vadguru and Aalok Sathe for attempting and providing useful insights during human study pilots.
Bialgebroids arise as the endomorphisms of fiber functors from certain tensor categories to a category of bimodules over a base algebra. For example, bialgebras are bialgebroids over a one-dimensional base algebra, while weak bialgebras are bialgebroids over a separable base algebra. Hopf algebroids are bialgebroids with antipodes: various twisted Hopf algebras are also Hopf algebroids over a one-dimensional base algebra.
Like bialgebras and their actions/coactions, bialgebroids also act and coact on noncommutative algebras in a more general setting suitable to mathematical physics \cite{BB}. Initially
appearing in the analytic theory of subfactors, the notion of depth two has been widened to Frobenius extensions in
\cite{KN} and to arbitrary subalgebras
in \cite{KS}. As shown in \cite{KS}
and later papers, depth two is a \textit{Galois} theory of actions and coactions for
bialgebroids. In this paper, we widen the definition of depth
two algebra extension in \cite{KS} to include Hopf $H$-Galois
extensions where $ H$ is an infinite-dimensional Hopf algebra,
such as the universal enveloping algebra of a Lie
algebra or an infinite-dimensional group
algebra. Although
we lose the dual theory of finite projective, left and right bialgebroids
over the centralizer in \cite{KS}, we retain the right
bialgebroid $T$ and its role in coaction in \cite{LK2005}. We then obtain the main theorem of depth
two Galois theory with no finiteness conditions (Theorem~\ref{th-main}): an algebra
extension $A \| B$ is right depth two with $A_B$
a balanced module if and only if
$A \| B$ is $T$-Galois with respect to
a left projective right $R$-bialgebroid $T$, for some base ring $R$
which commutes within $A$
with the subring of
coinvariants $B$.
\subsection{Depth two preliminaries}
By algebra we mean a unital associative
algebra over a commutative ring $k$,
and by algebra extension $A \| B$, we mean
any identity-preserving algebra homomorphism
$B \rightarrow A$, proper if $B \rightarrow A$ is monic.
In either case, the natural
bimodule ${}_BA_B$ and its properties
define the properties of the extension
from this point of view. For example,
we say $A \| B$ is right faithfully flat
if $A_B$ is faithfully flat, in which
case one notes the
extension $A \| B$ is proper.
An algebra extension $A \| B$ is \textit{left depth two (D2)} if
its tensor-square $A \o_B A$ as a natural $B$-$A$-bimodule
is isomorphic to a direct summand of a direct sum
of the natural $B$-$A$-bimodule $A$: equivalently, for some set
$I$, we have
\begin{equation}
\label{eq: D2}
A \o_B A \oplus * \cong A^{(I)},
\end{equation}
where $A^{(I)}$ denotes the coproduct (weak direct
product, direct sum $\sum_{i \in I} A_i$,
each $A_i = A$) of $A$ with itself indexed by
$I$ and consists of elements $(a_i)_{i\in I}$
where $a_i \in A$ and $a_i = 0$ for all but finitely many indices
(almost everywhere, a.e.).
An extension $A \| B$ is \textit{right D2} if eq.~(\ref{eq: D2}) holds instead as natural
$A$-$B$-bimodules. An algebra extension is of course D2 if it is both left D2 and right D2.
For example,
if $A \| B$ is a projective algebra
(so $B$ is commutative, maps into the center of $A$
and the module $A_B$ is projective), then
$A \| B$ is D2, since $A_B \oplus * \cong
B^{(I)}$ for index set $I$, so
we may tensor this by $- \o_B A_A$
to obtain eq.~(\ref{eq: D2}).
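Spelled out (a routine check, included for completeness): projectivity of $A_B$ gives a split injection into a free module, and applying $-\otimes_B A$ preserves the splitting:

```latex
\[
A_B \oplus * \cong B^{(I)}
\;\Longrightarrow\;
(A \otimes_B A) \oplus (* \otimes_B A) \;\cong\; B^{(I)} \otimes_B A \;\cong\; A^{(I)}.
\]
```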
As another example, suppose $H$ is a Hopf algebra
of finite or infinite dimension over a field,
and $A$ is a right $H$-comodule algebra with
$B$ equal to the subalgebra of coinvariants.
If $A \| B$ is an $H$-Galois extension,
then $A \| B$ is right D2 since $A \o_B A \cong
A \o H$ via the Galois $A$-$B$-isomorphism,
$x \o y \mapsto x y_{(0)} \o y_{(1)}$, where $y_{(0)} \o y_{(1)}$
denote finite sums of elements equal to the
value in $A \o H$ of the coaction on $y \in A$. Let $I$ be
in one-to-one correspondence with a basis for $H$.
Then $A \o_B A \cong A^{(I)}$. If $H$ has a bijective
antipode, use the equivalent Galois $B$-$A$-bimodule isomorphism
given by $x \o y \mapsto x_{(0)} y \o x_{(1)}$ to conclude that $A \| B$
is left D2.
If the index set $I$ is finite, then
the algebra extension
$A \| B$ is right or left D2 in the earlier
sense of \cite{KN, KS, LK2003, LK2005}.
The lemma below notes that the
earlier definition is recovered for any f.g.\
extension.
\begin{lemma}
If $A \| B$ is right or left D2 and either of the natural modules ${}_BA$ or $A_B$ is
finitely generated, then $I$
in eq.~(\ref{eq: D2}) may be
chosen finite.
\end{lemma}
\begin{proof}
Suppose $A \| B$ is right D2. If either
${}_BA$ or $A_B$ is f.g., then ${}_AA\o_B A_B$ is f.g.
It follows that ${}_AA \o_B A_B$
is isomorphic to a direct summand
of a finite direct sum $A^n \subseteq A^{(I)}$.
The argument is entirely similar starting
with a left D2, left or right f.g.\ extension.
More explicitly using the $A$-$B$-epi $f$
and $A$-$B$-monic $g$ defined below, if $A \o_B A =
Aw_1 B + \cdots + Aw_N B$ for $N$ elements
$w_j \in A \o_B A$, then
$g(w_j) = (a_{ij})_{i \in I}$
has finite support on $I_j \subset I$,
then $I' = I_1 \cup \cdots \cup I_N$
is finite and $g$ corestricts, $f$
restricts to $A^{I'}$ so that $f \circ g
= \mbox{\rm id}_{A \o_BA}$.
\end{proof}
In analogy with projective bases for projective modules,
we similarly develop D2 quasibases for depth two extensions.
\begin{prop}
An algebra extension is right D2 if and only if
there is an index set $I$ and sets of elements $u_i
= u^1_i \o_B u^2_i \in (A \o_B A)^B$, $\gamma_i \in \End {}_BA_B$, both indexed by $I$, such that for each
$a \in A$, $\gamma_i(a) = 0$ a.e.\ on $I$, and
\begin{equation}
\label{eq: rd2qb}
x \o_B y = \sum_{i \in I} x\gamma_i(y)u^1_i \o_B u^2_i
\end{equation}
for all
$x,y \in A$.
\end{prop}
\begin{proof}
Let $\pi_i: A^{(I)} \rightarrow A$ and $\iota_i: A \rightarrow A^{(I)}$ be the usual projection and inclusion mappings of a coproduct, so that $\pi_j \circ \iota_i
= \delta_{ij} \mbox{\rm id}_A$ and $\sum_{i \in I} \iota_i \circ \pi_i = \mbox{\rm id}$ on $A^{(I)}$.
Given a right D2 extension $A \| B$, there is
an $A$-$B$-split epimorphism $f: A^{(I)} \rightarrow A\o_B A$,
say with section $g : A\o_B A \rightarrow A^{(I)}$.
Then $f \circ g = \mbox{\rm id}_{A \o_B A}$.
Define $f_i = f \circ \iota_i \in \mbox{\rm Hom}\, (A, A \o_B A)$
and define $g_i = \pi_i \circ g \in \mbox{\rm Hom}\, (A \o_B A, A)$, both hom-groups of the natural $A$-$B$-bimodules. Then
$\sum_{i \in I} f_i \circ g_i = \mbox{\rm id}_{A \o_B A}$. But
\begin{equation}
\mbox{\rm Hom}\, (A, A \o_B A) \cong (A \o_B A)^B
\end{equation}
via $f \mapsto f(1_A)$, and
\begin{equation}
\mbox{\rm Hom}\, (A \o_B A, A) \cong \End {}_BA_B
\end{equation}
via $F \mapsto F(1_A \o -)$ with inverse
$\alpha \mapsto (x \o y \mapsto x\alpha(y))$.
In this case, there are $\gamma_i \in \End {}_BA_B$
such that $\gamma_i(a) = g_i(1 \o a)$, all $a \in A$, and $u_i \in (A \o_B A)^B$ such that
$f_i(1_A) = u_i$, for each $i \in I$.
Note that $\gamma_i(a) = 0$ a.e.\ on $I$,
since $g_i(1 \o a) = \pi_i(g(1 \o a))$ is
zero a.e.\ on $I$.
It follows from $\mbox{\rm id}_{A \o_BA} = \sum_{i \in I} f_i \circ g_i$ that
$$ x \o y = \sum_{i \in I} x\gamma_i(y)u_i. $$
Conversely, given right D2 quasibases $\{ \gamma_i \}_{i \in I}$, $\{ u_i \}_{i \in I}$ as above,
define an epimorphism $\pi: A^{(I)} \rightarrow A \o_B A$
of natural $A$-$B$-bimodules by
\begin{equation}
\pi[ (a_i) ] = \sum_{i \in I} a_i u_i
\end{equation}
with $A$-$B$-bimodule section $\iota: A\o_B A \hookrightarrow A^{(I)}$ given by
\begin{equation}
\iota(x \o y) = (x\gamma_i(y))_{i \in I}
\end{equation}
well-defined in $A^{(I)}$ since for
all $a \in A$, $\gamma_i(a) = 0$ a.e.\ on $I$.
\end{proof}
A similar proposition holds for a left D2 extension $A \| B$
and left D2 quasibase
$t_i \in (A \o_B A)^B$ and $\beta_i \in \End {}_BA_B$
for each $i \in I$. In this case,
\begin{equation}
\label{eq: ld2qb}
\sum_{i \in I} t^1_i \o_B t^2_i\beta_i(x)y = x \o_B y,
\end{equation}
for all $x,y \in A$,
which is equivalently expressed as $a \o 1 = \sum_i t_i \beta_i(a)$
for all $a \in A$,
where again $\beta_i(a) = 0$ a.e.\ on the index set $I$.
We fix our notation for right and left D2 quasibases
throughout the paper. In addition, we
denote $T = (A \o_B A)^B$ and (less importantly) $S = \End {}_BA_B$.
For example, left and right D2 quasibases are obtained as follows
for group algebras $A = k[G] \supseteq B = k[N]$ where $G$ is a group, possibly of infinite order,
$N$ is a normal subgroup of possibly infinite index, and
$k$ is a commutative ring.
Let $\{ g_i \}_{i \in I}$ be a transversal of $N$ in $G$.
Define a projection onto the $i$'th coset by $\gamma_i(a) = \sum_{j \in J} \lambda_{ij} g_i n_j$, where $a \in A$ is written in the form
$a = \sum_{i \in I} \sum_{j \in J} \lambda_{ij} g_i n_j$,
where $J$ is an indexing set in one-to-one correspondence with $N$
and $k$-coefficients $\lambda_{ij} = 0$ a.e. on $I \times J$.
In this case for any basis element $g \in G \hookrightarrow A$ all but
one of the projections $\gamma_i$ vanish on $g$: if $g$ is in the coset
$Ng_j$, then $\gamma_j(g) = g$. Of course, the $\gamma_i$
are $B$-$B$-bimodule projections since $gN = Ng$ for all $g \in G$.
It is then easy to see that
\begin{equation}
1 \o_B g = \sum_{i \in I} \gamma_i(g) g_i^{-1} \o_B g_i
\end{equation}
whence eq.~(\ref{eq: rd2qb}) follows by choosing $u_i = g_i^{-1} \o_B g_i$. Note that $u_i \in (A \o_B A)^B$
since $ng_i^{-1} \o_B g_i = g_i^{-1} \o_B g_i n$
for $n \in N$.
Similarly a left D2 quasibase is given by $\{ \gamma_i \}$
and $\{ g_i \o_B g_i^{-1} \}$ since $g_i N = N g_i$.
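As a quick sanity check of this example (our own verification, in the notation above, and not part of the cited results), take $k[S_3] \supseteq k[A_3]$ with transversal $g_1 = e$, $g_2 = (12)$, so that $u_1 = e \o_B e$ and $u_2 = (12) \o_B (12)$. On an element $y = n(12)$ with $n \in A_3$, only $\gamma_2$ survives, and eq.~(\ref{eq: rd2qb}) reads:

```latex
% right D2 quasibase equation checked on y = n(12), n in A_3,
% where \gamma_1(y) = 0 and \gamma_2(y) = y:
\begin{align*}
\sum_{i} x\,\gamma_i(y)\, u^1_i \o_B u^2_i
  &= x\, n\,(12)\,(12)^{-1} \o_B (12)
   = x\, n \o_B (12) \\
  &= x \o_B n\,(12) = x \o_B y ,
\end{align*}
```

using in the last step that $n \in B = k[A_3]$ slides across the tensor product over $B$.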
We end this section with a proposition
collecting various necessary conditions
on a right depth two algebra extension.
\begin{prop}
Suppose an algebra extension
$A \| B$ is right D2 with centralizer $R$. Then the following hold:
\begin{enumerate}
\item for each two-sided ideal $I$ in $A$, $(I \cap R)A \subseteq A(I \cap R)$;
\item ${}_BA$ is projective
if $A \| B$ is moreover a split extension.
\item for some indexing set $I$,
$\End {}_BA \oplus * \cong A^I$
as natural $B$-$A$-bimodules.
\item For each H-separable extension $B \| C$, or equivalently an extension satisfying
\begin{equation}
\label{eq: extH-sep}
B \o_C B \oplus \, * \cong B^{(J)} \
\mbox{\rm natural $B$-$B$-bimodules}
\end{equation}
for some index set $J$, the
composite algebra extension $A \| C$
is right D2.
\end{enumerate}
\end{prop}
\begin{proof}
The proof of each statement follows in the order above.
\begin{enumerate}
\item Given $x \in I \cap R$ and $a \in A$, apply
eq.~(\ref{eq: rd2qb}) and a right D2 quasibase: $xa = \sum_i \gamma_i(a) u^1_i x u^2_i$.
Note that for each $i$, $u^1_i x u^2_i
\in I \cap R$.
\item Given a $B$-$B$-bimodule projection
$p: A \rightarrow B$, apply $p \o_B \mbox{\rm id}_A$ to
eq.~(\ref{eq: rd2qb}) with $x = 1$, obtaining
$y = \sum_{i \in I} p(\gamma_i(y)u^1_i)u^2_i$ for all $y \in A$, which shows
${}_BA$ has dual bases.
\item Note that $\mbox{\rm Hom}\, ({}_AA \o_B A, {}_AA) \cong \End {}_BA$ as $A$-$A$-bimodules
via $F \mapsto F(1 \o -)$. Apply
$\mbox{\rm Hom}\, ({}_A - ,{}_A A)$ to
${}_AA \o_B A_B \oplus * \cong {}_AA^{(I)}_B$, noting that
$\mbox{\rm Hom}\, (A^{(I)}, A) \cong A^I$ (the
direct product)
as $B$-$A$-bimodules.
\item Apply the functor $A \o_B - \o_B A$ from
$B$-$B$-bimodules into $A$-$B$-bimodules
to the isomorphism~(\ref{eq: extH-sep}).
Then ${}_AA \o_C A_B \oplus \, * \, \cong
{}_AA \o_B A_B^{(J)}$. Clearly
$A \o_B A^{(J)} \oplus * \cong (A^{(I)})^{(J)} \cong A^{(I \times J)}$
as $A$-$B$-bimodules. Whence
the composite extension
$A \| C$ satisfies the right D2 condition \begin{equation}
\label{eq: same}
A \o_C A \oplus \, * \, \cong A^{(I \times J)}.
\end{equation}
Finally, $J$ in eq.~(\ref{eq: extH-sep}) may be replaced by the finite support of the image of $1 \otimes 1$ in $A^{(J)}$, under a split $A$-$A$-monomorphism $A \o_B A \rightarrow A^{(J)}$. Whence an algebra extension
satisfying eq.~(\ref{eq: extH-sep}) is
H-separable \cite{LK2003}.
\end{enumerate}
\end{proof}
Similar statements hold for a left D2 extension, one of which results in
\begin{cor}
If $A \| B$ is D2, then the centralizer
$R$ is a normal subalgebra: i.e.,
for each two-sided ideal $I$ in $A$,
the contraction of $I$ to $R$ is $A$-invariant:
\begin{equation}
A(I \cap R) = (I \cap R)A
\end{equation}
\end{cor}
For example, any trivial extension $A \| A$ is D2, in which case
$R$ is the center of $A$, which is of course a normal subalgebra.
\section{The bialgebroid $T$ for a depth two extension}
In this section we establish that if $A \| B$ is a right or left
D2 algebra extension, then the construct $T = (A \o_B A)^B$,
whose acquaintance we made in the last section, is a right bialgebroid
over the centralizer $C_A(B) = R$. Moreover, $T$ is right or left
projective as a module over $R$ according to which depth two condition,
left or right, respectively, we assume.
\begin{lemma}
Let $T$ be equipped with the natural $R$-$R$-bimodule structure given by
\begin{equation}
\label{eq: Rbimod}
r \cdot t \cdot s = rt^1 \o_B t^2 s
\end{equation}
for each $r,s \in R$ and $t \in T$.
If $A \| B$ is left D2 (right D2), then $T$ is a projective
right (left, resp.) $R$-module.
\end{lemma}
\begin{proof}
This follows from eq.~(\ref{eq: ld2qb}) by restricting to
elements of $T \subseteq A \o_B A$. We obtain
$t = \sum_i t_i \beta_i(t^1)t^2$ where $t_i \in T$. But $\beta_i(t^1) t^2 \in R$
so define elements $f_i \in \mbox{\rm Hom}\, (T_R, R_R)$, indexed by $I$, by
$f_i(t) = \beta_i(t^1)t^2$. Substitution yields $t = \sum_i t_i f_i(t)$,
where $f_i(t) = 0$ a.e.\ on $I$. Whence $T_R$ is projective
with dual basis $\{ t_i \}$, $\{ f_i \}$.
The proof that $A \| B$ is right D2 implies ${}_RT$ is projective follows similarly from eq.~(\ref{eq: rd2qb}).
\end{proof}
The next theorem may be viewed as a generalization of the first statement in \cite[theorem 5.2]{KS}.
\begin{theorem}
\label{th-bi}
If $A \| B$ is right D2 or left D2, then $T$ is a right
bialgebroid over the centralizer $R$.
\end{theorem}
\begin{proof}
The algebra structure on $T$ comes from the isomorphism
$T \cong \End {}_AA \o_B A_A$ via $$t \longmapsto
(x \o_B y \mapsto xt^1 \o_B t^2 y)$$
with inverse $F \mapsto F(1 \o 1)$. The endomorphism algebra structure
on $T$ becomes
\begin{equation}
\label{eq: tee}
tu = u^1 t^1 \o_B t^2 u^2,
\ \ \ 1_T = 1_A \o_B 1_A.
\end{equation}
It follows from this that there is an algebra homomorphism $s_R: R \rightarrow T$
and an algebra anti-homomorphism $t_R: R \rightarrow T$, satisfying a commutativity
condition and inducing an $R$-$R$-bimodule structure on $T$ from the right, given
by ($r,s \in R, t \in T$)
\begin{eqnarray}
s_R(r) & = & 1_A \o_B r \\
t_R(s) & = & s \o_B 1_A \\
s_R (r) t_R(s) & = & t_R(s) s_R(r) = s \o_B r \\
t t_R(r) s_R(s) & = & rt^1 \o t^2 s.
\end{eqnarray}
Henceforth, the bimodule ${}_RT_R$ referred to is
the one above, which is the same as the bimodule in eq.~(\ref{eq: Rbimod}).
An $R$-coring structure $(T,R,\Delta, \varepsilon)$ with comultiplication
$\Delta: T \rightarrow T \o_R T$ and counit $\varepsilon: T \rightarrow R$ is
given by
\begin{eqnarray}
\Delta(t) & = & \sum_{i \in I} (t^1 \o_B \gamma_i(t^2)) \o_R u_i \\
\varepsilon(t) & = & \sum t^1 t^2
\end{eqnarray}
i.e., $\varepsilon$ is the restriction of $\mu: A \o_B A \rightarrow A$
defined by $\mu(x \o y) = xy$ to $T \subseteq A \o_B A$.
The coproduct $\Delta$ is well-defined since for any given
$t \in T$, there are only finitely many nonzero terms on the right.
It is immediate that $\Delta$ is left $R$-linear, $\varepsilon$ is left
and right $R$-linear, and $$(\varepsilon \o_R \mbox{\rm id}_T) \circ \Delta =
\mbox{\rm id}_T = (\mbox{\rm id}_T \o_R \varepsilon) \circ \Delta $$
follows from variants of eq.~(\ref{eq: rd2qb}).
We postpone the proof of coassociativity of $\Delta$
for one paragraph.
Additionally, note that the coproduct and counit are
unit-preserving, $\varepsilon(1_T) = 1_A = 1_R$
and $\Delta(1_T) = 1_T \o_R 1_T$, since $\gamma_i(1_A) \in R$.
We employ the usual Sweedler notation $\Delta(t) = t_{(1)} \o_R t_{(2)}$.
In order to show the bialgebroid identities
\begin{eqnarray}
\label{eq: rightRlin}
\Delta(tr) & = & t_{(1)} \o_R t_{(2)} r \\
\label{eq: timesR}
s_R(r) t_{(1)} \o_R t_{(2)} & = & t_{(1)} \o_R t_R(r)t_{(2)} \\
\label{eq: homo}
\Delta(tu) & = & t_{(1)} u_{(1)} \o_R t_{(2)} u_{(2)}
\end{eqnarray}
it will be useful to know that
\begin{equation}
T \o_R T \stackrel{\cong}{\longrightarrow} (A \o_B A \o_B A)^B \ \ \
t \o_R u \mapsto t^1 \o t^2u^1 \o u^2
\end{equation}
with inverse $$v \mapsto \sum_i (v^1 \o_B v^2 \gamma_i(v^3)) \o_R u_i. $$
Note that the LHS and RHS of eq.~(\ref{eq: rightRlin}) are
the expressions $\sum_i (t^1 \o \gamma_i(t^2r)) \o u_i$
and $\sum_i (t^1 \o \gamma_i(t^2)) \o u_ir$, which both map bijectively into
$t^1 \o 1_A \o t^2 r$ in $(A \o_B A \o_B A)^B$, whence LHS = RHS
indeed.
Similarly, the LHS of eq.~(\ref{eq: timesR}) is
$\sum_i (t^1 \o r\gamma_i(t^2)) \o_R u_i$
while the RHS is $\sum_i (t^1 \o \gamma_i(t^2)) \o_R (u^1_i r \o u^2_i)$,
both mapping into the same element, $t^1 \o_B r \o_B t^2 $.
Hence this equation holds, which gives meaning to the next equation for all $t,u \in T$ (the tensor product over noncommutative
rings is ordinarily not an algebra, cf.\ \cite{BW}).
The eq.~(\ref{eq: homo}) holds because both expressions map isomorphically into
the element $u^1 t^1 \o_B 1_A \o_B t^2 u^2$.
Finally the coproduct is coassociative, $(\Delta \o_R \mbox{\rm id}_T) \circ \Delta =
(\mbox{\rm id}_T \o_R \Delta) \circ \Delta $ since we first note that $$T \o_R T \o_R T \stackrel{\cong}{\longrightarrow}
(A \o_B A \o_B A \o_B A)^B$$ $$t \o u \o v \mapsto
t^1 \o t^2 u^1 \o u^2 v^1 \o v^2.$$
Secondly, $\sum_i \Delta(t^1 \o \gamma_i(t^2)) \o_R u_i$
maps into $t^1 \o 1 \o 1 \o t^2$,
as does $\sum_i (t^1 \o \gamma_i(t^2)) \o_R \Delta(u_i)$,
which establishes this, the last of the axioms
of a right bialgebroid.
The proof that $T$ is a right bialgebroid using a left D2 quasibase instead
is very similar.
\end{proof}
If $A$ and $B$ are commutative algebras where $A$ is $B$-projective,
then the bialgebroid $T$ is just
the tensor algebra $A \o_B A$, $R = A$,
with Sweedler $A$-coring $\Delta(x \o y) =
x \o 1 \o y$ and $\varepsilon = \mu$.
This particular bialgebroid has an antipode $\tau: T \rightarrow T$ given by $\tau(x \o y) = y \o x$
(cf.\ \cite{Lu, PX, KS}).
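As a concrete instance (a standard computation, added here for illustration), take $B = \mathbb{R} \subseteq A = \mathbb{C}$. Then $T = \mathbb{C} \o_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C}$ via $x \o y \mapsto (xy, x\bar{y})$, and under this identification the counit and antipode become

```latex
% Sweedler coring of R \subseteq C, transported along
% x \o y \mapsto (xy, x\bar{y}):
\varepsilon(a,b) = a, \qquad \tau(a,b) = (a,\bar{b}),
```

since $\tau(x \o y) = y \o x \mapsto (yx, y\bar{x}) = (xy, \overline{x\bar{y}})$.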
P.\ Xu \cite{PX} defines a bialgebroid using an anchor map $T \rightarrow \End R$
instead of the counit $\varepsilon: T \rightarrow R$.
The anchor map is a right $T$-module algebra structure on $R$ given
by
\begin{equation}
\label{eq: ract1}
r \triangleleft t = t^1 r t^2,
\end{equation}
for $r \in R, t \in T$. We will
study this and an extended right $T$-module algebra structure on $\End {}_BA$ in the next section.
The counit is the anchor map evaluated
at $1_R$, which is indeed the case above.
\begin{remark}
\begin{rm}
If $I$ is a finite set, a D2 extension
$A \| B$ has a left bialgebroid structure on $S =
\End {}_BA_B$ such that $A$ is left
$S$-module algebra, the left
or right endomorphism algebras are smash
products of $A$ with $S$
and $T$ is the $R$-dual bialgebroid of $S$ \cite{KS}.
In the proofs of these facts, most of the formulas in \cite{KS}
do not make sense if $I$ is an infinite
set.
\end{rm}
\end{remark}
\section{A right $T$-module endomorphism algebra}
We continue in this section with a right depth two extension $A \| B$
and our notation for $T = (A \o_B A)^B$, $R= C_A(B)$, left and right D2 quasibases $t_i, u_i \in T$, $\beta_i, \gamma_i \in S$
where $i \in I$, respectively, in an index set $I$ of possibly infinite cardinality.
Given any right $R$-bialgebroid $T$, recall that a right $T$-module algebra
$A$ is an algebra in the tensor category of right $T$-modules \cite{BW, KS}.
Suppose ${}_AM$ is a left $A$-module. Let $\mathcal{E}$ denote
its endomorphism ring as a module restricted to a $B$-module: $\mathcal{E} = \End {}_BM$.
There is a right action of $T$ on $\mathcal{E}$
given by $f \triangleleft t = t^1 f(t^2 -)$ for $f \in \mathcal{E}$. This is
a measuring action and $\mathcal{E}$ is a right $T$-module
algebra (as defined in \cite{KS, BW}), since
$$ (f \triangleleft t_{(1)})\circ (g \triangleleft t_{(2)}) = \sum_i t^1 f(\gamma_i(t^2) u^1_ig(u^2_i -)) = (f\circ g) \triangleleft t, $$
and $1_{\mathcal{E}} \in \mathcal{E}^T$ is a $T$-invariant, since $\mbox{\rm id}_M \triangleleft t = \mbox{\rm id}_M \triangleleft s_R(\varepsilon(t))$.
The subring of invariants $\mathcal{E}^T$ in $\mathcal{E}$ is $\End {}_AM$ since $\End {}_AM \subseteq \mathcal{E}^T$ is obvious,
and $\phi \in \mathcal{E}^T$ satisfies for $m \in M, a \in A$:
$$ \phi(am) = \sum_i \gamma_i(a) (\phi \triangleleft u_i)(m)
= \sum_i \gamma_i(a) \varepsilon_T(u_i) \phi(m) = a \phi(m).$$
With similar arguments for a left D2 quasibase, we have established:
\begin{theorem}
If $B \rightarrow A$ is right or left D2 and ${}_AM$ is a module, then $\End {}_BM$
is a right $T$-module algebra with invariant subalgebra
$\End {}_AM$.
\end{theorem}
By specializing $M= A$, we obtain
\begin{cor}
If $A \| B$ is D2, then $\End {}_BA$ is a right $T$-module algebra
with invariant subalgebra $\rho(A)$ and right $T$-module subalgebra $\lambda(R)$.
\end{cor}
\begin{proof}
For any $r \in R$, we have $\lambda(r) \triangleleft t = \lambda (r \triangleleft t)$ with respect to the right action of $T$ on $R$ in eq.~(\ref{eq: ract1}) in the previous section. Of course, $\mbox{\rm Hom}\, ({}_AA, {}_AA) \cong \rho(A)$ where we fix the notation for right multiplication, $\rho(a)(x) = xa$ (all $a,x \in A$).
\end{proof}
The right $T$-action on $\End {}_BA$
is identifiable with composition of endomorphisms and homomorphisms under the ring isomorphism $T \cong \End {}_AA \o_B A_A$ and the
$A$-$A$-bimodule isomorphism
$\End {}_BA \cong \mbox{\rm Hom}\, ({}_AA \o_B A, {}_AA)$ via
$f \mapsto (x \o y \mapsto xf(y))$. We leave this remark
as an exercise.
\section{Main theorem characterizing Galois extension}
Given any right $R$-bialgebroid $T$, recall that a right $T$-comodule algebra
$A$ is an algebra in the category of right $T$-comodules
\cite{BW}. If $B$ denotes its subalgebra of coinvariants $A^{\rm co \, T}$, which are
the elements $\delta: x \mapsto x \o_R 1_T$ under the coaction, we
say $A \| B$ is right $T$-Galois if the canonical mapping
$\beta: A \o_B A \rightarrow A \o_R T$ given by $\beta(x \o y) = x y_{(0)} \o y_{(1)}$
is bijective. Note that any $r \in R$ and $b \in B$ necessarily commute
in $A$, since
the coaction is monic and
$$\delta (rb) = \delta(r)\delta(b) = b \o_R s_R(r) = \delta (br) .$$
Among other things, we show in the theorem that if $A \| B$ is right
depth two, then $A$ is a right $T$-comodule algebra
and the isomorphism $A \o_B A \cong A \o_R T$ projects to
the Galois mapping via $A \o_B A \rightarrow A \o_{A^{\rm co \, T}} A$.
If moreover the natural module $A_B$ is faithfully flat (apply faithful flatness to
eq.~(\ref{eq: ff}) below) or balanced,
i.e., the map $\rho$: $B \rightarrow \End {}_EA$ is surjective
where $E = \End A_B$, then $B = A^{\rm co \, T}$.
\begin{theorem}
\label{th-main}
An algebra extension $A \| B$ is right D2 and right balanced
if and only if $A \| B$ is a right $T$-Galois extension for
some left projective right bialgebroid $T$ over some algebra $R$.
\end{theorem}
\begin{proof}
($\Leftarrow$)
Since ${}_RT$ is projective, ${}_RT \oplus * \cong {}_R R^{(I)}$
for some set $I$. Then $A \o_R T \oplus * \cong A^{(I)}$
as $A$-$B$-bimodules, since $R$ and $B$ commute in $A$
and $A \o_R R^{(I)} \cong (A \o_R R)^{(I)}$. But the Galois isomorphism $A \o_B A \stackrel{\cong}{\rightarrow} A \o_R T$,
$\beta(x \o y) = xy_{(0)} \o y_{(1)}$
is an $A$-$B$-bimodule isomorphism, hence
$A \| B$ is right D2.
To see that the natural map $\rho: B \rightarrow \End {}_EA$ is surjective,
we let $F \in \End {}_EA$. Then for each $a \in A$, left multiplication
$\lambda_a \in E$, whence $F \circ \lambda_a = \lambda_a \circ F$.
It follows that $F = \rho_x$ where $x = F(1)$. Since
$B = A^{\rm co \, T}$, it suffices
to show that $x_{(0)} \o x_{(1)} = x \o 1$ under the coaction. For this
we pause for a lemma.
Lemma: Let $R$ be an algebra with modules $M_R$ and ${}_RV$
where $V$ is projective with dual bases $w_i \in V$, $f_i \in \mbox{\rm Hom}\, ({}_RV, {}_RR) = {}^*V$ for some possibly infinite cardinality index set $i \in I$.
If for some $m_j \in M$, $v_j \in V$ and finite index set $J$, we have $\sum_{j \in J} m_j \phi(v_j) = 0$ for each $\phi \in {}^*V$,
then $\sum_{j \in J} m_j \o_R v_j = 0$. This statement
follows of course by substitution of $\sum_{i \in I} f_i(v_j)w_i$
for each $j \in J$.
To see that $x \o 1 - x_{(0)} \o x_{(1)} = 0$, we define for each $\nu \in \mbox{\rm Hom}\, ({}_RT, {}_RR)$, the right $B$-endomorphism $\overline{\nu} \in E$
by $\overline{\nu}(a) = a_{(0)} \nu(a_{(1)})$. Since also $\rho_r \in E$ for
$r = \nu(1_T) \in R$, we compute:
$$ x \nu(1_T) = \rho_{\nu(1_T)} F(1_A) = F(\overline{\nu}(1_A)) =
\overline{\nu}(F(1_A)) = x_{(0)} \nu(x_{(1)}). $$
By lemma then $x \in B$, so that $A_B$ is a balanced module.
($\Rightarrow$) If $A \| B$ is right D2, we have explicit
formulas in the previous section for $T = (A \o_B A)^B$
as a left $R$-projective right bialgebroid over $R = C_A(B)$.
Define a coaction $\delta: A \rightarrow A \o_R T$ by \begin{equation}
\label{eq: coaction}
\delta(a) = \sum_{i \in I}
\gamma_i(a) \o_R u_i.
\end{equation}
We claim that $A$ is a right $T$-comodule algebra,
an argument similar to \cite[5.1]{LK2005} but with infinite index set,
and postpone a sketch of the proof for two paragraphs.
It is clear that $B \subseteq A^{\rm co \, T}$, since $\gamma_i(b) =
b\gamma_i(1)$ and $\sum_i \gamma_i(1)u_i = 1_T$. For the reverse
inclusion, suppose $x \in A$ such that $x_{(0)} \o x_{(1)} = x \o 1_T$.
Note the isomorphism
\begin{equation}
\label{eq: ice}
A \o_R T \stackrel{\cong}{\longrightarrow} A \o_B A
\end{equation}
given by $a \o t \mapsto at^1 \o t^2$,
with inverse
\begin{equation}
\label{eq: beta}
\beta(x \o y) = \sum_i x\gamma_i(y) \o_R u_i, \ \ \beta: A \o_B A
\stackrel{\cong}{\longrightarrow} A \o_R T .
\end{equation}
Since $\sum_i \gamma_i(x) \o u_i = x \o 1_T$,
the image under $\beta^{-1}$ is
\begin{equation}
\label{eq: ff}
1_A \o_B x = x \o_B 1_A.
\end{equation}
Given any $f \in E$, apply $\mu \circ (f \circ \lambda_a \o \mbox{\rm id}_A)$
to this equation obtaining $f(a)x =f(ax) $ for each $a \in A$.
Then $\rho_x \circ f = f \circ \rho_x$, whence $\rho_x \in \End {}_EA$.
Then $\rho_x \in \rho(B)$ since $A_B$ is balanced. Hence $x \in B$.
The Galois condition on the algebra extension $A \| B$ follows immediately from the fact that $\beta$ in
eq.~(\ref{eq: beta}) is an isomorphism. Indeed using the isomorphism
$\beta^{-1}$ as an identification between $A \o_R T$ and $A \o_B A$
is the easiest way to show $\delta$ defines a right $T$-comodule structure
on $A$.
The conditions that $A$ must meet to be a
right $T$-comodule algebra are
\begin{enumerate}
\item an algebra homomorphism $R \rightarrow A$;
\item a right $T$-comodule structure $(A, \delta)$:
$\delta$ is right $R$-linear, $a_{(0)} \varepsilon(a_{(1)}) = a$ for all $a \in A$,
$(\mbox{\rm id}_A \o \Delta) \circ \delta = (\delta \o \mbox{\rm id}_T) \circ \delta$;
\item $\delta(1_A) = 1_A \o 1_T$;
\item for all $r \in R, a \in A$, $r a_{(0)} \o_R a_{(1)} = a_{(0)} \o_R t_R(r) a_{(1)}$;
\item $\delta(xy) = x_{(0)} y_{(0)} \o_R x_{(1)} y_{(1)}$ for all $x,y \in A$.
\end{enumerate}
The following is a sketch of the proof, the details being left to the reader.
For $R \rightarrow A$ we take the inclusion $C_A(B) \hookrightarrow A$. Note that
$\delta(ar) = a_{(0)} \o_R a_{(1)} s_R(r)$ since both expressions map into $1 \o_B ar$ under $\beta^{-1}: A \o_R T \stackrel{\cong}{\longrightarrow} A \o_B A$. Next we note that the coaction is counital since $\sum_i \gamma_i(a)u^1_i u^2_i = a$. The coaction is coassociative on any $a \in A$ since both expressions
map into $1_A \o 1_A \o a$ under the isomorphism
\begin{equation}
\label{eq: isom}
A \o_R T \o_R T\stackrel{\cong}{\longrightarrow} A \o_B A \o_B A, \ \ \
a \o t \o u \longmapsto at^1 \o t^2 u^1 \o u^2.
\end{equation}
The expressions in the last two items map bijectively via $\beta^{-1}$ into
$r \o a$ and $1 \o xy$ in $A \o_B A$, respectively, so the equalities hold.
\end{proof}
The following by-product of the proof above is a characterization
of right (similarly left) depth two
in terms of $T$.
\begin{cor}
Let $A \| B$ be an algebra extension
with $T = (A \o_B A)^B$ and $R = C_A(B)$.
The extension $A \| B$ is right D2 if and only if $A \o_R T \cong A \o_B A$
via $a \o_R t \mapsto at^1 \o_B t^2$
and the module ${}_RT$ is projective.
\end{cor}
The main theorem is most interesting for
subalgebras with small centralizers.
An example of what can happen for large centralizers: the theorem shows
that \textit{any} field extension $K \supseteq F$ is $T$-Galois, since the underlying vector space of the $F$-algebra $K$ is
free, therefore balanced,
and any algebra over a field is depth two.
The bialgebroid $T$ in this case is
remarked on after Theorem~\ref{th-bi}.
The paper \cite{LK2005} sketches how
the main theorem in this paper would
extend the main
theorem in \cite{KN} for
extensions with trivial centralizer as follows.
We call an algebra extension $A \| B$
semisimple-Hopf-Galois if $H$ is
a semisimple Hopf algebra, $A$
is an $H$-comodule algebra
with coinvariants $B$ and the Galois
mapping $A \o_B A \rightarrow A \o H$ is bijective \cite{Mo}. Recall
that an algebra extension $A \| B$
is a Frobenius extension if $A_B$
is f.g.\ projective and $A \cong
\mbox{\rm Hom}\, (A_B, B_B)$ as natural
$B$-$A$-bimodules. Left and right depth
two are equivalent conditions on a Frobenius
extension \cite{KS}. Recall too
that an algebra extension $A \| B$ is separable
if the multiplication $\mu: A \o_B A \rightarrow A$ is
a split $A$-$A$-epi.
\begin{cor}
Suppose $A \| B$ is a Frobenius extension of $k$-algebras with trivial centralizer $R = 1_A \cdot k$ and $k$ a field of characteristic zero.
Then $A \| B$ is semisimple-Hopf-Galois if and only if $A \| B$ is a separable and depth two extension.
\end{cor}
Also various pseudo-Galois and almost-Galois extensions over groups,
Hopf algebras or weak Hopf algebras are depth two, balanced
extensions, and so Galois extensions with
respect to bialgebroids. The following
corollary is an example using Hopf algebras, although the corollary
may be stated more generally for bialgebroids by using
the proof of $\Leftarrow$ above, which stays valid if the Galois mapping $\beta: A \o_B A \rightarrow A \o_R T$ is weakened from isomorphism to split $A$-$B$-monomorphism.
\begin{cor}
Suppose $H$ is a Hopf algebra and $A \| B$ is a right $H$-extension. If the Galois
mapping $\beta$ is a split $A$-$B$-monomorphism, then $A \| B$ is a right $(A \o_B A)^B$-Galois extension,
where $(A \o_B A)^B$ is the bialgebroid $T$
over $C_A(B)$ studied in section~2.
\end{cor}
This corollary fits in with the current study of weak Hopf-Galois extensions in case
the centralizer $C_A(B)$ is a separable algebra over a field, whence the bialgebroid $T$ is a weak bialgebra \cite[Prop.\ 7.4]{KS}.
Comatrix corings were introduced by the authors in
\cite{Kaoutit/Gomez:2003a} to give a structure theorem of all
cosemisimple corings. This construction generalizes Sweedler's
canonical corings \cite{Sweedler:1975}, and provides a version of
descent theory for modules \cite[Theorem
3.10]{Kaoutit/Gomez:2003a}. Sweedler's canonical corings and their
automorphisms were the key tool in \cite{Masuoka:1989} to give a
non-commutative version of the fact that the relative Picard group
attached to any commutative ring extension is isomorphic to the
Amitsur 1-cohomology for the units-functor due to Grothendieck's
faithfully flat descent.
In this note we extend, by using different methods, the main
result of \cite[$\S$2]{Masuoka:1989} to the context of comatrix
corings. In fact, we apply ideas and recent results from
\cite{Gomez:2002} and \cite{Kaoutit/Gomez:2003a}, and the present
paper can already be seen as a natural continuation of the theory
developed in \cite{Kaoutit/Gomez:2003a}.
The first section is rather technical, and it is devoted to prove
that there is an adjoint pair of functors between the category of
comodules over a given comatrix coring and the category of
comodules over its associated Sweedler's canonical coring. This
adjunction will have a role in the proof of the main result.
Section 2 is the core of the paper, as it contains the
aforementioned isomorphism of groups (Theorem \ref{resultado-1}).
The maps connecting bimodules and coring automorphisms are at a
first glance different than the maps constructed in
\cite{Masuoka:1989}. However, they are neatly related, as
Proposition \ref{(b)1} shows.
All rings considered in this note are algebras with $1$ over
commutative ground base ring $K$. A right or left module means a
unital module. All bimodules over rings are central
$K$--bimodules. If $A$ is any ring, then we denote by $\rmod{A}$
(resp. $\lmod{A}$) the category of all right (resp. left)
$A$--modules. The opposite ring of $A$ will be denoted by $A^o$,
its multiplication is defined by $a_2^o a_1^o= (a_1a_2)^o$,
$a_1^o, a_2^o \in A^o$ (i.e. $a_1, a_2 \in A$). As usual, some
special convention will be understood for the case of
endomorphism rings of modules. Thus, if $X_A$ is an object of
$\rmod{A}$, then its endomorphism ring will be denoted by
$\rend{A}{X}$, while if ${}_AY$ is a left $A$--module, then its
endomorphism ring, denoted by $\lend{A}{Y}$, is, by definition,
the opposite of the endomorphism ring of $Y$ as an object of the
category $\lmod{A}$. In this way $X$ is an
$(\rend{A}{X},A)$--bimodule, while $Y$ is an
$(A,\lend{A}{Y})$--bimodule. The opposite left $A^o$--module of
$X_A$, will be denoted by $X^o$, the action is given by $a^o x^o=
(xa)^o$, $a^o \in A^o$, $x^o \in X^o$. Of course, if $f : X
\rightarrow W$ is right $A$--linear map, then its opposite map
$f^o: X^o \rightarrow W^o$ is left $A^o$--linear which is defined
by $f^o(x^o)=(f(x))^o$, for all $x^o \in X^o$. The same process
will be applied on bimodules and bilinear maps. For any
$(B,A)$--bimodule $M$ we denote by $M^* = \mathrm{Hom}(M_A,A_A)$
its right dual and by ${}^*M = \mathrm{Hom}({}_BM,{}_BB)$ its
left dual. $M^*$ and ${}^*M$ are considered, in a natural way, as
an $(A,B)$--bimodules.
Recall from \cite{Sweedler:1975} that an $A$--coring is a
three-tuple
$(\coring{C},\Delta_{\coring{C}},\varepsilon_{\coring{C}})$
consisting of an $A$--bimodule $\coring{C}$ and the two
$A$--bilinear maps
$$\xymatrix@C=50pt{\coring{C} \ar@{->}^-{\Delta_{\coring{C}}}[r] & \coring{C}\tensor{A}\coring{C}},\quad
\xymatrix@C=30pt{ \coring{C}
\ar@{->}^-{\varepsilon_{\coring{C}}}[r] & A}$$ such that
$(\Delta_{\coring{C}}\tensor{A}\coring{C}) \circ
\Delta_{\coring{C}} = (\coring{C}\tensor{A}\Delta_{\coring{C}})
\circ \Delta_{\coring{C}}$ and
$(\varepsilon_{\coring{C}}\tensor{A}\coring{C}) \circ
\Delta_{\coring{C}}=(\coring{C}\tensor{A}\varepsilon_{\coring{C}})
\circ \Delta_{\coring{C}}= \coring{C}$. A morphism of
$A$--corings is an $A$--bilinear map $\phi: \coring{C}
\rightarrow \coring{D}$ which satisfies: $\varepsilon_{\coring{D}}
\circ \phi = \varepsilon_{\coring{C}}$ and $\Delta_{\coring{D}}
\circ \phi = (\phi \tensor{A} \phi) \circ \Delta_{\coring{C}}$. A
right $\coring{C}$--comodule is a pair $(M,\rho_{M})$ consisting
of a right $A$--module and a right $A$--linear map $\rho_{M}: M
\rightarrow M\tensor{A}\coring{C}$, called right
$\coring{C}$--coaction, such that
$(M\tensor{A}\Delta_{\coring{C}}) \circ \rho_M =
(\rho_M\tensor{A}\coring{C}) \circ \rho_M$ and
$(M\tensor{A}\varepsilon_{\coring{C}}) \circ \rho_M=M$. Left
$\coring{C}$--comodules are symmetrically defined, and we will use
the Greek letter $\lambda_{-}$ to denote their coactions. For
more details on comodules, definitions and basic properties of
bicomodules and the cotensor product, the reader is referred to
\cite{Brzezinski/Wisbauer:2003} and its bibliography.
\section{Comatrix coring and adjunctions}\label{Sect1}
Throughout this section $\Sigma$ will be a fixed $(B,A)$--bimodule
which is finitely generated and projective as right $A$--module
with a fixed dual basis $\{(e_i,e_i^*)\}_{1 \leq i \leq n} \subset
\Sigma \times \Sigma^*$. Let $S = \rend{A}{\Sigma}$ be its right
endomorphism ring, and let $\lambda : B \rightarrow S$ be the
canonically associated ring extension. It is known that there is an
$S$--bimodule isomorphism
\begin{equation}\label{xi}
\xymatrix@R=0pt{\xi: \Sigma \tensor{A} \Sigma^* \ar@{->}[r] & S =
\rend{A}{\Sigma}
\\ u \tensor{A} v^* \ar@{|->}[r] & [x \mapsto u v^*(x)] \\
\sum_i e_i \tensor{A} e_i^*s=\sum_i se_i \tensor{A} e_i^* & s
\ar@{|->}[l] }
\end{equation}
With this identification the product of $S$ (the composition)
satisfies
\begin{equation}\label{tensor}
\begin{array}{c}
s(u \tensor{A} u^*) = s(u) \tensor{A} u^*, \\ (u \tensor{A} u^*) s
= u \tensor{A} u^* s, \\ (u \tensor{A} u^*) (v \tensor{A} v^*) = u
u^*(v) \tensor{A} v^* = u \tensor{A} u^*(v) v^*,
\end{array}
\end{equation}
for every $s \in S$, $u, v \in \Sigma$, $v^*, u^* \in \Sigma^*$.
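The third rule is simply composition of rank-one endomorphisms; as an elementary check (added here for the reader's convenience), evaluate both sides through $\xi$ at $x \in \Sigma$:

```latex
% composition of rank-one endomorphisms under \xi:
\bigl(\xi(u \tensor{A} u^*) \circ \xi(v \tensor{A} v^*)\bigr)(x)
  = u\, u^*\bigl(v\, v^*(x)\bigr)
  = u\, u^*(v)\, v^*(x)
  = \xi\bigl(u\, u^*(v) \tensor{A} v^*\bigr)(x),
```

using that $u^*$ is right $A$--linear and $v^*(x) \in A$.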
By \cite[Proposition 2.1]{Kaoutit/Gomez:2003a}, the $A$--bimodule
$\rcomatrix{B}{\Sigma}$ is an $A$--coring with the following
comultiplication and counit
\[
\Delta_{\rcomatrix{B}{\Sigma}}(u^* \tensor{B}u) = \sum_i u^*
\tensor{B} e_i \tensor{A} e_i^* \tensor{B} u,\quad
\varepsilon_{\rcomatrix{B}{\Sigma}}(u^* \tensor{B}u)= u^*(u).
\]
The map $\Delta_{\rcomatrix{B}{\Sigma}}$ is independent of the
choice of the right dual basis of $\Sigma_A$, see \cite[Remark
2.2]{Kaoutit/Gomez:2003a}. This coring is known as \emph{the
comatrix coring} associated to the bimodule $\Sigma$.
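For instance, the counit axiom for this coring reduces to the dual basis property (a one-line check we add for illustration, written in the convention where the coring symbol also denotes its identity map):

```latex
% counit axiom on the comatrix coring via the dual basis:
(\varepsilon_{\rcomatrix{B}{\Sigma}} \tensor{A} \rcomatrix{B}{\Sigma})
  \circ \Delta_{\rcomatrix{B}{\Sigma}}(u^* \tensor{B} u)
  = \sum_i u^*(e_i)\, e_i^* \tensor{B} u
  = u^* \tensor{B} u ,
```

since $\sum_i u^*(e_i)\, e_i^* = u^*$ for every $u^* \in \Sigma^*$.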
\begin{remark}\label{lcomcor}
One can define a comatrix coring using a bimodule which is a
finitely generated and projective left module. However, the
resulting coring is isomorphic to the comatrix coring defined by
the left dual module. To see this, consider ${}_A\Lambda_B$ any
bimodule such that ${}_A\Lambda$ is a finitely generated and
projective module with a fixed left dual basis
$\{f_j,{}^*f_j\}_j$. Put ${}_B\Sigma_A ={}_B{}^*\Lambda_A$; the
set $\{{}^*f_j,f_j^*\}_j$, where the $f_j^* \in \Sigma^*$ are defined
by $f_j^*(u)=u(f_j)$ for all $u \in \Sigma$ and $j$, forms a right
dual basis for $\Sigma_A$. The isomorphism of corings is given by
$$ \xymatrix@R=0pt{ \rcomatrix{B}{\Sigma} \ar@{->}^-{\cong}[r] &
\Lambda\tensor{B}{}^*\Lambda \\ u^* \tensor{B}{}^*v \ar@{|->}[r] &
(\sum_ju^*({}^*f_j)f_j)\tensor{B}{}^*v }$$ The proof is direct,
using the above dual bases, and we leave it to the reader.
\end{remark}
Keeping the notation introduced before Remark \ref{lcomcor}, we have
that the right (resp. left) $A$--module $\Sigma$ (resp.
$\Sigma^*$) is a right (resp. left)
$\rcomatrix{B}{\Sigma}$--comodule with left (resp. right)
$B$--linear coaction: $$\rho_{\Sigma} : \Sigma \longrightarrow
\Sigma \tensor{A}\rcomatrix{B}{\Sigma},\quad (u \mapsto \sum_i e_i
\tensor{A} e_i^* \tensor{B} u),$$ for every $u \in \Sigma$, and
$$\lambda_{\Sigma^*} : \Sigma^* \longrightarrow \rcomatrix{B}{\Sigma}
\tensor{A} \Sigma^*,\quad (u^* \mapsto \sum_i u^* \tensor{B} e_i
\tensor{A} e_i^*),$$ for every $u^* \in \Sigma^*$. Furthermore,
the natural right $A$--linear isomorphism $\Sigma \cong
{}^*(\Sigma^*)$ turns out to be a right
$\rcomatrix{B}{\Sigma}$--colinear isomorphism. Associated to the
ring extension $\lambda: B \rightarrow S$, we consider also the
canonical Sweedler $S$--coring $\swe{S}{B}{S}$ whose
comultiplication is given by
$\Delta_{S\tensor{B}S}(s\tensor{B}s')=s\tensor{B}1\tensor{S}1\tensor{B}s'$,
$s,s' \in S$, and the counit is the usual multiplication.
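The coring axioms for $\swe{S}{B}{S}$ are immediate on generators; for instance, coassociativity reads

```latex
\begin{align*}
(\Delta_{S\tensor{B}S} \tensor{S} S \tensor{B} S) \circ
 \Delta_{S\tensor{B}S}(s \tensor{B} s')
 &= s \tensor{B} 1 \tensor{S} 1 \tensor{B} 1 \tensor{S} 1 \tensor{B} s' \\
 &= (S \tensor{B} S \tensor{S} \Delta_{S\tensor{B}S}) \circ
 \Delta_{S\tensor{B}S}(s \tensor{B} s'),
\end{align*}
```

while both counit axioms reduce to $s(1 \tensor{B} s') = s \tensor{B} s' = (s \tensor{B} 1)s'$.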
\bigskip
The aim of this section is to establish an adjunction between the
category of right $\rcomatrix{B}{\Sigma}$--comodules and the
category of right $\swe{S}{B}{S}$--comodules. Recall first that
the latter category is isomorphic to the category of descent data
associated to the extension $B \rightarrow S$, (cf.
\cite{Nuss:1997}, \cite{Brzezinski:2002}). This isomorphism of
categories will be implicitly used in the sequel. For every left
$S$--module $Y$ and right $S$--module $Z$, we denote by $\iota_Y :
Y \rightarrow S \tensor{S} Y$, and $\iota_Z' : Z \rightarrow Z
\tensor{S} S$ the obvious natural $S$--linear isomorphisms.
\bigskip
\noindent \emph{The functor} $-\tensor{S}\Sigma : \rcomod{S
\tensor{B} S} \rightarrow \rcomod{\rcomatrix{B}{\Sigma}}$.
\medskip Let $(Y, \rho_Y) \in \rcomod{S \tensor{B} S}$, and
consider the following right $S$--linear map
\begin{equation}\label{ros}
\xymatrix@C=60pt{Y \ar@{->}^-{\rho_Y}[r] & Y \tensfun{S}S
\tensfun{B}S \ar@{->}^-{Y\tensfun{S}\xi^{-1} \tensfun{B} S}[r] &
Y\tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B} S }
\end{equation}
where $\xi$ is the $S$--bilinear map given in \eqref{xi}. Applying
$- \tensor{S}\Sigma$ to \eqref{ros}, we get
\begin{equation}\label{tensig}
\xymatrix@R=40pt@C=75pt{Y\tensfun{S} \Sigma \ar@{->}^-{\rho_Y
\tensfun{S}\Sigma}[r]\ar@{-->}_-{\rho_{Y \tensfun{S}\Sigma}}[drr]
& Y \tensfun{S} S\tensfun{B}S \tensfun{S} \Sigma \ar@{->}^-{Y
\tensfun{S} \xi^{-1} \tensfun{B} S \tensfun{S} \Sigma}[r] & Y
\tensfun{S}\Sigma \tensfun{A}\Sigma^* \tensfun{B} S \tensfun{S}
\Sigma \ar@{->}[d]|{Y \tensfun{S}\Sigma \tensfun{A} \Sigma^*
\tensfun{B} \iota^{-1}}
\\ & & Y \tensfun{S} \Sigma \tensfun{A}\Sigma^* \tensfun{B}\Sigma, }
\end{equation}
explicitly, $$\rho_{Y \tensor{S} \Sigma}(y\tensor{S}u)=
\sum_{i,(y)} y_{(0)} \tensor{S}e_i \tensor{A} e_i^* \tensor{B}
y_{(1)}u,$$ where $\rho_Y(y)= \sum_{(y)}y_{(0)} \tensor{S} 1
\tensor{B} y_{(1)}$. It is clear that $\rho_{Y \tensor{S} \Sigma}$
is a right $A$--linear map and satisfies the counitary property.
To check the coassociativity, first consider the diagram
\begin{equation}\label{diauno}
\xymatrix@R=40pt@C=90pt{Y \tensfun{S} \Sigma
\ar@{->}^-{\rho_Y\tensfun{S} \Sigma}[r]
\ar@{->}[d]|{\rho_Y\tensfun{S} \Sigma} & Y \tensfun{S} S
\tensfun{B} S \tensfun{S} \Sigma \ar@{->}[d]|{Y \tensfun{S} \Delta
\tensfun{S} \Sigma} \\ Y \tensfun{S}S \tensfun{B} S \tensfun{S}
\Sigma \ar@{->}^-{\rho_Y \tensfun{S}S \tensfun{B} S \tensfun{S}
\Sigma}[r] \ar@{->}[d]|{Y \tensfun{S} \xi^{-1}\tensfun{B} S
\tensfun{S} \Sigma} & Y \tensfun{S} S \tensfun{B} S \tensfun{S} S
\tensfun{B} S \tensfun{S} \Sigma \ar@{->}[d]|{Y \tensfun{S} S
\tensfun{B} S \tensfun{S} \xi^{-1} \tensfun{B} S \tensfun{S}
\Sigma}
\\ Y \tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B} S
\tensfun{S} \Sigma \ar@{->}^-{\rho_Y \tensfun{S} \Sigma \tensfun{A}
\Sigma^* \tensfun{B} S \tensfun{S} \Sigma}[r] \ar@{->}[d]|{Y \tensfun{S}
\Sigma \tensfun{A} \Sigma^* \tensfun{B} \iota^{-1}} &
Y \tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma \tensfun{A}
\Sigma^* \tensfun{B} S
\tensfun{S} \Sigma \ar@{->}[d]|{Y \tensfun{S} S \tensfun{B} S \tensfun{S}
\Sigma \tensfun{A} \Sigma^* \tensfun{B} \iota^{-1}}
\\ Y\tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B}
\Sigma \ar@{->}^-{\rho_Y\tensfun{S} \Sigma
\tensfun{A} \Sigma^* \tensfun{B} \Sigma}[r] &
Y \tensfun{S} S \tensfun{B} S \tensfun{S}
\Sigma \tensfun{A} \Sigma^* \tensfun{B}
\Sigma }
\end{equation}
It is commutative because $\rho_Y$ is a coaction for the right $S
\tensor{B} S$--comodule $Y$. Now, look at the following diagram
\begin{equation}\label{diados}
\xymatrix@R=18pt@C=68pt{ Y \tensfun{S} S \tensfun{B} S \tensfun{S}
\Sigma \ar@{->}[dd]|{Y \tensfun{S} \Delta \tensfun{S} \Sigma}
\ar@{->}[dr]|{Y \tensfun{S} \xi^{-1} \tensfun{B}S \tensfun{S}
\Sigma} & \\ & Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensor{B}S \tensor{S} \Sigma \ar@{->}[dd]|{Y \tensfun{S} \Sigma
\tensfun{A} \Sigma^* \tensfun{B} \iota^{-1}}
\\ Y \tensfun{S} S \tensfun{B} S \tensfun{S} S \tensfun{B} S
\tensfun{S} \Sigma \ar@{->}[dd]|{Y \tensfun{S} S \tensfun{B} S
\tensfun{S} \xi^{-1} \tensfun{B} S \tensfun{S} \Sigma} & \\ & Y
\tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B} \Sigma
\ar@{->}[dd]|{Y \tensfun{S}\Sigma \tensfun{A} \Delta}
\\ Y \tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma \tensfun{A}
\Sigma^* \tensfun{B} S
\tensfun{S} \Sigma \ar@{->}[dd]|{Y \tensfun{S} S \tensfun{B} S \tensfun{S}
\Sigma \tensfun{A} \Sigma^* \tensfun{B} \iota^{-1}} &
\\ & Y \tensfun{S}\Sigma \tensfun{A} \Sigma^* \tensfun{B} \Sigma
\tensfun{A} \Sigma^* \tensfun{B} \Sigma
\\ Y \tensfun{S} S \tensfun{B} S \tensfun{S}
\Sigma \tensfun{A} \Sigma^* \tensfun{B}
\Sigma \ar@{->}[dr]|{Y \tensfun{S} \xi^{-1}
\tensfun{B} S \tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B}
\Sigma} &
\\ & Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} S \tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B} \Sigma
\ar@{->}[uu]|{Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} \iota^{-1} \tensfun{A} \Sigma^* \tensfun{B} \Sigma} }
\end{equation}
which is easily shown to be commutative. By concatenating diagrams
\eqref{diauno} and \eqref{diados} we see that the map
$\rho_{Y\tensor{S}\Sigma}$ endows $Y \tensor{S} \Sigma$ with a
structure of right $\rcomatrix{B}{\Sigma}$--comodule.
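The counit property claimed above can be displayed on generators: using the dual basis property and the counit property of $\rho_Y$ (the counit of $\swe{S}{B}{S}$ being the multiplication, so that $\sum_{(y)} y_{(0)}y_{(1)} = y$), one computes

```latex
\begin{align*}
(Y \tensor{S} \Sigma \tensor{A} \varepsilon_{\rcomatrix{B}{\Sigma}})
 \circ \rho_{Y \tensor{S} \Sigma}(y \tensor{S} u)
 &= \sum_{i,(y)} y_{(0)} \tensor{S} e_i\, e_i^*(y_{(1)}u) \\
 &= \sum_{(y)} y_{(0)} y_{(1)} \tensor{S} u \;=\; y \tensor{S} u.
\end{align*}
```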
Now, let $f : Y \rightarrow Y'$ be a morphism in $\rcomod{S
\tensor{B} S}$, and consider the right $A$--linear map
$f\tensor{S} \Sigma : Y \tensor{S} \Sigma \rightarrow Y'
\tensor{S} \Sigma$. Then we have the following commutative diagram
\[
\xymatrix@R=20pt@C=15pt{ Y \tensfun{S} \Sigma
\ar@{->}[ddr]|{\rho_Y \tensfun{S} \Sigma} \ar@{->}[ddd]|{f
\tensfun{S} \Sigma} & & Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} S \tensfun{S} \Sigma \ar@{->}[ddr]|{Y \tensfun{S}
\Sigma \tensfun{A} \Sigma^* \tensfun{B} \iota^{-1}}
\ar@{->}[ddd]|{f\tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} S \tensfun{S} \Sigma} & \\ & & &
\\ & Y \tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma
\ar@{->}[ddd]|{f\tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma}
\ar@{->}[uur]|{Y\tensfun{S} \xi^{-1} \tensfun{B} S \tensfun{S}
\Sigma} & & Y \tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B}
\Sigma \ar@{->}[ddd]|{f\tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} \Sigma}
\\ Y' \tensfun{S} \Sigma \ar@{->}[ddr]|{\rho_{Y'} \tensfun{S} \Sigma}
& & Y'\tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B}
S \tensfun{S} \Sigma \ar@{->}[ddr]|{Y' \tensfun{S}\Sigma \tensfun{A}
\Sigma^* \tensfun{B} \iota^{-1}}&
\\ & & &
\\ & Y' \tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma
\ar@{->}[uur]|{Y' \tensfun{S} \xi^{-1} \tensfun{B} S \tensfun{S}
\Sigma} & & Y' \tensfun{S}\Sigma \tensfun{A} \Sigma^* \tensfun{B}
\Sigma, }
\]
which means that $f \tensor{S} \Sigma$ is a morphism in
$\rcomod{\rcomatrix{B}{\Sigma}}$, with the coaction
\eqref{tensig}. Therefore, we have constructed a well defined
functor $-\tensor{S} \Sigma: \rcomod{S \tensor{B} S} \rightarrow
\rcomod{\rcomatrix{B}{\Sigma}}$.
\bigskip
\noindent\emph{The functor} $-\tensor{A} \Sigma^*:
\rcomod{\rcomatrix{B}{\Sigma}} \rightarrow \rcomod{S \tensor{B}
S}$.
\medskip
Let $(X, \rho_X) \in \rcomod{\rcomatrix{B}{\Sigma}}$, and
consider the right $S$--linear map
\begin{equation}\label{tensiges}
\xymatrix@R=40pt@C=60pt{ X\tensfun{A} \Sigma^*
\ar@{->}^-{\rho_X\tensfun{A} \Sigma^*}[r]
\ar@{-->}_-{\rho_{X\tensfun{A} \Sigma^*}}[drr] & X
\tensfun{A}\Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*
\ar@{->}^-{X \tensfun{A} \Sigma^* \tensfun{B} \xi}[r] & X
\tensfun{A} \Sigma^* \tensfun{B} S \ar@{->}[d]|{X\tensfun{A}
\iota'\tensfun{B} S}
\\ & & X\tensfun{A}\Sigma^* \tensfun{S} S \tensfun{B} S. }
\end{equation}
A direct verification on elements, using the coassociativity of
$\rho_X$, gives a commutative diagram:
\[
\xymatrix@R=15pt@C=10pt{ X\tensfun{A} \Sigma^*
\ar@{->}[ddr]|{\rho_X \tensfun{A}\Sigma^*} \ar@{->}[dddd]|{\rho_X
\tensfun{A}\Sigma^*} & & X \tensfun{A} \Sigma^* \tensfun{B}
\ar@{->}[dddddddd]|{X \tensfun{A}\Sigma^* \tensfun{B}\mu^r} S
\\ & & \\ & X \tensfun{A} \Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*
\ar@{->}[dddd]|{X \tensfun{A} \Delta \tensfun{A} \Sigma^*}
\ar@{->}[uur]|{X \tensfun{A} \Sigma^* \tensfun{B} \xi} & \\ & &
\\ X \tensfun{A} \Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*
\ar@{->}[dddd]|{X \tensfun{A} \Sigma^* \tensfun{B} \xi}
\ar@{->}[ddr]|{\rho_X \tensfun{A} \Sigma^* \tensfun{B} \Sigma
\tensfun{A} \Sigma^*} & &
\\ & &
\\ & X \tensfun{A} \Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} \Sigma \tensfun{A} \Sigma^* \ar@{->}[dddd]|{X
\tensfun{A} \Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*
\tensfun{B}\xi} &
\\ & &
\\ X \tensfun{A} \Sigma^* \tensfun{B} S
\ar@{->}[ddr]|{\rho_X \tensfun{A}\Sigma^* \tensfun{B} S} & & X
\tensfun{A} \Sigma^* \tensfun{B} S \tensfun{B} S
\\ & & \\ & X \tensfun{A} \Sigma^* \tensfun{B} \Sigma \tensfun{A}
\Sigma^* \tensfun{B} S \ar@{->}[uur]|{X \tensfun{A} \Sigma^*
\tensfun{B} \xi \tensfun{B} S}, & }
\]
where $\mu^r$ is the $(B,S)$--bilinear map defined by $\mu^r(s)= 1
\tensor{B} s$, for all $s \in S$. That is, the right $S$--linear
map $f:=(X \tensor{A} \Sigma^* \tensor{B} \xi ) \circ (\rho_X
\tensor{A} \Sigma^*)$ verifies the cocycle condition (see
\cite[Definition 3.5(2)]{Nuss:1997}). Since $\rho_{X \tensor{A}
\Sigma^*}$ satisfies the counitary property, $f$ is actually a
descent datum on $X \tensor{A} \Sigma^*$ (see \cite{Cipolla:1976},
\cite{Nuss:1997}). Hence, $\rho_{X\tensor{A} \Sigma^*}= (X
\tensor{A} \iota' \tensor{B} S) \circ f$ is a right $S \tensor{B}
S$--coaction on $X \tensor{A} \Sigma^*$.
Given any right $\rcomatrix{B}{\Sigma}$--colinear map $g : X
\rightarrow X'$, we easily get a right $S \tensor{B} S$--colinear
map $g \tensor{A} \Sigma^*: X\tensor{A} \Sigma^* \rightarrow X'
\tensor{A} \Sigma^*$, with the coactions \eqref{tensiges}.
Therefore, $-\tensor{A}\Sigma^*: \rcomod{\rcomatrix{B}{\Sigma}}
\rightarrow \rcomod{S \tensor{B} S}$ is a well defined functor.
\bigskip
The preceding discussion allows us to state the following
proposition.
\begin{proposition}\label{secondadj}
For every pair of comodules $\left( (Y_{S \tensor{B}S},\rho_Y);
(X_{\rcomatrix{B}{\Sigma}},\rho_X)\right)$, the following
$K$--linear map
\[
\xymatrix@R=0pt{ \Psi_{Y,X}: \hom{\rcomatrix{B}{\Sigma}}{Y
\tensfun{S}\Sigma}{X} \ar@{->}[r] & \hom{S \tensfun{B}
S}{Y}{X\tensfun{A}\Sigma^*} \\ f \ar@{|->}[r] & (f
\tensfun{A}\Sigma^*) \circ (Y\tensfun{S}\xi^{-1}) \circ \iota_Y'
\\ (X\tensfun{A}\varepsilon')
\circ (g \tensfun{S} \Sigma) & g \ar@{|->}[l] }
\]
(where $\varepsilon'$ is the counit of the comatrix $A$--coring
$\rcomatrix{S}{\Sigma}$), is a natural isomorphism. In other
words, $-\tensor{S}\Sigma$ is left adjoint to $-\tensor{A}
\Sigma^*$.
\end{proposition}
\begin{proof}
We only prove that $\Psi_{Y,X}$ and its inverse are well defined
maps; the rest is straightforward. Clearly $\Psi_{Y,X}(f)$ is
$S$--linear, for every $f \in \hom{\rcomatrix{B}{\Sigma}}{Y
\tensor{S}\Sigma}{X}$. The colinearity of $\Psi_{Y,X}(f)$ follows
if we show that
\begin{equation}\label{dig0}
\xymatrix@R=60pt@C=60pt{ Y
\ar@{->}|-{\rho'_Y}[d] \ar@{->}^-{\Psi(f)}[r] &
X\tensfun{A}\Sigma^* \ar@{->}|-{(X \tensfun{A} \Sigma^*
\tensfun{B} \xi)\circ (\rho_X\tensfun{A}\Sigma^*)}[d] \\
Y\tensfun{B}S \ar@{->}^-{\Psi(f)\tensfun{B}S}[r] &
X\tensfun{A}\Sigma^*\tensfun{B}S }
\end{equation}
is a commutative diagram, where $\rho'_Y=(\iota_Y^{\prime\,
-1}\tensor{B}S) \circ \rho_Y$. Put $$\boldsymbol{f}= \Psi_{Y,X}(f) \circ \rho_Y' =
(f\tensor{A} \Sigma^* \tensor{B} \xi) \circ (Y\tensor{S}\xi^{-1}
\tensor{B}S) \circ \rho_Y.$$ Using the colinearity of the map $f$,
we easily prove that the following diagram is commutative
\[
\xymatrix { Y \ar@{->}^-{\iota'}[r]
\ar@/_6pc/^{\boldsymbol{f}}[dddd] \ar@{->}[dd]|{\rho_Y} & Y
\tensfun{S} S \ar@{->}^-{Y \tensfun{S} \xi^{-1}}[r] & Y
\tensfun{S} \Sigma \tensfun{A} \Sigma^* \ar@{->}[d]^-{\rho_Y
\tensfun{S}\Sigma \tensfun{A} \Sigma^*} \ar@/^12pc/|-{(\rho_X
\circ f)\tensfun{A}\Sigma^*}[dddd] &
\\ & & Y\tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma \tensfun{A}
\Sigma^* \ar@{->}[d]|{Y \tensfun{S} \xi^{-1} \tensfun{B} S
\tensfun{S}\Sigma \tensfun{A} \Sigma^*} & \\ Y \tensfun{S} S
\tensfun{B} S \ar@{->}[d]|{Y \tensfun{S} \xi^{-1} \tensfun{B} S} &
& Y \tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B} S
\tensfun{S} \Sigma \tensfun{A} \Sigma^*
\ar@{->}[d]|{Y\tensfun{S}\Sigma \tensfun{A} \Sigma^* \tensfun{B}
\iota^{-1} \tensfun{A} \Sigma^* } &
\\ Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} S \ar@{->}^-{Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} \xi^{-1}}[rr] \ar@{->}[d]|{f \tensfun{A} \Sigma^*
\tensfun{B} S} & & Y \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} \Sigma \tensfun{A} \Sigma^* \ar@{->}[d]|{f \tensfun{A}
\Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*} & \\ X
\tensfun{A} \Sigma^* \tensfun{B} S & & X \tensfun{A} \Sigma^*
\tensfun{B} \Sigma \tensfun{A} \Sigma^* \ar@{->}_-{X \tensfun{A}
\Sigma^* \tensfun{B} \xi}[ll], }
\]
which is exactly the diagram \eqref{dig0}. Now, let $g \in \hom{S
\tensor{B} S}{Y}{X\tensor{A}\Sigma^*}$, so the following diagram
is easily shown to be commutative
\[
\xymatrix{ Y \tensfun{S} \Sigma \ar@{->}[rr]|{g \tensfun{S}
\Sigma} \ar@{->}[ddd]|{\rho_Y \tensfun{S} \Sigma}
& & X \tensfun{A} \Sigma^* \tensfun{S}
\Sigma \ar@{->}[d]|{\rho_X \tensfun{A} \Sigma^* \tensfun{S}
\Sigma}
\\ & & X\tensfun{A} \Sigma^* \tensfun{B} \Sigma \tensfun{A} \Sigma^*
\tensfun{S} \Sigma \ar@{->}[d]|{X \tensfun{A} \Sigma^* \tensfun{B}
\xi \tensfun{S} \Sigma}
\\ & & X \tensfun{A} \Sigma^* \tensfun{B}
S \tensfun{S} \Sigma \ar@{->}[d]|{X \tensfun{A} \Sigma^*
\tensfun{B}\iota^{-1}}
\\ Y \tensfun{S} S \tensfun{B} S \tensfun{S} \Sigma
\ar@{->}[dd]|{Y \tensfun{S} \xi^{-1}\tensfun{B} S \tensfun{S}
\Sigma} \ar@{->}[dr]|{\iota_Y^{\prime\, -1} \tensfun{B} S \tensfun{S}
\Sigma} & & X \tensfun{A} \Sigma^* \tensfun{B} \Sigma
\\ & Y \tensfun{B} S \tensfun{S} \Sigma \ar@{->}[uur]|{g \tensfun{B}
S \tensfun{S} \Sigma} &
\\ Y\tensfun{S}\Sigma
\tensfun{A} \Sigma^* \tensfun{B} S \tensfun{S} \Sigma
\ar@{->}[d]|{Y\tensfun{S}\Sigma \tensfun{A} \Sigma^*
\tensfun{B}\iota^{-1}} & & X\tensfun{A} A \tensfun{A} \Sigma^*
\tensfun{B} \Sigma \ar@{->}[uu]|{\cong}
\\ Y \tensfun{S} \Sigma \tensfun{A} \Sigma^* \tensfun{B} \Sigma
\ar@{->}[rr]|{g \tensfun{S} \Sigma \tensfun{A} \Sigma^*
\tensfun{B} \Sigma} & & X \tensfun{A} \Sigma^* \tensfun{S} \Sigma
\tensfun{A} \Sigma^* \tensfun{B}\Sigma \ar@{->}[u]|{X
\tensfun{A}\varepsilon' \tensfun{A} \Sigma^* \tensfun{B}\Sigma} }
\]
On the other hand, we have
\[
\rho_X \circ (X \tensfun{A} \varepsilon') = (X \tensfun{A}
\Sigma^* \tensfun{B}\iota^{-1}) \circ (X \tensfun{A} \Sigma^*
\tensfun{B} \xi \tensfun{S} \Sigma) \circ (\rho_X \tensfun{A}
\Sigma^* \tensfun{S} \Sigma),
\]
Substituting this into the above diagram, we get that $(X \tensor{A}
\varepsilon') \circ (g \tensor{S} \Sigma)$ is
$\rcomatrix{B}{\Sigma}$--colinear; and this finishes the proof.
\end{proof}
\begin{remark}\label{digcom}
\begin{enumerate}
\item Applying Proposition \ref{secondadj}, we get (up to natural
isomorphisms) the following commutative diagram of functors
\begin{equation}\label{dig}
\xymatrix@R=50pt@C=60pt{ & \rcomod{S \tensfun{B} S}
\ar@<0,5ex>[dr]^{\hom{S \tensfun{B} S}{S}{-}}
\ar@<0,5ex>[dl]^{-\tensfun{S}\Sigma} & \\
\rcomod{\rcomatrix{B}{\Sigma}}
\ar@<0,5ex>[rr]^{\hom{\rcomatrix{B}{\Sigma}}{\Sigma}{-}}
\ar@<0,5ex>[ur]^{-\tensfun{A}\Sigma^*} & & \rmod{B}
\ar@<0,5ex>[ll]^{-\tensfun{B}\Sigma}
\ar@<0,5ex>[ul]^{-\tensfun{B}S},}
\end{equation}
where each displayed pair of functors forms an adjunction. \item
Symmetrically, one can define a pair of adjoint functors relating
the categories of left comodules: $\Sigma^* \tensor{S}-: \lcomod{S
\tensor{B} S} \rightleftarrows \lcomod{\rcomatrix{B}{\Sigma}}:
\Sigma \tensor{A}-$, which turns the diagram
\begin{equation}\label{digl}
\xymatrix@R=50pt@C=60pt{ & \lcomod{S \tensfun{B} S}
\ar@<0,5ex>[dr]^{\hom{S \tensfun{B} S}{S}{-}}
\ar@<0,5ex>[dl]^{\Sigma^* \tensfun{S}-} & \\
\lcomod{\rcomatrix{B}{\Sigma}}
\ar@<0,5ex>[rr]^{\hom{\rcomatrix{B}{\Sigma}}{\Sigma^*}{-}}
\ar@<0,5ex>[ur]^{\Sigma \tensfun{A}-} & & \lmod{B}
\ar@<0,5ex>[ll]^{\Sigma^* \tensfun{B}-}
\ar@<0,5ex>[ul]^{S\tensfun{B}-},}
\end{equation}
commutative.
\end{enumerate}
\end{remark}
\section{A group isomorphism}
Let $B \subset S$ be a ring extension. The set $\mathbf{I}_B(S)$ of all
$B$--sub-bimodules of $S$ is a monoid with the obvious product.
For $I, J \in \mathbf{I}_B(S)$, consider the multiplication map:
\[
\mathbf{m}: I \tensor{B} J \rightarrow IJ,\quad \mathbf{m}(x
\tensor{B} y)=xy.
\]
$\mathbf{I}_B^l(S)$ (resp. $\mathbf{I}_B^r(S)$) denotes the submonoid consisting of all
$B$-sub-bimodules $I \subset S$ such that
\[
S \tensor{B} I \cong S \quad (\text{resp. } I \tensor{B} S \cong
S) \quad \text{ through } \mathbf{m}.
\]
Let $\mathbf{Inv}_B(S)$ denote the group of invertible $B$--sub-bimodules of $S$.
By \cite[Proposition 1.1]{Masuoka:1989}, $\mathbf{Inv}_B(S) \subset \mathbf{I}_B^l(S)
\cap \mathbf{I}_B^r(S)$.
\bigskip
From now on fix a bimodule ${}_B\Sigma_A$ with $\Sigma_A$ finitely
generated and projective, consider its endomorphism ring
$S=\rend{A}{\Sigma}$, and assume that ${}_B\Sigma$ is faithful,
i.e., the canonical ring extension $\lambda : B \rightarrow S$ is
injective ($B$ will then be identified with its image). Consider
the comatrix $A$--coring $\coring{C}=\rcomatrix{B}{\Sigma}$, and
denote by $\End{A}{\coring{C}}$ the monoid of the coring
endomorphisms of $\coring{C}$. We denote by $\Aut{A}{\coring{C}}$
its group of units, that is, the group of all coring automorphisms
of $\coring{C}$. The canonical Sweedler $S$--coring
$\swe{S}{B}{S}$ associated to the ring extension $B \subset S$,
will also be considered.
\begin{remark}\label{hom}
Keeping the previous notations, we make the following remarks.
\begin{enumerate}[(1)]
\item As we have seen, the $(B,A)$--bimodule $\Sigma$ is actually a
$(B,\coring{C})$--bicomodule ($B$ is considered as a trivial
$B$--coring), while $\Sigma^*$ becomes a
$(\coring{C},B)$--bicomodule. Given $g \in \End{A}{\coring{C}}$,
and a right comodule $X_{\coring{C}}$ (resp. left comodule
${}_{\coring{C}}X$), we denote by $X_g$ the associated induced
right (resp. left) $\coring{C}$--comodule. That is, $\rho_{X_g} =
(X\tensor{A}g) \, \circ \, \rho_X$ (resp. $\lambda_{X_g} =
(g\tensor{A}X) \, \circ \, \lambda_X$). If $(X,\rho_X)$ is any
right $\coring{C}$--comodule such that $X_A$ is a finitely generated
and projective module, then it is well known that the right dual
module $X^*$ admits a structure of left $\coring{C}$--comodule
with coaction $$ \lambda_{X^*}(x^*) = \sum
\left((x^*\tensor{A}\coring{C}) \circ \rho_X(x_j)\right)
\tensor{A}x_j^*,\, x^* \in X^*,$$ where $\{x_j,x_j^*\}_j $ is any
right dual basis of $X_A$. In this way $(\Sigma_g)^*$ and
$(\Sigma^*)_g$ have the same left $\coring{C}$--coaction, that is,
they are equal as left $\coring{C}$--comodules, so we can
drop the brackets: $\Sigma_g^*=(\Sigma_g)^* = (\Sigma^*)_g$.
\item Given $g, h \in \End{A}{\coring{C}}$, the $B$--subbimodule
$\Sigma_h \cotensor{\coring{C}} \Sigma^*_g$ of $\Sigma \tensor{A}
\Sigma^*$ is identified, via the isomorphism given in \eqref{xi},
with $\hom{\coring{C}}{\Sigma_g}{\Sigma_h}$. Another way to
obtain this identification is given as follows. Recall, from
\cite[Example 3.4]{Gomez:2002} or \cite[Example
6]{Kaoutit/Gomez:2003a}, that $(\Sigma_g^*)_B$ is a quasi-finite
$(\coring{C},B)$--bicomodule with adjunction $-\tensor{B} \Sigma_g
\dashv -\cotensor{\coring{C}} \Sigma_g^*$, so the cotensor functor
$-\cotensor{\coring{C}}\Sigma^*_g$ is naturally isomorphic to the
hom-functor $\hom{\coring{C}}{\Sigma_g}{-}$. Moreover, this
isomorphism can be chosen to be just the restriction of $-
\tensor{A} \Sigma^*_g \cong \hom{A}{\Sigma_g}{-}$. Applying this
isomorphism to $\Sigma_h$, for any $h\in \End{A}{\coring{C}}$, we
arrive at the desired identification.
\item For $g \in\End{A}{\coring{C}}$, the following multiplication map
\[
\xymatrix@R=0pt{\overline{\mathbf{m}}: \Sigma^* \tensor{B}
\hom{\coring{C}}{\Sigma_g}{\Sigma} \ar@{->}[r] & \Sigma^*_g & (u^*
\tensor{B} t \mapsto u^*t) }
\]
is a left $\coring{C}$--comodule map. Furthermore, we have a
commutative diagram
\[
\xymatrix@R=40pt@C=60pt{ \Sigma \tensfun{A} \Sigma^* \tensfun{B}
\hom{\rcomatrix{B} \Sigma}{\Sigma_g}{\Sigma} \ar@{->}^-{\Sigma
\tensfun{A} \overline{\mathbf{m}}}[r] \ar@{->}[d]|{\xi
\tensfun{A}\hom{\rcomatrix{B} \Sigma}{\Sigma_g}{\Sigma}} & \Sigma
\tensfun{A} \Sigma^*_g \ar@{->}[d]^{\xi} \\ S \tensfun{B}
\hom{\rcomatrix{B} \Sigma}{\Sigma_g}{\Sigma}
\ar@{->}^-{\mathbf{m}}[r] & S, }
\]
where $\mathbf{m}$ is the usual multiplication of $S$.
\end{enumerate}
\end{remark}
\bigskip
We define the following two maps:
\[
\xymatrix@R=0pt{ \digamma^r: \End{A}{\coring{C}} \ar@{->}[r] &
\mathbf{I}_B(S) & ( g \ar@{|->}[r] & \hom{\coring{C}}{\Sigma}{\Sigma_g}), }
\]
and
\[
\xymatrix@R=0pt{ \digamma^l: \End{A}{\coring{C}} \ar@{->}[r] &
\mathbf{I}_B(S) & ( g \ar@{|->}[r] & \hom{\coring{C}}{\Sigma_g}{\Sigma}). }
\]
These maps satisfy the following lemma. First, recall from
\cite{Sugano:1982} (cf. \cite{Caenepeel/Kadison:2001}), that $M$
is called a \emph{separable bimodule} or $B$ is said to be
$M$-\emph{separable over} $A$ provided the evaluation map $$ M
\tensor{A} {}^*M \rightarrow B,\quad m \tensor{A}\varphi \mapsto
\varphi(m)$$ is a split epimorphism of $(B,B)$--bimodules. As
shown in \cite{Sugano:1982} (cf. \cite[Theorem
3.1]{Kadison:1999}), if $M$ is a separable bimodule, then $B
\rightarrow S$ is a \emph{split extension}, i.e., there is a
$B$--linear map $\alpha: S \rightarrow B$ such that $\alpha(1_S) =
1_B$. Conversely, if ${}_BM_A$ is such that $M_A$ is a finitely
generated and projective module, and $B \rightarrow S$ is a split
extension, then ${}_BM_A$ is a separable bimodule.
\begin{lemma}\label{cos-fil}
Let $g \in \End{A}{\coring{C}}$, then
\begin{enumerate}[(i)]
\item $\digamma^r(g) \in \mathbf{I}_B^r(S)$ if and only if ${}_BS$ preserves
the equalizer of $(\rho_{\Sigma_g} \tensor{A} \Sigma^*, \Sigma_g
\tensor{A} \lambda_{\Sigma^*})$ (cf. \cite[Section
2.4]{Gomez:2002}). In particular, if either ${}_B\Sigma$ is a flat
module or ${}_B\Sigma_A$ is a separable bimodule, then
$\digamma^r(g) \in \mathbf{I}_B^r(S)$.
\item $\digamma^l(g) \in \mathbf{I}_B^l(S)$ if and only if ${S}_B$ preserves
the equalizer of $(\rho_{\Sigma} \tensor{A} \Sigma_g^*, \Sigma
\tensor{A} \lambda_{\Sigma_g^*})$. In particular, if either
$\Sigma_B^*$ is a flat module or ${}_B\Sigma_A$ is a separable
bimodule, then $\digamma^l(g) \in \mathbf{I}_B^l(S)$.
\item If $g \in \Aut{A}{\coring{C}}$, then $\digamma^l(g) =
\digamma^r(g^{-1})$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)$ and $(ii)$ We only prove $(i)$ because $(ii)$ is symmetric.
Following the identifications made in Remark \ref{hom}, we have
$\digamma^r(g) \cong \Sigma_g\cotensor{\coring{C}}\Sigma^*$.
Taking this isomorphism into account, the first statement in $(i)$
is reduced to the problem of compatibility between tensor and
cotensor. Indeed, by \cite[Lemma 2.2]{Gomez:2002}, ${}_BS
\cong \Sigma \tensor{A} \Sigma^*$ preserves the equalizer of
$(\rho_{\Sigma_g} \tensor{A} \Sigma^*, \Sigma_g \tensor{A}
\lambda_{\Sigma^*})$ if and only if
\[
(\Sigma_g \cotensor{\coring{C}} \Sigma^*) \tensor{B} \Sigma
\tensor{A} \Sigma^* \cong \Sigma_g \cotensor{\coring{C}}
(\Sigma^* \tensor{B} \Sigma \tensor{A} \Sigma^*) = (\Sigma_g
\cotensor{\coring{C}} \coring{C}) \tensor{A} \Sigma^* \cong
\Sigma\tensor{A} \Sigma^* \cong S,
\]
if and only if $(\Sigma_g \cotensor{\coring{C}} \Sigma^*) \in
\mathbf{I}_B^r(S)$, since by Remark \ref{hom}(3) this composition coincides
with the multiplication of the monoid $\mathbf{I}_B(S)$. If ${}_B\Sigma$ is a
flat module, then clearly ${}_BS$ is also flat. Hence, it
preserves the stated equalizer. Now, if we assume that
${}_B\Sigma_A$ is a separable bimodule, then \cite[Theorem
3.5]{Brzezinski/Gomez:2003} implies that
$\coring{C}=\rcomatrix{B}{\Sigma}$ is a coseparable $A$--coring
(cf. \cite{Guzman:1989}, \cite{Gomez/Louly:2003} for definition).
Therefore, equalizers split by \cite[Proposition
1.2]{Guzman:1989}, and so they are preserved by any module.\\
$(iii)$ A straightforward computation shows that
$\hom{\coring{C}}{\Sigma_g}{\Sigma} =
\hom{\coring{C}}{\Sigma}{\Sigma_{g^{-1}}}$.
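For completeness, the computation runs as follows: $t \in \hom{\coring{C}}{\Sigma_g}{\Sigma}$ means that

```latex
\[
\rho_{\Sigma} \circ t
 \;=\; (t \tensor{A} \coring{C}) \circ \rho_{\Sigma_g}
 \;=\; (t \tensor{A} g) \circ \rho_{\Sigma},
\]
```

and composing with $\Sigma \tensor{A} g^{-1}$ turns this into $\rho_{\Sigma_{g^{-1}}} \circ t = (t \tensor{A} \coring{C}) \circ \rho_{\Sigma}$, which is precisely the $\coring{C}$--colinearity of $t : \Sigma \rightarrow \Sigma_{g^{-1}}$; the converse follows by exchanging the roles of $g$ and $g^{-1}$.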
\end{proof}
\begin{theorem}\label{resultado-0}
Let ${}_B \Sigma_A$ be a bimodule such that ${}_B\Sigma$ is
faithful and $\Sigma_A$ is finitely generated and projective.
Consider its associated comatrix $A$--coring
$\coring{C}=\rcomatrix{B}{\Sigma}$. If either
\begin{enumerate}[(a)]
\item $\Sigma^*_B$ is a faithfully flat module, or \item
${}_B\Sigma_A$ is a separable bimodule,
\end{enumerate}
then $\digamma^l: \End{A}{\coring{C}} \rightarrow \mathbf{I}_B^l(S)$ is a
monoid isomorphism with inverse
\begin{equation}\label{Gama}
\xymatrix@R=0pt{\Gamma^l:\mathbf{I}_B^l(S) \ar@{->}[r] & \End{A}{\coring{C}}
\\ I \ar@{|->}[r] & [u^* \tensfun{B} u \mapsto \sum_k u^*
s_k \tensfun{B} x_ku], }
\end{equation}
where $\mathbf{m}^{-1}(1)=\sum_k s_k \tensor{B} x_k \in
S\tensor{B}I$.
\end{theorem}
\begin{proof}
Under the hypothesis $(a)$, we have, by the left version of the
generalized Descent Theorem for modules \cite[Theorem
2]{Kaoutit/Gomez:2003a}, that $\Sigma^* \tensor{B}-: \lmod{B}
\rightarrow \lcomod{\rcomatrix{B}{\Sigma}}$ is an equivalence of
categories with inverse
$\hom{\rcomatrix{B}{\Sigma}}{\Sigma^*}{-}$. Applying the diagram
\eqref{digl} of Remark \ref{digcom}, we obtain that $S
\tensor{B}-: \lmod{B} \rightarrow \lcomod{S \tensor{B}S}$ is a
separable functor (cf. \cite{Nastasescu/Bergh/Oystaeyen:1989} for
definition). Now, assume $(b)$, then the ring extension $B
\rightarrow S$ splits as a $B$--bimodule map. By \cite[Proposition
1.3]{Nastasescu/Bergh/Oystaeyen:1989}, the functor $S\tensor{B}-:
\lmod{B} \rightarrow \lmod{S}$ is separable, and by \cite[Lemma
1.1(3)]{Nastasescu/Bergh/Oystaeyen:1989}, the functor
$S\tensor{B}-: \lmod{B} \rightarrow \lcomod{S\tensor{B}S}$ is
separable. In conclusion, under the hypothesis $(a)$ or $(b)$, the
functor $S\tensor{B}-: \lmod{B} \rightarrow \lcomod{S\tensor{B}S}$
reflects isomorphisms. Therefore, any inclusion $I \subseteq J$ in
$\mathbf{I}_B^l(S)$ implies the equality $I=J$. This fact will be used implicitly
in the remainder of the proof.
The map $\Gamma^l$ is easily shown to be well defined, while Lemma
\ref{cos-fil} implies that $\digamma^l$ is also well defined. Let
us first show that $\digamma^l$ is a monoid map. The unit is
mapped to $B$: $\digamma^l(1_{\End{A}{\coring{C}}})=
\mathrm{End}_{\coring{C}}(\Sigma)= B$, since by \cite[Proposition
2]{Kaoutit/Gomez:2003a} the inclusion $B \subseteq
\rend{\rcomatrix{B}{\Sigma}}{\Sigma}$ is always true. Let $g,h \in
\End{A}{\coring{C}}$, and $t \in \digamma^l(g)$, $s \in
\digamma^l(h)$, that is \begin{eqnarray*}
\sum_i e_i \tensor{A} e_i^* \tensor{B} tu &=& \sum_i te_i\tensor{A}g(e_i^* \tensor{B}u ) \\
\sum_i e_i \tensor{A} e_i^* \tensor{B} su &=& \sum_i se_i\tensor{A}h(e_i^* \tensor{B}u )
\end{eqnarray*}
for every element $u \in \Sigma$. So, for every $u \in \Sigma$, we
have
\begin{eqnarray*}
\rho_{\Sigma}(tsu) &=& \sum_i e_i\tensor{A}e_i^* \tensor{B}tsu \\
&=& \sum_i te_i\tensor{A}g(e_i^* \tensor{B}su ) \\
&=& (t\tensor{A}\coring{C}) \, \circ \, (\Sigma \tensor{A} g)
\left( \sum_i e_i\tensor{A}e_i^* \tensor{B}su \right) \\
&=& (t\tensor{A}\coring{C}) \, \circ \, (\Sigma \tensor{A} g)
\left( \sum_i se_i\tensor{A}h(e_i^* \tensor{B}u) \right) \\
&=& \sum_i tse_i \tensor{A} gh(e_i^*\tensor{B}u) \\
&=& (ts\tensor{A}\coring{C}) \, \circ \, \rho_{\Sigma_{gh}}(u)
\end{eqnarray*}
which means that $ts \in
\hom{\coring{C}}{\Sigma_{gh}}{\Sigma}=\digamma^l(gh)$, and so
$\digamma^l(g) \digamma^l(h) = \digamma^l(gh)$. Now, let $I \in
\mathbf{I}_B^l(S)$ with $\mathbf{m}^{-1}(1)= \sum_k s_k \tensor{B} t_k \in S
\tensor{B} I$. If $s$ is any element in $I$, then $1\tensor{B}s =
\sum_kss_k \tensor{B}t_k \in S\tensor{B}I$. Hence, \begin{eqnarray*}
(s\tensor{A}\coring{C}) \, \circ \, \rho_{\Sigma_{\Gamma^l(I)}}(u) &=&
(s\tensor{A}\coring{C})
\left( \sum_i e_i\tensor{A}\Gamma^l(I)(e_i^*\tensor{B}u) \right) \\
&=& \sum_{i,k} se_i \tensor{A}e_i^* s_k \tensor{B}t_ku \\
&=& \sum_{i,k} e_i \tensor{A}e_i^* s s_k \tensor{B}t_ku \\
&=& \sum_{i} e_i \tensor{A}e_i^* \tensor{B}su \, = \,
\rho_{\Sigma}(su)
\end{eqnarray*}
for every $u \in \Sigma$; that is, each $s \in I$ defines a
$\coring{C}$--colinear map $s: \Sigma_{\Gamma^l(I)} \rightarrow \Sigma$.
Therefore, $I = \digamma^l(\Gamma^l(I))$, for every $I \in \mathbf{I}_B^l(S)$.
Conversely, let $g \in \End{A}{\rcomatrix{B}{\Sigma}}$, and put
$I=\digamma^l(g)= \hom{\coring{C}}{\Sigma_g}{\Sigma}$ with
$\mathbf{m}^{-1}(1)=\sum_k s_k \tensor{B} t_k \in S\tensor{B}I$.
For every $t \in I$, we have
\begin{equation}\label{uphi}
\begin{array}{ll}
\sum_i g(u^* t \tensor{B} e_i) \tensor{A} e_i^* = \sum_i u^*
\tensor{B} e_i \tensor{A} e_i^* t, & \quad \forall u^* \in
\Sigma^*
\\ & \\ \sum_i e_i \tensor{A} e_i^* \tensor{B} tu = \sum_i te_i
\tensor{A} g(e_i^* \tensor{B} u), & \quad \forall u \in \Sigma.
\end{array}
\end{equation}
Computing, using equations \eqref{uphi}
\begin{eqnarray*}
(\Gamma^l(I)\tensor{A} \Sigma^*) \, \circ \, \lambda_{\Sigma^*}(u^*) &=& \sum_{i,k} u^*s_k \tensor{B} t_ke_i \tensor{A} e_i^* \\
&=& \sum_{k} u^*s_k \tensor{B} \left( \sum_i t_ke_i \tensor{A} e_i^* \right) \\
&=& \sum_{k} u^*s_k \tensor{B} \left( \sum_i e_i \tensor{A} e_i^* t_k\right) \\
&=& \sum_k\left( \sum_i u^*s_k \tensor{B} e_i \tensor{A}e_i^*t_k \right) \\
&=& \sum_k\left( \sum_i g(u^*s_kt_k \tensor{B} e_i) \tensor{A}e_i^* \right) \\
&=& \sum_i g(u^*\tensor{B}e_i)\tensor{A}e_i^* \\
&=& (g \tensor{A} \Sigma^*) \, \circ \, \lambda_{\Sigma^*}(u^*)
\end{eqnarray*} for every $u^* \in \Sigma^*$, that is $(\Gamma^l(I) \tensor{A} \Sigma^*) \circ
\lambda_{\Sigma^*} = (g \tensor{A} \Sigma^*) \circ
\lambda_{\Sigma^*}$. Whence,
\begin{equation}\label{delt-g}
(\Gamma^l(I) \tensor{A} \Sigma^* \tensor{B} \Sigma) \circ \Delta =
(g \tensor{A} \Sigma^* \tensor{B} \Sigma) \circ \Delta,
\end{equation}
because $\Delta_{\coring{C}} = \lambda_{\Sigma^*} \tensor{B}
\Sigma$. On the other hand
\begin{eqnarray*}
\Delta \, \circ \, \Gamma^l(I)(u^* \tensor{B} u) &=& \sum_k u^* s_k
\tensor{B}\left(\sum_i e_i \tensor{A} e_i^* \tensor{B} t_ku\right) \\
&=& \sum_k u^* s_k \tensor{B} \left(\sum_i t_ke_i \tensor{A} g(e_i^*
\tensor{B} u)\right), \, \text{ by } \eqref{uphi} \\
&=& (\Sigma^* \tensor{B} \Sigma \tensor{A} g)\left(\sum_{i,k} u^* s_k
\tensor{B}t_ke_i \tensor{A} e_i^* \tensor{B} u\right) \\
&=& (\Sigma^* \tensor{B} \Sigma \tensor{A} g) \, \circ \, (\Gamma^l(I)
\tensor{A} \Sigma^* \tensor{B} \Sigma) \, \circ \, \Delta (u^* \tensor{B}u) \\
&=& (\Sigma^* \tensor{B} \Sigma \tensor{A} g) \, \circ \, (g
\tensor{A} \Sigma^* \tensor{B} \Sigma) \, \circ \, \Delta(u^*
\tensor{B}u),\, \text{ by } \eqref{delt-g} \\
&=& (g \tensor{A} g) \, \circ \, \Delta (u^* \tensor{B} u) \\
&=& \Delta \, \circ \, g(u^* \tensor{B}u), \, \text{ since } g \in
\End{A}{\coring{C}},
\end{eqnarray*}
for every $u^* \in \Sigma^*,\, u \in \Sigma$. Therefore, $\Delta
\circ \Gamma^l(I) = \Delta \circ g$, thus
$\Gamma^l(I)=\Gamma^l(\digamma^l(g))=g$, for every $g \in
\End{A}{\coring{C}}$ since $\Delta$
is injective.
\end{proof}
Symmetrically we have the anti-homomorphism of monoids
\begin{equation}\label{Gama'}
\xymatrix@R=0pt{ \Gamma^r: \mathbf{I}_B^r(S) \ar@{->}[r] &
\End{A}{\rcomatrix{B}{\Sigma}} \\ I \ar@{|->}[r] & [u^* \tensor{B}
u \mapsto \sum_k u^* t_k \tensfun{B} s_k u], }
\end{equation}
where $\mathbf{m}^{-1}(1) = \sum_k t_k \tensor{B} s_k \in I
\tensor{B} S$. Let $B^o \subset S^o$ denote the opposite ring
extension of $B \subset S$, and identify $S^o$ with
$\rend{A^o}{(\Sigma^*)^o}$, where the notation $X^o$, for any left
$A$--module $X$, means the opposite right $A^o$--module. Put
${}_{B^o} W_{A^o}= ({}_A \Sigma^*_B)^o$ the opposite bimodule, and
consider its right dual $W^*$, with respect to $A^o$, i.e. $W^*
=\mathrm{Hom}(W_{A^o},A^o_{A^o})$. Obviously $W_{A^o}$ is a finitely
generated and projective module, and we can consider its
associated comatrix $A^o$--coring $\rcomatrix{B^o}{W}$. By
Remark \ref{lcomcor}, there is an $A$--coring isomorphism
\[
(W^* \tensor{B^o} W)^o \cong \rcomatrix{B}{\Sigma}, \qquad \left(
(w^* \tensor{B^o} w)^o \mapsto \sum_i w \tensor{B} e_i
w^*((e_i^*)^o)^o \right),
\]
where $(W^* \tensor{B^o} W)^o$ is the opposite $A$--coring of the
$A^o$--coring $W^* \tensor{B^o} W$. Therefore, we have an
isomorphism of monoids $\End{A^o}{\rcomatrix{B^o}{W}} \cong
\End{A}{\rcomatrix{B}{\Sigma}}$. Finally, using this last
isomorphism together with the equality $\mathbf{I}_B^r(S) =
\mathbf{I}^l_{B^o}(S^o)$, we can identify the $\Gamma^r$-map of
equation \eqref{Gama'} with the $\Gamma^l$-map \eqref{Gama}
associated to the new data: $A^o$, $B^o \subset S^o$, and
${}_{B^o}W_{A^o}$. Hence, Theorem \ref{resultado-0} yields
\begin{theorem}\label{resultado-0'}
Let ${}_B \Sigma_A$ be a bimodule such that ${}_B\Sigma$ is
faithful and $\Sigma_A$ is finitely generated and projective.
Consider $\coring{C}=\rcomatrix{B}{\Sigma}$ its associated
comatrix $A$--coring. If either
\begin{enumerate}[(a)]
\item ${}_B\Sigma$ is a faithfully flat module, or \item
${}_B\Sigma_A$ is a separable bimodule,
\end{enumerate}
then $\digamma^r: \End{A}{\rcomatrix{B}{\Sigma}} \rightarrow
\mathbf{I}_B^r(S)$ is an anti-isomorphism of monoids with inverse map
$$\xymatrix@R=0pt{
\Gamma^r: \mathbf{I}_B^r(S) \ar@{->}[r] & \End{A}{\rcomatrix{B}{\Sigma}} \\ I
\ar@{|->}[r] & [u^* \tensor{B} u \mapsto \sum_k u^* t_k
\tensfun{B} s_k u], }$$ where $\mathbf{m}^{-1}(1) = \sum_k t_k
\tensor{B} s_k \in I \tensor{B} S$.
\end{theorem}
The isomorphism $\Gamma^l$ given in \eqref{Gama} gives, by
restriction, an isomorphism of groups $\Gamma : \mathbf{Inv}_B(S) \rightarrow
\Aut{A}{\rcomatrix{B}{\Sigma}}$. Analogously, the anti-isomorphism
$\Gamma^r$ defined in \eqref{Gama'}, gives, by restriction, an
anti-isomorphism of groups $\Gamma' : \mathbf{Inv}_B(S) \rightarrow
\Aut{A}{\rcomatrix{B}{\Sigma}}$. Moreover, when both $\Gamma^r$
and $\Gamma^l$ are bijective, Lemma \ref{cos-fil}.(iii) says that
$\Gamma = (-)^{-1} \circ \Gamma'$, where $(-)^{-1}$ denotes the
inversion map in the group of automorphisms. We can thus say that,
either in the hypotheses of Theorem \ref{resultado-0} or in the
hypotheses of Theorem \ref{resultado-0'}, we have an isomorphism
of groups $\Gamma : \mathbf{Inv}_B(S) \rightarrow
\Aut{A}{\rcomatrix{B}{\Sigma}}$ defined either as $\Gamma^l$ or as
$(-)^{-1} \circ \Gamma^r$, respectively. We can then state our
main theorem as follows.
\begin{theorem}\label{resultado-1}
Let ${}_B \Sigma_A$ be a bimodule such that ${}_B\Sigma$ is
faithful and $\Sigma_A$ is finitely generated and projective.
Consider $\coring{C}=\rcomatrix{B}{\Sigma}$ its associated
comatrix $A$--coring. If either
\begin{enumerate}[(a)]
\item ${}_B\Sigma$ or $\Sigma^*_B$ is a faithfully flat module, or
\item ${}_B\Sigma_A$ is a separable bimodule,
\end{enumerate}
then there is an isomorphism of groups $\Gamma: \mathbf{Inv}_B(S) \rightarrow
\Aut{A}{\rcomatrix{B}{\Sigma}}$.
\end{theorem}
To finish, we want to compare Masuoka's maps \cite[Theorem
2.2(2.3)]{Masuoka:1989} with our $\digamma$--maps, using the
adjunction of Section \ref{Sect1}.
\begin{proposition}\label{(b)1}
Let ${}_B \Sigma_A$ be a bimodule such that ${}_B\Sigma$ is
faithful and $\Sigma_A$ is finitely generated and projective. Let
$S=\rend{A}{\Sigma}$ be its ring of right $A$--linear endomorphisms. Then
\begin{enumerate}[(1)]
\item the map
\[
\xymatrix@R=0pt{\widehat{(-)}: \End{A}{\rcomatrix{B}{\Sigma}}
\ar@{->}[r] & \End{S}{S \tensor{B} S} \\ g \ar@{|->}[r] &
\widehat{g}= (\xi \tensor{B} \xi) \circ (\Sigma \tensor{A} g
\tensor{A}\Sigma^*) \circ (\xi^{-1} \tensor{B} \xi^{-1}) }
\]
is an injective homomorphism of monoids which makes the following
diagram commute
\[
\xymatrix{ \mathbf{I}_B^l(S) \ar[d]_{\overline{\Gamma}^l} \ar[r]^-{\Gamma^l} &
\End{A}{\rcomatrix{B}{\Sigma}} \ar[dl]^-{\widehat{(-)}}
\\
\End{S}{S \tensfun{B} S} & }
\]
where $\overline{\Gamma}^l$ is the Gamma map associated to the
bimodule ${}_BS_S$ and the comatrix $S$--coring $S\tensor{B}S$
(see \cite[(2.1)]{Masuoka:1989}); \item for every $g \in
\End{A}{\rcomatrix{B}{\Sigma}}$, we have
\[
\hom{\rcomatrix{B}{\Sigma}}{\Sigma_g}{\Sigma} \,=\,
\hom{S\tensor{B}S}{S_{\widehat{g}}}{S} \, = \, \{s \in S |\,\,
\widehat{g}(s\tensor{B}1)=1 \tensor{B}s \}
\]
\end{enumerate}
\end{proposition}
\begin{proof}
(1) We show only that $\widehat{(-)}$ is a well-defined map; the
compatibilities with the multiplication and unit are easy
computations. Let $g \in \End{A}{\rcomatrix{B}{\Sigma}}$; by
definition $\widehat{g}$ is an $S$--bilinear map and preserves
the counit. Denote by $\Delta'$ the comultiplication of $S
\tensor{B} S$, i.e. $\Delta': S\tensor{B}S \rightarrow
S\tensor{B}S\tensor{B}S$ sending $s\tensor{B}s' \mapsto s
\tensor{B}1\tensor{B}s'$, $s, s'\in S$. Then $\widehat{g}$ is
coassociative if and only if
\begin{equation}\label{coass}
\Delta' \circ \widehat{g} = (\widehat{g} \tensfun{B} S) \circ (S
\tensfun{B} \widehat{g}) \circ \Delta'.
\end{equation}
Now, direct computations give the following equations
\begin{eqnarray*}
( \widehat{g} \tensor{B} S) \circ (S \tensor{B} \widehat{g}) &=&
(\xi \tensor{B} \xi \tensor{B} \xi) \circ ( \Sigma \tensor{A} g
\tensor{A} g \tensor{A} \Sigma^*) \circ (\xi^{-1} \tensor{B}
\xi^{-1} \tensor{B} \xi^{-1}),
\end{eqnarray*}
\begin{eqnarray*}
(\Sigma \tensor{A} \Delta \tensor{A} \Sigma^* ) \circ (\xi^{-1}
\tensor{B} \xi^{-1}) &=& (\xi^{-1} \tensor{B} \xi^{-1} \tensor{B}
\xi^{-1}) \circ \Delta, \\ & & \\
\Delta' \circ (\xi \tensor{B} \xi) &=& (\xi \tensor{B} \xi \tensor{B} \xi) \circ (\Sigma \tensor{A}
\Delta \tensor{A} \Sigma^*),
\end{eqnarray*}
which in conjunction with the coassociativity of $g$ imply the
equality of equation \eqref{coass}.
\\ (2) The second stated equality is a direct consequence of
the identification of the $B$--bimodule
$\hom{S\tensor{B}S}{S_{\widehat{g}}}{S}$ with a $B$--sub-bimodule
of $S$. Now, observe that the canonical right $A$--linear and
right $S$--linear isomorphisms $S_{\widehat{g}} \tensor{S} \Sigma
\cong \Sigma_g$ and $S \cong \Sigma \tensor{A} \Sigma^*$ are,
respectively, right $\rcomatrix{B}{\Sigma}$--colinear map and
right $S \tensor{B} S$--colinear map, with respect to the
coactions defined in equations \eqref{tensig} and
\eqref{tensiges}. Whence,
\[
\hom{\rcomatrix{B}{\Sigma}}{\Sigma_g}{\Sigma} \cong
\hom{\rcomatrix{B}{\Sigma}}{S_{\widehat{g}}\tensor{S} \Sigma
}{\Sigma} \cong \hom{S \tensor{B} S}{S_{\widehat{g}}}{\Sigma
\tensor{A} \Sigma^*} \cong \hom{S \tensor{B}
S}{S_{\widehat{g}}}{S},
\]
where the second isomorphism is given by Proposition
\ref{secondadj}. The desired first equality is now obtained using
the inclusion $\hom{\rcomatrix{B}{\Sigma}}{\Sigma_g}{\Sigma}
\subseteq \hom{S \tensor{B} S}{S_{\widehat{g}}}{S} \subset S$
which we show as follows. An element $s \in S$ belongs to
$\hom{\rcomatrix{B}{\Sigma}}{\Sigma_g}{\Sigma}$ if and only if
\begin{eqnarray*}
\sum_i e_i \tensor{A} e_i^* \tensor{B} su &=& \sum_i se_i
\tensor{A} g(e_i^* \tensor{B} u), \quad \forall u \in \Sigma.
\end{eqnarray*} This implies \begin{eqnarray*}
\sum_{i,j} e_i \tensor{A} e_i^* \tensor{B} se_j \tensor{A} e_j^* &=& \sum_{i,j} se_i
\tensor{A} g(e_i^* \tensor{B} e_j)\tensor{A}e_j^*
\end{eqnarray*} Using the isomorphism $\xi$ of equation
\eqref{xi} and the definition of the map $\widehat{(-)}$, we
obtain that $s \in \hom{\rcomatrix{B}{\Sigma}}{\Sigma_g}{\Sigma}$
implies $1\tensor{B}s = \widehat{g}(s\tensor{B}1)$.
\end{proof}
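To illustrate the maps above in the simplest case (an added sketch, not in the original): take $A = S$ and $\Sigma = S$, regarded as a $(B,S)$--bimodule, under the identification $\Sigma^* \cong S$. Then $\rcomatrix{B}{\Sigma}$ is Sweedler's canonical coring $S \tensor{B} S$ and, writing $\mathbf{m}^{-1}(1) = \sum_k t_k \tensor{B} s_k \in I \tensor{B} S$ as in \eqref{Gama'}, the map $\Gamma^r$ should read

```latex
% Sketch: the Sweedler-coring special case (Sigma = S, A = S), a hypothetical
% simplification of the formula in \eqref{Gama'}.
\[
\Gamma^r(I) : S \tensor{B} S \longrightarrow S \tensor{B} S, \qquad
s \tensor{B} s' \longmapsto \sum_k s\,t_k \tensor{B} s_k\,s' ,
\]
```

which is the $S \tensor{B} S$ analogue of the map $\overline{\Gamma}^l$ appearing in Proposition \ref{(b)1}.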
\section{Conclusions and Future Work }
We have presented a novel model that exploits popularity time series (trends) and linear regression to predict user engagement on web content. Three variations of the model were presented --- Mixed, Mixed-Trend KSC and Mixed-Trend K-Means --- together with a data characterization that motivates their design. Our results show that our best model, the Mixed-Trend K-Means, provides gains in prediction accuracy ranging from 15\% to 27\% when compared with state of the art approaches\footnote{We note that all of our source code is available at: \url{http://github.com/flaviovdf/ecmlpkdd-analytics-challenge-2014}.}.
Future work on popularity prediction includes addressing the tradeoff between accurate predictions and early predictions (i.e., how early a model can accurately predict the popularity of a piece of content) as well as outlier detection (i.e., predicting that a content that has attracted little interest will suddenly burst in popularity).
\section{Introduction}
With the ever-growing production of online content, characterizing and predicting user engagement (e.g., number of visits or social engagement such as Facebook likes) on web content may have multiple beneficial values such as: (1) understanding the human dynamics of information consumption; (2) supporting the decisions of content producers and providers on different tasks (e.g., marketing and content filtering); and (3) understanding the physical processes that govern the growth of viewership on the Web. Several previous studies~\cite{Castillo2013,Figueiredo2011,Cha,Szabo2010} have characterized some of the factors that cause the popularity growth of different kinds of web content. Complementarily, various others~\cite{Ahmed2013,Pinto2013,Szabo2010,Nikolov2012} have focused on the task of popularity prediction. We focus here on the latter task, aiming at predicting the popularity of a piece of content.
Popularity prediction is a difficult and important task since popularity mostly translates into income and profits for content providers, creators and consumers alike. For example, more visitors to a web page may lead to more ad-clicks and sales. Moreover, content provisioning to a large number of users may require decisions such as geographical sharding of content to servers (due to the increased traffic). If such planning is not performed correctly, longer latencies and loading times, and consequently fewer users, may be expected. Finally, accurate and early predictions can lead to better services to the end consumer, such as search engine rankings~\cite{Radinsky2012}.
We here present a simple, and yet effective, model for predicting the popularity of online content. More specifically, we present the winning model of two of the three tasks of the ECML/PKDD 2014 Predictive Analytics Challenge. In the challenge, different features related to the popularity of 30,000 web pages from 100 different hosts were provided. The goal of the challenge is to predict the popularity of 30,000 other pages from the same 100 hosts 48 hours after their upload. The features provided for the task were measured in the first hour after upload for each page.
Our model exploits the temporal features related to web pages (e.g., past visits and social engagement), as well as typical popularity (i.e., number of visits) time series trends which exist in the dataset. Such trends are extracted via unsupervised learning methods. Specifically, the model combines the temporal features with features that capture the distances between the popularity time series for each web page and the extracted trends. We present a data characterization that motivates the design of our solution, and show the gains in prediction accuracy (ranging from 15\% to 27\%) when it is compared to state-of-the-art alternatives.
The rest of this paper is organized as follows. We formally describe the prediction problem and present the state of the art baseline methods in Section 2. In Section 3 we introduce our proposed solution, whereas our experiments and results are presented in Section 4. Finally, Section 5 concludes the paper.
\section{Background}
We start this section by defining the content popularity prediction problem (Section~\ref{subsec:ref}). In this definition, as throughout the rest of the paper, we refer to a particular piece of content as a web page\footnote{We here focus on web page popularity prediction, given the goal of the ECML/PKDD Challenge. However, our models are general and can be applied to other types of online content.}. Next, we discuss existing state of the art solutions used as baselines in our experimental study (Section~\ref{subsec:base}).
\subsection{Problem Definition} \label{subsec:ref}
The popularity prediction problem we tackle can be stated as follows. Let $\mathcal{H}$ be a set of web hosts (e.g., \texttt{blogger.com}), where a single host $h \in \mathcal{H}$ comprises a set of pages, $\mathcal{P}$ be the set of all pages, where $p \in \mathcal{P}$ is a single page, and $\mathcal{P}_{h}$ be the set of all pages from host $h$.
Moreover, let $\mathcal{F}$ be a set of features associated with each page $p \in \mathcal{P}$, where each feature value is computed up to a certain \emph{reference time} $tr$ (e.g., $tr=$ 1 hour). Thus, using the set of features ($\mathcal{F}$) and the set of pages ($\mathcal{P}$), a matrix $\bm{X}_{tr}$ with $|\mathcal{P}|$ rows and $|\mathcal{F}|$ columns is defined for the values of features measured up to the reference time. Moreover, a row $\bm{x}_{p,tr}$ of the matrix $\bm{X}_{tr}$ defines the measurements for the given page\footnote{For simplicity, we shall identify rows using $p$.}. Using the measurements $\bm{X}_{tr}$, our goal is to predict the user engagement on each page up to a \emph{target time} $tt$ (where $tt > tr$).
We here focus on the following metrics of user engagement, referred to as the response variables: number of visits $v_{p,tt}$, Facebook likes $f_{p,tt}$, and Twitter mentions $m_{p,tt}$. All of them are cumulative measures, computed from the page's upload up until time $tt$. We can then define vectors with $|\mathcal{P}|$ entries for each response variable (e.g., $\bm{v}_{tt}$), or in more general terms, we can define a matrix $\bm{Y}_{tt}$ with three columns, one for each response variable: $\bm{Y}_{tt} = [ \bm{v}_{tt}, \bm{f}_{tt}, \bm{m}_{tt} ]$.
With these definitions, the prediction task can be stated as a supervised machine learning task. Given a set of web pages for which both $\bm{X}_{tr}$ and $\bm{Y}_{tt}$ are available (the training set), our goal is to learn a function that maps $f(\bm{X}_{tr}) \rightarrow \bm{Y}_{tt}$. Ideally, such a function will generalize well for new pages not used in the training set. This function is usually defined as the {\it model}. The baseline methods, presented next, as well as our approach, introduced in Section 3, explore linear regression to learn the model.
Moreover, unless otherwise noted, we use a fixed $tr=1h$ and $tt=48h$ from now on, since these are the reference and target times defined in the Predictive Analytics Challenge.
\subsection{Baseline Methods} \label{subsec:base}
One of the simplest prediction models, the Szabo-Huberman (SH) model \cite{Szabo2010}, defines one single feature for each page\footnote{The model was originally proposed for YouTube videos and Digg news.}, which is the number of visits measured up to the reference time $tr$. Using $tr$ = 1 hour, the SH model represents a single page as
$ \bm{x}_{p,1h} = <v_{p,1h}>$.
The SH model thus makes use of the following linear relation to provide predictions:
$$ \log (1 + \bm{v}_{tt}) = \log (1 + \bm{X}_{tr}) \theta. $$
Using linear regression, the parameter vector $\bm{\theta}$ (with a single entry $\theta$ in this case) is found by minimizing:
$$ \min_{\bm{\theta}} || \log(1 + \bm{X}_{tr}) \bm{\theta} - \log(1 + \bm{v}_{tt}) ||^{2}_{2}, $$
\noindent where $|| \cdot ||^{2}_{2}$ is the squared $l_2$-norm. The log transform is motivated by the linear correlations between $\log(1 + \bm{v}_{tr})$ and $\log(1 + \bm{v}_{tt})$ unveiled by the authors. This objective minimizes the sum of squared errors on the log-transformed data. We shall make use of the same objective since it is the one defined in the Predictive Analytics Challenge. However, we do note that, in order to provide predictions in non-log-transformed values, the authors suggest replacing the linear regression objective with one based on the relative error, that is:
$$ \min_{\bm{\theta}} || (\bm{X}_{tr} \bm{\theta} - \bm{v}_{tt}) \circ \bm{v}^{-1}_{tt} ||^{2}_{2}. $$
\noindent where the inverse of a vector is defined as the cell-wise inverse, while $\circ$ is the cell-wise product (e.g., $\bm{x} \circ \bm{y} = <x_1y_1, ..., x_ny_n>$).
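To make the fit concrete, the sketch below (our own illustration on synthetic data, not the challenge code) solves the one-parameter log-space least-squares problem in closed form and maps predictions back to visit counts:

```python
import numpy as np

def fit_sh(v_tr, v_tt):
    """Fit the single-parameter SH model on log1p-transformed visits."""
    x = np.log1p(v_tr)
    y = np.log1p(v_tt)
    # closed-form least-squares slope through the origin
    return float((x @ y) / (x @ x))

def predict_sh(v_tr, theta):
    # undo the log1p transform to report predictions as visit counts
    return np.expm1(theta * np.log1p(v_tr))

# synthetic pages whose 48h popularity grows polynomially with early visits
rng = np.random.default_rng(0)
v_tr = rng.integers(1, 1000, size=200).astype(float)
v_tt = v_tr ** 1.2
theta = fit_sh(v_tr, v_tt)
```

Since the slope is learned in log space, predictions must be passed through the inverse transform to return to counts.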
Pinto {\it et al.}~\cite{Pinto2013} extended the SH model by incorporating the whole history of the number of visits into the vector $\bm{x}_{p,tr}$; we refer to this extension as the ML model. Using 5-minute time windows, the vector is defined as:
$$\bm{x}_{p,tr} = <v_{p,5min}, v_{p,10min},v_{p,15min}, \cdots, v_{p,55min}, v_{p,1h}>.$$
Defining $\bm{v}_{p,tr}$ as the vector of visits measured in fixed-length time windows (e.g., 5 minutes)\footnote{The model presented by Pinto {\it et al.}~\cite{Pinto2013} defines the amount of visits on each time window ($v_i$) not as cumulative (total views up to the window) as we do here, but actually as the amount of views gained in that window ($v_i - v_{i-1}$ in our notation). We found that using cumulative values leads to better results in terms of root mean squared error, thus we maintain our definition.}, the model above can be re-written as: $\bm{x}_{p,tr} = \bm{v}_{p,tr}.$
The same authors proposed a second model, called the RBF model, which extends the set of features of each page by adding {\it distance} features. Such distance features, measured using Radial Basis Functions\footnote{$RBF(\bm{x}, \bm{y}) = e^{-||\frac{\bm{x}-\bm{y}}{\gamma}||^2_2}$, where $\gamma$ is an input parameter.}, are computed between the vector $\bm{v}_{p,tr}$ and a fixed number $C$ of vectors for other pages, randomly selected from the training set. To avoid over-fitting, the authors suggest using ridge regression on the RBF model. Both the ML and RBF models were originally evaluated in terms of the relative errors, and not in terms of the log-based regression as we do here.
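A possible construction of the RBF-extended feature matrix is sketched below; the variable names, the value of \texttt{gamma}, and the exact kernel parameterization $e^{-d^2/\gamma}$ are our assumptions for illustration, not details taken from \cite{Pinto2013}:

```python
import numpy as np

def rbf_features(V, centers, gamma=1.0):
    """Append RBF similarities between each page's visit series and C series
    sampled from the training set (one extra column per sampled series)."""
    # squared euclidean distances, shape (n_pages, C)
    d2 = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.hstack([V, np.exp(-d2 / gamma)])

rng = np.random.default_rng(1)
V = rng.random((50, 12))                              # 12 five-minute windows
centers = V[rng.choice(50, size=10, replace=False)]   # C = 10 random pages
X = rbf_features(V, centers, gamma=0.5)
```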
Our last baseline is the model proposed by Castillo {\it et al.}~\cite{Castillo2013}. In a very similar approach to the SH model, the authors also made use of a linear regression on log scales. However, instead of using one visit feature, the authors also explored social engagement features. Thus, a possible representation for a web page is: $$\bm{x}_{p,1h} = <v_{p,1h}, f_{p,1h}, m_{p,1h}>. $$
In addition to these features, the authors also added other features, such as the entropy of tweets related to the web page. Since such features are unavailable in our dataset, we leave them out of the definition of the model. Finally, to mitigate issues of {\it multi-collinearity}, that is, correlation between predictors in the model, the authors suggest representing each page as:
$$\bm{x}_{p,1h} = <v_{p,1h}^2, f_{p,1h}^2, m_{p,1h}^2, v_{p,1h}f_{p,1h}, v_{p,1h}m_{p,1h}, f_{p,1h}m_{p,1h}>.$$
\noindent Since this model was initially proposed for news websites, we shall simply refer to it as the News model.
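The squared and cross-product representation above can be assembled directly; a minimal sketch with made-up engagement counts:

```python
import numpy as np

def news_features(v, f, m):
    """Squared and pairwise-product terms used to curb multi-collinearity."""
    cols = [v * v, f * f, m * m, v * f, v * m, f * m]
    return np.column_stack(cols)

v = np.array([10.0, 100.0])   # visits in the first hour (toy values)
f = np.array([2.0, 5.0])      # Facebook likes
m = np.array([1.0, 3.0])      # Twitter mentions
X = news_features(v, f, m)
```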
\section{Our Approach} \label{subsec:meth}
Our approach combines the ideas described in the previous section with new features not explored by previous work. Moreover, as a novelty aspect, we make use of trend features extracted via clustering of visit time series. We first describe the features we explore without considering these popularity trends (Section \ref{subsec:1}). Later, we discuss how we extract popularity trends and extend our model to include the distances between the popularity curve already observed of the page that is target of prediction and the previously identified trends (Section \ref{subsec:2}).
\subsection{Mixed Model}
\label{subsec:1}
We borrow some of the ideas of the baselines by exploring the following temporal features for each page: (1) the time series of the number of visits to a page (one observation per 5-minute time window) - $\bm{v}_{p,tr}$; (2) two time series of user engagement which measure the number of Facebook likes - $\bm{f}_{p,tr}$, and the number of Twitter mentions - $\bm{m}_{p,tr}$; (3) a time series of the average time each user spends on the page - $\bm{a}_{p,tr}$; (4) the weekday (e.g., Monday to Sunday) and hour (e.g., 0 to 23) the page was created - $d_p$ and $c_p$. Moreover, we explore a single non-temporal feature which is the host to which each page belongs - $h_p$.
We encode the weekday and hour the page was created, as well as its host in a binarized manner. That is, each value is represented by a sparse vector, where one cell, representing the given weekday (hour or host) has a value of one, and all other cells are zeroes. For example, a page uploaded on a Tuesday is represented as $<0, 1, 0, 0, 0, 0, 0>$. Thus, we represent the weekday in which a page was uploaded as a vector $\bm{d}_p$, the hour as $\bm{c}_p$, and the host as $\bm{h}_p$. In this sense, each host, day of the week, and hour of the day become an \emph{indicator variable}.
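This binarization is standard one-hot encoding; a minimal sketch (the helper \texttt{one\_hot} is a hypothetical name):

```python
import numpy as np

def one_hot(value, categories):
    """Binarize a categorical value into a sparse indicator vector."""
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
d_p = one_hot("Tue", WEEKDAYS)   # page uploaded on a Tuesday
```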
With these features, one possible manner of representing each page is:
$$ \bm{x}_{p,1h} = <\bm{v}_{p,tr}, \bm{f}_{p,tr}, \bm{m}_{p,tr}, \bm{a}_{p,tr}, \bm{d}_{p,tr}, \bm{c}_{p,tr}, \bm{h}_{p,tr}>.$$
\noindent However, to mitigate multi-collinearity issues and to capture the behavior of hosts with non-linear popularity growth (discussed in the next section), we represent each page as:
\begin{align*}
\bm{x}&_{p,1h} = <\bm{v}_{p,tr}, \bm{f}_{p,tr}, \bm{m}_{p,tr}, \bm{a}_{p,tr}, \\
&\bm{v}_{p,tr} \circ \bm{v}_{p,tr},
\bm{f}_{p,tr} \circ \bm{f}_{p,tr},
\bm{m}_{p,tr} \circ \bm{m}_{p,tr},
\bm{a}_{p,tr} \circ \bm{a}_{p,tr}, \\
&\bm{v}_{p,tr} \circ \bm{f}_{p,tr},
\bm{v}_{p,tr} \circ \bm{m}_{p,tr},
\bm{v}_{p,tr} \circ \bm{a}_{p,tr},
\bm{f}_{p,tr} \circ \bm{m}_{p,tr}, \\
&\bm{f}_{p,tr} \circ \bm{a}_{p,tr},
\bm{m}_{p,tr} \circ \bm{a}_{p,tr}, \\
&\bm{v}_{p,tr} \circ \bm{v}_{p,tr} \circ \bm{v}_{p,tr}, \\
&\bm{f}_{p,tr} \circ \bm{f}_{p,tr} \circ \bm{f}_{p,tr}, \\
&\bm{m}_{p,tr} \circ \bm{m}_{p,tr} \circ \bm{m}_{p,tr}, \\
&\bm{a}_{p,tr} \circ \bm{a}_{p,tr} \circ \bm{a}_{p,tr},
\bm{d}_{p,tr}, \bm{c}_{p,tr}, \bm{h}_{p,tr}>.
\end{align*}
We refer to this model as the Mixed model. The $\circ$ multiplications capture the same intuition as that of squaring the sum $(v_i + f_i + m_i)^2$ for each time window. Moreover, we also add the cubic terms (e.g., $v^3_i$) for the number of visits, Facebook likes, Twitter mentions and active time. To learn the model parameters we solve a linear regression task for each response variable.
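A sketch of how such a feature vector could be assembled (illustrative names and toy indicator vectors; the real series come from the challenge data):

```python
import numpy as np

def mixed_features(v, f, m, a, d, c, h):
    """Concatenate linear, squared, pairwise-product and cubic temporal terms
    with the binarized weekday (d), hour (c) and host (h) indicators."""
    blocks = [
        v, f, m, a,                                   # linear terms
        v * v, f * f, m * m, a * a,                   # squares
        v * f, v * m, v * a, f * m, f * a, m * a,     # pairwise products
        v * v * v, f * f * f, m * m * m, a * a * a,   # cubes
        d, c, h,                                      # indicator variables
    ]
    return np.concatenate(blocks)

n = 12                              # 5-minute windows in the first hour
rng = np.random.default_rng(2)
v, f, m, a = rng.random((4, n))     # toy visit/likes/mentions/active-time series
d, c, h = np.zeros(7), np.zeros(24), np.zeros(100)
d[1] = c[13] = h[42] = 1.0          # e.g., Tuesday, 13h, host number 42
x = mixed_features(v, f, m, a, d, c, h)
```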
\subsection{Mixed-Trend Model}
\label{subsec:2}
In order to capture the trend of each time series, we incorporate to the Mixed model features that capture the distance of the popularity curve of the target page measured during the reference time $tr$ to given trends, which were previously identified using an unsupervised learning method. Specifically, we experiment with K-Means clustering~\cite{Hastie2009} and KSC clustering~\cite{Yang2011} to extract such trends from the training set. For each response variable, we define a matrix $\bm{T}_{tr}$, where each row is the time series of the response for a given page:
$$ \bm{t}_{p,tr} = <\delta_{5min}, \delta_{10min}, \cdots, \delta_{55min}, \delta_{1h}>. $$
\noindent With the reference time of 1 hour, and a window length equal to 5 minutes, this matrix will have $|\mathcal{P}|$ rows and 12 columns. Each entry of the matrix, $\delta_i$, represents the {\em number of visits} gained in that time window, i.e., $\delta_i = v_i - v_{i-1}$.
We note that using this matrix to extract trends is a common approach in the literature~\cite{Nikolov2012,Yang2011}.
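Converting a cumulative visit series into the per-window gains $\delta_i$ is immediate; a minimal sketch:

```python
import numpy as np

def to_deltas(cumulative):
    """Per-window gains from a cumulative series: delta_i = v_i - v_{i-1}."""
    cumulative = np.asarray(cumulative, dtype=float)
    return np.diff(cumulative, prepend=0.0)

t = to_deltas([3, 7, 7, 12])   # cumulative visits per 5-minute window
```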
The time series trends can be considered as the most common {\it shapes} of the different vectors $\bm{t}_{p,tr}$. Different techniques will extract shapes in different manners from a given training set. For example, the K-Means algorithm will group time series into $k$ clusters according to the euclidean distance:
$$dist_{km}(\bm{t}, \bm{o}) = || \bm{t} - \bm{o} ||^{2}_2. $$
In contrast, the KSC algorithm groups times series based on a distance metric that is invariant of scale in the popularity axis and shifts in the time axis \cite{Yang2011}. That is, two pages that have their popularities evolving according to similar processes (e.g., linear growth) will be assigned to the same cluster by KSC, regardless of the popularity values. Also, two pages that have stable popularity over time except for a peak in a single window will also be clustered together, regardless of the time when the peak occurred and the peak value. KSC is mostly a direct translation of the K-Means algorithm, except for the distance metric used, which is defined as:
$$dist_{ksc}(\bm{t}, \bm{o}) = \displaystyle\min_{\alpha, q} \quad \frac{||\bm{t} - \alpha \bm{o}(q)||_2}{||\bm{t}||_2}.$$
\noindent where $\bm{o}(q)$ is the operation of shifting vector $\bm{o}$ by $q$ units. For a fixed $q$, the exact solution for $\alpha$, obtained by computing the minimum of $dist_{ksc}$, is: $\alpha = \frac{\bm{t}'\bm{o}(q)}{||\bm{o}(q)||^2_2}.$ The optimal value of $q$ is found by considering all integers in the range given by the size of the time series vectors (e.g., $(-12,12)$).
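A sketch of the KSC distance with the closed-form least-squares $\alpha$; we use circular shifts via \texttt{np.roll} for brevity, which is one possible treatment of the boundary:

```python
import numpy as np

def ksc_dist(t, o):
    """KSC-style distance: invariant to scaling of o and to integer time shifts.

    For each shift q, the least-squares minimizer of ||t - alpha*o(q)|| is
    alpha = <t, o(q)> / ||o(q)||^2.
    """
    t = np.asarray(t, dtype=float)
    best = np.inf
    n = len(t)
    for q in range(-(n - 1), n):
        oq = np.roll(o, q)            # circular shift of o by q windows
        denom = oq @ oq
        if denom == 0:
            continue
        alpha = (t @ oq) / denom
        d = np.linalg.norm(t - alpha * oq) / np.linalg.norm(t)
        best = min(best, d)
    return best

a = np.array([0.0, 1.0, 5.0, 1.0])
b = 3.0 * np.roll(a, 1)               # scaled and shifted copy of a
```

As expected, a scaled and shifted copy of a series has (numerically) zero KSC distance to the original.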
It is important to note that, unlike KSC, K-Means is not scale invariant. Thus, in order to make the method invariant in terms of popularity, we apply the following transforms. First, we apply a $\log(1 + \bm{T}_{tr})$ transform to the time series matrices. Second, we z-normalize (zero-mean, unit-variance normalization) each log-transformed time series vector. While this approach provides popularity invariance, since time series will have values in the same range, it does not provide the time-shift invariance that KSC does. We also note that both K-Means and KSC receive $k$, the target number of clusters, as input.
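The two-step normalization used before K-Means can be sketched as follows (our illustration):

```python
import numpy as np

def normalize_for_kmeans(T):
    """log1p then z-normalize each time series row, prior to K-Means."""
    L = np.log1p(T)
    mu = L.mean(axis=1, keepdims=True)
    sd = L.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0                 # guard flat series against divide-by-zero
    return (L - mu) / sd

T = np.array([[1.0, 10.0, 100.0],
              [2.0, 20.0, 200.0]])    # same shape, different popularity scales
Z = normalize_for_kmeans(T)
```

After the transform every row has zero mean and unit variance, so series with the same shape but different scales become directly comparable.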
Given a new page $p$ for which a prediction is to be made, we can compute the distances between its popularity time series during the reference time $tr$ and each previously identified trend by simply computing the distances from $\bm{t}_{p,tr}$ to each cluster center (considering a fixed time window equal to $tr$), after clustering in the training set is done using either K-Means or KSC. Thus,
for each clustering method we can define a vector $\bm{s}_{p,tr}$ which includes the distances to the extracted trends. The Mixed-Trend model is thus the incorporation of these distance vectors into the Mixed model.
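Once cluster centers (trends) are available, computing the distance feature vector $\bm{s}_{p,tr}$ for a page reduces to a few lines; an illustrative sketch using euclidean distances, as in the K-Means variant:

```python
import numpy as np

def trend_distance_features(t, centers):
    """Distances from one page's delta series to each extracted trend center."""
    t = np.asarray(t, dtype=float)
    return np.array([np.linalg.norm(t - c) for c in centers])

centers = np.array([[1.0, 1.0, 1.0],    # k = 2 toy trend centroids
                    [0.0, 0.0, 3.0]])
s = trend_distance_features([0.0, 0.0, 3.0], centers)
```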
\section*{Acknowledgment}
This research is partially funded by a Google Brazil Focused Research Grant, the Brazilian National Institute of Science and Technology for Web Research (InWeb), CNPq, CAPES and Fapemig.
\bibliographystyle{abbrv}
{\small
\section{Experimental Evaluation}
We now discuss our experimental evaluation. We start by discussing how we trained (i.e., parameterized) the models (Section \ref{subsec:cv}). Next, we provide some intuition on why our model works based on characteristics of the dataset (Section~\ref{sec:understanding}). We then compare both the Mixed and Mixed-Trend models and the baseline models (Section~\ref{sec:comparison}).
The results discussed in this section are computed on the training set of the Predictive Analytics Challenge dataset, which consists of 30,000 web pages from 100 different hosts, each host with exactly 300 pages. We did not make use of the test set since the response variables $\mathbf{Y}_{tt}$ are not publicly available on the test set. Instead, we evaluate our models by employing Generalized Cross Validation, as described below.
\subsection{Model Parameterization} \label{subsec:cv}
For the SH, ML, News and Mixed models, model parameters are learnt by the regression method, i.e., by minimizing the sum of squared errors on the log transformed data. However, for the RBF model, the parameter $\gamma$ (used by the RBF function), the regularization parameter of the ridge regression as well as the number $C$ of pages selected to build Radial Basis Functions must be determined. Similarly, the number of clusters $k$ must be given as input for the Mixed-Trend model.
Ideally, a temporal split of training and test sets would be performed to determine these parameters. However, given that the upload date of each page is not provided in the Predictive Analytics dataset, we decided to employ Generalized Cross Validation (GCV)~\cite{Hastie2009} to define the best parameter values. GCV closely approximates leave-one-out cross validation (LOOCV). In LOOCV, one page at a time is used to evaluate a model trained on the remaining pages. Thus, for each page, we compute the squared error between the predicted and real values. GCV computes the same squared error for each page without the need to manually split the dataset into train and test sets. Specifically, only one model is trained for the whole dataset, and the GCV measure computes the LOOCV error for every page~\footnote{The following website provides a good summary of GCV \url{http://robjhyndman.com/researchtips/crossvalidation/}}. When comparing different model parameters, we measure the root mean squared error (RMSE) between the predicted and actual value for each page. The parameters with lowest RMSE are chosen.
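For ridge regression, the GCV error has a closed form through the hat matrix; a sketch of a parameter search in this style (our illustration on synthetic data; the formula follows standard treatments such as \cite{Hastie2009}):

```python
import numpy as np

def gcv_rmse(X, y, lam=1.0):
    """Closed-form GCV error for ridge regression, with no explicit data split.

    H = X (X'X + lam*I)^{-1} X' is the hat matrix; GCV replaces the per-page
    leverage H_ii in the LOOCV residual by the average trace(H)/n.
    """
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    gcv_resid = resid / (1.0 - np.trace(H) / n)
    return float(np.sqrt(np.mean(gcv_resid ** 2)))

# synthetic, nearly noiseless data: small regularization should win here
rng = np.random.default_rng(3)
X = rng.random((100, 5))
y = X @ np.array([1.0, 2.0, 0.0, -1.0, 0.5]) + 0.01 * rng.random(100)
errs = {lam: gcv_rmse(X, y, lam) for lam in (0.001, 0.01, 0.1, 1.0, 10.0)}
best_lam = min(errs, key=errs.get)
```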
For the Mixed-Trend model, we search for the best value of $k$ (i.e., number of clusters) in the $[1, 100]$ range, finding it to be $k$=50 (for both K-Means and KSC algorithms) in all cases. For the RBF model, we search for values of $\gamma$ and of the ridge regularization parameter considering the following options: $\{$0.001, 0.01, 0.1, 1, 10, 100, 1000$\}$. We also search for the best value of $C$ out of the options: $\{10, 50, 100\}$. The best parameter values were adopted in each case. When performing clustering, we make use of the entire dataset since we found that isolating a single page using the traditional LOOCV has little to no effect on our results.
We finally note that the SH, ML and RBF models are defined for a single engagement measure (e.g., number of visits). In order to evaluate these models for different engagement measures, we make the appropriate changes to the input features (e.g., changing from $v_{p,tr}$ to $f_{p,tr}$ or $m_{p,tr}$ in SH model).
\begin{figure*}[ttt!]
\centering
\mbox{\subfigure[Visits]{\includegraphics{fig/vtr_vtr_5m48h.pdf}}}\hfill%
\mbox{\subfigure[Facebook Likes]{\includegraphics{fig/ftr_ftr_5m48h.pdf}}}\hfill%
\mbox{\subfigure[Twitter Mentions]{\includegraphics{fig/mtr_mtr_5m48h.pdf}}}
\caption{Correlations between the predictors number of visits $v_{p,tr}$, Facebook likes $f_{p,tr}$ and Twitter mentions (Tweets) $m_{p,tr}$ measured at 5 minutes and their respective values after 48h. Each variable has been incremented by one due to the log-transformed x and y axes.}
\label{fig:corr5m48}
\end{figure*}
\begin{figure*}[ttt!]
\centering
\mbox{\subfigure[Visits vs Facebook Likes]{\includegraphics{fig/vtr_ftr_2h.pdf}}}\hfill%
\mbox{\subfigure[Visits vs Twitter Mentions]{\includegraphics{fig/vtr_mtr_2h.pdf}}}\hfill%
\mbox{\subfigure[Facebook Likes vs Twitter Mentions]{\includegraphics{fig/ftr_mtr_2h.pdf}}}
\caption{Correlations between pairs of predictors: number of visits $v_{p,tr}$, Facebook likes $f_{p,tr}$ and Twitter mentions (Tweets) $m_{p,tr}$. Each variable is incremented by one because the x and y axes are log-transformed.}
\label{fig:corr2hpairs}
\end{figure*}
\subsection{Data Characterization} \label{sec:understanding}
To motivate our model, we first show the correlations between the user engagement metrics measured up to the reference time $tr$ and their respective values at the target time $tt$. Figure~\ref{fig:corr2h48} shows these correlations for the number of visits $v_{p,tr}$ (Figure~\ref{fig:corr2h48}-a), Facebook likes $f_{p,tr}$ (Figure~\ref{fig:corr2h48}-b) and Twitter mentions $m_{p,tr}$ (Figure~\ref{fig:corr2h48}-c), using $tr=1$ hour. Note that both axes of the graphs are in log scale. Also, a value of 1 was added to each measure on each page (e.g., the axis for visits shows $log(1 + v_{p,tr})$).
The figure shows that a strong linear correlation in log scales (captured by the Pearson correlation coefficient $\rho$) exists for each engagement metric, as observed in \cite{Szabo2010}. Values of $\rho$ exceed $0.73$ for Facebook likes, reaching $0.84$ for Twitter mentions. Such strong positive correlations motivate the use of linear regression methods to predict log-scaled engagement measures. However, the whole history of measures for each metric can also be useful to predict popularity values at $tt=48$ hours. This is exemplified in Figure~\ref{fig:corr5m48}, which shows scatter plots similar to those in Figure~\ref{fig:corr2h48}, but now assuming that $tr=5$ minutes. The figure shows that in some cases (such as visits and Twitter mentions), moderate correlations (e.g. $\rho=0.46$ for visits and $\rho=0.53$ for Twitter mentions) already exist even very soon after the page was created. This motivates the use of the whole history of the measurements (e.g., $\bm{v}_{p,tr}$) and not only their final value at the reference time.
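The correlations reported in these plots can be reproduced in a few lines (our sketch; the $+1$ shift matches the figure captions):

```python
import numpy as np

def log_pearson(x, y):
    # Pearson correlation between log(1 + x) and log(1 + y),
    # i.e., the linear correlation seen on the log-scaled scatter plots
    lx = np.log1p(np.asarray(x, float))
    ly = np.log1p(np.asarray(y, float))
    return float(np.corrcoef(lx, ly)[0, 1])
```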
We also looked at the correlations between engagement metrics. Figure~\ref{fig:corr2hpairs} shows that moderate correlations exist between every pair of metrics (e.g., $\rho$ of at least 0.35), which motivates our approach of multiplying different metrics to mitigate multi-collinearity issues. More surprisingly, we find that there exist pages with more Facebook likes (and Twitter mentions) than actual visits (points above the 45 degree line in each plot). This result indicates that not every like or tweet implies a visit, and suggests that measuring popularity on a single online social network service may be misleading, since people are not necessarily visiting the web pages. Finally, this result also suggests that we may not be able to rely completely on a single metric (e.g., Facebook likes) to predict another (e.g., number of visits), since only moderate correlations exist between them.
Now that we have discussed the reasons for using the whole history of the different metrics as predictors, as well as our approach to dealing with multi-collinearity, we turn to the motivation for also exploiting the host, day of the week and time of the day as predictors. Figure~\ref{fig:hosts} shows the correlations between the number of visits at $tr$=$1$ hour and at $tt$=$48$ hours for two hosts in our dataset. We note that host 68 (shown in black) has very similar values of $v_{p,tr}$ and $v_{p,tt}$ for most pages (i.e., most pages lie on the 45 degree line). This finding implies that most pages of this host will not grow in visits. In fact, if we train the SH model for this host only, it finds a value of $1.10$ for the parameter $\theta$, that is, $log(1 + v_{p,tt}) = 1.10 \, log(1 + v_{p,tr})$. In contrast, host 3 shows a clear increase in popularity for almost every page. Indeed, the SH model, trained specifically for host 3, captures the relationship between $v_{p,tr}$ and $v_{p,tt}$ as $log(1 + v_{p,tt}) = 2.04 \, log(1 + v_{p,tr})$. This difference between hosts motivates the use of indicator variables (e.g., $\bm{h}_p$) to boost (positively or negatively) the general relationship that exists in the whole dataset (see Figure~\ref{fig:corr2h48}) toward relationships specific to the behavior of each host. Similarly, we can correct for the behavior of the different upload days ($\bm{d}_p$) and hours ($\bm{c}_p$). Finally, this also motivates the use of squared and cubic terms in the model to capture non-linear relationships between values at $tr$ and those at $tt$.
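The per-host $\theta$ values quoted above come from fitting the SH model, a no-intercept linear regression in log space. A minimal sketch (our own, using hypothetical data) is:

```python
import numpy as np

def sh_theta(v_tr, v_tt):
    # Least-squares slope of log(1 + v_tt) on log(1 + v_tr) through the
    # origin, as in the Szabo-Huberman (SH) model
    lx = np.log1p(np.asarray(v_tr, float))
    ly = np.log1p(np.asarray(v_tt, float))
    return float((lx @ ly) / (lx @ lx))

def sh_predict(v_tr, theta):
    # Invert the log transform to get predicted visits at tt
    return np.expm1(theta * np.log1p(np.asarray(v_tr, float)))
```

A host whose pages stop growing yields $\theta$ close to 1, while a host whose pages keep accumulating visits yields $\theta$ well above 1.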
\begin{figure}[tt!]
\centering
\includegraphics{fig/vtr_vtr_2h48_host.pdf}
\caption{Correlation between $v_{p,tr}$ and $v_{p,tt}$ for selected hosts.}
\label{fig:hosts}
\end{figure}
\begin{table*}[t]
\centering
\caption{Number of Features $|\mathcal{F}|$ and Prediction Results (Root mean squared error - RMSE)}
\begin{tabular}{lcccccccc|cc}
\toprule
& SH & ML & RBF & News & Mixed & Mixed-Trend & Mixed-Trend &&& Mixed-Trend\\
& & & & & & KSC & K-Means &&& K-Means on $\bm{Y}_{tt}$\\
\cline{2-11}
$|\mathcal{F}|$ & 1 & 12 & 22 up to 112 & 60 & 347 & 397 & 397 &&& 397\\
\cline{2-11}
Visits & 1.355 & 1.299 & 1.088 & 1.267 & 1.005 & {\bf 0.991} & {\bf 0.983} &&& 0.989\\
Facebook Likes & 1.835 & 1.793 & 1.534 & 1.525 & 1.390 & {\bf 1.383} & {\bf 1.380} &&& 1.378\\
Twitter Mentions & 0.863 & 0.852 & 0.779 & 0.786 & 0.669 & {\bf 0.667} & {\bf 0.667} &&& 0.666\\
\bottomrule
\end{tabular}
\label{tab:predresults}
\end{table*}
So far we have provided evidence that motivates our Mixed model. We motivate the Mixed-Trend model by showing in Figure~\ref{fig:examples} the evolution in the number of visits for two web pages, selected from our dataset, that have similar popularity in terms of total number of visits. The figure shows that the numbers of visits of the two pages evolve over time according to very different processes. The web page shown in the black/solid line steadily decreases in popularity over time, whereas the web page in the blue/dashed line experiences a sharp increase in popularity 25 minutes after its upload. Such an example motivates the need for the Mixed-Trend model. Indeed, in \cite{Pinto2013} the authors argued that prediction accuracy could be improved by building specialized models for each popularity trend, although no attempt to learn popularity trends and tackle such specialization was made. By incorporating the similarity of web pages to previously identified trends, as proposed here, we can effectively capture such differences in popularity curves, and thus improve prediction accuracy, as we shall discuss in the next section.
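To sketch how trend information enters the Mixed-Trend model (our illustration; normalization and cluster extraction follow the description above only schematically), trends are centroids of clustered popularity curves, and each page's distances to those centroids become additional features:

```python
import numpy as np

def normalize_curves(S):
    # Scale each popularity curve to unit norm so that clustering groups
    # pages by the shape of their evolution rather than by raw volume
    return S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-12)

def trend_distances(S, centroids):
    # Feature block: Euclidean distance from each page's normalized curve
    # to each of the k previously extracted trend centroids
    Z = normalize_curves(S)
    return np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
```

These distances let the regression weight each page toward the behavior of the trend it most resembles, e.g., steady decay versus a delayed spike.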
\subsection{Prediction Results} \label{sec:comparison}
We now discuss the prediction results in terms of the root mean squared error (RMSE), measured using generalized cross validation (GCV). The results produced by all models, using the best parameter values discussed in Section \ref{subsec:cv}, are shown in Table~\ref{tab:predresults}. In the table we also show the number of features of each model. Moreover, in the last column we show the RMSE values obtained on the challenge server, that is, RMSE measured on $\bm{Y}_{tt}$ without GCV.
Considering only the baselines, we find that the SH model performs worse than all other methods, whereas the RBF model is the best baseline, except for predicting Facebook likes, for which the News model is the best baseline. More importantly, our proposed Mixed and Mixed-Trend models greatly outperform all baselines, for all three response variables. Moreover, by exploiting the distances to previously identified trends, the Mixed-Trend models, using either KSC or K-Means to extract the trends from the training set, also provide improvements over the simpler Mixed model, particularly for predicting the number of visits. Compared to the baselines, the improvements of the Mixed-Trend models vary from 15\% (for Twitter mentions against the RBF model) to 27\% (for the number of visits against the SH model). Finally, we note only marginal differences in RMSE (if any) between extracting trends using K-Means or KSC. Thus, given the more scalable nature of K-Means~\cite{Yang2011}, we argue that the Mixed-Trend model using it as the trend extraction method is the most cost-effective solution.
Before concluding, it is important to discuss whether over-fitting is occurring in our models. We argue that this is not the case based on three results. First, from the last column of Table~\ref{tab:predresults} we can see that the result for the Mixed-Trend K-Means model on the evaluation server test set is very close to (and sometimes even smaller than) the one measured by GCV. Second, we also trained models using Ridge and Lasso regression~\cite{Hastie2009}, finding no improvements over the ordinary least squares linear regression we employ. Finally, we point out the result of Stone~\cite{Stone1977}, which shows that minimizing cross validated errors is asymptotically equivalent to minimizing Akaike's Information Criterion (AIC). A similar result exists for linear models when using the Bayesian Information Criterion~\cite{Shao1993} (BIC). To discourage over-fitting, both AIC and BIC penalize more complex models. Thus, we also compared AIC and BIC values, finding that the Mixed-Trend models always perform better than the baseline approaches. These results indicate that no over-fitting is occurring on the Predictive Analytics Challenge dataset. However, it is impossible to generalize such a finding to any dataset. Thus, we point out that the use of regularized regression may be necessary on different datasets.
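For reference, a standard way to compute AIC and BIC for a Gaussian linear model with $n$ observations, $k$ parameters, and residual sum of squares RSS is sketched below (a textbook formulation, not code from the paper):

```python
import numpy as np

def aic_bic(rss, n, k):
    # Gaussian log-likelihood term, up to additive constants: n * log(RSS / n)
    ll_term = n * np.log(rss / n)
    aic = ll_term + 2 * k          # AIC penalty: 2 per parameter
    bic = ll_term + k * np.log(n)  # BIC penalty: log(n) per parameter
    return aic, bic
```

Since $\log n > 2$ for $n \geq 8$, BIC penalizes each extra feature more heavily than AIC; a richer model such as Mixed-Trend must therefore earn its 397 features through a correspondingly lower residual error.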
\begin{figure}[ttt!]
\centering
\includegraphics{fig/examples.pdf}
\caption{Popularity evolution of two selected pages.}
\label{fig:examples}
\end{figure}
\section{Introduction}
Globular cluster (GC) systems preserve the signatures of the formation and assembly histories of their host galaxies, assuming that major star formation episodes in galaxies are accompanied by global GC formation.
Several scenarios have been proposed to account for the observational properties
of GC systems (see the comprehensive review by Brodie \& Strader 2006).
Many aspects of those scenarios are in favor of the currently accepted hierarchical
galaxy formation theory (Press \& Schechter 1974) rather than the monolithic formation
at high redshift (Eggen et al. 1962; Larson 1974).
In this galaxy formation paradigm, the stellar content of a galaxy, including its GCs, is predicted to form through quiescent as well as merger/interaction-driven star formation (Kaviraj et al. 2007b).
One of the best templates in the local universe for testing this scenario is the elliptical
galaxy NGC 5128 due to its proximity.
Several pieces of evidence support the picture that NGC 5128 is the prototype of a post-merger elliptical galaxy (see Israel 1998 and references therein).
Previous photometric and spectroscopic observations of GCs also suggest that merging and/or
interaction events have played an important role in shaping its star cluster system
(Peng, Ford, \& Freeman 2004a, b; Woodley et al. 2007; Beasley et al. 2008).
Constraining the formation scenario of the NGC 5128 GC system requires an understanding of
its global age distribution. Clusters younger than the bulk of ancient Galactic counterparts
are of particular interest because
these objects represent the later stages of star
formation histories in galaxies. Recent spectroscopic observations suggest that
NGC 5128 hosts a cluster population significantly younger than the old GCs in the Milky Way
and M31 (Peng et al. 2004b). Based on the spectroscopic observations for an increased sample of GCs,
Beasley et al. (2008) reported the discovery of metal-rich, intermediate-age GCs (IAGCs) with
ages of $\sim 3 - 8$ Gyr in NGC 5128. They propose that this population may be the byproduct
formed during merging events and/or interactions involving star formation and GC formation several
gigayears ago.
However, it is important to note that age-dating of GCs via integrated spectra is hampered
by the degeneracy between age and the existence of hot old stellar population (e.g., blue
horizontal branch [HB] stars) affecting the strength of age-sensitive line indices (Lee, Yoon, \&
Lee 2000; Maraston et al. 2003; Thomas, Maraston, \& Bender 2003; Schiavon et al. 2004;
Lee \& Worthey 2005; Trager et al. 2005; Cenarro et al. 2007).
The effect of old blue HB stars in the integrated spectra can mimic young ages for old GCs,
casting doubt on the intermediate-age nature of the GCs in some galaxies.
The UV colors (e.g. FUV$-V$ and FUV$-$NUV), on the other hand, are known to
provide robust age estimation of simple stellar populations (e.g., Yi 2003; Rey et al. 2005, 2007;
Kaviraj et al. 2007a; Bianchi et al. 2007). Kaviraj et al. (2007a) found that the age constraint is far superior when
UV photometry is added to the optical colors, with a quality comparable to or marginally better than that achieved using spectroscopic indices.
With the new approach using UV observations, in this {\it letter}, we take advantage of the combination of
available optical photometry and the {\sl GALEX} ({\sl Galaxy Evolution Explorer}) UV photometry to
confirm the existence of IAGCs and to explore the age distribution of the NGC 5128 GC system.
In the following sections, we emphasize the importance of the UV photometry as a probe of
IAGCs in general.
Comparing with GCs in M31 and the Milky Way with the aid of our population models,
we describe the overall age distribution of GCs and identification of IAGCs in NGC 5128.
In this paper, we denote IAGCs as those having ages $\sim$ 3 $-$ 8 Gyrs.
\section{Observations and Data Analysis}
{\sl GALEX} (Martin et al. 2005) imaged a 1.25 deg circular field centered 26 arcmin East and
7 arcmin North of the NGC 5128 core in two UV bands: FUV (1350 -- 1750\AA) and NUV (1750 -- 2750\AA).
The images were obtained on April 2004, and are included in the
{\sl GALEX} fourth and fifth data release (GR4/GR5)\footnote{http://galex.stsci.edu/gr4}.
Total integration times were 30,428 sec and 20,072 sec for NUV and FUV, respectively.
Preprocessing and calibration were performed via the {\sl GALEX} pipeline
(Morrissey et al. 2005, 2007). The {\sl GALEX} images have a sampling of 1.5 arcsec pixel$^{-1}$, which corresponds to $\sim$28 pc at the distance of NGC 5128 (3.9 Mpc; Woodley et al. 2007).
Using the DAOPHOTII/ALLSTAR package (Stetson 1987), we performed aperture
photometry for all detected point sources in the {\sl GALEX} NGC 5128 field.
Aperture corrections were derived using moderately bright, isolated objects.
Flux calibrations were applied to bring all measurements into the
AB magnitude system (Oke 1990; Morrissey et al. 2005, 2007).
Point sources in our {\sl GALEX} photometry were cross-matched using
a matching radius of 3 arcsec with the
catalog of Woodley et al. (2007). This catalog provides positions as well as
optical magnitudes and mean radial velocities for 415 GCs in NGC 5128.
All spurious and ambiguous sources were rejected based on visual inspection.
The final sample of visually confirmed GCs comprises 157 objects in NUV and 35 in FUV.
We adopted a foreground reddening value of $E(B-V)$ = 0.11 for NGC 5128 (Woodley et al. 2007) and
use the reddening law of Cardelli, Clayton, \& Mathis (1989).
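The extinction correction described here amounts to subtracting $A_{band} = R_{band} \, E(B-V)$ from each measured magnitude (a schematic sketch; the per-band $R$ coefficients follow from the adopted Cardelli et al. law and are not listed in this paper, so the value used below is only the familiar optical example $R_V = 3.1$):

```python
def deredden(mag, ebv, r_band):
    # Foreground-extinction-corrected magnitude: m0 = m - A_band,
    # with A_band = R_band * E(B-V). R_band depends on the bandpass
    # through the adopted extinction law (e.g., R_V = 3.1 for V).
    return mag - r_band * ebv
```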
The full UV catalog and discussion of the UV properties of
GCs in NGC 5128 will be presented in a forthcoming paper.
Figure 1 shows the optical color-magnitude diagram (CMD) of GCs in NGC 5128 detected
in the NUV and FUV bandpasses. For comparison,
we overplot GCs in M31 detected from {\sl GALEX} observations (Rey et al. 2007).
The CMD shows that most of the UV-detected objects in NGC 5128 and M31 have similar distributions
and are confined to $V-I<1.05$.
\section{Ultraviolet as a Probe of Intermediate-Age Globular Clusters}
FUV flux plays an important role in identifying IAGCs. Young ($< 1$ Gyr) stellar
populations emit a substantial portion of their flux in the UV.
Metal-poor old ($> 10$ Gyr) stellar populations also show large FUV to optical flux
ratio due to the contribution of hot HB stars.
On the contrary, intermediate-age ($\sim$ 3$-$8 Gyr) populations emit negligible
amount of FUV flux since the constituent stars are not hot enough to produce a
significantly large FUV flux (see Fig. 1 of Kaviraj et al. 2007a).
Consequently, if the IAGC candidates identified by spectroscopic observations are
truly intermediate in age, they should be very faint or not detected in our {\sl GALEX}
FUV photometry given our integration time and the detection limit (Lee \& Worthey 2005;
Rey et al. 2007; Kaviraj et al. 2007a).
The first use of UV color as a tool for identifying IAGCs was demonstrated in our M31 study
(see Rey et al. 2007). Spectroscopic observations of M31 clusters have suggested the
existence of IAGCs with mean age $\sim$ 5 Gyr (Burstein et al. 2004; Beasley et al. 2005;
Puzia et al. 2005). However, based on {\sl GALEX} FUV detections of more than half of
M31 IAGC candidates, Rey et al. (2007) suggested that a large fraction of the spectroscopically
identified IAGCs may not be truly intermediate in age but are rather old GCs with a developed
blue HB sequence. Among the 42 GCs in M31 whose ages are estimated by Kaviraj et al.
(2007a), we find that four IAGC candidates turn out to be old GCs with $> 12$ Gyr.
By comparing of mass-to-light ratios of three IAGC candidates in M31 with those of old GCs,
Strader et al. (2009) also found no evidence that M31 IAGC candidates
are intermediate in age.
The most direct way to identify genuine IAGCs is to inspect CMDs of the clusters of interest.
In the case of M31, $HST$ CMDs of two IAGC candidates B311 and B058 exhibit clearly
developed blue HB sequences (Rich et al. 2005). In a separate study, Chandar et al. (2006)
showed that a star cluster in M33, C38, is a genuine IAGC with age $\sim 2$--5 Gyr based
on the HST CMD and Balmer line measurements. It is important to note that this cluster is
also confirmed to be a genuine IAGC using the {\sl GALEX} FUV observations of M33
(S. T. Sohn et al. 2009, in prep). In any case, UV$-$optical color can be used to discriminate genuine
IAGCs from the old GCs masquerading as IAGCs.
\section{Age Distribution of Globular Clusters in NGC 5128}
\subsection{Old Globular Clusters}
Figure 2 shows the $V-I$ versus UV$-V$ diagrams. We compare our NGC 5128 sample with those of the Milky Way
(crosses, Sohn et al. 2006) and M31 (open circles, Rey et al. 2007) GCs whose age distributions
are reasonably well constrained.
We also show our simple stellar population (SSP) models constructed using the Yonsei Evolutionary
Population Synthesis (YEPS) code (Lee, Yoon, \& Lee 2000; Lee et al. 2005;
Rey et al. 2005, 2007; Yoon et al. 2006, 2008).
In Fig. 2, NGC 5128 GCs show a tight distribution around the 12 Gyr model line, similar to that of the Milky Way,
while GCs in M31 are rather scattered in $V-I$. This is partly due to the detection limit of optically
red GCs in NGC 5128 (see Fig. 1) and insufficient sample of Milky Way GCs obtained from previous UV observations of
various satellites (see Sohn et al. 2006). Furthermore, Rey et al. (2007) reported the existence
of UV-bright metal-rich GCs with extreme hot blue HB stars in M31 (e.g., NGC 6388 and NGC 6441 in the Milky Way, Rich et al. 1997).
In this regard, some of the red ($V-I>1.0$) M31 GCs that show UV excess with respect to the 14 Gyr model line
may be such peculiar objects. Considering these points, at a fixed $V-I$, the majority of GCs in the three galaxies show a similar spread in UV$-V$ colors and are well accounted for by the 10--14 Gyr model lines. This suggests that the mean age and age spread, at least for old ($\geq 10$ Gyr) GCs, are similar among the GC systems of the Milky Way, M31, and NGC 5128.
\subsection{Intermediate-Age Globular Clusters}
Beasley et al. (2008) found a population of intermediate-age
and predominantly metal-rich ([Z/H] $> -1.0$) GCs (15 \% of the sample) from
their spectroscopic observations. Among the 21 IAGC candidates (age $\sim 3 - 8$ Gyr)
identified by Beasley et al. (2008), we detect only two in the {\sl GALEX} FUV passband.
In Figure 3, we show the $V-I$ vs. $FUV-V$ diagram for the spectroscopically identified
IAGC candidates in NGC 5128 (filled squares) and M31 (filled circles) detected in {\sl GALEX}
FUV passband. Population model lines covering range of intermediate (3 and 8 Gyr)
and old (10, 12, and 14 Gyr) ages are overplotted for guidance.
It is immediately apparent that all of the IAGC candidates of NGC 5128 and M31 detected
in the FUV show similar distribution to those of old GCs with $> 10$ Gyr, {\it i.e.}, all FUV-detected
IAGC candidates have significantly bluer $FUV-V$ colors than the 3 and 8 Gyr model lines.
This indicates that IAGC candidates detected in the FUV are in fact old GCs
($\ge 10$ Gyr) containing developed blue HB populations that contribute to the strong Balmer absorption lines.
It is important to note that, as shown in Fig. 3, most M31 IAGC candidates with $E(B-V)<0.16$ are
detected in the {\sl GALEX} FUV (6 out of 7, see Rey et al. 2007 for the details).
If we restrict the sample of M31 IAGC candidates to match the observed optical brightness and
color range ($M_{V}<-8$ and $V-I<1.05$, see Fig. 1) of the FUV-detected sample of NGC 5128 GCs,
4 out of 5 M31 IAGC candidates are detected in FUV.
In the case of NGC 5128 GCs, only two out of 9 IAGC candidates are detected in the FUV.
Since all of the NGC 5128 GCs detected in the FUV cover a similar range of $(FUV-V)_{o}$ colors to the FUV-detected IAGC candidates in M31, most, if not all, spectroscopically identified IAGC candidates in NGC 5128 are unlikely to be as bright in the FUV as those in M31.
Among the 21 IAGC candidates identified by Beasley et al. (2008), 12 GCs are detected in the
{\sl GALEX} NUV but not in the FUV.
Whereas the FUV flux of an old ($> 8$ Gyr) GC is almost entirely
dominated by stars in the hot HB sequence, the NUV flux is influenced by both the HB stars
and those on the main-sequence turnoff.
In this regard, we cannot rule out that some of the NUV-detected IAGC candidates are truly
intermediate in age, despite the fact that NUV$-V$ is relatively insensitive to
age variations compared to the FUV$-V$ (see Fig. 2). To test this hypothesis, in Fig. 3, we show the bluer
limits of the NUV-detected IAGC candidates having similar $V$ magnitudes of FUV-detected IAGCs.
Most of the color limits are consistent with the NUV-detected IAGC candidates being $\sim 3 - 8$ Gyr in age.
In summary, our UV photometry suggests that NGC 5128 does possess a non-negligible
fraction of IAGCs, which are intrinsically faint in the FUV, as proposed by previous spectroscopic studies.
\section{Discussion and Conclusions}
In this work, we explored the age distribution of GCs in the giant elliptical galaxy NGC 5128 using
the UV colors. The majority of NGC 5128 GCs show age ranges similar to old GCs in M31 and
the Galactic halo. Our most important result is that a large fraction of IAGCs identified by the
spectroscopic observations are not detected in the {\sl GALEX} FUV passband and therefore may be
truly intermediate in age. This is in contrast to the case of M31
GCs where the majority of IAGC candidates turned out to be old GCs with developed HB sequence
based on their FUV$-V$ colors (see Rey et al. 2007).
The existence of IAGCs in NGC 5128 supports a galaxy formation scenario involving at least two major star formation episodes, e.g., hierarchical assembly of protogalactic fragments or disks (Bekki et al. 2003; Beasley et al. 2002, 2003; Yi et al. 2004; Kaviraj et al. 2005).
In these models, some of the metal-rich GCs are formed from pre-enriched gas clouds and are on
average younger than the metal-poor GCs. Based on the kinematic analysis in combination with
the age distribution of GCs, an alternative mechanism may have taken place where the NGC 5128
formed its main body at early times and has gradually built up by minor mergers and gas-rich
satellite accretions accompanied by star formation episodes (Woodley 2006; Woodley et al. 2007).
The presence of IAGCs in NGC 5128 has an interesting implication for the recent star formation (RSF) discovered using the large {\sl GALEX} UV sample of early-type galaxies at different redshifts ($0<z<1$; e.g., Yi et al. 2005; Kaviraj et al. 2007b,
2008; Schawinski et al. 2007). Kaviraj et al. (2008) found that high-redshift early-type galaxies in the range $0.5<z<1$ exhibit RSF, just as low-redshift ($0<z<0.1$) early-type galaxies do. This provides compelling evidence that RSFs in early-type
galaxies are non-negligible over the last 8 billion years. Furthermore, Kaviraj et al. (2008) suggest
that up to 10$-$15\% of the mass of luminous ($-23<M_{V}<-20.5$) early-type galaxies such as
NGC 5128 ($M_{V}=-21.08$, Gil de Paz et al. 2007) may have formed after $z=1$.
These results imply that early-type galaxies in the local Universe are likely to possess
intermediate-age stellar populations. In this respect, IAGCs in NGC 5128 may be considered
as relics of residual star formations that occurred during the last few billion years.
UV observations of GC systems have been shown to provide important insights into the identification of IAGCs, which are at present difficult to identify solely from spectroscopic observations. In particular, the Balmer line strengths themselves cannot reliably pin down the age of
GCs because of the degeneracy between age and HB morphology. FUV colors, on the other
hand, can verify the contribution from hot stellar populations in GCs and help identify the
true IAGCs. Deep UV observations are highly anticipated for other galaxies with IAGC
candidates identified by various spectroscopic and near-infrared photometric observations.
\acknowledgments
We thank Sugata Kaviraj for useful suggestions on the manuscript.
This work was supported by the Korea Research
Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2005-202-C00158) and
the Korea Science and Engineering Foundation (KOSEF)
through the Astrophysical Research Center for the Structure and Evolution of the Cosmos (ARCSEC).
{\sl GALEX} ({\sl Galaxy Evolution Explorer}) is a NASA Small Explorer, launched in
April 2003. We gratefully acknowledge NASA's support for construction,
operation, and science analysis for the {\sl GALEX} mission, developed in
cooperation with the Centre National d'Etudes Spatiales of France and
the Korean Ministry of Science and Technology.
\clearpage
\section{Introduction}
While language models (LMs) have achieved remarkable capabilities with increasing model size \citep{brown2020gpt3,chowdery2022palm},
fine-tuning them on specific downstream tasks introduces significant engineering challenges and computational costs.
Although large models can perform zero-shot, instruction-prompted, and few-shot learning \citep{sanh2022t0,wei2022flan}, they are usually outperformed by fully fine-tuned models when sufficient training data is available.
To reduce the computational and memory overhead of fine-tuning LMs, parameter-efficient fine-tuning (PEFT) methods have been proposed, such as adapters \citep{houlsby2019adapters}, prefix tuning \citep{li-liang-2021-prefix}, and prompt tuning \citep{lester2021prompt}.
These methods update only a small subset of (possibly new) parameters of the LM, and have achieved competitive performance with full fine-tuning \citep{ding2022delta}.
However, PEFT methods still require full back-propagation through the LM during training, which is computationally expensive and memory intensive.
Given that (1) only a small number of parameters need to be updated to adapt an LM to a given task, (2) very large LMs have demonstrated strong in-context learning capabilities on a forward pass, and (3) a forward pass for very large LMs already entails a substantial amount of computation, we hypothesize that it is possible to train a separate model to perform the optimization or adaptation procedure entirely, using only a forward pass.
To avoid the costly computation of back-propagating through the LM to produce the parameter updates, especially for thousands or millions of iterations during training, we propose a new paradigm of \textbf{hypertuning}: using a \textit{hypermodel} to adapt a \textit{downstream} LM to a desired application.
As a concrete proof of concept, we explore a simple setup where hypermodels take as input a set of few-shot examples from a given task, and output the PEFT parameters corresponding to that task in a single forward pass.
To demonstrate the feasibility of this approach, we train \textit{HyperT5}: a set of T5-based hypermodels that output soft prefixes \citep{li-liang-2021-prefix} or LoRA parameters \citep{hu2022lora}, to be incorporated into a frozen downstream T5 LM.
To train HyperT5, we introduce a two-stage procedure for training hypermodels: \textit{hyperpretraining}, where we adapt a pretrained LM to generate PEFT parameters via a modified language modeling objective, followed by \textit{multi-task fine-tuning} (MTF) the hypermodel.
After training, HyperT5 models can take few-shot examples from unseen tasks and generate the corresponding PEFT parameters, allowing us to adapt a downstream LM without back-propagation.
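To make the setup concrete, the following toy sketch (ours; the dimensions, the stand-in "hypermodel", and all weights are illustrative, not HyperT5 itself) shows the essential data flow: a single hypermodel forward pass maps an encoding of the few-shot examples to LoRA factors, which then modulate a frozen downstream weight:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, e = 16, 2, 8          # hidden size, LoRA rank, task-encoding size (toy values)

W_frozen = rng.normal(size=(d, d))      # downstream weight, never updated
H = rng.normal(size=(2 * d * r, e))     # stand-in for trained hypermodel weights

def hypermodel(task_encoding):
    # One forward pass: few-shot task encoding -> LoRA factors A (r x d), B (d x r)
    out = H @ task_encoding
    A = out[: r * d].reshape(r, d)
    B = out[r * d:].reshape(d, r)
    return A, B

def adapted_forward(x, A, B, alpha=1.0):
    # LoRA-style downstream pass: y = (W + alpha * B A) x, with W frozen
    return W_frozen @ x + alpha * (B @ (A @ x))
```

Crucially, no gradient ever flows through the downstream model: adapting to a new task is just one call to `hypermodel`, and the generated $(A, B)$ can be cached and reused for all subsequent inputs.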
We show in experiments across P3, Super-NaturalInstructions and MetaICL datasets that LMs can be hypertuned using just a small number of examples.
Furthermore, we show that when the hypermodel-generated parameters are used as initializations for further parameter-efficient fine-tuning, we can achieve faster training convergence and better overall performance.
This work serves as a first step toward hypertuning, and we are aware of certain limitations of this preliminary setup.
Because our current formulation of hypermodels can only take a small number of examples as input, its performance cannot compare to full parameter-efficient fine-tuning or full fine-tuning.
HyperT5 also generally underperforms T5 explicitly trained for few-shot in-context learning with full attention across examples, although we note that the latter is more computationally expensive to use at inference time.
Nevertheless, we believe that our results demonstrate a promising step toward model adaptation without the need for back-propagation.
We plan to release the code and model weights for \textbf{HyperT5}, as well as the multi-task fine-tuned versions for the three datasets listed above.
\section{Related Work}
\paragraph{HyperNetworks}
Several works have explored the concept of "hypernetworks," where an auxiliary network is used to generate parameters for a primary network. This terminology was first introduced by \citet{ha2017hypernetworks} and applied to LSTMs. Among Transformer-based language models, \citet{mahabadi2021hyperformer} and \citet{he2022hyperprompt} incorporated hypernetworks into T5 models for knowledge sharing during multitask fine-tuning. \citet{peebles2022gdotpt} utilized a Transformer with diffusion for generating full model parameters for image-recognition and Cartpole tasks. Similarly, \citet{lester2022recycling} trained models to generate soft prompts for transferring between downstream models.
Our work is closely related to \citet{budhaditya2022boosting}, who also used a hypernetwork to modify downstream model parameters and incorporated Super-NaturalInstructions (S-NI) in their experimental setting. They found that incorporating instructions via a hypernetwork trained with MAML \citep{finn2017maml} improved downstream performance.
\paragraph{Multi-task Training and Transfer}
A crucial ingredient to hypertuning is the transferability of task knowledge and generalization to novel tasks.
Many past works \citep{phang2018stilts,pruksachatkun-etal-2020-intermediate,vu2020exploring} have explored the effectiveness of single- and multi-task transfer learning.
More recent work has shown that large-scale multi-task training allows models to generalize to unseen tasks \citep{sanh2022t0,wei2022flan,wang2022sni,chung2022flant5}.
\citet{min-etal-2022-metaicl} and \citet{chen-etal-2022-meta} show that few-shot learning also benefits from multi-task training. \citet{pfeiffer2020adapterfusion}, \citet{vu2021spot} and \citet{gu2021ppt} have also explored transfer learning among PEFT methods.
\section{HyperTuning}
\label{sec:hypertuning}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/explainer2.png}
\caption{
Overview of HyperTuning.
(A) Fine-tuning, where all model parameters are updated (red).
(B) Parameter-efficient fine-tuning (PEFT), where all model parameters are frozen (blue) and only a small number of parameters, $\phi$, are updated.
(C) HyperTuning, where a hypermodel is used to generate parameters $\phi$ for a frozen downstream model.
For instance, a hypermodel may take a set of few-shot examples to determine what $\phi$ to generate.
Only the hypermodel's parameters are updated during training.
(D) At inference time, the parameters $\phi$ only need to be generated once; thereafter only $\phi$ needs to be stored, with no need to retain the few-shot examples.
}
\label{fig:summary_plot}
\end{figure}
The impetus for using hypermodels for adapting downstream models derives from two recent developments in natural language processing:
\paragraph{1) Large language models can perform in-context learning effectively.}
Large language models have been shown to be able to learn from the context of a small number of examples or instructions for a task, without any prior training on that task \citep{brown2020gpt3,min-etal-2022-metaicl,wang2022sni}.
This suggests that models can ``understand'' what the task is and how to tackle it based on a few samples or descriptions of the task.
This capability appears to improve as the models get larger or are trained on more relevant data \citep{chowdery2022palm,ouyang2022instructgpt,bai2022helpful}.
\paragraph{2) Large language models can be adapted to downstream tasks by tuning a small set of parameters.}
Along with the growth in model sizes, there have been significant advances in fine-tuning methods that only modify a small number of parameters (possibly adding some new ones) in a frozen language model to adapt it to a specific task \citep{houlsby2019adapters,li-liang-2021-prefix,lester2021prompt,ding2022delta}.
These methods often achieve performance comparable to fine-tuning all parameters in the model.
Importantly, the number of parameters that need to be changed is small enough that it is feasible to train a model to generate them \citep{qin2021ipt,lester2022recycling}.
Taken together, these findings suggest that we may be able to use an auxiliary model that can first extract some task-relevant knowledge from some input that describes the task (e.g. instruction, few-shot examples), and then generate a small number of adaptive parameters, thereby changing the main model's behavior to suit the task.
This approach, if successful, would enable us to adapt models to downstream applications without using backpropagation, or storing the encoded representations of few-shot examples in memory.
In other words, we can delegate the work of model adaptation to a separate model.
We call this approach \textbf{hypertuning}, inspired by the work on hypernetworks by \citet{ha2017hypernetworks}.
Hypertuning uses a \textit{hypermodel} to adapt a \textit{downstream model} to a target downstream task or application.
This differs from \textit{fine-tuning}, which uses backpropagation and a gradient descent algorithm to update model parameters.
In this work, we present one possible formulation of hypertuning using few-shot examples and generating a small set of parameters with a single forward pass through the hypermodel.
However, this is just one possible way of performing hypertuning, and the idea of adapting models with hypermodels can be generalized to many other cases.
For example, hypermodels could also be trained to predict gradients or generate parameter updates based on input-output pairs.
This way, hypermodels could work with large training sets, not just a few examples.
Ultimately, with sufficiently general and well-trained hypermodels, we may be able to replace gradient-descent-based fine-tuning pipelines with hypertuning for many applications, while achieving similar or better performance.
\subsection{HyperTuning with Fewshot Examples}
Let $M$ be a model with parameters $\theta$, initialized at $\theta_0$ from pretraining, and $\mathbb{L}$ a loss function.
Given a dataset of size $N$ with input-output pairs $\{(x,y)\}$, standard fine-tuning minimizes the following objective over $\theta$:
\begin{equation}
\argmin_{\theta}{\frac{1}{N}\sum_{\{(x,y)\}}{\mathbb{L}\Big(y, M(\theta;x)}\Big)}
\end{equation}
In the case of parameter-efficient fine-tuning (PEFT), we fix $\theta=\theta_0$ and introduce a small set of trainable parameters $\phi$ (e.g. adapter parameters, soft prompts) that are injected into $M$.
We optimize only over $\phi$:
\begin{equation}
\argmin_{\phi}{\frac{1}{N}\sum_{\{(x,y)\}}{\mathbb{L}\Big(y, M(\theta_0;x,\phi)}\Big)}
\end{equation}
For hypertuning, we further define a \textit{hypermodel} $H$ with parameters $\xi$ that produces PEFT parameters $\hat{\phi}$ based on its input, which can be a set of few-shot examples or task instructions.
For example, if the hypermodel input is a set of few-shot examples $\{(x_i, y_i)\}_K$, we have:
\begin{equation}
\label{eq:hypermodel}
\hat{\phi} = H\Big(\xi; \{(x_i, y_i)\}_K\Big)
\end{equation}
One way to train the hypermodel $(H, \xi)$ is to perform PEFT on many tasks and use the resulting $\phi$ as targets.
However, this is costly in computation, requiring many fine-tuning runs, and does not leverage cross-task knowledge transfer.
Instead, we propose to train the hypermodel end-to-end, optimizing through the frozen model $(M, \theta_0)$.
Hence, the hypermodel training objective is:
\begin{equation}
\argmin_{\xi}{\frac{1}{N}\sum_{\{(x,y)\},\{\{(x_i, y_i)\}_K\}}{\mathbb{L}\bigg(y, M\Big(\theta_0;x,H(\xi; \{(x_i, y_i)\}_K)\Big)}\bigg)}
\end{equation}
At each training step, we sample a \textit{target example} $(x,y)$ and non-overlapping few-shot examples $\{(x_i, y_i)\}_K$.
We generate $\hat{\phi}$ from the few-shot examples and compute the loss with respect to $(x,y)$ and $\hat{\phi}$.
We then back-propagate the gradients through both $M$ and $H$ to update $\xi$.
Note that since $\hat{\phi}$ does not depend on $x$, it can be computed once for a given set of few-shot examples and reused for downstream predictions.
At inference time, we can use $\hat{\phi}$ directly without storing or recomputing the representations for $\{(x,y)\},\{(x_i, y_i)\}_K$, saving memory and computation.\footnote{By construction, few-shot examples occupy at least K times the memory of the target input $x$.}
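As a toy illustration of the training step above, the following sketch replaces the real hypermodel and downstream T5 with linear maps; all shapes, and the `hypermodel`/`downstream` functions, are illustrative assumptions rather than the actual HyperT5 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy feature dimension

theta0 = rng.normal(size=(d, 1))   # frozen downstream weights
xi = rng.normal(size=(2 * d, d))   # hypermodel weights (the only trainable part)

def downstream(theta, x, phi):
    """Frozen model M(theta0; x, phi): base prediction plus an
    additive adapter parameterized by the generated phi."""
    return x @ theta + x @ phi

def hypermodel(xi, fewshot):
    """H(xi; {(x_i, y_i)}_K): pool the few-shot pairs and map them
    to adapter parameters phi of shape (d, 1)."""
    pooled = np.concatenate(fewshot, axis=1).mean(axis=0)  # (2d,)
    return (pooled @ xi).reshape(d, 1)

# One training step: sample K few-shot pairs and a disjoint target example.
fewshot = (rng.normal(size=(3, d)), rng.normal(size=(3, d)))  # K=3 toy (x_i, y_i)
x, y = rng.normal(size=(1, d)), rng.normal(size=(1, 1))

phi_hat = hypermodel(xi, fewshot)  # generated once, reusable for this task
loss = np.mean((y - downstream(theta0, x, phi_hat)) ** 2)
# Gradients would flow through both M and H, but only xi is updated.
print(phi_hat.shape, float(loss) >= 0.0)
```

The key property sketched here is that `phi_hat` depends only on the few-shot examples, so it can be computed once and reused for every subsequent target input.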
\section{HyperT5: A T5-Based HyperModel}
\subsection{Architecture and Setup}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/hypert5_2.png}
\caption{
Overview of HyperT5.
(A) HyperT5 takes as input few-shot examples and outputs PEFT parameters $\phi$.
The model is initialized from an LM-adapted T5.
(B) In HyperT5-Prefix, $\phi$ are key and value prefixes for every attention layer.
(C) In HyperT5-LoRA, $\phi$ are additive low-rank modifications to the query and value linear maps.
}
\label{fig:hypert5}
\end{figure}
To demonstrate the feasibility of hypertuning, we propose \textit{HyperT5}, a hypermodel based on T5, where both the hypermodel and the downstream model share a T5 backbone (Figure~\ref{fig:hypert5}A).
We use a frozen LM-adapted T5 \footnote{This is the model introduced by \citet{lester2021prompt}. We use the T5 v1.1 architecture and initialize all experiments with the LM-adapted parameters, unless stated otherwise.} as the downstream model.
The hypermodel is also initialized with LM-adapted T5 parameters, but with some architectural changes.
As defined in Equation~\ref{eq:hypermodel}, the hypermodel encoder takes the few-shot examples (and/or task definitions, in the case of S-NI) as input.
The hypermodel decoder takes a fixed set of newly learned token embeddings as input, and outputs a set of decoder token representations, which are then fed to a set of MLPs to generate the PEFT parameters $\phi$ for the downstream model.
We also remove the causal masking from the decoder, since the hypermodel does not perform autoregressive generation.
We experiment with two PEFT methods: prefix tuning \citep{li-liang-2021-prefix} and LoRA \citep{hu2022lora}.
Prefix tuning (Figure~\ref{fig:hypert5}B) prepends a set of learned key and value representations within each attention layer, while LoRA (Figure~\ref{fig:hypert5}C) learns a low-rank additive modification to the query and value linear maps.
Both PEFT methods have been shown to achieve good performance across a wide range of tasks \citep{ding2022delta}.
\citet{chan2022differently} also suggest that modifying in-context representations and model weights can lead to different model behaviors, and we seek to demonstrate that hypertuning is applicable to very different PEFT methods.
We name the respective hypermodels \textit{HyperT5-Prefix} and \textit{HyperT5-LoRA}.
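The two parameterizations can be contrasted on toy matrices; the shapes below are illustrative choices, not the actual T5 dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, P, r = 5, 8, 2, 2  # tokens, hidden size, prefix length, LoRA rank

x = rng.normal(size=(T, H))
W_q, W_k, W_v = (rng.normal(size=(H, H)) for _ in range(3))

# Prefix tuning: learned key/value rows are prepended inside each attention layer.
prefix_k, prefix_v = rng.normal(size=(P, H)), rng.normal(size=(P, H))
K = np.vstack([prefix_k, x @ W_k])   # (P + T, H)
V = np.vstack([prefix_v, x @ W_v])

# LoRA: additive low-rank update to the query/value projections.
A, B = rng.normal(size=(H, r)), np.zeros((r, H))  # up-projection B starts at 0
q_lora = x @ (W_q + A @ B)           # equals x @ W_q while B == 0

print(K.shape, V.shape, q_lora.shape)
```

In the hypertuning setting, the prefix rows (`prefix_k`, `prefix_v`) or the low-rank factors (`A`, `B`) are exactly the quantities $\phi$ that the hypermodel generates.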
The number of decoder input tokens and the size of the MLPs depend on the choice of PEFT method and its hyperparameters.
For example, for HyperT5-Prefix that generates soft prefixes corresponding to prefix tuning, $\phi$ will be of the shape $[L,2,2,P,H]$,
where $L$ is the number of layers, 2 is for the encoder and decoder, 2 is for the key and value prefixes, $P$ is the number of prefix tokens, and $H$ is the hidden size.
We set the number of decoder input tokens to be $2P$.
We provide pseudo-code for the HyperT5-Prefix and HyperT5-LoRA models in Figure~\ref{app:pseudoprefix} and Figure~\ref{app:pseudolora} in the Appendix.
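As a concrete sanity check of the shape $[L,2,2,P,H]$ described above, a hypothetical configuration with $L=24$ and $H=1024$ (roughly T5-Large) and $P=16$ prefix tokens (an illustrative choice) gives:

```python
# Shape of the generated prefix parameters: [L, 2, 2, P, H]
# (layers, encoder/decoder, key/value, prefix tokens, hidden size).
# L=24, H=1024 roughly match T5-Large; P=16 is an illustrative choice.
L, P, H = 24, 16, 1024

n_prefix_params = L * 2 * 2 * P * H
n_decoder_tokens = 2 * P  # one decoder input token per key/value prefix slot

print(n_prefix_params, n_decoder_tokens)  # prints: 1572864 32
```

Even under these assumptions, the generated parameters number in the low millions, small enough for an MLP head on the hypermodel decoder to produce.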
\subsection{HyperPretraining}
To train HyperT5, we first undergo an additional stage of pretraining to adapt the hypermodel to generate parameters $\phi$ for the downstream model, which we call \textit{hyperpretraining}.
As we show in Section~\ref{sec:hyperpretrainingresults}, hyperpretraining is crucial for good hypermodel performance.
We propose a simple scheme for hyperpretraining using a \textit{Context-Augmented Conditional Language Modeling} (CACLM) objective, which extends the conditional language-modeling (CLM) objective of T5 LM-adaptation.
As shown in Figure~\ref{fig:hyperpretraining}, we sample a 512-token sequence from a pretraining corpus and split it into four consecutive segments A--D.
The downstream model receives segment B as input and predicts segment C, following the CLM objective.
The hypermodel receives segments A and D as input, which provide additional context from the same document, and outputs PEFT parameters for the downstream model.\footnote{Segments A and D are marked by sentinel tokens.}
The hypermodel thus compresses contextual information to assist the downstream model in its CLM task.
We also make segment B very short (32 tokens) to encourage the downstream model to depend on the hypermodel information for accurate prediction of tokens in C.
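The segmentation can be sketched as follows; the text fixes the length of segment B (32 tokens) but not of A, C and D, so splitting the remainder into equal thirds is an assumption made for illustration:

```python
def caclm_segments(tokens, b_len=32):
    """Split a 512-token sequence into consecutive segments A, B, C, D.
    B is kept short (32 tokens) so the downstream model must rely on the
    hypermodel's encoding of A and D to predict C. The A/C/D lengths here
    (equal thirds of the remainder) are an illustrative assumption."""
    assert len(tokens) == 512
    rest = (len(tokens) - b_len) // 3  # 160 tokens each for A, C, D
    a = tokens[:rest]
    b = tokens[rest:rest + b_len]
    c = tokens[rest + b_len:rest + b_len + rest]
    d = tokens[rest + b_len + rest:]
    return a, b, c, d

a, b, c, d = caclm_segments(list(range(512)))
# Downstream: input B, predict C.  Hypermodel: encode A and D into phi.
print(len(a), len(b), len(c), len(d))  # prints: 160 32 160 160
```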
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/pretraining5.png}
\caption{
Overview of HyperPretraining using the Context-Augmented Conditional Language Modeling (CACLM) objective to train a hypermodel to predict PEFT parameters $\phi$.
(A) Sample a sequence of 512 tokens from a pretraining corpus, and split it into four segments A--D.
(B) The frozen downstream model takes as input B and predicts continuation C.
(C) The hypermodel is trained to encode additional context A and D into PEFT parameters $\phi$, providing additional information to the downstream model to predict C.
}
\label{fig:hyperpretraining}
\end{figure}
During hyperpretraining, we freeze the downstream model and only update the hypermodel parameters, training for 100K steps on the C4 dataset \citep{raffel2020t5}.
We perform hyperpretraining separately for HyperT5-Prefix and HyperT5-LoRA models.
Hyperparameters can be found in Appendix~\ref{app:trainingdetails}.
\section{Multi-Task Fine-Tuning with HyperT5}
\subsection{Multitask Fine-Tuning (MTF)}
After hyperpretraining, we conduct a second stage of training to train the hypermodel to generate task-specific PEFT parameters based on a small number of examples that we provide as input (Figure~\ref{fig:summary_plot}C).
By performing multi-task fine-tuning on a sufficiently large number of tasks, we hope to have the hypermodel learn to generalize to generate parameters for unseen tasks.
We adopt a similar training setup to MetaICL \citep{min-etal-2022-metaicl}, which uses multi-task fine-tuning \citep{sanh2022t0, wei2022flan} with both a target input example ($x$) and a set of few-shot input-output pairs $\{(x_i,y_i)\}_K$ as inputs.
The hypermodel takes the few-shot pairs as input, while the downstream model takes the target example as input, as shown in Equation~\ref{eq:hypermodel}.
We fine-tune only the hypermodel parameters and keep the downstream model parameters fixed, unless otherwise stated.
Appendix~\ref{app:inputformatting} shows how we format the few-shot inputs.
We compare our approach with two baselines: multi-task fine-tuning of a T5 model without few-shot inputs, and MetaICL (multi-task fine-tuning with few-shot inputs).
In MetaICL, the few-shot pairs are concatenated with the target example as input, both during training and evaluation on new tasks.
We also include baselines that use PEFT methods for multi-task fine-tuning, i.e. learning a single set of prefix tuning or LoRA parameters.
We perform multi-task fine-tuning for 10,000 steps with a batch size of 256.
For models that use few-shot inputs (T5-MTF-Few-shot and the hypermodels), we use up to 16 examples, and truncate tokens that exceed the maximum input length.
Appendix~\ref{app:datadetails} provides more details on the datasets.
\subsection{Datasets}
To demonstrate the generality of our approach, we conduct experiments on three different multi-task training datasets, each with different held-out tasks and evaluation protocols.
\textbf{Public Pool of Prompts (P3)} \citep{sanh2022t0} consists of 62 task datasets, and was used in training the T0 models.
The prompts are formatted with 0-shot inference in mind, and often contain instructions or the possible answer options.
For training our models, we use the T0-train subset. In order to fit multiple examples into the hypermodel's context, we further exclude dataset-prompt subsets with average input sequence lengths longer than 320 tokens. The list of included dataset-prompts can be found in Figure~\ref{fig:appp3train}.
Evaluation is performed on a fixed set of held-out tasks, based on multiple-choice scoring with accuracy.
We exclude StoryCloze from evaluation as the task is not distributed with training data.
\textbf{MetaICL} \citep{min-etal-2022-metaicl} introduced a few-shot multi-task training dataset, which is an extension of CrossFit \citep{ye-etal-2021-crossfit} with UnifiedQA \citep{khashabi-etal-2020-unifiedqa} and the addition of training data.
For brevity, we will refer to this dataset as MetaICL.
Unlike P3 and S-NI, the task inputs are not formatted for 0-shot inference; for instance, the task inputs may give no clue as to the goal of the task, or what the output space is.
They provide several train-test task splits, of which we run our experiments on three (HR$\rightarrow$LR, Non-NLI$\rightarrow$NLI, Non-Class$\rightarrow$Class) to economize on computation costs.
Evaluation is performed on held-out tasks, with ROUGE or Macro-F1 on model generations depending on the task.
\textbf{Super-NaturalInstructions (S-NI)} \citep{wang2022sni} consists of over 1,600 task datasets, each with a task definition as well as a fixed set of positive and negative demonstrations.
Following their findings, we focus our experiments on two settings: using only the task definition as the hypermodel input, and using definitions alongside two fixed positive examples.
We only use the English tasks within the dataset.
Evaluation is performed on a set of held-out tasks using ROUGE-L on model generations.
\subsection{Results}
\label{sec:results}
\subsubsection{P3}
\label{sec:results_p3}
\input{tables/table_01_p3}
\input{tables/table_02_p3_3b}
Table~\ref{tab:table_01_p3} and Table~\ref{tab:table_02_p3_3b} show the results of our experiments on the P3 dataset using T5-Large ($\sim$770M parameters) and T5-XL ($\sim$3B parameters), respectively.
We compare our HyperT5-Prefix and HyperT5-LoRA, which use hypermodels to generate task-specific PEFT parameters based on few-shot examples, with several baselines: prefix tuning, LoRA tuning, T5-MTF, and T5-MTF-Few-shot.
T5-MTF is a model that roughly corresponds to the T0 model, and we detail the differences in Appendix~\ref{app:p3details}.
Our results show that both HyperT5-Prefix and HyperT5-LoRA significantly improve over the prefix and LoRA tuning baselines, indicating the effectiveness of using hypermodels to adapt the frozen downstream T5 model to unseen tasks.
HyperT5-Prefix achieves performance close to T5-MTF, while T5-MTF-Few-shot attains the highest scores, in line with the findings of \citet{min-etal-2022-metaicl}.
These patterns are consistent across T5-Large and T5-XL,\footnote{We note that T0-XL performs much worse than our trained T5-MTF, which is in agreement with other work \citep{anonymous2023metrot5,wu-etal-2022-continued} that have reported similar results in replicating T0.} demonstrating the scalability of hypertuning.
We emphasize that HyperT5-Prefix/LoRA only introduces a very small number of PEFT parameters in the frozen downstream T5 model, whereas all parameters are tuned in the T5-MTF and T5-MTF-Few-shot models.
Moreover, the P3 examples are written with prompt templates that are optimized for zero-shot inference, which is the ideal input format for T5-MTF.
Furthermore, T5-MTF-Few-shot has full, bidirectional self-attention between the target input $x$ and the few-shot examples, whereas HyperT5-Prefix and HyperT5-LoRA only incorporate information from the few-shot examples via the respective PEFT parameters.
To investigate whether the hypermodel benefits are complementary to updating the downstream model parameters, we conduct an additional set of experiments where we jointly train both the hypermodel and the downstream model (HyperTuning + Fine-Tuning), with results shown at the bottom of Table~\ref{tab:table_01_p3}.
We observe that both HyperT5-Prefix+ and HyperT5-LoRA+ slightly surpass T5-MTF-Few-shot, suggesting that the hypermodels can further enhance the performance of fine-tuned downstream models.
\subsubsection{MetaICL}
\label{sec:results_metaicl}
Table~\ref{tab:table_03_metaicl} presents the results on three MetaICL task splits.
As in the previous experiments, both HyperT5 models surpass the PEFT models and T5-MTF in performance, except for T5-MTF-Few-shot, which outperforms them in all but one case: Non-NLI$\rightarrow$NLI, where HyperT5-Prefix achieves a higher score.
T5-MTF performs poorly in the MetaICL experiments, as it has to handle task examples zero-shot, and the MetaICL inputs are not suitable for zero-shot inference, as explained above.
\input{tables/table_03_metaicl}
\subsubsection{Super-NaturalInstructions (S-NI)}
\label{sec:results_natinst}
We report the results on the different S-NI settings in Table~\ref{tab:table_04_natinst} for T5-Large and Table~\ref{tab:table_05_natinst_3b} for T5-XL, using both Def (definition-only) and Def+2Pos (definition and two fixed positive examples) settings.
The T5-MTF (Def) and T5-MTF (Def+2Pos) models are similar to the corresponding T$k$-Instruct variants \citep{wang2022sni}, with a slight difference in input formatting (see Appendix~\ref{app:inputformatting}).
For the hypermodels, we prepend the task definitions to the few-shot examples and treat them as part of the hypermodel input.
On average, HyperT5 with Def+2Pos outperforms T5-MTF (Def) by a large margin, but still underperforms T5-MTF (Def+2Pos), in line with the above results.
\input{tables/table_05_natinst_rouge}
\subsection{Discussion}
Above, we evaluated hypermodels on three multi-task datasets, where they generate task-specific soft prefixes or LoRA parameters from a few examples or instructions.
In general, HyperT5 matched or exceeded T5-MTF models, but lagged behind T5-MTF-Few-shot models (or Def+2Pos models, in the case of S-NI).
This gap is expected, as T5-MTF-Few-shot uses full self-attention between the examples and the target input $x$, while HyperT5 encodes the examples into PEFT parameters that are independent of $x$.
We attribute some of the gap to this limitation.
However, this limitation also confers efficiency advantages to HyperT5 at inference time compared to T5-MTF-Few-shot.
In encoder-decoders such as T5, the full self-attention between the examples and $x$ prevents the separation of their representations: a new forward pass is needed for each new $x$.
In contrast, for hypermodels the examples can be encoded into PEFT parameters once, and reused for all subsequent inputs.
Even for decoder-only models (e.g. MetaICL based on GPT-2), where the examples can be cached as key and value representations, the cache size is likely much larger than the PEFT parameters, as the cache stores all the representations for every token in the examples, which are several times longer than the input by definition.
Thus, hypermodels in our setup sacrifice some performance for efficiency.
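A back-of-envelope comparison makes this memory trade-off concrete; all sizes below are illustrative assumptions (counts of stored floats, not measured values):

```python
# Back-of-envelope memory comparison (floats stored, not bytes).
# Illustrative values: L layers, H hidden size, P prefix tokens,
# and K few-shot examples of T_ex tokens each.
L, H, P = 24, 1024, 16
K, T_ex = 16, 128

kv_cache = 2 * L * (K * T_ex) * H   # cached keys+values for all example tokens
prefix_params = L * 2 * 2 * P * H   # generated prefix parameters

print(kv_cache // prefix_params)  # prints: 64
```

Under these assumptions the key/value cache for the few-shot examples is 64x larger than the generated prefix parameters, since the cache grows with the total number of example tokens while the prefix size is fixed.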
Regarding T5-MTF, one might wonder what the concrete benefit of HyperT5 is, given their similar performance.
After all, unlike T5-MTF-Few-shot, T5-MTF only uses $x$ as the input, requiring no extra computation or memory, and only one set of model weights.
Firstly, we stress that the HyperT5 model can only affect the downstream model through a small number of modified parameters, while in T5-MTF all the parameters that process $x$ are modified.
Although HyperT5 and T5-MTF have roughly the same number of tuned parameters, the parameters modified in T5-MTF directly interact with the input $x$, which we expect to help performance.
Secondly, we identify two separate, but possibly related, sources of performance improvement: better general task performance of the downstream model (which is usually the goal of MTF training), and adapting the downstream model to a new task based on few-shot examples, using hypermodels in our case.
Our aim in this work is to show the feasibility of the latter. We argue that both sources are complementary, and we showed in Section~\ref{sec:results_p3} that when we use hypermodels without freezing the downstream model, thereby acquiring both benefits, performance further improves.
More generally, we expect that training a hypermodel against an already multi-task fine-tuned model will lead to better performance than just using the model for zero-shot inference alone, and we plan to explore this in future work.
We also observe a consistent trend where HyperT5-Prefix outperforms HyperT5-LoRA.
We speculate that it is easier for hypermodels to learn to generate soft prefixes than LoRA weights, since soft prefixes are effectively model-internal hidden states, and the generated PEFT parameters are themselves transformations of the hypermodel hidden states.
Incidentally, another possible interpretation of the HyperT5-Prefix model is that the combination of the hypermodel and the downstream model can be seen as a dual-encoder, single-decoder model with separate encoders for the few-shot examples and the target example.
Lastly, the majority of the experiments were conducted with minimal hyperparameter-tuning, and the current results primarily serve as a proof-of-concept of hypertuning being a viable approach to adapt downstream models.
We expect that further exploration of hyperpretraining and MTF hyperparameters as well as hypermodel architectures may lead to better results and overcome some of the limitations we identified.
\subsection{Is HyperPretraining Necessary?}
\label{sec:hyperpretrainingresults}
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/hyperpretraining_steps_prefix.pdf}
\caption{HyperT5-Prefix}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figs/hyperpretraining_steps_lora.pdf}
\caption{HyperT5-LoRA}
\end{subfigure}
\caption{
Performance of HyperT5 models on P3 evaluation with different amounts of hyperpretraining.
HyperPretraining is crucial for good performance of the hypermodels.
However, hyperpretraining for too many steps can also hurt performance (as seen in the case of HyperT5-LoRA).
}
\label{fig:hyperpretraining_steps}
\end{figure}
We demonstrate the benefits of hyperpretraining for the hypermodels in this section.
As described above, we hyperpretrained the hypermodels for 100k steps before multi-task fine-tuning them on P3 tasks.
To examine the impact of hyperpretraining, we also multi-task fine-tuned HyperT5-Prefix and HyperT5-LoRA from LM-adapted T5 without any hyperpretraining, and from intermediate checkpoints over the course of hyperpretraining.
Figure~\ref{fig:hyperpretraining_steps} shows the average scores on the held-out tasks for these models.
Both HyperT5 models perform very poorly without any hyperpretraining, achieving scores similar to PEFT-only (see Table~\ref{tab:table_01_p3}).
With hyperpretraining, the performance of both hypermodels significantly improves.
While HyperT5-Prefix appears to consistently improve over the course of 100k steps, we observe that HyperT5-LoRA performance slightly declines after 50k steps.
Hypermodels targeting different PEFT methods may benefit from different amounts of hyperpretraining, and we emphasize that our choice of the number of hyperpretraining steps is by no means optimal.\footnote{We chose 100k steps based on the T5 LM-adaptation procedure \citep{lester2021prompt}. }
We expect that better hyperpretraining configurations can be explored in future work.
\section{HyperModels for Improved Parameter Initialization}
\label{sec:initialization}
Thus far, we have discussed hypermodels in the context of generating PEFT parameters in a single forward pass through the hypermodel.
We can also consider an alternative use of hypermodels: Instead of randomly initializing new parameters, we can use hypermodels to produce task-specific PEFT parameters based on a few examples from the task.
This can be seen as using task knowledge acquired by the hypermodel during training to provide a first approximation of PEFT parameters, and thereafter refining the parameters via regular PEFT training.
In conventional PEFT, wherever new parameters are introduced into the model, they are either initialized randomly, or with fixed initial values (e.g. the up-projection weights in LoRA are initialized to 0)--for brevity, we will refer to this simply as random initialization.
Beyond random initialization, \citet[][SPoT]{vu2021spot} and \citet[][PPT]{gu2021ppt} have explored transfer-learning within PEFT, first doing PEFT on one or more upstream tasks, and then using the learned PEFT parameters as an initialization for downstream PEFT.
This approach has two advantages over conventional PEFT initializations.
First, the hypermodel-generated parameters already perform well on the task, as shown in Section~\ref{sec:results}, so PEFT training can reach good performance faster.
Second, the hypermodel can automatically transfer relevant knowledge from previous tasks to the new task, similar to SPoT and PPT, except we let the hypermodel determine what previously learned task knowledge is most applicable to the new task.
For instance, a major challenge addressed in SPoT was searching for the set of upstream tasks whose PEFT parameters would be the most appropriate initialization for a downstream task--in our case, we can directly provide a hypermodel with few-shot examples to generate our desired initialization.
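The workflow can be sketched on a toy linear adapter, with a least-squares fit standing in for the hypermodel-generated initialization (everything here, including the model and data shapes, is an illustrative assumption, not the actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
theta0 = rng.normal(size=(d, 1))                  # frozen downstream weights
X, y = rng.normal(size=(64, d)), rng.normal(size=(64, 1))

def predict(phi):
    return X @ theta0 + X @ phi

def loss(phi):
    return float(np.mean((y - predict(phi)) ** 2))

# "Hyper Init": in the real pipeline phi would come from the hypermodel run
# on 16 few-shot examples; here a least-squares fit stands in for it.
phi_hyper, *_ = np.linalg.lstsq(X, y - X @ theta0, rcond=None)
phi_rand = np.zeros((d, 1))                       # conventional initialization

# Continue with ordinary PEFT training (plain gradient descent) from each init.
def tune(phi, steps=50, lr=0.01):
    phi = phi.copy()
    for _ in range(steps):
        grad = -2 * X.T @ (y - predict(phi)) / len(X)
        phi -= lr * grad
    return loss(phi)

print(loss(phi_hyper) <= loss(phi_rand))  # hyper init starts at lower loss
print(tune(phi_hyper) <= tune(phi_rand))  # and stays ahead after tuning
```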
To investigate the effectiveness of using hypermodels to generate PEFT initializations, we use the P3-trained models from Section~\ref{sec:results_p3}, and perform prefix tuning and LoRA tuning on the held-out tasks individually.\footnote{We use one specific prompt format for each task, listed in Appendix~\ref{app:p3details}.}
For each method-task pair, we sweep across learning rates $\{1e^{-3}, 1e^{-4}, 1e^{-5}\}$ and take the best average result over 3 random seeds.
We consider two baselines for initializations: random initialization (Rand Init)
and using the multi-task fine-tuned PEFT parameters from Section~\ref{sec:results_p3} as initializations (Shared Init).
The hypermodel-generated initialization (Hyper Init) is generated using a randomly sampled set of 16 examples from the respective training sets.
We show the results of prefix tuning\footnote{Prefix tuning is performed via a reparameterization, in line with standard practice. Refer to Appendix~\ref{app:prefixtuning} for details.} and LoRA tuning with different initialization schemes in Table~\ref{tab:table_07_peft}.
We observe that for both prefix tuning and LoRA tuning, shared initialization significantly outperforms random initialization, while using a hypermodel-generated initialization outperforms both on average.
We also show the average performance across tasks over the course of tuning in Figure~\ref{fig:peft_init}.
We observe that hypermodel-generated initializations start with much better performance compared to the other two initialization schemes, and continue to outperform them over the course of fine-tuning.
Hence, hypermodels can be complementary to a standard PEFT pipeline, providing both performance gains and computational cost savings.
\input{tables/table_07_peft}
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/prefix_mlp.pdf}
\caption{Prefix Tuning}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{figs/lora.pdf}
\caption{LoRA}
\end{subfigure}
\caption{
Average performance on P3 held-out tasks with prefix tuning and LoRA, using different parameter initializations.
Using hypermodel-generated initializations starts with higher performance and continues to perform better on average over the course of training.
}
\label{fig:peft_init}
\end{figure}
\section{Conclusion}
We introduce the concept of \textit{hypertuning}, which leverages a hypermodel to adapt a downstream model to a specific downstream application.
We present a basic framework for hypertuning, where a hypermodel is trained to produce parameters for a downstream model from few-shot examples in one forward pass, and we apply this framework to train HyperT5-Prefix and HyperT5-LoRA models that can adapt a fixed downstream T5 model.
We find that a two-stage training procedure of hyperpretraining and multi-task fine-tuning is effective for training hypermodels, and we evaluate the HyperT5 models on P3, MetaICL and S-NI datasets, showing that they can generate PEFT parameters that enable the downstream T5 models to perform well on unseen tasks.
Furthermore, the parameters generated by hypertuning can also serve as improved parameter initializations for parameter-efficient fine-tuning.
We regard these findings as an initial but encouraging indication of the potential of adapting large language models without back-propagation.
\section{Acknowledgements}
We would like to thank Sam Bowman for their thoughtful feedback and Jonas Pfeiffer for early idea discussion.
\section{Introduction}
\label{sec:intro}
The detection of gravitational waves (GWs) and light from the binary neutron star (NS) merger GW170817 \cite{theligoscientific:2017qsa, abbott:2018wiz, gbm:2017lvd} last year inaugurated the era of multimessenger astronomy with GWs. The electromagnetic (EM) counterpart, now called AT2017gfo/GRB 170817A, had thermal and non-thermal components. The latter consists of a prompt gamma-ray flash generated by a relativistic outflow \cite{monitor:2017mdv} and long lasting synchrotron emission powered by the interaction of this outflow with the interstellar medium (ISM) \cite{kasliwal:2017ngb, margutti:2017cjl, mooley:2017enz, lazzati:2017zsj, margutti:2018xqd, alexander:2018dcl, mooley:2018qfh, ghirlanda:2018uyx}. The thermal component, the so-called kilonova (kN), is thought to have been powered by the radioactive decay of ${\sim}0.03{-}0.06\, M_\odot$ of NS matter ejected during and shortly after the merger \cite{chornock:2017sdf, cowperthwaite:2017dyu, drout:2017ijr, nicholl:2017ahq, tanaka:2017qxj, tanvir:2017pws, perego:2017wtu, villar:2017wcc, waxman:2017sqv, metzger:2018uni, kawaguchi:2018ptg}.
These landmark observations had a far-reaching impact in nuclear and
high-energy astrophysics. The GW data have been used to constrain the NS
tidal deformability \cite{theligoscientific:2017qsa, abbott:2018wiz,
De:2018uhw, abbott:2018exr} and to derive new bounds on the poorly known
equation of state (EOS) of matter at supernuclear densities
\cite{annala:2017llu, fattoyev:2017jql, most:2018hfd, tews:2018iwm,
malik:2018zcf, abbott:2018exr, tsang:2018kqj}. The non-thermal EM
counterpart provided the first direct evidence that NS mergers power
short gamma-ray bursts (sGRBs) \cite{Paczynski:1986px, Eichler:1989ve,
Nakar:2007yr, Berger:2013jza, monitor:2017mdv}, and the thermal
counterpart confirmed that NS mergers are one of the main sites of production of $r$-process elements \cite{kasen:2017sxr, hotokezaka:2018aui}.
The inclusion of sky position and distance information obtained from the EM observations into the GW Bayesian data analysis allowed for a tighter determination of some of the binary parameters \cite{finstad:2018wid, De:2018uhw, abbott:2018wiz}. A joint GW and EM analysis has also been used to measure the Hubble constant \cite{abbott:2017xzu, hotokezaka:2018dfi}. Refs.~\cite{Gao:2017fcu, Pankow:2018iab} proposed to combine EM and GW data to constrain the mass ratio of the two NSs. Moreover, the EM data suggest that the merger resulted neither in prompt black hole (BH) formation, nor in the formation of a long-lived remnant \cite{margalit:2017dij}. This observation has been used to derive additional constraints on the NS EOS and, in particular, on the maximum mass for a nonrotating NS \cite{margalit:2017dij, shibata:2017xdx, rezzolla:2017aly, ruiz:2017due}. Ref.~\cite{bauswein:2017vtn} used an empirical relation between the threshold mass for prompt BH formation and the radius of the $1.6$-$M_\odot$ NS to place a lower bound on the latter. Ref.~\cite{radice:2017lry} pointed out that the EM observations also imply a lower limit on the tidal parameter $\tilde\Lambda$, \textit{e.g.}, Refs.~\cite{flanagan:2007ix, favata:2013rwa}. This is because, on the one hand, the amount of material ejected during merger is weakly dependent on $\tilde\Lambda$. On the other hand, the overall ejecta and associated kN are dominated by neutrino- and viscous-driven winds from the accretion disk and the mass of the latter strongly depends on $\tilde\Lambda$ \cite{Radice:2018pdn}. A similar approach, but based on the assumption that the outflow was proportional to the amount of the dynamical ejecta, has been proposed by Ref.~\cite{Coughlin:2018miv}.
Here, we extend the work of Ref.~\cite{radice:2017lry}. We incorporate numerical relativity results in a joint Bayesian analysis of the GW and EM data, and we improve the measurement on the binary mass ratio and the tidal deformability. The approach we present here is fully general and will become even more powerful when more accurate simulations spanning a larger portion of the binary parameter space become available.
The remainder of this paper is organized as follows. We discuss the numerical simulations and the setup for the Bayesian analysis in Sec.~\ref{sec:methods}. We give an account of our results in Sec.~\ref{sec:results}. Finally, Sec.~\ref{sec:conclusions} is dedicated to discussion and conclusions.
\section{Methods}
\label{sec:methods}
We perform Bayesian parameter estimation using the combined GW and EM data to determine posteriors for the binary parameters $\theta = \{\mathcal{M}^{\rm det}, q, \chi_{\rm eff}, \chi_a, \tilde\Lambda, t_{c,1}, t_{c,2} \}$, where $\mathcal{M}^{\rm det}= (1+z)\,(M_1\,M_2)^{3/5}/(M_1 + M_2)^{1/5}$ is the detector-frame chirp mass, $q = M_2/M_1 \leq 1$ is the binary mass ratio, $\chi_{\rm eff} = (M_1 \chi_{1z} + M_2 \chi_{2z})/(M_1 + M_2)$ and $\chi_a = (\chi_{1z} - \chi_{2z})/2$ are the parameters describing spin components aligned with the binary orbital angular momentum, and $t_{c,1}$ and $t_{c,2}$ are the arrival times at Livingston and at Hanford, respectively. Not aiming to measure the source's orientation and its sky position, we independently maximize the likelihood at each detector with respect to a constant wave phase and an amplitude normalization, and we assume that $t_{c, 1}$ and $t_{c, 2}$ can be independently adjusted. This approximation greatly simplifies the parameter estimation by reducing the number of parameters. Since GW170817 has a high matched filtering signal-to-noise ratio (SNR), this simplification does not bias the maximum-likelihood values of the parameters but only leads to percent-level increase of their uncertainties~\cite{Roulet:2018jbe}.
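The mapping from component masses and aligned spin components to the sampled parameters $\mathcal{M}^{\rm det}$, $q$, $\chi_{\rm eff}$, and $\chi_a$ defined above can be sketched as follows. This is an illustrative sketch only; the numerical values for the component masses and spins are placeholders, not GW170817 posterior values.

```python
def chirp_mass_det(m1, m2, z):
    """Detector-frame chirp mass (1+z)(m1 m2)^{3/5} / (m1+m2)^{1/5}."""
    return (1.0 + z) * (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def spin_params(m1, m2, chi1z, chi2z):
    """Aligned-spin combinations chi_eff and chi_a."""
    chi_eff = (m1 * chi1z + m2 * chi2z) / (m1 + m2)
    chi_a = (chi1z - chi2z) / 2.0
    return chi_eff, chi_a

# Placeholder component masses (solar masses), aligned spins, and redshift.
m1, m2, z = 1.48, 1.27, 0.0099
q = m2 / m1                        # mass ratio, q <= 1 by convention
mc_det = chirp_mass_det(m1, m2, z)
chi_eff, chi_a = spin_params(m1, m2, 0.02, -0.01)
```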
Assuming GW and EM data to be independent, we can write the joint GW and EM likelihood as the product of the separate likelihoods, namely
\begin{equation}
P\big[\{d_{\rm GW}, d_{\rm EM} \} | \theta\big] = P[d_{\rm GW} | \theta] \,P[d_{\rm EM} | \theta],
\end{equation}
where $d_{\rm GW}$ and $d_{\rm EM}$ denote the GW and EM data, respectively.
We compute the first factor with the relative binning method~\cite{zackay:2018qdy, dai:2018dca}. We use the noise-subtracted LIGO data release\footnote{In the noise-subtracted data release, the glitch that happened to overlap with GW170817 in the Livingston strain has been removed by the LIGO/Virgo collaboration.} of GW170817 and include frequencies in the range $[23,\,1000]\,$Hz. The exclusion of higher-frequency GW data results in a slightly broader posterior of $\tilde\Lambda$ whose support also extends to somewhat larger values, as discussed in detail in Ref.~\cite{dai:2018dca}. It is important, however, to remark that the two NSs first touch when the GW frequency is between 1.0~kHz and 1.5~kHz \cite{Damour:2009wj}. It is thus not clear whether or not current waveform models, which are typically constructed by adding tidal corrections to point-particle models, are reliable past 1~kHz, \textit{e.g.}, Ref.~\cite{Kawaguchi:2018gvj}. Consequently, to be conservative, we restrict our analysis to the part of the GW signal below a frequency of 1~kHz, which is theoretically well understood. We use the phenomenological waveform model \texttt{IMRPhenomD\_NRTidal} \cite{Dietrich:2017aum, Dietrich:2018uni} implemented in \texttt{LALSuite}.
We follow Ref.~\cite{abbott:2018wiz} for the choice of priors. Both component masses have flat priors in the range $[0.5,\,7.7]\ M_\odot$. The two dimensionless spin vectors have their moduli uniformly distributed in $[0,\,0.89]$ and have isotropic orientations. Their aligned components are then extracted and used to evaluate the non-precessing waveform model \linebreak \texttt{IMRPhenomD\_NRTidal}.
Following the prescription of Ref.~\cite{De:2018uhw}, we relate the component tidal deformability parameters through $\Lambda_1 = \Lambda_s\,q^3$ and $\Lambda_2 = \Lambda_s/q^3$, where $\Lambda_s$ is assigned a uniform prior within $[0,\,5000]$. This implicitly assumes that no first-order phase transition occurs in matter at densities intermediate between those achieved in the secondary and in the primary NS, so that the two NS radii are comparable. Note that the error introduced by assuming that the NSs have commensurate radii is much smaller than current statistical errors \cite{De:2018uhw}. This choice is also consistent with the use of data from our simulations, which do not account for the possibility of first-order phase transitions in dense matter. Finally, we exclude $\tilde\Lambda > 5000$, which is unrealistic for any plausible EOS.
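The prescription relating the component deformabilities to the sampled quantities $\Lambda_s$ and $q$ is simple enough to state directly; a minimal sketch:

```python
def component_lambdas(lambda_s, q):
    """Component tidal deformabilities from the prescription
    Lambda_1 = Lambda_s q^3, Lambda_2 = Lambda_s / q^3, with q = M2/M1 <= 1."""
    return lambda_s * q ** 3, lambda_s / q ** 3
```

By construction the heavier (primary) star is less deformable, $\Lambda_1 \leq \Lambda_2$, and the geometric mean of the two values is $\Lambda_s$.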
\begin{figure}
\includegraphics[width=0.98\columnwidth]{fig01.pdf}
\caption{Remnant disk mass as a function of the tidal deformability parameter $\tilde\Lambda$. The data points show the results from our simulations, while the dashed line shows the fit in the form of Eq.~(\ref{eq:mdisk_fit}). The gray shaded region in the lower panel shows the uncertainty $\sigma$ we use in Eq.~(\ref{eq:pdisk}). We find that disk formation is suppressed in the case of prompt BH formation.}
\label{fig:mdisk}
\end{figure}
Current models of the EM signal are not yet sufficiently advanced to follow the same procedure as for the GW data. However, extant light curve models indicate that $0.02{-}0.05\, M_\odot$ of material with a broad distribution in electron fraction and asymptotic velocity of ${\sim}0.1\, c$ is needed to explain the observations \cite{tanaka:2017qxj, perego:2017wtu, villar:2017wcc, waxman:2017sqv, kawaguchi:2018ptg}. Because of their properties, these ejecta are thought to originate from winds launched from the remnant accretion disk after merger, \textit{e.g.}, Ref.~\cite{Metzger:2017wot}. Long term simulations of postmerger disks indicate that these winds can entrain $10{-}40\, \%$ of the total disk mass \cite{dessart:2008zd, metzger:2008av, metzger:2008jt, lee:2009a, fernandez:2013tya, siegel:2014ita, just:2014fka, metzger:2014ila, perego:2014fma, martin:2015hxa, wu:2016pnw, siegel:2017nub, lippuner:2017bfm, fujibayashi:2017xsz, fujibayashi:2017puw, siegel:2017jug, metzger:2018uni, radice:2018xqa, fernandez:2018kax}. Consequently, we can conservatively estimate that a disk of at least $0.04\, M_\odot$ should have formed in GW170817. Accordingly, we approximate the EM likelihood as
\begin{equation}
P[d_{\rm EM} | \theta] \simeq P[M_{\rm disk}(\theta) > 0.04\, M_\odot].
\end{equation}
We have performed numerical relativity simulations of merging NS using the \texttt{WhiskyTHC} code \cite{radice:2012cu, radice:2013hxh, radice:2013xpa}. We considered 29 binaries, including both equal and unequal mass configurations and 4 temperature and composition dependent nuclear EOSs: the DD2 EOS \cite{typel:2009sy, hempel:2009mc}, the BHB$\Lambda\phi$ EOS \cite{banik:2014qja}, the LS220 EOS \cite{lattimer:1991nc}, and the SFHo EOS \cite{steiner:2012rk}. The simulations included temperature and compositional changes due to the emission of neutrinos using a leakage scheme \cite{radice:2016dwd}. A detailed account of the numerical results is given in Refs.~\cite{radice:2017lry, radice:2018xqa, Radice:2018pdn}.
The simulation data suggest that the remnant disk masses can be related to the tidal deformability parameter $\tilde\Lambda$ through the fitting formula \cite{Radice:2018pdn}
\begin{equation}\label{eq:mdisk_fit}
\begin{split}
&\log\left(\frac{M_{\rm disk}}{M_\odot}\right) \simeq \Phi(\tilde\Lambda) := \\
&\qquad\qquad \max \left\{ -3, \log\left[\alpha + \beta \tanh\left(
\frac{\tilde\Lambda - \gamma}{\delta} \right)\right] \right\},
\end{split}
\end{equation}
with coefficients $\alpha = 0.084$, $\beta = 0.127$, $\gamma = 567.1$, and $\delta = 405.14$. The numerical data, the best fit, and the residual are shown in Fig.~\ref{fig:mdisk}. We remark that our simulations have only sampled the region of parameter space with $q \geq 0.85$. Smaller mass ratios could result in larger disk masses for a given $\tilde\Lambda$. However, the variations of $M_{\rm disk}$ with $q$ reported in the literature, \textit{e.g.}, Refs.~\cite{Shibata:2006nm, Rezzolla:2010fd}, are not large enough to affect our results in a qualitative way. Moreover, large mass asymmetries are disfavored in light of the distribution of known binary NS systems in our Galaxy \cite{Tauris:2017omb}. We leave the determination of $M_{\rm disk}$ as a function of $q$ to future work.
For the likelihood calculation we assume $\log (M_{\rm disk}/M_\odot)$ to have a Gaussian distribution with mean $\Phi(\tilde\Lambda)$. We conservatively take the standard deviation to be $\sigma=0.5$. This uncertainty is indicated by the gray shaded region in the bottom panel of Fig.~\ref{fig:mdisk}. Accordingly, we approximate the EM likelihood as
\begin{equation}\label{eq:pdisk}
\begin{split}
P[M_{\rm disk} > & 0.04\, M_\odot] = \\
&1 - \frac{1}{2}\left[ 1 + \mathrm{erf}\left(\frac{\log(0.04) - \Phi(\tilde\Lambda)}{\sqrt{2} \sigma}\right) \right].
\end{split}
\end{equation}
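Both Eq.~(\ref{eq:mdisk_fit}) and Eq.~(\ref{eq:pdisk}) can be evaluated in closed form. A minimal sketch, using the fit coefficients and $\sigma = 0.5$ quoted above; the floor value $10^{-3}$ used to guard the logarithm is an implementation choice consistent with the $\max\{-3, \cdot\}$ in Eq.~(\ref{eq:mdisk_fit}):

```python
import math

# Fit coefficients of Eq. (mdisk_fit) and adopted log10 scatter
ALPHA, BETA, GAMMA, DELTA = 0.084, 0.127, 567.1, 405.14
SIGMA = 0.5

def log_mdisk(tilde_lambda):
    """Phi(tilde_lambda): fitted log10(M_disk / M_sun)."""
    arg = ALPHA + BETA * math.tanh((tilde_lambda - GAMMA) / DELTA)
    if arg <= 1e-3:           # floor at log10(M_disk) = -3; also guards arg <= 0
        return -3.0
    return max(-3.0, math.log10(arg))

def em_likelihood(tilde_lambda, threshold=0.04):
    """P[M_disk > threshold M_sun] assuming Gaussian scatter SIGMA in log10."""
    phi = log_mdisk(tilde_lambda)
    x = (math.log10(threshold) - phi) / (math.sqrt(2.0) * SIGMA)
    return 1.0 - 0.5 * (1.0 + math.erf(x))
```

Small $\tilde\Lambda$ hits the $-3$ floor and yields a likelihood close to zero, while large $\tilde\Lambda$ saturates at $M_{\rm disk} \simeq \alpha + \beta \approx 0.21\, M_\odot$, comfortably above the $0.04\, M_\odot$ threshold.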
To explore the parameter space and obtain posterior samples, we couple the evaluation of the likelihood function to \texttt{MultiNest}~\cite{Feroz:2008xx}. This is a Monte Carlo sampling algorithm that uses the technique of nested sampling and is designed to efficiently cope with disjoint multi-modal posteriors in the multi-dimensional parameter space.
\section{Results}
\label{sec:results}
\begin{figure*}
\includegraphics[width=\textwidth]{fig02.pdf}
\caption{Posterior distributions obtained with (red) and without (blue) the inclusion of the EM constraints. The marginalized prior distribution for each parameter is shown as the black histogram in the plots along the diagonal. The off-diagonal plots show contours enclosing 68\% and 95\% quantiles for the two-dimensional joint posterior distributions. On the upper right corner, we indicate for each of the parameters the median value and the uncertainty (also shown by the vertical lines in the plots along the diagonal). The uncertainty corresponds to the 5\% and 95\% percentiles. Instead of showing the two arrival times $t_{c, 1}$ and $t_{c, 2}$ separately, we show $t_{c, 1}$ (Livingston) and $\Delta t_c = t_{c, 2} - t_{c, 1}$. Our chosen zero point for $t_{c, 1}$ is $0.0035\,$s in advance of that for $t_{c, 2}$. The results are consistent with the causality bound on the time delay between the two LIGO sites. The EM data favors larger values of the tidal deformability parameter $\tilde\Lambda$ and of the mass ratio $q$, \textit{i.e.}, larger NS radii and more symmetric binaries.}
\label{fig:posteriors}
\end{figure*}
The results of our analysis are summarized in Fig.~\ref{fig:posteriors}. There we show the marginalized 1-parameter histograms as well as the marginalized 2-parameter joint distributions for the posterior samples obtained both with and without including the EM data in the likelihood. The results clearly show that the mutual delay $\Delta t_c := t_{c, 2} - t_{c, 1}$ between the two LIGO detectors does not correlate with any of the intrinsic parameters, and that the independently inferred arrival times at the two sites do not differ by more than the causality bound. These findings justify our simplification of ignoring the time, phase, and amplitude correlations between the GW signals recorded at the two detectors.
Our GW-only posteriors are consistent with those presented in Refs.~\cite{theligoscientific:2017qsa, abbott:2018wiz, De:2018uhw, abbott:2018exr}. See Ref.~\cite{dai:2018dca} for a more detailed discussion of the GW-only posteriors obtained with our approach. However, our posterior for $\tilde\Lambda$ is broader because of the more conservative choice of cutoff frequency for the GW data \cite{dai:2018dca}. This is expected because $\tilde\Lambda$ is mostly encoded in the high-frequency part of the GW signal \cite{Damour:2009wj, De:2018uhw}. Also note that there is a degeneracy between $\tilde\Lambda$ and the {\it common} arrival time of the two detectors. This is because both tidal deformability and the arrival time cause phasing corrections that grow as positive powers of the frequency $f$, with similar power indices: $5/3$ and $1$, respectively \cite{dai:2018dca}.
\begin{figure}
\includegraphics[width=0.98\columnwidth]{fig03.pdf}
\caption{Posterior distribution function for $\tilde\Lambda$ obtained with (red) and without (blue) the inclusion of the EM constraints. The inclusion of EM information shifts the posterior towards larger values of $\tilde\Lambda$ and further away from zero.}
\label{fig:lambda}
\end{figure}
The inclusion of EM information in the likelihood function has a strong impact on the recovered posterior for $\tilde\Lambda$, shown in Fig.~\ref{fig:lambda}. Values of $\tilde\Lambda$ smaller than about $300$ appear to be excluded by the EM data and the overall distribution for $\tilde\Lambda$ is shifted towards larger values. The 90\% confidence interval for $\tilde\Lambda$ shifts from $(53, 625)$ with median $297$ to $(323, 776)$ with median $487$. Other parameters that correlate with $\tilde\Lambda$ are also affected. The most notable is the binary mass ratio $q$, with the EM data favoring more comparable component masses (see Fig.~\ref{fig:posteriors}).
The lower limit $\tilde\Lambda \gtrsim 300$ is not as stringent as that of Ref.~\cite{radice:2017lry}, which found $\tilde\Lambda \gtrsim 400$. The reason for this discrepancy is that, in the analysis performed here, the probability of forming an accretion disk with a mass more than one standard deviation larger than $\Phi(\tilde\Lambda)$ is not zero, while it was implicitly assumed to be so in Ref.~\cite{radice:2017lry}. On the other hand, we want to emphasize that the goal of Ref.~\cite{radice:2017lry} was not to perform a fully quantitative analysis, as we have done here, but only to illustrate the key idea. In this sense, our results and those of Ref.~\cite{radice:2017lry} are fully consistent.
We can translate the measurement of $\tilde\Lambda$ into a constraint on the radius of a $1.4\ M_\odot$ NS following Refs.~\cite{De:2018uhw, Zhao:2018nyf}, which derived the EOS-insensitive relation
\begin{equation}\label{eq:r14}
R_{14} = (11.2 \pm 0.2) \frac{\mathcal{M}}{M_\odot} \left( \frac{\tilde\Lambda}{800} \right)^{1/6}\ {\rm km}.
\end{equation}
To apply this formula we compute the rest-frame binary chirp mass from the detector-frame chirp mass as $\mathcal{M} (1 + z) = \mathcal{M}^{\rm det}$, where $z$ is taken to be $0.0099$ following Ref.~\cite{abbott:2018wiz}. Accordingly, we find the median value of $\mathcal{M}$ to be $1.186\ M_\odot$. From the GW data alone we infer $R_{14} = (11.3^{+1.5}_{-2.8} \pm 0.2)\ {\rm km}$ (90\% credible interval, statistical and systematic uncertainties). With the additional constraint due to the EM data we find $R_{14} = (12.2^{+1.0}_{-0.8} \pm 0.2)\ {\rm km}$. The systematic errors in this estimate include only the uncertainty related to the use of Eq.~(\ref{eq:r14}), but not the possible systematic effects in our numerical relativity data, which we cannot presently quantify. Notwithstanding this caveat, our estimates provide the tightest constraint on the NS radius to date, with an uncertainty of only $2.2~{\rm km}$. Moreover, our analysis strongly disfavors NS radii smaller than $11.2\ {\rm km}$, which would have resulted in early BH formation and would have created accretion disks not sufficiently massive to fuel the outflow inferred from the kN observations.
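As a quick consistency check, the median values quoted above can be inserted into Eq.~(\ref{eq:r14}). This sketch uses only the central prefactor $11.2\,$km and omits the $\pm 0.2\,$km systematic uncertainty:

```python
def r14_km(mchirp, tilde_lambda):
    """EOS-insensitive estimate of the 1.4 M_sun NS radius in km,
    R_14 = 11.2 (M_chirp / M_sun) (tilde_lambda / 800)^{1/6}."""
    return 11.2 * mchirp * (tilde_lambda / 800.0) ** (1.0 / 6.0)

# Median source-frame chirp mass and median tilde-Lambda with the EM constraint
print(r14_km(1.186, 487.0))   # ~12.2 km
```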
\section{Conclusions}
\label{sec:conclusions}
We have performed a Bayesian parameter estimation analysis of GW170817/AT2017gfo combining both the GW and the EM data. Specifically, we have argued that EM observations imply a lower limit on the merger remnant disk mass of $0.04\ M_\odot$, and we have used a fit to the simulation data to estimate the probability with which such constraint is fulfilled depending on the binary tidal deformability parameter $\tilde\Lambda$. Then, we have assumed GW and EM data to be independent, and we have employed this probability to construct a joint likelihood for the GW and the EM data. We have used the relative binning method to efficiently evaluate the GW part of the likelihood, while the EM part of the likelihood is analytic. Finally, we have derived the posterior probabilities for binary parameters using a multimodal nested sampler.
We find that the inclusion of the EM information shifts the support of the posterior distribution for $\tilde\Lambda$ to larger values than those inferred from the GW data alone. In particular, values of $\tilde\Lambda$ less than ${\sim}300$ are excluded. This corresponds to a lower limit on the radius of a $1.4\ M_\odot$ NS $R_{14}$ of $11.2\ {\rm km}$. The 90\% credible interval for $R_{14}$ is found to be $12.2^{+1.0}_{-0.8}\ {\rm km}$ with an additional $0.2\ {\rm km}$ of systematic uncertainty. EM data also favors larger values of $q$, \textit{i.e.}, a more symmetric binary, compared to inference from the GW data alone.
We have assumed that both NSs in GW170817 had similar radii, following Ref.~\cite{De:2018uhw}. However, this hypothesis would be violated in the presence of a first-order phase transition at densities intermediate between those achieved in the primary and in the secondary NS. Such a scenario, the so-called twin-star hypothesis, is presently not excluded for GW170817 \cite{Paschalidis:2017qmb}. If GW170817 was the merger of a regular NS with a hybrid star or a quark star, then our analysis would be invalid. The empirical formula used to relate $\Lambda_1$ and $\Lambda_2$ and Eq.~(\ref{eq:r14}) can be extended to deal with phase transitions, but only at the price of significantly larger systematic errors \cite{Zhao:2018nyf}. Perhaps more importantly, our analysis relies on fits to a relatively large, but still limited, set of numerical relativity simulations that do not include examples with first-order phase transitions. Additional simulations, spanning a larger range of the parameter space and more EOSs and including cases with first-order phase transitions, would be required to confirm our results. This will be the object of future work.
\subsection*{Acknowledgments}
It is a pleasure to acknowledge Albino Perego, Sebastiano Bernuzzi, Tim Dietrich, Ingo Tews, Sanjay Reddy, Matias Zaldarriaga, and Adam Burrows for discussions.
DR acknowledges support from a Frank and Peggy Taplin Membership at the Institute for Advanced Study and the Max-Planck/Princeton Center (MPPC) for Plasma Physics (NSF PHY-1804048).
Computations were performed on the supercomputers Bridges, Comet, and Stampede (NSF XSEDE allocation TG-PHY160025), on NSF/NCSA Blue Waters (NSF PRAC ACI-1440083 and AWD-1811236), and on CINECA's Marconi (PRACE proposal 2016153522). LD is partially supported at the Institute for Advanced Study by NASA through Einstein Postdoctoral Fellowship grant number PF5-160135 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. LD is also supported at the Institute for Advanced Study by the Raymond and Beverly Sackler Foundation.
\bibliographystyle{epj}
\section{INTRODUCTION}
Flows driven by active agents display a rich variety of dynamical states \cite{ramaswamy10arcm,marchetti13rmp,yeomans2014natmat}. Active stresses and hydrodynamics collude to create collective motion, both regular and chaotic, in systems of motile micro-organisms \cite{mendelson99jba,dombrowski04prl,ishikawa08prl} or artificial self-propelled agents \cite{howse07prl,bricard13nat} on scales much larger than the individual. For example, sufficiently dense suspensions of motile micro-organisms, such as {\it B. Subtilis}, exhibit a spatio-temporally disordered phase. Owing to its reminiscence of hydrodynamic turbulence, this phenomenon has been termed active turbulence \cite{wolgemuth08bpj,wensink12pnas,dunkel13prl,bratanov15pnas,thampi16epj,genkin17prx}. Similar observations were also reported in systems dominated by nematic interactions such as ATP-driven microtubule networks \cite{sanchez12nature}. Besides active turbulence, remarkably ordered phases were found in a number of systems. Self-organized vortex lattices, for example, have been discovered both in hydrodynamically interacting systems, such as spermatozoa \cite{riedel05science}, as well as in dry microtubule systems \cite{sumino2012nature}. Confinement offers yet another possibility of organizing flows into regular large-scale flow \cite{suzuki17pnas} and vortex patterns \cite{wioland16nap}.
The occurrence of these phenomena in vastly different systems has motivated the development and exploration of a range of minimal mathematical models. They can be broadly categorized into agent-based models of self-propelled particles with nematic or polar interactions \cite{vicsek95prl,marchetti13rmp,grossmann14prl,grossmann15epje,bechinger16rmp} and continuum theories for a small number of order parameters \cite{wolgemuth08bpj,wensink12pnas,dunkel13njp,giomi15prx,urzay17jfm}. These models have been shown to capture a variety of dynamical phases of active fluids, including active turbulence and vortex lattice states. For example, in \cite{wensink12pnas} the active turbulence phase was modeled and compared with experiments. Regarding ordered phases, vortex lattices have been observed and investigated at the crossover from the hydrodynamic to the friction-dominated regimes of models for confined active fluids \cite{doostmohammadi16natcomm}. These systems display phases of two-signed vortices with length scales defined by the dimensions of the system. In a class of particle-based models for active matter, the emergence of vortex lattices has been related to a classical pattern formation mechanism as a result of a Turing instability \cite{grossmann14prl,grossmann15epje}.
While many such models have been shown to capture the dynamics of active systems qualitatively and quantitatively, the complexity of disordered states like active turbulence eventually calls for a statistical description. The goal of such a non-equilibrium statistical mechanics of active matter is the computation of fundamental statistical quantities such as correlation functions without resorting to expensive numerical integration of systems with thousands or even millions of degrees of freedom.
Recent developments of statistical theories on top of minimal continuum theories for active matter have provided insights into the small-scale correlation structure of an active nematic fluid based on a mean field approach for the vorticity field \cite{giomi15prx}, as well as a theory capturing large-scale features of polar bacterial flows based on analytical closure techniques \cite{bratanov15pnas}. A theoretical framework capturing the correlation function or equivalently the spectral properties for the full range of scales of such prototypical active systems, however, is currently lacking.
In this Rapid Communication, we set out to close this gap. Borrowing techniques from turbulence theory, we derive correlation functions and spectra of the turbulent phase of the minimal continuum theory recently established in \cite{wensink12pnas} to capture the dynamics of dense bacterial suspensions. Further exploring the parameter space, we also discover a novel phase of turbulent pattern formation, i.e.~an extensive turbulent transient governed by strong advection which eventually results in a highly ordered vortex lattice state. We demonstrate that turbulence characteristics crucially contribute to the emergence of this novel pattern through nonlinear advective energy transfer. This mechanism differs profoundly from the classical route to pattern formation. To make this transparent, we first briefly recapitulate classical pattern formation in this minimal model for active fluids in absence of nonlinear advection.
\subsection{Minimal Model for Active Fluids}
The starting point is the equation for active turbulence as proposed in \cite{wensink12pnas,dunkel13njp} for a two-dimensional incompressible velocity field $\boldsymbol u(\boldsymbol x,t)$ describing the coarse-grained dynamics of a dense bacterial suspension. It takes the nondimensionalized form \footnote{For the nondimensionalization we start from the equation presented in \cite{wensink12pnas} and note that the term involving $\lambda_1$ can be absorbed into the pressure gradient term. Then we define the time scale $T = 4 \Gamma_2 / \Gamma_0^2$ and the length scale $L = \sqrt{-2\Gamma_2/\Gamma_0}$ to nondimensionalize the equation. To obtain Eq.~\eqref{eq:equationofmotion}, the parameters in the dimensional equation are mapped to the ones in the nondimensional equation according to $\lambda_0 \rightarrow \lambda$, $\Gamma_0T/L^2 \rightarrow -2$, $\Gamma_2T/L^4 \rightarrow 1$, $\alpha T \rightarrow \alpha+1$ and $\beta L^2/T \rightarrow \beta$. We note that one additional parameter can be scaled out \cite{oza16epje}, which we refrain from here for presentation purposes.}
\begin{equation}
\label{eq:equationofmotion}
\partial_t \boldsymbol u + \lambda \boldsymbol u \cdot \nabla \boldsymbol u = -\nabla p - (1+\Delta)^2 \boldsymbol u - \alpha \boldsymbol u - \beta \boldsymbol u^2\, \boldsymbol u
\end{equation}
and represents a minimal field theory for a polar order parameter field, combining Navier-Stokes dynamics (advective nonlinearity and nonlocal pressure gradient) with elements of pattern forming systems (linear wave number selection and a saturating higher-order nonlinearity). Owing to its similarity to the Navier-Stokes equation, this minimal model is particularly suited to develop a statistical theory with methods from turbulence theory.
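The linear terms of Eq.~\eqref{eq:equationofmotion} already fix the band of unstable wave numbers. In Fourier space, and for divergence-free perturbations (so that the pressure gradient drops out), a mode with wave number $k$ grows at the rate $\sigma(k) = -(1-k^2)^2 - \alpha$, which is maximized at $k_c = 1$. A minimal numerical sketch locating $k_c$ on a grid:

```python
def growth_rate(k, alpha):
    """Linear growth rate of a divergence-free Fourier mode of Eq. (1):
    sigma(k) = -(1 - k^2)^2 - alpha."""
    return -(1.0 - k * k) ** 2 - alpha

alpha = -0.8                                   # value used throughout the paper
ks = [i * 0.01 for i in range(1, 301)]         # wave numbers 0.01 ... 3.00
k_c = max(ks, key=lambda k: growth_rate(k, alpha))   # most unstable wave number
```

For $\alpha = -0.8$ the maximum growth rate is $\sigma(k_c) = -\alpha = 0.8$ and long-wavelength modes ($k \to 0$) are damped, consistent with the dispersion relation shown in Fig.~\ref{fig:dynamicalstates}(d).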
\begin{figure}
\includegraphics[width=1.0\textwidth]{fig_1.pdf}
\caption{The continuum model Eq.~\eqref{eq:equationofmotion} displays a range of dynamical phases of the vorticity field depending on the nonlinear advection: (a) classical pattern formation ($\lambda=0$, simulation 1 in Table \ref{tab:simpara}), (b) active turbulence ($\lambda=3.5$, simulation 2 in Table \ref{tab:simpara}) and (c) turbulent pattern formation ($\lambda=7$, simulation 3 in Table \ref{tab:simpara}). Notably, the dispersion relation shown in (d) along with the nonlinear damping is kept fixed for all examples. The dashed green line corresponds to the most unstable wave number, given by $k=k_c$, which sets the wave number of the pattern in (a). The horizontal orange lines in (a) and (c) correspond to five times the length scale of the patterns, i.e.~$10\pi$/$k_c$ and $10\pi/k_0$, respectively, exemplifying that the wave number selection in the turbulent pattern forming phase (c) differs from the classical pattern forming phase (a).}
\label{fig:dynamicalstates}
\end{figure}
\begin{table}
\begin{tabular}{ cccccccc}
No. & dynamical state & $\lambda$ & $\alpha$ & $\beta$ & $N$ & $D$ & $\Delta t$ \\
\hline
1 & square lattice & 0 & -0.8 & 0.01 & 2048 & 250 & $10^{-2}$\\
2 & active turbulence & 3.5 & -0.8 & 0.01 & 2048 & 250 & $10^{-3}$ \\
3 & hexagonal lattice & 7.0 & -0.8 & 0.01 & 2048 & 250 & $10^{-3}$ \\
4 & hexagonal lattice & 7.0 & -0.8 & 0.01 & 2048 & 125 & $10^{-3}$ \\
5 & active turbulence & 3.5 & -0.3 & 0.01 & 2048 & 250 & $10^{-3}$ \\
6 & benchmark case \cite{wensink12pnas,bratanov15pnas} & 3.5 & -1.178 & 0.01125 & 2048 & 250 & $10^{-3}$ \\
\end{tabular}
\caption{Simulation parameters. The active fluid is characterized through the parameters $\lambda$, $\alpha$ and $\beta$. The simulations are run on grids with $N^2$ grid points, discretizing a domain of lateral extent $D$; $\Delta t$ denotes the time step.}
\label{tab:simpara}
\end{table}
The dynamical phases of this continuum theory are explored in Fig.~\ref{fig:dynamicalstates}. Unless otherwise noted, we fix $\alpha=-0.8$ and $\beta=0.01$ to focus on the role of nonlinear advection. The results are obtained numerically with a pseudo-spectral code using a second-order Runge-Kutta scheme, and an integrating factor is used for treating the linear terms. More details on the simulations are provided in the supporting information. Table \ref{tab:simpara} lists the range of parameters explored in this manuscript.
\subsection{Classical Pattern Formation}
For $\lambda=0$ the equation reduces to a vectorial Swift-Hohenberg type system which follows a gradient dynamics as discussed in the supporting information. In this parameter regime, we observe the emergence of stationary square lattices consistent with previous literature~\cite{dunkel13njp,oza16epje}. Figure~\ref{fig:dynamicalstates}(a) shows a non-ideal square lattice with defects such as grain boundaries from our numerical simulations. As expected, the emergence of this state can be explained with tools from classical pattern formation theory in terms of amplitude equations. We analyze the corresponding amplitude equations \cite{cross09book} of the vorticity formulation of Eq.~\eqref{eq:equationofmotion}. The analysis detailed in the SI reveals the stability of the square lattice state with amplitude $A=\sqrt{-\alpha k_c^2/(5\beta)}$, which corresponds to a maximum value of the field of $4A$. In comparison, single-stripe patterns are linearly unstable. For the investigated parameters given in Table \ref{tab:simpara} the value of the theoretically predicted amplitude is $4.00$, which is confirmed by our simulations to within $5$ percent. This brief exposition serves to show that the classical pattern formation in absence of nonlinear advection leads to a stationary square lattice state with wave number $k_c=1$.
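The quoted amplitude can be checked directly from the formula above with the parameters of simulation 1; this is a one-line sanity check, not part of the amplitude-equation derivation itself:

```python
import math

# Predicted square-lattice amplitude A = sqrt(-alpha*k_c^2/(5*beta)) for
# simulation 1 in Table 1 (alpha = -0.8, beta = 0.01, k_c = 1).
alpha, beta, k_c = -0.8, 0.01, 1.0
A = math.sqrt(-alpha * k_c**2 / (5 * beta))
print(A)       # 4.0, the value quoted in the text
print(4 * A)   # 16.0, the corresponding maximum value of the field
```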
\section{Active Turbulence}
As the advective term is switched on by setting $\lambda=3.5$, the nonlinear energy transfer sets in, which by generating vortices of larger size renders the stationary square lattice pattern unstable. As a result, a self-sustained turbulence-like phase emerges (see Fig.~\ref{fig:dynamicalstates}(b)), which has been characterized, e.g. in \cite{wensink12pnas,bratanov15pnas,james2018vortex}. Borrowing techniques from classical turbulence theory, we here establish a statistical description for the two-point correlation function and energy spectra for the full range of dynamically active scales.
To this end, we consider the velocity covariance tensor $R_{ij}(\boldsymbol r) = \langle u_i(\boldsymbol x,t) u_j(\boldsymbol x+\boldsymbol r,t) \rangle \equiv \langle u_i u_j' \rangle$ which is among the most fundamental statistical objects of interest; by virtue of kinematic relations, it contains the correlation structure of the velocity field as well as of the vorticity and velocity gradient tensor fields \cite{batchelor53book}. Its evolution equation for the statistically homogeneous and isotropic turbulent phase is readily obtained as
\begin{align}\label{eq:covarianceevo}
&\partial_t R_{ij} + \lambda\partial_k \langle u_k' u_i u_j' - u_k u_i u_j' \rangle = -2\left[ (1+\Delta)^2 + \alpha\right]R_{ij} -\beta \langle u_k u_k u_i u_j' + u_k' u_k' u_i u_j' \rangle \, .
\end{align}
As a result of statistical isotropy, the pressure contribution vanishes. The quadratic and cubic nonlinearities result in unclosed terms which obstruct a direct computation of the covariance without making further assumptions. The main effect of the $\beta$-term is to saturate the velocity growth. Owing to the approximate Gaussianity of the velocity field \cite{wensink12pnas,dunkel13prl,bratanov15pnas,james2018vortex}, the correlator in this term can be factorized using Wick's theorem, which yields $\langle u_k u_k u_i u_j' + u_k' u_k' u_i u_j' \rangle = 2 R_{kk}(\boldsymbol 0)R_{ij}(\boldsymbol r) + 2 R_{ik}(\boldsymbol 0)R_{kj}(\boldsymbol r) + 2 R_{ik}(\boldsymbol r) R_{kj}(\boldsymbol 0)$.
An analogous attempt to factorize the triple correlators fails as this amounts to neglecting the energy transfer across scales, a hallmark feature of turbulence \cite{monin13book}. A more sophisticated closure needs to be established. For the subsequent treatment we choose a Fourier representation of the covariance tensor $R_{ij}(\boldsymbol r)$ in terms of the spectral energy tensor $\Phi_{ij}(\boldsymbol k)$. For a statistically isotropic two-dimensional flow, it takes the form $\Phi_{ij}(\boldsymbol k,t) = E(k,t)/(\pi k)\left[ \delta_{ij} - k_ik_j/k^2 \right]$, where $E(k,t)$ denotes the energy spectrum function. Starting from Eq.~\eqref{eq:covarianceevo}, an evolution equation for the energy spectrum function can be derived which takes the form \cite{batchelor53book,monin13book,pope00book}
\begin{equation}\label{eq:spectrumevo}
\partial_t E(k,t) + T(k,t) = 2 L(k,t) E(k,t) \, .
\end{equation}
Here, $T(k,t)$ is the energy transfer term between different scales which results from the triple correlators in Eq.~\eqref{eq:covarianceevo}; $L(k,t) = -(1-k^2)^2 - \alpha - 4 \beta E_0(t)$ is the \emph{effective linear term}, which represents all linear terms as well as the Gaussian factorization of the cubic nonlinearity with $E_0(t) = \int E(k,t) \, \mathrm{d}k$. The effective linear term is responsible for the energy injection around $k_c=1$ as well as for the damping at small and large scales.
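The structure of the effective linear term can be made concrete numerically. In the sketch below the a-posteriori energy shift $4\beta E_0$ is set to zero (an assumption made purely for illustration, since $E_0$ is only known once the statistics are stationary), which exposes the bare band of unstable modes around $k_c=1$:

```python
import numpy as np

# Effective linear term L(k) = -(1-k^2)^2 - alpha - 4*beta*E0 for
# simulation 2 (alpha = -0.8, beta = 0.01). E0 is set to zero here as an
# illustrative assumption, displaying the bare dispersion relation.
alpha, beta, E0 = -0.8, 0.01, 0.0

def L(k):
    return -((1 - k**2) ** 2) - alpha - 4 * beta * E0

k = np.linspace(0, 2, 2001)
band = k[L(k) > 0]
print(f"unstable band: {band.min():.3f} < k < {band.max():.3f}")
print(f"fastest-growing mode: k = {k[np.argmax(L(k))]:.3f}")  # k_c = 1
```

The positive part of $L(k)$ marks the injection band; outside it the term damps, as described in the text.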
For the energy transfer term, we adopt the so-called eddy-damped quasi-normal Markovian (EDQNM) approximation and present here the main steps of the derivation for active fluids. More details are given in the SI. For a more comprehensive account of this model, which has been successfully applied to hydrodynamic turbulence, we refer the reader to \cite{Orszag1974,lesieur2012turbulence,SagautBook}. The core idea of this closure scheme is to consider the evolution equation for the triple correlators in addition to Eq.~\eqref{eq:spectrumevo}, from which $T(k,t)$ can be obtained straightforwardly. The occurring fourth-order moments are then factorized assuming Gaussianity, similar to the treatment of the nonlinear damping term in Eq.~\eqref{eq:covarianceevo}, i.e. $\langle\hat{u}\hat{u}\hat{u}\hat{u}\rangle = \Sigma\langle\hat{u}\hat{u}\rangle\langle\hat{u}\hat{u}\rangle$ (written in a symbolic fashion). The influence of the neglected cumulants is modeled by an additional damping, which leads to an effective damping $\eta_{kpq}$ (see SI for more information). As a result we obtain an evolution equation for the triple correlators of the velocity modes $\boldsymbol k$, $\boldsymbol p$ and $\boldsymbol q$:
\begin{align}
\left[\partial_t+ \eta_{kpq}\right]\langle \hat{u}({\boldsymbol k})\hat{u}({\boldsymbol p})\hat{u}({\boldsymbol q}) \rangle= \lambda\Sigma\langle\hat{u}\hat{u}\rangle\langle\hat{u}\hat{u}\rangle.
\end{align}
As a next step, we apply the so-called Markovianization by assuming that the right-hand side evolves slowly, such that this equation can be integrated analytically and the steady state solution can be obtained by taking $t\rightarrow \infty$. The energy transfer function, which is a contraction of the triple velocity tensor, can then be written as
\begin{align}\label{eq:edqnm}
T(k,t)=\iint_{\Delta}\frac{\lambda^2}{\eta_{kpq}} \, \big[a(k,p,q)E(p,t)E(q,t)+b(k,p,q)E(q,t)E(k,t)\big]\mathrm{d}p\mathrm{d}q \, .
\end{align}
Here $1/\eta_{kpq}$ acts as a characteristic time scale which results from the turbulent damping. The geometric factors $a(k,p,q)$ and $b(k,p,q)$ are associated to contractions of the isotropic tensor $\langle \hat{u}({\boldsymbol k})\hat{u}({\boldsymbol p})\hat{u}({\boldsymbol q}) \rangle$; the exact expressions of the terms are given in the SI. $\Delta$ restricts the integration domain in $p,q$-space so that the three wave numbers $k,p,q$ form the sides of a triangle. These triadic interactions are a direct consequence of the quadratic advective nonlinearity. While technically quite involved, the key feature is that the energy transfer term is expressed in terms of the energy spectrum only, i.e.~we have obtained a closure. To illustrate the results, the left panel of Fig.~\ref{fig:edqnmresults} shows a comparison of the terms of Eq.~\eqref{eq:spectrumevo} obtained from the EDQNM closure with a direct estimation from simulation data for active turbulence. Very good agreement is found for all wave numbers. Consistent with the observations in \cite{bratanov15pnas}, the energy transfer term takes energy from the linear injection scale and transports it upscale. This inverse energy transfer is typical for two-dimensional flows \cite{davidson15book}. Interpreting these results in the context of bacterial turbulence, the dominant energy injection occurs on a length scale comparable to the individual bacteria \cite{wensink12pnas}, yet their collective motion displays much larger scales. In the framework of the continuum model Eq.~\eqref{eq:equationofmotion}, this collective behavior is the result of an energy transfer to larger scales induced by nonlinear advection. The EDQNM theory captures this effect accurately. Also the effective linear term, which injects energy in a wave number band around $k_c=1$, but extracts energy at large and small scales, is captured accurately, demonstrating the fidelity of the Gaussian factorization of nonlinear damping. 
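The triadic constraint encoded by $\Delta$ is simply the triangle inequality among the wave numbers, $|k-p| \le q \le k+p$. The sketch below builds the corresponding mask on a $(p,q)$ grid; the geometric factors $a,b$ and the eddy damping $\eta_{kpq}$ from the SI are deliberately omitted, so this is only the skeleton of a discretized transfer integral:

```python
import numpy as np

# Integration domain Delta of the EDQNM transfer term: (p, q) such that
# k, p, q can form the sides of a triangle, i.e. |k - p| <= q <= k + p.
k = 0.8                                  # illustrative wave number for T(k)
p = np.linspace(1e-3, 3, 300)
q = np.linspace(1e-3, 3, 300)
P, Q = np.meshgrid(p, q, indexing="ij")
triangle = (np.abs(k - P) <= Q) & (Q <= k + P)   # admissible triads

print(f"fraction of (p,q) grid forming valid triads: {triangle.mean():.2f}")
# A discretized transfer integral would sum its integrand only where
# triangle == True; the factors a(k,p,q), b(k,p,q) and eta_kpq are omitted.
```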
The spectra resulting from the EDQNM closure are shown in the middle panel of Fig.~\ref{fig:edqnmresults}. To demonstrate the validity of the closure theory for a broader parameter range, we additionally varied the $\alpha$ parameter (see Table \ref{tab:simpara}). Furthermore, we also compare with the reference case reported in \cite{bratanov15pnas,wensink12pnas}, which in our normalized set of parameters corresponds to $\alpha=-1.178, \beta=0.01125$. In previous literature, this reference case has been shown to capture experimental results \cite{wensink12pnas}. As the value of $\alpha$ is decreased, the energy injection into the system becomes more intense and acts on a wider range of scales. As a result the energy spectra show an increased broadband excitation. Due to the inverse energy transfer the spectral peak gradually shifts from the most unstable wave number to smaller wave numbers, indicating the emergence of larger-scale flow structures. All of these trends are captured accurately by EDQNM without further adjustments. The EDQNM theory therefore extends the low-wave-number theory developed in \cite{bratanov15pnas} to the full range of scales. With the full energy spectra at hand, correlation functions can be computed in a straightforward manner. The results are shown in the right panel of Fig.~\ref{fig:edqnmresults}. As the flow becomes increasingly turbulent, the correlation length increases. This can be understood from the previous observations in spectral space. Through the inverse energy transfer, larger-scale structures are excited leading to longer-range correlations. Again, EDQNM captures these observations accurately. These findings highlight the crucial impact of the nonlinear advection on the system and motivate the exploration of the dynamics in the parameter range of strong nonlinear advection.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{fig_2.pdf}
\caption{(a) Energy budget of active turbulence: direct numerical simulation (DNS) results (dashed lines, simulation 2 in Table \ref{tab:simpara}) vs EDQNM closure theory. The black, green and blue curves correspond to the energy spectrum, the transfer term and the effective linear term, respectively. (b) Spectra from DNS of active turbulence compared to EDQNM closure theory. (c) Longitudinal velocity autocorrelation of active turbulence: DNS vs EDQNM closure theory. The blue, black and green curves in (b) and (c) correspond to the simulations 2, 5 and 6, respectively, as listed in Table \ref{tab:simpara}.
}
\label{fig:edqnmresults}
\end{figure*}
\section{Turbulent Pattern Formation}
Further increasing the strength of the nonlinear advection to $\lambda=7$ leads to a surprising new dynamical state emerging from a turbulent transient as visualized in Fig.~\ref{fig:vortexlattice}. From random initial conditions vortices arise, triggered by small-scale instabilities. Many vortices are screened by surrounding vorticity of opposite sign, reducing their Biot-Savart interaction. Some of them, however, form dipoles, which propagate rapidly through the flow. These dipoles contribute significantly to the turbulent dynamics. In the course of time, a spontaneous symmetry breaking occurs, such that one sign of vorticity prevails. As a result, fewer dipoles form and the dynamics stabilizes. Repeating the numerical experiment with different random initial conditions confirms that both vorticity signs are equally probable in this spontaneous symmetry breaking. By the continued emergence of vortices the system eventually crystallizes into a quasi-stationary hexagonal vortex lattice state. The wave number characterizing this turbulent pattern is significantly smaller than na\"ively expected based on the linear critical wave number $k_c=1$ in the classical pattern formation case. This can be explained as follows: as the turbulent pattern emerges out of a turbulent transient, there is an inverse transfer of energy feeding larger scales. As a result, the peak energy injection scale in Eq.~\eqref{eq:spectrumevo} (i.e.~the maximum of $2L(k,t)E(k,t)-T(k,t)$) shifts to smaller wave numbers during the transient, giving rise to larger-scale flow structures. Because $\int T(k,t) \mathrm{d}k=0$ by virtue of $T(k,t)$ being an energy transfer term, Eq.~\eqref{eq:spectrumevo} implies the constraint $\int L(k,t) E(k,t) \mathrm{d}k=0$ once the statistically stationary state with the vortex lattice is reached. 
Given the fact that the system forms a regular vortex pattern with a sharply localized spectrum around the lattice wave number, this constraint can only be satisfied if the lattice wave number $k_0$ is close to the zero-crossing of the effective linear term, i.e.~close to the wave number corresponding to the smallest neutral mode. For the current choice of parameters, this prediction yields $k_0 \approx 0.58$ in very good agreement with the numerical observation ($k_0 \approx 0.57$). To further confirm this prediction, we scanned the entire $\alpha$-range $[-0.95,-0.75]$ leading to stable vortex lattices, keeping all other parameters fixed. We observed a trend of the lattice wave number slowly increasing with $\alpha$, which is captured by the prediction to within ten percent (not shown). We conclude that this turbulent pattern formation selects the \emph{neutral} mode rather than the fastest growing linear mode. We stress that this mechanism profoundly differs from the Turing mechanism reported in \cite{grossmann14prl,grossmann15epje} due to the extended turbulent transient leading to the selection of the neutral mode.
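The neutral-mode prediction above can be evaluated in closed form: the smallest zero-crossing of $L(k)$ is $k_0 = \sqrt{1-\sqrt{-\alpha - 4\beta E_0}}$. The stationary total energy $E_0$ is not quoted in the text, so the value below is an assumed, illustrative one chosen only to show that the formula lands near the reported lattice wave number:

```python
import math

# Neutral-mode wave number from the zero-crossing of
# L(k) = -(1-k^2)^2 - alpha - 4*beta*E0, for alpha = -0.8, beta = 0.01.
alpha, beta = -0.8, 0.01
E0 = 9.0                                   # hypothetical stationary energy
k0 = math.sqrt(1 - math.sqrt(-alpha - 4 * beta * E0))
print(f"k_0 = {k0:.2f}")                   # close to the reported 0.58
```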
\begin{figure*}
\includegraphics[width=1.0\textwidth]{fig_3.pdf}
\caption{Emergence of hexagonal vortex lattice after a turbulent transient (simulation 4 in Table \ref{tab:simpara}). (a,b,c): Vorticity field after $t=20,150,850$. The insets show the two-dimensional vorticity spectra with the wave vectors corresponding to the most unstable wave number indicated by an orange circle. The inset (c) clearly shows six isolated peaks at $k_0 \approx 0.57$ which characterize the vortex lattice. For visualization purposes, these figures were obtained through a simulation on a smaller domain with half the domain length compared to Fig.~\ref{fig:dynamicalstates}. Note that the final vortex crystal state selects a sign of vorticity different from that of Fig.~\ref{fig:dynamicalstates}, exemplifying spontaneous symmetry breaking in this system. Panel (d) shows the evolution of the enstrophy, as well as the maximum and the minimum vorticity through the transient to the final quasi-stationary state.}
\label{fig:vortexlattice}
\end{figure*}
It remains to explain the type of lattice. Nonlinear advection favors axisymmetric vortices. As these structures populate the domain over time, they form the densest possible packing consistent with this geometry, resulting in the hexagonal pattern. Unlike the case of classical pattern formation ($\lambda=0$), this vortex lattice is quasi-stationary with perturbations from weaker background turbulence. The most striking feature of this phenomenon is the long turbulent transient phase preceding the formation of the pattern, which lasts much longer than the typical lifetimes of the vortices in the turbulent phase. Furthermore, unlike classical pattern formation, the dominant length scale in the system is given by the neutral mode in the effective dispersion relation.
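The densest-packing argument rests on the classical fact that hexagonal packing of equal disks is denser than square packing; the two densities compare as follows:

```python
import math

# Classical packing densities of equal disks in the plane, supporting the
# densest-packing argument for the hexagonal vortex lattice.
hexagonal = math.pi / (2 * math.sqrt(3))   # hexagonal (triangular) packing
square = math.pi / 4                       # square packing
print(f"hexagonal: {hexagonal:.4f}, square: {square:.4f}")
```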
\section{Conclusions}
The correlation functions and spectra of a minimal model for active turbulence developed in this paper establish a quantitative statistical theory of active turbulence. We adapted the EDQNM closure scheme for classical hydrodynamic turbulence to capture the linear driving and damping as well as the nonlinear energy transfer across scales along with nonlinear damping. For the range of investigated parameters, the theory has been found to accurately capture simulation results. It revealed that the spectral peak, associated with the typical size of turbulent flow structures, originates from the interplay of linear and nonlinear physics: energy is injected in a band of unstable modes and then cascades to larger scales before being dissipated by linear and nonlinear damping terms. EDQNM therefore quantitatively captures the statistics of the collective behavior emerging in the continuum model Eq.~\eqref{eq:equationofmotion}. Having demonstrated the potential of methods from turbulence theory to capture disordered active matter states, we hope that our findings may spur further research. For instance, a generalization to active nematics might be an interesting direction for future research.

Further exploring the parameter space towards strong nonlinear advection, we find a highly ordered lattice state of dynamically self-organized vortices which emerges from an extensive turbulent transient. The inverse energy transfer of two-dimensional turbulence turns out to be a crucial ingredient in this turbulent pattern formation: the same mechanism leading to the spectral peak in the turbulent phase selects the neutral wave number in this turbulent pattern formation. While the potential importance of neutral modes has been pointed out in \cite{slomka17prf} based on kinematic considerations, our findings show that they are indeed dynamically relevant.
Regarding possible experimental realizations of the vortex lattice state reported here, we note that we observe it in a regime of strong nonlinear advection due to active stresses. Recent research has indicated that such a regime, in which the value of $\lambda$ is large, can be achieved by a microstate with strong polar interaction among the active particles \cite{reinken2018derivation}. Furthermore, we observe the vortex lattice in a parameter range (controlled by $\alpha$) of both large- and small-scale damping. Thus experiments involving active fluids with strong polar interactions and with substrate-mediated friction could potentially realize this novel ``turbulent pattern formation'' phenomenon.
Interestingly, the mechanism reported here shares similarity with quasicrystalline vortex lattices in drift-wave turbulence \cite{kukharkin95prl}, although their vortex patterns appear less stable than the ones reported here. Vortex crystals have also been observed in two-dimensional Navier-Stokes turbulence driven by a combination of deterministic and stochastic forcings \cite{jimenez07pof}, in truncated two-dimensional turbulence \cite{smithr1994finite}, in simulations of quasi-geostrophic turbulence \cite{arbic2004effects} as well as in two-dimensional fluid films with polymer additives \cite{gupta2017melting}. Furthermore, vortex lattices have been predicted \cite{abrikosov57jpc} and observed \cite{essmann67pla} in superconductors. These observations in profoundly different physical systems point at the ostensibly universal occurrence of highly ordered states in strongly nonlinear regimes. The investigation of this phenomenon in generic systems which combine features of pattern formation with non-Lyapunov dynamics such as nonlinear advection appears as one exciting direction for future research.
\begin{acknowledgments}
This work was supported by the Max Planck Society. MJ gratefully acknowledges the financial support by the International Max Planck Research School ``Physics of Biological and Complex Systems'', G\"{o}ttingen.
\end{acknowledgments}
We work over an algebraically closed field $k$ of characteristic 2.
Complex Enriques surfaces with a finite group of automorphisms are completely
classified into seven types. The main purpose of this paper
is to determine which types of such Enriques surfaces exist in characteristic 2.
Recall that, over the complex numbers, a generic Enriques surface has an infinite group of automorphisms (Barth and Peters \cite{BP}). On the other hand, Fano \cite{F} gave an Enriques surface with a finite group of automorphisms.
Later Dolgachev \cite{D1} gave another example of such Enriques surfaces. Then Nikulin \cite{N} proposed a classification of such Enriques surfaces in terms of the periods. Finally the second author \cite{Ko} classified all complex Enriques surfaces
with a finite group of automorphisms, geometrically. There are seven types ${{\text{I}}, {\text{II}},\ldots, {\text{VII}}}$ of such Enriques surfaces. The Enriques surfaces of type ${{\text{I}}}$ or
${{\text{II}}}$ form an irreducible one dimensional family, and each of the remaining types
consists of a unique Enriques surface.
The first two types contain exactly twelve nonsingular rational curves, while the remaining five types contain exactly twenty nonsingular rational curves.
The Enriques surface of type ${{\text{I}}}$ (resp. of type ${{\text{VII}}}$) is the example given
by Dolgachev (resp. by Fano). We call the dual graph of all nonsingular rational curves on the Enriques surface of type $K$ the dual graph of type $K$ ($K = {{\text{I}}, {\text{II}},..., {\text{VII}}}$).
In positive characteristics, the classification problem of Enriques surfaces with a finite group of automorphisms is still open. Especially the case of characteristic 2 is most interesting. In the paper \cite{BM2}, Bombieri and Mumford classified
Enriques surfaces in characteristic 2 into three classes, namely, singular, classical and supersingular Enriques surfaces.
As in the case of characteristic $0$, an Enriques surface
$X$ in characteristic 2 has a canonical double cover
$\pi : Y \to X$, which is a separable ${\bf Z}/2{\bf Z}$-cover,
a purely inseparable $\mu_2$-cover or a purely inseparable $\alpha_2$-cover according to whether $X$ is singular, classical or supersingular. The surface $Y$ might have singularities, but it is $K3$-like in the sense that its dualizing sheaf is trivial.
In this paper we consider the following problem:
{\it does there exist an Enriques surface in characteristic $2$ with a finite group of automorphisms whose dual graph of all nonsingular rational curves is of type ${\rm I, II,..., VI}$ or ${\rm VII}$?} Note that if an Enriques surface $S$ in any characteristic has the dual graph of
type $K$ ($K={\rm I, II,..., VII}$), then the automorphism group ${\rm Aut}(S)$ is finite by Vinberg's criterion (see Proposition \ref{Vinberg}).
We will prove the results summarized in Table \ref{Table1}:
\begin{table}[!htb]
{\offinterlineskip
\halign{\strut\vrule#&\quad\hfil\rm#\hfil\quad&&
\vrule#&\quad#\hfil\quad\cr\noalign{\hrule}
& {\rm Type}&&${\text{I}}$ && ${\text{II}}$ && ${\text{III}}$&&${\text{IV}}$&&${\text{V}}$ && ${\text{VI}}$ &&${\text{VII}}$&\cr
\noalign{\hrule}
& {\rm singular} && {$\bigcirc$} && {$\bigcirc$} && {$\times$} && {$\times$} && {$\times$} && {$\bigcirc$} && {$\times$} &\cr
\noalign{\hrule}
& {\rm classical} && {$\times$} && {$\times$} && {$\times$} && {$\times$} && {$\times$} && {$\times$} && {$\bigcirc$} &\cr
\noalign{\hrule}
& {\rm supersingular} && {$\times$} && {$\times$} && {$\times$} && {$\times$} && {$\times$} && {$\times$} && {$\bigcirc$} &\cr
\noalign{\hrule}
}}
\
\caption{}
\label{Table1}
\end{table}
\noindent
In Table \ref{Table1}, $\bigcirc$ means the existence and $\times$ means the non-existence
of an Enriques surface with the dual graph of type ${\text{I}},..., {\text{VII}}$.
In the case of types ${{\text{I}}, {\text{II}}, {\text{VI}}}$, the construction of such Enriques surfaces over the complex numbers works well
in characteristic 2 (Theorems \ref{Ithm}, \ref{IIthm}, \ref{VIthm}). The most difficult and interesting case is type ${{\text{VII}}}$.
We give a 1-dimensional family of classical
and supersingular Enriques surfaces with a finite group of automorphisms whose dual graph is of type ${{\text{VII}}}$ (Theorems \ref{main}, \ref{main2}). We remark that this family is non-isotrivial (Theorem \ref{non-isotrivial}).
Recently the authors \cite{KK} gave a one dimensional family
of classical and supersingular Enriques surfaces, each containing a remarkable configuration of forty divisors, by using the theory
of Rudakov and Shafarevich \cite{RS} on purely inseparable covers of surfaces. We employ here the same method
to construct the above classical and supersingular Enriques surfaces with the dual graph of type ${{\text{VII}}}$.
It is known that
there exist Enriques surfaces in characteristic 2 with a finite group of automorphisms
whose dual graphs of all nonsingular rational curves do not appear
in the case of complex surfaces
(Ekedahl and Shepherd-Barron\cite{ES}, Salomonsson\cite{Sa}). See Remark \ref{extra}.
The remaining problem in the classification of Enriques surfaces in characteristic 2 with
a finite group of automorphisms is to determine those Enriques surfaces which appear only
in characteristic 2.
The plan of this paper is as follows. In section \ref{sec2}, we recall the known results
on Rudakov-Shafarevich's theory on derivations, lattices and Enriques surfaces.
In section \ref{sec3}, we give a construction of a one dimensional family of classical
and supersingular Enriques surfaces with the dual graph of type ${\text{VII}}$.
Moreover we show the non-existence of singular Enriques surfaces with the dual graph of type ${\rm VII}$ (Theorem \ref{non-existVII}).
In section \ref{sec4}, we discuss other cases, that is, the existence of singular Enriques surfaces of type ${\text{I}}, {\text{II}}, {\text{VI}}$ and the non-existence of other cases
(Theorems \ref{Ithm}, \ref{non-existI}, \ref{IIthm}, \ref{non-existII}, \ref{VIthm},
\ref{non-existVI}, \ref{non-existIII}).
In Appendices A and B, we give two remarks. In Appendix A, we show that
the covering $K3$ surface of any singular Enriques surface has height $1$.
In Appendix B, we show that
for each singular Enriques surface with the dual graph of type ${\rm I}$, its canonical cover is isomorphic to the Kummer surface of the product of two ordinary elliptic curves.
\medskip
\noindent
{\bf Acknowledgement.} The authors thank Igor Dolgachev for valuable conversations.
In particular, all results in Section \ref{sec4} were obtained through discussions with him
in Seoul and Kyoto in 2014.
They thank him for permitting them to present these results in this paper.
The authors also thank Matthias Sch\"utt and Hiroyuki Ito for pointing out the non-existence of
singular Enriques surfaces with the dual graph of nonsingular rational curves of type
${\text{VII}}$.
\section{Preliminaries}\label{sec2}
Let $k$ be an algebraically closed field of characteristic $p > 0$,
and let $S$ be a nonsingular complete algebraic surface defined over $k$.
We denote by $K_{S}$ a canonical divisor of $S$.
A rational vector field $D$ on $S$ is said to be $p$-closed if there exists
a rational function $f$ on $S$ such that $D^p = fD$.
A vector field $D$ for which $D^p=0$ is called of additive type,
while that for which $D^p=D$ is called of multiplicative type.
Let $\{U_{i} = {\rm Spec} A_{i}\}$ be an affine open covering of $S$. We set
$A_{i}^{D} = \{\alpha \in A_{i} \mid D(\alpha) = 0\}$.
Affine varieties $\{U_{i}^{D} = {\rm Spec} A_{i}^{D}\}$ glue together to
define a normal quotient surface $S^{D}$.
Now, we assume that $D$ is $p$-closed. Then,
the natural morphism $\pi : S \longrightarrow S^D$ is a purely
inseparable morphism of degree $p$.
If the affine open covering $\{U_{i}\}$ of $S$ is fine enough, then
taking local coordinates $x_{i}, y_{i}$
on $U_{i}$, we see that there exist $g_{i}, h_{i}\in A_{i}$ and
a rational function $f_{i}$
such that the divisors defined by $g_{i} = 0$ and by $h_{i} = 0$ have no common divisor,
and such that
$$
D = f_{i}\left(g_{i}\frac{\partial}{\partial x_{i}} + h_{i}\frac{\partial}{\partial y_{i}}\right)
\quad \mbox{on}~U_{i}.
$$
By Rudakov and Shafarevich \cite{RS} (Section 1), divisors $(f_{i})$ on $U_{i}$
give a global divisor $(D)$ on $S$, and zero-cycles defined
by the ideal $(g_{i}, h_{i})$ on $U_{i}$ give a global zero cycle
$\langle D \rangle $ on $S$. A point contained in the support of
$\langle D \rangle $ is called an isolated singular point of $D$.
If $D$ has no isolated singular point, $D$ is said to be divisorial.
Rudakov and Shafarevich (\cite{RS}, Theorem 1, Corollary)
showed that $S^D$ is nonsingular
if $\langle D \rangle = 0$, i.e., $D$ is divisorial.
When $S^D$ is nonsingular,
they also showed a canonical divisor formula
\begin{equation}\label{canonical}
K_{S} \sim \pi^{*}K_{S^D} + (p - 1)(D),
\end{equation}
where $\sim$ means linear equivalence.
As for the Euler number $c_{2}(S)$ of $S$, we have a formula
\begin{equation}\label{euler}
c_{2}(S) = {\text{deg}} \langle D \rangle - \langle K_{S}, (D)\rangle - (D)^2
\end{equation}
(cf. Katsura and Takeda \cite{KT}, Proposition 2.1).
Now we consider an irreducible curve $C$ on $S$ and we set $C' = \pi (C)$.
Take an affine open set $U_{i}$ above such that $C \cap U_{i}$ is non-empty.
The curve $C$ is said to be integral with respect to the vector field $D$
if $g_{i}\frac{\partial}{\partial x_{i}} + h_{i}\frac{\partial}{\partial y_{i}}$
is tangent to $C$ at a general point of $C \cap U_{i}$. Then, Rudakov-Shafarevich
\cite{RS} (Proposition 1) showed the following proposition:
\begin{prop}\label{insep}
$({\rm i})$ If $C$ is integral, then $C = \pi^{-1}(C')$ and $C^2 = pC'^2$.
$({\rm ii})$ If $C$ is not integral, then $pC = \pi^{-1}(C')$ and $pC^2 = C'^2$.
\end{prop}
A lattice is a free abelian group $L$ of finite rank equipped with
a non-degenerate symmetric integral bilinear form $\langle . , . \rangle : L \times L \to {\bf Z}$.
The signature of a lattice is the signature of the real vector space $L\otimes {\bf R}$ equipped with the symmetric bilinear form extended from the one on $L$ by linearity. A lattice is called even if
$\langle x, x\rangle \in 2{\bf Z}$
for all $x\in L$.
We denote by $U$ the even unimodular lattice of signature $(1,1)$,
and by $A_m, \ D_n$ or $\ E_k$ the even {\it negative} definite lattice defined by
the Cartan matrix of type $A_m, \ D_n$ or $\ E_k$ respectively.
We denote by $L\oplus M$ the orthogonal direct sum of lattices $L$ and $M$.
Let ${\rm O}(L)$ be the orthogonal group of $L$, that is, the group of isomorphisms of $L$ preserving the bilinear form.
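These lattices can be checked numerically. The sketch below (using the Bourbaki labeling of the $E_8$ Dynkin diagram, with node 2 attached to node 4) verifies that the Cartan matrix of $E_8$ is unimodular and positive definite, so its negative is the even negative definite lattice $E_8$, and that $U\oplus E_8$ has signature $(1,9)$, the lattice that will appear below as ${\rm Num}(S)$:

```python
import numpy as np

# Gram matrix of U, the even unimodular hyperbolic plane of signature (1,1)
U = np.array([[0, 1], [1, 0]])

# Cartan matrix of E8 (nodes 1..8, chain 1-3-4-5-6-7-8, node 2 joined to 4)
edges = [(1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (2, 4)]
C = 2 * np.eye(8, dtype=int)
for i, j in edges:
    C[i - 1, j - 1] = C[j - 1, i - 1] = -1
E8 = -C                                  # even NEGATIVE definite convention

gram = np.block([[U, np.zeros((2, 8), int)],
                 [np.zeros((8, 2), int), E8]])
eig = np.linalg.eigvalsh(gram.astype(float))
print("det(E8 Cartan matrix):", round(np.linalg.det(C)))   # 1, unimodular
print("signature of U + E8:", (int((eig > 0).sum()), int((eig < 0).sum())))
```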
In characteristic 2, a minimal algebraic surface with numerically trivial
canonical divisor is called an Enriques surface if the second Betti
number is equal to 10. Such surfaces $S$ are divided into three classes
(for details, see Bombieri and Mumford \cite{BM2}, Section 3):
\begin{itemize}
\item[$({\rm i})$] $K_{S}$ is not linearly equivalent to zero
and $2K_{S}\sim 0$. Such an Enriques surface is called a classical Enriques surface.
\item[$({\rm ii})$] $K_{S} \sim 0$, ${\rm H}^{1}(S, {\mathcal{O}}_{S}) \cong k$
and the Frobenius map acts on ${\rm H}^{1}(S, {\mathcal{O}}_S)$ bijectively.
Such an Enriques surface is called a singular Enriques surface.
\item[$({\rm iii})$] $K_{S} \sim 0$, ${\rm H}^{1}(S, {\mathcal{O}}_{S}) \cong k$
and the Frobenius map is the zero map on ${\rm H}^{1}(S, {\mathcal{O}}_S)$.
Such an Enriques surface is called a supersingular Enriques surface.
\end{itemize}
Let $S$ be an Enriques surface and let ${\text{Num}}(S)$ be the quotient of the N\'eron-Severi group of $S$ by torsion. Then ${\text{Num}}(S)$ together with the intersection product is
an even unimodular lattice of signature $(1,9)$ (Cossec and Dolgachev \cite{CD}, Chap. II, Theorem 2.5.1), and hence is isomorphic to $U\oplus E_8$.
We denote by ${\rm O}({\text{Num}}(S))$ the orthogonal group of ${\text{Num}}(S)$. The set
$$\{ x \in {\text{Num}}(S)\otimes {\bf R} \ : \ \langle x, x \rangle > 0\}$$
has two connected components.
Denote by $P(S)$ the connected component containing an ample class of $S$.
For $\delta \in {\text{Num}}(S)$ with $\delta^2=-2$, we define
an isometry $s_{\delta}$ of ${\text{Num}}(S)$ by
$$s_{\delta}(x) = x + \langle x, \delta\rangle \delta, \quad x \in {\text{Num}}(S).$$
The isometry $s_{\delta}$ is called the reflection associated with $\delta$.
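As a quick numerical illustration (ours, not part of the text), the reflection formula can be checked on a small example: we take the hyperbolic plane $U$ with Gram matrix $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ and the $(-2)$-vector $\delta = (1,-1)$, both chosen purely for illustration.

```python
# Sanity check of the reflection s_delta(x) = x + <x, delta> delta
# on the hyperbolic plane U with Gram matrix [[0, 1], [1, 0]];
# delta = (1, -1) is a (-2)-vector chosen purely for illustration.
G = [[0, 1], [1, 0]]

def form(x, y):
    # bilinear form <x, y> given by the Gram matrix G
    return sum(x[i] * G[i][j] * y[j] for i in range(2) for j in range(2))

def s(delta, x):
    # reflection associated with delta
    c = form(x, delta)
    return [x[k] + c * delta[k] for k in range(2)]

delta = [1, -1]
assert form(delta, delta) == -2                       # a (-2)-vector
x = [3, 5]
assert s(delta, s(delta, x)) == x                     # involution
assert form(s(delta, x), s(delta, x)) == form(x, x)   # isometry
assert s(delta, delta) == [-1, 1]                     # delta -> -delta
```

The same check applies to any $(-2)$-vector in any lattice once its Gram matrix is supplied.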
Let $W(S)$ be the subgroup of
${\rm O}({\text{Num}}(S))$ generated by reflections associated with all nonsingular rational curves on $S$. Then $P(S)$ is divided into chambers
each of which is a fundamental domain with respect to
the action of $W(S)$ on $P(S)$.
There exists a unique chamber containing an ample
class which is nothing but the closure of the ample cone $D(S)$ of $S$.
It is known that the natural map
\begin{equation}\label{coh-trivial}
\rho : {\text{Aut}}(S) \to {\rm O}({\text{Num}}(S))
\end{equation}
has a finite kernel (Dolgachev \cite{D2}, Theorems 4, 6).
Since the image ${\text{Im}}(\rho)$ preserves the ample cone, we see ${\text{Im}}(\rho) \cap W(S) = \{1\}$.
Therefore ${\text{Aut}}(S)$ is finite if the index $[{\text{O}}({\text{Num}}(S)) : W(S)]$ is finite.
Thus we have the following Proposition (see Dolgachev \cite{D1}, Proposition 3.2).
\begin{prop}\label{finiteness}
If $W(S)$ is of finite index in ${\rm O}({\rm Num}(S))$, then ${\rm Aut}(S)$ is finite.
\end{prop}
\noindent
Over the field of complex numbers, the converse of Proposition \ref{finiteness}
holds by using the Torelli type theorem for Enriques surfaces (Dolgachev \cite{D1}, Theorem 3.3).
Now, we recall Vinberg's criterion
which guarantees that a group generated by a finite number of reflections is
of finite index in ${\rm O}({\text{Num}}(S))$.
Let $\Delta$ be a finite set of $(-2)$-vectors in ${\text{Num}}(S)$.
Let $\Gamma$ be the graph of $\Delta$, that is,
$\Delta$ is the set of vertices of $\Gamma$ and two vertices $\delta$ and $\delta'$ are joined by $m$-tuple lines if $\langle \delta, \delta'\rangle=m$.
We assume that the cone
$$K(\Gamma) = \{ x \in {\text{Num}}(S)\otimes {\bf R} \ : \ \langle x, \delta_i \rangle \geq 0, \ \delta_i \in \Delta\}$$
is a strictly convex cone. Such $\Gamma$ is called non-degenerate.
A connected parabolic subdiagram $\Gamma'$ in $\Gamma$ is a Dynkin diagram of type $\tilde{A}_m$, $\tilde{D}_n$ or $\tilde{E}_k$ (see \cite{V}, p. 345, Table 2). If the number of vertices of $\Gamma'$ is $r+1$, then $r$ is called the rank of $\Gamma'$. A disjoint union of connected parabolic subdiagrams is called a parabolic subdiagram of $\Gamma$. We denote by $\tilde{K_1}\oplus \tilde{K_2}$ a parabolic subdiagram which is a disjoint union of two
connected parabolic subdiagrams of type $\tilde{K_1}$ and $\tilde{K_2}$, where
$K_i$ is $A_m$, $D_n$ or $E_k$. The rank of a parabolic subdiagram is the sum of the rank of its connected components. Note that the dual graph of singular fibers of an elliptic fibration on $S$ gives a parabolic subdiagram. For example, a singular fiber of type ${\rm III}$, ${\rm IV}$ or ${\rm I}_{n+1}$ defines a parabolic subdiagram of type $\tilde{A}_1$, $\tilde{A}_2$ or
$\tilde{A}_n$ respectively.
We denote by $W(\Gamma)$ the subgroup of ${\rm O}({\text{Num}}(S))$
generated by reflections associated with $\delta \in \Gamma$.
\begin{prop}\label{Vinberg}{\rm (Vinberg \cite{V}, Theorem 2.3)}
Let $\Delta$ be a set of $(-2)$-vectors in ${\text{Num}}(S)$
and let $\Gamma$ be the graph of $\Delta$.
Assume that $\Delta$ is a finite set, $\Gamma$ is non-degenerate and $\Gamma$ contains no $m$-tuple lines with $m \geq 3$. Then $W(\Gamma)$ is of finite index in ${\rm O}({\text{Num}}(S))$ if and only if every connected parabolic subdiagram of $\Gamma$ is a connected component of some
parabolic subdiagram in $\Gamma$ of rank $8$ {\rm (}= the maximal one{\rm )}.
\end{prop}
\noindent
Finally we recall some facts on elliptic fibrations on Enriques surfaces.
\begin{prop}\label{multi-fiber}{\rm (Dolgachev and Liedtke \cite{DL}, Theorem 4.8.3)}
Let $f : S \to {\bf P}^1$ be an elliptic fibration on an Enriques surface $S$ in
characteristic $2$. Then the following hold.
$({\rm i})$ If $S$ is classical, then $f$ has two tame multiple fibers, each is
either an ordinary elliptic curve or a singular fiber of additive type.
$({\rm ii})$ If $S$ is singular, then $f$ has one wild multiple
fiber which is a smooth ordinary elliptic curve or a singular fiber of multiplicative type.
$({\rm iii})$ If $S$ is supersingular, then $f$ has one wild multiple fiber which is a
supersingular elliptic curve or a singular fiber of additive type.
\end{prop}
\begin{proof}
The number of multiple fibers in each case is given
in Bombieri and Mumford \cite{BM2}, Proposition 11.
Let $2G$ be a multiple fiber of $f : S \longrightarrow {\bf P}^1$.
If $S$ is classical, the multiple fiber $2G$ is tame.
Therefore, the normal bundle ${\mathcal{O}}_{G}(G)$ of $G$ is of order 2 (cf. Katsura and Ueno \cite{KU},
p. 295, (1.7)). On the other hand, neither the Picard variety ${\rm Pic}^0({\bf G}_m)$
of the multiplicative group ${\bf G}_m$ nor ${\rm Pic}^0(E)$ of the supersingular elliptic curve
$E$ has any 2-torsion point. Therefore, $G$ is either an ordinary elliptic curve or
a singular fiber of additive type. Now, we consider an exact sequence:
$$
0 \longrightarrow {\mathcal{O}}_S(-G) \longrightarrow {\mathcal{O}}_S \longrightarrow {\mathcal{O}}_{G}
\longrightarrow 0.
$$
Then, we have the long exact sequence
$$
\rightarrow H^1(S, {\mathcal{O}}_S) \longrightarrow H^1(G, {\mathcal{O}}_G)
\longrightarrow H^2(S, {\mathcal{O}}_S(-G)) \longrightarrow H^2(S, {\mathcal{O}}_S)\rightarrow 0.
$$
If $S$ is either singular or supersingular, we have
$H^1(S, {\mathcal{O}}_S)\cong H^2(S, {\mathcal{O}}_S)\cong k$.
Note that in our case the canonical divisor $K_S$ is linearly equivalent to 0.
Since $2G$ is a multiple fiber,
by the Serre duality theorem, we have
$$
H^2(S, {\mathcal{O}}_S(-G)) \cong H^0(S, {\mathcal{O}}_S(K_S + G)) \cong H^0(S, {\mathcal{O}}_S(G))\cong k.
$$
Therefore, we see that
the natural homomorphism
$$
H^1(S, {\mathcal{O}}_S) \longrightarrow H^1(G, {\mathcal{O}}_G)
$$
is an isomorphism. If $S$ is singular, then the Frobenius map $F$ acts bijectively
on $H^1(S, {\mathcal{O}}_S)$. Hence, $F$ acts on $H^1(G, {\mathcal{O}}_G)$ bijectively.
Therefore, $G$ is either an ordinary elliptic curve or a singular fiber of multiplicative type.
If $S$ is supersingular, then the Frobenius map $F$ is the zero map
on $H^1(S, {\mathcal{O}}_S)$. Hence, $F$ is also a zero map on $H^1(G, {\mathcal{O}}_G)$.
Therefore, $G$ is either a supersingular elliptic curve or a singular fiber of additive type.
\end{proof}
Let $f : S \to {\bf P}^1$ be an elliptic fibration on an Enriques surface $S$.
We use Kodaira's notation for singular fibers of $f$:
$${\rm I}_n,\ {\rm I}_n^*,\ {\rm II},\ {\rm II}^*,\ {\rm III},\ {\rm III}^*,\ {\rm IV},\ {\rm IV}^*.$$
\begin{prop}\label{singular-fiber}
Let $f : S \to {\bf P}^1$ be an elliptic fibration on an Enriques surface $S$ in
characteristic $2$. Then the type of reducible singular fibers is one of the following:
$$({\rm I}_3, {\rm I}_3, {\rm I}_3, {\rm I}_3), \ ({\rm I}_5, {\rm I}_5), \
({\rm I}_9),\ ({\rm I}_4^*),\ ({\rm II}^*),\ ({\rm III}, {\rm I}_8),$$
$$({\rm I}_1^*, {\rm I}_4), \ ({\rm III}^*, {\rm I}_2),\
({\rm IV}, {\rm IV}^*),\ ({\rm IV}, {\rm I}_2, {\rm I}_6),\ ({\rm IV}^*, {\rm I}_3).$$
\end{prop}
\begin{proof}
Consider the Jacobian fibration $J(f) : R \to {\bf P}^1$ of $f$ which is a rational elliptic surface.
It is known that the type of singular fibers of $f$ coincides with that of
$J(f)$ (cf. Liu-Lorenzini-Raynaud \cite{LLR}, Theorem 6.6). Now the assertion follows from the classification of singular fibers
of rational elliptic surfaces in characteristic 2 due to Lang \cite{L1}, \cite{L2} (also see Ito \cite{I}).
\end{proof}
\section{Enriques surfaces with the dual graph of type VII}\label{sec3}
In this section, we construct Enriques surfaces in characteristic 2 whose dual graph
of all nonsingular rational curves is of type VII.
The method to construct them is similar to the one in Katsura and Kondo \cite{KK}, \S 4.
We consider the nonsingular complete model
of the supersingular elliptic curve $E$ defined by
$$
y^2 + y = x^3 + x^2.
$$
For $(x_1, y_1), (x_2, y_2) \in E$ with $x_1 \neq x_2$, the addition on this elliptic curve is given by
$$
\begin{array}{l}
x_{3} = x_1 + x_2 +
\left(\frac{y_2 + y_1}{x_2 + x_1}\right)^2 + 1 \\
y_3 = y_1 + y_2 + \left(\frac{y_2 + y_1}{x_2 + x_1}\right)^3 + \left(\frac{y_2 + y_1}{x_2 + x_1}\right) + \frac{x_1y_2 +x_2y_1}{x_2 +x_1} + 1.
\end{array}
$$
The ${\bf F}_4$-rational points of $E$ are given by
$$
\begin{array}{l}
P_{0} = \infty, P_{1} =(1, 0), P_{2} =(0, 0), P_{3} =(0, 1),
P_{4} =(1, 1).
\end{array}
$$
The point $P_{0}$ is the zero point of $E$, and these points form
the cyclic group of order five:
$$
P_{i} = iP_{1} \quad (i = 2, 3, 4), \qquad P_{0} = 5P_{1}.
$$
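As a sanity check (ours, not the paper's), the listed points and the relations $P_3 = P_1 + P_2$ and $P_4 = P_1 + P_3$ can be verified directly; since every listed affine point has coordinates in ${\bf F}_2$, arithmetic modulo 2 suffices, and whenever $x_1 \neq x_2$ in ${\bf F}_2$ the denominator $x_2 + x_1$ equals $1$.

```python
# Check that the listed points lie on E: y^2 + y = x^3 + x^2 over F_2,
# and that the addition formula from the text reproduces
# P_3 = P_1 + P_2 and P_4 = P_1 + P_3.
P1, P2, P3, P4 = (1, 0), (0, 0), (0, 1), (1, 1)

def on_curve(P):
    x, y = P
    return (y * y + y) % 2 == (x ** 3 + x ** 2) % 2

assert all(on_curve(P) for P in (P1, P2, P3, P4))

def add(P, Q):
    # addition formula from the text (char 2, valid for x1 != x2);
    # over F_2 the denominator x2 + x1 equals 1, so division is trivial
    x1, y1 = P
    x2, y2 = Q
    lam = (y2 + y1) % 2
    x3 = (x1 + x2 + lam ** 2 + 1) % 2
    y3 = (y1 + y2 + lam ** 3 + lam + (x1 * y2 + x2 * y1) + 1) % 2
    return (x3, y3)

assert add(P1, P2) == P3   # P_3 = P_1 + P_2
assert add(P1, P3) == P4   # P_4 = P_1 + P_3
```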
Now we consider the relatively minimal
nonsingular complete elliptic surface $\psi : R \longrightarrow {\bf P}^1$
defined by
$$
y^2 + sxy + y = x^3 + x^2 + s
$$
with a parameter $s$.
This surface is a rational elliptic surface with two singular fibers of type ${\text{I}}_5$
over the points given by $s = 1, \infty$, and two singular fibers of type ${\text{I}}_1$
over the points given by $s = \omega, \omega^2$.
Here, $\omega$ is a primitive cube root of unity.
We consider the base change of $\psi : R \longrightarrow {\bf P}^1$
by $s = t^2$.
Then, we have the elliptic surface defined by
$$
(*)\quad \quad \quad y^2 + t^2xy + y = x^3 + x^2 + t^2.
$$
We consider the relatively minimal nonsingular complete model
of this elliptic surface:
\begin{equation}\label{pencil3}
f : Y \longrightarrow {\bf P}^1.
\end{equation}
The surface $Y$ is an elliptic $K3$ surface.
From $Y$ to $R$, there exists a generically surjective
purely inseparable rational map. We denote by $R^{(\frac{1}{2})}$
the algebraic surface whose coefficients of the defining equations are the square
roots of those of $R$. Then, $R^{(\frac{1}{2})}$ is also a rational surface, and
we have the Frobenius morphism $F : R^{(\frac{1}{2})}\longrightarrow R$. $F$ factors
through a generically surjective
purely inseparable rational map from $R^{(\frac{1}{2})}$ to $Y$.
Since $R^{(\frac{1}{2})}$ is rational,
we see that $Y$ is unirational. Hence, $Y$
is a supersingular $K3$ surface, i.e. the Picard number $\rho (Y)$ is equal
to the second Betti number $b_{2}(Y)$ (cf. Shioda \cite{S}, p.235, Corollary 1).
The discriminant of the elliptic surface $f : Y \longrightarrow {\bf P}^1$ is given by
$$
\Delta = (t + 1)^{10}(t^2 + t + 1)^2
$$
and the $j$-invariant is given by
$$
j = t^{24}/(t + 1)^{10}(t^2 + t + 1)^2.
$$
Therefore,
on the elliptic surface $f : Y \longrightarrow {\bf P}^1$,
there exist two singular fibers of type ${\text{I}}_{10}$ over
the points given by $t = 1, \infty$, and two singular fibers
of type ${\text{I}}_2$ over
the points given by $t = \omega, \omega^2$.
The regular fiber over the point defined by $t = 0$ is
the supersingular elliptic curve $E$.
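The stated discriminant and $j$-invariant can be double-checked with a short computation using the Tate $b$-invariants reduced modulo 2, namely $b_2 = a_1^2$, $b_4 = a_1a_3$, $b_6 = a_3^2$, $b_8 = a_1^2a_6 + a_1a_3a_4 + a_2a_3^2 + a_4^2$, $\Delta = b_2^2b_8 + b_6^2 + b_2b_4b_6$ and $c_4 = b_2^2$. The following sketch, which encodes polynomials over ${\bf F}_2$ as bitmasks, is ours and not part of the text.

```python
# Verify Delta = (t+1)^10 (t^2+t+1)^2 and c_4^3 = t^24 (so j = t^24/Delta)
# for y^2 + t^2 x y + y = x^3 + x^2 + t^2 over F_2.  Polynomials over F_2
# are bitmasks (bit k = coefficient of t^k); multiplication is carry-less.
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

a1, a2, a3, a4, a6 = 0b100, 1, 1, 0, 0b100      # a1 = a6 = t^2
b2 = pmul(a1, a1)                                # t^4
b4 = pmul(a1, a3)                                # t^2
b6 = pmul(a3, a3)                                # 1
b8 = pmul(b2, a6) ^ pmul(pmul(a1, a3), a4) ^ pmul(a2, b6) ^ pmul(a4, a4)
delta = pmul(pmul(b2, b2), b8) ^ pmul(b6, b6) ^ pmul(pmul(b2, b4), b6)

# factored form (t+1)^10 (t^2+t+1)^2
f = 1
for _ in range(10):
    f = pmul(f, 0b11)        # t + 1
for _ in range(2):
    f = pmul(f, 0b111)       # t^2 + t + 1
assert delta == f

c4 = pmul(b2, b2)            # c_4 = b_2^2 mod 2, so c_4^3 = t^24
assert pmul(pmul(c4, c4), c4) == 1 << 24
```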
The elliptic $K3$ surface $f: Y \longrightarrow {\bf P}^1$
has ten sections $s_{i}, m_{i}$ $(i = 0, 1, 2, 3, 4)$ given as follows:
$$
\begin{array}{ll}
s_0 : \mbox{the zero section} &\mbox{passing through}~P_{0}~\mbox{on}~E\\
s_1 : x = 1, y = t^2 &\mbox{passing through}~P_{1}~\mbox{on}~E\\
s_2 : x = t^2, y = t^2 &\mbox{passing through}~P_{2}~\mbox{on}~E\\
s_3 : x = t^2, y = t^4 + t^2 + 1&\mbox{passing through}~P_{3}~\mbox{on}~E\\
s_4 : x = 1, y = 1 &\mbox{passing through}~P_{4}~\mbox{on}~E\\
m_0 : x = \frac{1}{t^2}, y = \frac{1}{t^3} +\frac{1}{t^2} + t &\mbox{passing through}~P_{0}~\mbox{on}~E\\
m_1 : x = t^3 + t + 1,~ y = t^4 + t^3 + t &\mbox{passing through}~P_{1}~\mbox{on}~E\\
m_2 : x = t,~ y = t^3 &\mbox{passing through}~P_{2}~\mbox{on}~E\\
m_3 : x = t,~ y = 1&\mbox{passing through}~P_{3}~\mbox{on}~E\\
m_4 : x = t^3 + t + 1,~ y = t^5 + t^4 + t^2 + t + 1&\mbox{passing through}~P_{4}~\mbox{on}~E.
\end{array}
$$
These ten sections form the cyclic group of order 10, and the group structure is
given by
$$
s_{i} = is_1,\ m_i =m_0 + s_i~(i = 0, 1, 2, 3, 4),\ 2m_0 = s_0
$$
where $s_0$ is the zero section.
The images of $s_{i}$ (resp. $m_{i}$) ($i = 0, 1, 2, 3, 4$) on $R$ give sections
(resp. multi-sections) of $\psi : R \longrightarrow {\bf P}^1$.
The intersection numbers of the sections $s_i, m_i$ $(i = 0, 1, 2, 3, 4)$
are given by
\begin{equation}\label{int-sections}
\langle s_i, s_j\rangle =-2\delta_{ij},\quad \langle m_i, m_j\rangle =-2\delta_{ij},\quad \langle s_i, m_j\rangle = \delta_{ij},
\end{equation}
where $\delta_{ij}$ is Kronecker's delta.
On the singular elliptic surface $(*)$, we denote by $F_1$ the fiber
over the point defined by $t = 1$. $F_1$ is an irreducible curve and on $F_1$
the surface $(*)$ has only one singular point $P$.
The surface $Y$ is obtained by the minimal resolution of the singularities of $(*)$.
We denote the proper transform of $F_1$ on $Y$ again by $F_1$ if no confusion arises.
We have nine exceptional curves $E_{1,i}$ $(i = 1,2, \ldots, 9)$ over the point $P$, and
as a singular fiber of type $I_{10}$ of the elliptic surface $f : Y \longrightarrow {\bf P}^1$,
$F_1$ and these nine exceptional curves form a decagon $F_1E_{1,1}E_{1,2}\ldots E_{1,9}$, arranged clockwise.
The blowing-up at the singular point $P$ gives two exceptional curves
$E_{1,1}$ and $E_{1,9}$, and they intersect each other at a singular point. The blowing-up
at the singular point again gives two exceptional curves $E_{1,2}$ and $E_{1,8}$.
The exceptional curve $E_{1,2}$ (resp. $E_{1,8}$) intersects $E_{1,1}$ (resp. $E_{1,9}$) transversely.
Exceptional curves $E_{1,2}$ and $E_{1,8}$ intersect each other at a singular point, and so on.
By successive blowing-ups,
the exceptional curve $E_{1,5}$ finally appears to complete the resolution of the singularity
at the point $P$, and it intersects $E_{1,4}$ and $E_{1,6}$ transversely.
Summarizing these results, we see that $F_1$ intersects $E_{1,1}$ and $E_{1,9}$ transversely, and that $E_{1,i}$ intersects $E_{1,i+ 1}$ $(i = 1, 2, \ldots, 8)$ transversely.
We choose $E_{1,1}$ as the component
which intersects the section $m_2$. Then,
the ten sections above intersect these ten curves transversely as follows:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
sections & $s_0$ &$s_1$ &$s_2$ &$s_3$ &$s_4$ &$m_0$ &$m_1$ &$m_2$ & $m_3$ &$m_4$ \\
\hline
components &$F_1$ &$E_{1,8}$ &$E_{1,6}$ &$E_{1,4}$ &$E_{1,2}$ &$E_{1,5}$ &$E_{1,3}$ & $E_{1,1}$ & $E_{1,9}$ & $E_{1,7}$ \\
\hline
\end{tabular}
\end{center}
\noindent
Here, the table means, for example, that the section $s_0$ intersects the singular fiber
over the point defined by
$t= 1$ in the component $F_1$.
The surface $Y$ has the automorphism $\sigma$ defined by
$$
(t, x, y) \mapsto (\frac{t}{t+1}, \frac{x + t^4 + t^2 + 1}{(t + 1)^4},
\frac{x + y + t^6 + t^2}{(t + 1)^6}).
$$
The automorphism $\sigma$ is of order 4 and replaces the fiber over the point $t = 1$
with the one
over the point $t = \infty$, and also replaces the fiber over the point $t =\omega$
with the one over the point $t = \omega^2$. The automorphism $\sigma$ acts
on the ten sections above as follows:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
sections & $s_0$ &$s_1$ &$s_2$ &$s_3$ &$s_4$ &$m_0$ &$m_1$ &$m_2$ & $m_3$ &$m_4$ \\
\hline
$\sigma^{*}$(sections) &$s_0$ &$s_2$ &$s_4$ &$s_1$ &$s_3$ &$m_0$ &$m_2$ & $m_4$ & $m_1$ & $m_3$ \\
\hline
\end{tabular}
\end{center}
Using the automorphism $\sigma$, we transfer the resolution of the singularity on the fiber
over the point $P_{1}$ defined by $t = 1$ to the fiber over
the point $P_{\infty}$ defined by $t = \infty$.
We attach names to the irreducible components of the fiber over $P_{\infty}$
in the same way as above.
Namely,
on the singular elliptic surface $(*)$, we denote by $F_{\infty}$ the fiber
over the point defined by $t = \infty$. We also denote the proper transform
of $F_{\infty}$ on $Y$ by $F_{\infty}$.
We have nine exceptional curves $E_{\infty,i}$ $(i = 1,2, \ldots, 9)$ over the point $P_{\infty}$, and
as a singular fiber of type $I_{10}$ of the elliptic surface $f : Y \longrightarrow {\bf P}^1$,
$F_{\infty}$ and these nine exceptional curves form a decagon $F_{\infty}E_{\infty, 1}E_{\infty, 2}\ldots E_{\infty, 9}$, arranged clockwise. $F_{\infty}$ intersects $E_{\infty, 1}$ and $E_{\infty, 9}$ transversely, and $E_{\infty, i}$ intersects $E_{\infty, i+ 1}$
$(i = 1, 2, \ldots, 8)$ transversely.
The singular fiber of $f : Y \longrightarrow {\bf P}^1$ over the point defined by $t= \omega$
(resp. $t = \omega^{2}$) consists of two irreducible components $F_{\omega}$
and $E_{\omega}$ (resp. $F_{\omega^{2}}$ and $E_{\omega^{2}}$),
where $F_{\omega}$ (resp. $F_{\omega^{2}}$)
is the proper transform of the fiber over the point $P_{\omega}$
(resp. $P_{\omega^{2}}$) in $(*)$.
Then,
the 10 sections above intersect singular fibers of elliptic surface $f : Y \longrightarrow {\bf P}^1$ as follows:
{
\footnotesize
\begin{center}
\begin{table}[!htb]\label{}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}
\hline
sections & $s_0$ &$s_1$ &$s_2$ &$s_3$ &$s_4$ &$m_0$ &$m_1$ &$m_2$ & $m_3$ &$m_4$ \\
\hline
$t= 1$ &$F_1$ &$E_{1,8}$ &$E_{1,6}$ &$E_{1,4}$ &$E_{1,2}$ &$E_{1,5}$ &$E_{1,3}$ & $E_{1,1}$ & $E_{1,9}$ & $E_{1,7}$ \\
\hline
$t = \infty$ &$F_\infty$ &$E_{\infty, 6}$ &$E_{\infty, 2}$ &$E_{\infty, 8}$ &$E_{\infty, 4}$ &$E_{\infty, 5}$ &$E_{\infty, 1}$ & $E_{\infty, 7}$ & $E_{\infty, 3}$ & $E_{\infty, 9}$ \\
\hline
$t = \omega$ & $F_\omega$ &$F_\omega$ &$F_\omega$ &$F_\omega$ &$F_\omega$ &$E_{\omega}$ &$E_{\omega}$ & $E_{\omega}$ & $E_{\omega}$ & $E_{\omega}$ \\
\hline
$t = \omega^2$ &$F_{\omega^2}$ &$F_{\omega^2}$ &$F_{\omega^2}$ &$F_{\omega^2}$ &$F_{\omega^2}$ &$E_{\omega^2}$ &$E_{\omega^2}$ & $E_{\omega^2}$ & $E_{\omega^2}$ & $E_{\omega^2}$ \\
\hline
\end{tabular}
\caption{}
\label{Table2}
\end{table}
\end{center}
}
\begin{prop}\label{}
The surface $Y$ is a supersingular $K3$ surface with Artin invariant $1$.
\end{prop}
\begin{proof}
The elliptic fibration $(\ref{pencil3})$ has two singular fibers of type
${\text{I}}_{10}$, two singular fibers of type ${\text{I}}_2$ and
ten sections. Hence the assertion follows
from the Shioda-Tate formula (cf. Shioda \cite{Shio}, Corollary 1.7).
\end{proof}
Incidentally, by the Shioda-Tate formula, we also see that the order of
the group of the sections of $f : Y \longrightarrow {\bf P}^1$ is equal to 10
and so the group is isomorphic to ${\bf Z}/10{\bf Z}$.
Now, we consider a rational vector field
$$
D' = (t - 1)(t - a)(t - b)\frac{\partial}{\partial t} + (1 + t^2x)\frac{\partial}{\partial x}
$$
with $a, b \in k, \ a+b=ab, \ a^3\not=1$.
Then, we have $D'^2 = t^2 D'$, that is, $D'$ is $2$-closed.
On the surface $Y$, the divisorial part of $D'$ is given by
$$
\begin{array}{rl}
(D') & = E_{1,1} + E_{1,3} + E_{1,5} + E_{1,7} + E_{1,9} + E_{\infty, 1} + E_{\infty, 3}
+ E_{\infty, 5} + E_{\infty, 7} \\
& + E_{\infty, 9} - E_{\omega} - E_{\omega^{2}} - 2(F_{\infty} + E_{\infty, 1} + E_{\infty, 2}+ E_{\infty, 3}
+ E_{\infty, 4} + E_{\infty, 5} \\
&+ E_{\infty, 6} + E_{\infty, 7} + E_{\infty, 8} + E_{\infty, 9}).
\end{array}
$$
We set $D = \frac{1}{t - 1}D'$. Then, $D^2=abD$, that is, $D$ is also 2-closed
and $D$ is of additive type if $a=b=0$ and
of multiplicative type otherwise. Moreover, we have
\begin{equation}\label{divisorial}
\begin{array}{rl}
(D) & = - (F_{1} + E_{1,2} + E_{1,4} + E_{1,6} + E_{1,8}
+ F_{\infty} + E_{\infty, 2} + E_{\infty, 4} \\
&+ E_{\infty, 6} + E_{\infty, 8} + E_{\omega} + E_{\omega^2}).
\end{array}
\end{equation}
From here until Theorem \ref{main}, the argument is parallel to the one
in Katsura and Kondo \cite{KK}, \S 4, and so we give just a brief sketch of the proofs
for the readers' convenience.
\begin{lemma}
The quotient surface $Y^{D}$ is nonsingular.
\end{lemma}
\begin{proof}
Since $Y$ is a $K3$ surface, we have $c_{2}(Y) = 24$.
Using $(D)^2 = -24$ and the equation (\ref{euler}), we have
$$
24 = c_{2}(Y) = {\text{deg}} \langle D\rangle - \langle K_{Y}, (D)\rangle - (D)^2 = {\text{deg}} \langle D\rangle + 24.
$$
Therefore, we have ${\text{deg}} \langle D\rangle = 0$. This means that $D$ is divisorial, and that
$Y^{D}$ is nonsingular.
\end{proof}
By the result on the canonical divisor formula of Rudakov and Shafarevich (see the equation (\ref{canonical})),
we have
$$
K_{Y} = \pi^{*} K_{Y^D} + (D).
$$
\begin{lemma}\label{exceptional}
Let $C$ be an irreducible curve contained in the support of the divisor $(D)$,
and set $C' = \pi (C)$. Then, $C'$ is an exceptional curve of the first kind.
\end{lemma}
\begin{proof}
By direct calculation, $C$ is integral with respect to $D$. Therefore,
we have $C = \pi^{-1}(C')$ by Proposition \ref{insep}.
By the equation $2C'^2 = (\pi^{-1}(C'))^2 = C^2 = - 2$, we have $C'^2 = -1$.
Since $Y$ is a $K3$ surface, $K_Y$ is
linearly equivalent to zero.
Therefore, we have
$$
2\langle K_{Y^D}, C'\rangle = \langle \pi^{*}K_{Y^D}, \pi^{*}(C')\rangle\\
= \langle K_Y - (D), C\rangle = C^2 = -2.
$$
Therefore, we have $\langle K_{Y^D}, C'\rangle = -1$ and
the arithmetic genus of $C'$ is equal to $0$.
Hence, $C'$ is an exceptional curve of the first kind.
\end{proof}
We denote these 12 exceptional curves on $Y^{D}$ by $E'_{i}$ ($i = 1, 2, \ldots, 12$),
which are the images of irreducible components of $-(D)$ by $\pi$.
Let
$$\varphi : Y^{D} \to X_{a,b}$$
be the blowing-downs of $E'_{i}$ ($i = 1, 2, \ldots, 12$). For simplicity, we denote $X_{a,b}$ by $X$.
Now we have the following commutative diagram:
$$
\begin{array}{ccc}\label{maps}
\quad Y^{D} & \stackrel{\pi}{\longleftarrow} & Y \\
\varphi \downarrow & & \downarrow f \\
\quad X=X_{a,b} & & {\bf P}^1 \\
g \downarrow & \quad \swarrow_{F}& \\
\quad {\bf P}^1 & &
\end{array}
$$
Here $F$ is the Frobenius base change.
Then, we have
$$
K_{Y^D} = \varphi^{*}(K_{X}) + \sum_{i = 1}^{12}E'_{i}.
$$
\begin{lemma}
The canonical divisor $K_{X}$ of $X$ is numerically equivalent to $0$.
\end{lemma}
\begin{proof}
As mentioned in the proof of Lemma \ref{exceptional}, all irreducible curves which appear
in the divisor $(D)$ are integral with respect to the vector field $D$.
For an irreducible component $C$ of $(D)$, we denote by $C'$ the image $\pi (C)$ of $C$.
Then, we have $C = \pi^{-1}(C')$ by Proposition \ref{insep}. Therefore, we have
$$
(D) = - \pi^{*}(\sum_{i = 1}^{12}E'_{i}).
$$
Since $Y$ is a $K3$ surface,
$$
0 \sim K_{Y} = \pi^{*}K_{Y^D} + (D)
= \pi^{*}( \varphi^{*}(K_{X}) + \sum_{i = 1}^{12}E'_{i}) + (D) = \pi^{*}(\varphi^{*}(K_{X})).
$$
Therefore, $K_{X}$ is numerically equivalent to zero.
\end{proof}
\begin{lemma}
The surface $X$ has $b_{2}(X) = 10$ and $c_{2}(X) = 12$.
\end{lemma}
\begin{proof}
Since $\pi : {Y} \longrightarrow {Y}^{D}$ is finite and
purely inseparable, the \'etale cohomology of $Y$ is isomorphic to
the \'etale cohomology of $Y^{D}$. Therefore, we have
$b_{1}(Y^{D}) = b_{1}(Y) = 0$,
$b_{3}(Y^{D})= b_{3}(Y) = 0$ and $b_{2}(Y^{D})
= b_{2}(Y) = 22$. Since $\varphi$ is the blowing-downs
of 12 exceptional curves of the first kind, we see
$b_{0}(X) =b_{4}(X) = 1$, $b_{1}(X) =b_{3}(X) = 0$ and $b_{2}(X) = 10$.
Therefore, we have
$$
c_{2}(X) = b_{0}(X) - b_{1}(X) + b_{2}(X) -b_{3}(X) + b_{4}(X) = 12.
$$
\end{proof}
\begin{theorem}\label{main}
Under the notation above, the following statements hold.
\begin{itemize}
\item[$({\rm i})$] The surface $X=X_{a,b}$ is a supersingular Enriques surface
if $a = b = 0$.
\item[$({\rm ii})$] The surface $X=X_{a,b}$ is a classical Enriques surface
if $a + b = ab$ and $a \notin {\bf F}_{4}$.
\end{itemize}
\end{theorem}
\begin{proof}
Since $K_X$ is numerically trivial,
$X$ is minimal and the Kodaira dimension $\kappa (X) $ is equal to $0$.
Since $b_2(X) = 10$, $X$ is an Enriques surface.
Since $Y$ is a supersingular K3 surface, $X$ is either supersingular or classical.
In case that $a= b = 0$, the integral fiber of the elliptic fibration $f : Y \longrightarrow {\bf P}^1$
with respect to $D$ exists only over the point $P_{0}$ defined by $t = 0$.
Hence $g : X \longrightarrow {\bf P}^1$ has only one multiple fiber.
Therefore, the multiple fiber is wild, and $X$ is a supersingular Enriques surface.
In case that $a \not\in {\bf F}_4$, the integral fibers of
the elliptic fibration $f : Y \longrightarrow {\bf P}^1$
with respect to $D$ exist over the points $P_{a}$ defined by $t = a$
and $P_b$ defined by $t = b$. Therefore, the multiple fibers are tame, and
we conclude that $X$ is a classical Enriques surface.
\end{proof}
Recall that the elliptic fibration $f : Y \to {\bf P}^1$ given in (\ref{pencil3})
has two singular fibers of type ${\text{I}}_{10}$, two singular fibers of type ${\text{I}}_2$ and
ten sections. This fibration induces an elliptic fibration
$$g : X\to {\bf P}^1$$
which has two singular fibers of type ${\text{I}}_5$, two singular fibers of type ${\text{I}}_1$, and
ten 2-sections.
Thus we have twenty nonsingular rational curves on $X$.
Denote by $\mathcal{E}$ the set of curves contained in the support of the divisor $(D)$:
$$\mathcal{E} = \{F_{1}, E_{1,2}, E_{1,4}, E_{1,6}, E_{1,8}, F_{\infty}, E_{\infty, 2}, E_{\infty, 4}, E_{\infty, 6}, E_{\infty, 8}, E_{\omega}, E_{\omega^2}\}.$$
The singular points of four singular fibers of $g$ consist of twelve points denoted by
$\{ p_1,..., p_{12}\}$ which are the images of the twelve curves in $\mathcal{E}$.
We may assume that $p_{11}, p_{12}$ are the images of $E_{\omega}, E_{\omega^2}$ respectively. Then $p_{11}, p_{12}$ (resp. $p_1,..., p_{10}$) are the singular points of the singular fibers
of $g$ of type ${\text{I}}_1$ (resp. of type ${\text{I}}_5$).
Each of the twenty nonsingular rational curves passes through two points from $\{p_1,..., p_{12}\}$ because its preimage on $Y$ meets exactly two curves from twelve curves in $\mathcal{E}$ (see Table \ref{Table2}).
Let $\mathcal{S}_1$ be the set of fifteen nonsingular rational curves which are ten components
of two singular fibers of $g$ of type ${\rm I}_5$ and five 2-sections which do not pass
through $p_{11}$ and $p_{12}$, that is, the images of $s_0, s_1,..., s_4$.
Then the dual graph of the curves in $\mathcal{S}_1$ is the line graph of the Petersen graph.
For the Petersen graph, see Figure \ref{petersen}.
Here the line graph $L(G)$ of a graph $G$ is the graph
whose vertices correspond to the edges in $G$ bijectively and two vertices in $L(G)$ are
joined by an edge if and only if the corresponding edges meet at a vertex in $G$.
In the following Figure \ref{enriques12}, we denote by ten dots the ten points $\{p_1,..., p_{10}\}$. The fifteen lines denote the fifteen nonsingular rational curves in $\mathcal{S}_1$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=50mm]{fano.eps}
\end{center}
\caption{}
\label{enriques12}
\end{figure}
On the other hand,
let $\mathcal{S}_2$ be the set of curves which are the images of $m_0,..., m_4$.
Then the dual graph of the curves in $\mathcal{S}_2$ is the complete graph with
five vertices in which each pair of the vertices forms the extended Dynkin diagram of type
$\tilde{A}_1$ because all of them pass through the two points $p_{11}$ and $p_{12}$. Each vertex in $\mathcal{S}_1$ meets exactly one vertex in $\mathcal{S}_2$ with multiplicity 2, because
any component of the singular fibers of type ${\text{I}}_{10}$ meets exactly one section from
$m_0,..., m_4$ (see Table \ref{Table2}) and $s_i$ meets only $m_i$ ($i=0,1,...,4)$ (see the equation (\ref{int-sections})). On the other hand, each vertex in
$\mathcal{S}_2$ meets three vertices in $\mathcal{S}_1$ with multiplicity 2, because $m_i$ meets
one component of each singular fiber of type ${\text{I}}_{10}$ and $s_i$.
The dual graph $\Gamma$ of the twenty curves in $\mathcal{S}_1$ and $\mathcal{S}_2$ coincides with the dual graph of
nonsingular rational curves of the Enriques surfaces of type ${\text{VII}}$ given in Figure \ref{Figure7-7} (Fig. 7.7 in \cite{Ko}).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=60mm]{Figure7-7.360x360pt.eps}
\end{center}
\caption{}
\label{Figure7-7}
\end{figure}
\noindent
The 15 curves in $\mathcal{S}_1$ (resp. five curves in $\mathcal{S}_2$) correspond to
$E_1, ..., E_{15}$ (resp. $K_1,..., K_5$) in Figure \ref{Figure7-7}. It is easy to see that the maximal parabolic subdiagrams in $\Gamma$ are
$$\tilde{A}_8,\ \tilde{A}_4\oplus \tilde{A}_4, \ \tilde{A}_5\oplus \tilde{A}_2\oplus
\tilde{A}_1, \ \tilde{A}_7\oplus \tilde{A}_1$$
which correspond to elliptic fibrations of type
$$({\rm I}_9),\ ({\rm I}_5, {\rm I}_5), \ ({\rm I}_6, {\rm IV}, {\rm I}_2), \ ({\rm I}_8, {\rm III}),$$
respectively.
It follows from Vinberg's criterion (Proposition \ref{Vinberg}) that $W(X)$ is of finite index in ${\text{O}}({\text{Num}}(X))$.
The same argument in \cite{Ko}, (3.7) implies that $X$ contains exactly twenty nonsingular rational curves
in $\mathcal{S}_1, \mathcal{S}_2$.
\begin{lemma}\label{injectiv}
The map $\rho : {\rm Aut}(X) \to {\rm O}({\rm Num}(X))$ is injective.
\end{lemma}
\begin{proof}
Let $\varphi \in {\text{Ker}}(\rho)$. Then $\varphi$ preserves each nonsingular rational curve on $X$.
Since each nonsingular rational curve meets other curves in at least three points,
$\varphi$ fixes all 20 nonsingular rational curves pointwise. Now consider the elliptic fibration $g : X \to {\bf P}^1$. Since this fibration has ten 2-sections, $\varphi$ fixes a general fiber of $g$ and hence $\varphi$ is the identity.
\end{proof}
By Proposition \ref{finiteness}, we now have the following theorem.
\begin{theorem}\label{main2}
The automorphism group ${\rm Aut}(X)$ is isomorphic to the symmetric group $\mathfrak{S}_5$ of degree five and $X$ contains exactly twenty nonsingular rational curves whose dual graph is of type ${\rm VII}$.
\end{theorem}
\begin{proof}
We have already shown that ${\rm Aut}(X)$ is finite and that $X$ contains exactly twenty
nonsingular rational curves whose dual graph $\Gamma$ is of type ${\rm VII}$.
It follows from Lemma \ref{injectiv} that ${\rm Aut}(X)$ is a subgroup of ${\text{Aut}}(\Gamma) \cong\mathfrak{S}_5$. Then by the same argument as in \cite{Ko}, (3.7), we see that every element of ${\rm Aut}(\Gamma)$ is realized by an automorphism of $X$.
\end{proof}
\begin{theorem}\label{non-isotrivial}
The one dimensional family $\{X_{a,b}\}$ is non-isotrivial.
\end{theorem}
\begin{proof}
Denote by $\Gamma$ the dual graph of all nonsingular rational curves on $X$ which is
given in Figure \ref{Figure7-7}.
$\Gamma$ contains only finitely many extended Dynkin diagrams (= the disjoint union of $\tilde{A}_m, \tilde{D}_n, \tilde{E}_k$), that is, $\tilde{A}_8, \tilde{A}_7\oplus \tilde{A}_1, \tilde{A}_4\oplus \tilde{A}_4, \tilde{A}_5\oplus \tilde{A}_2\oplus \tilde{A}_1$ (see also Kondo \cite{Ko}, page 274, Table 2).
Note that the elliptic fibrations on $X$ bijectively correspond to
the extended Dynkin diagrams in $\Gamma$. This implies that $X$ has only finitely many
elliptic fibrations.
The $j$-invariant of the elliptic curve which appears as the fiber $E_{a}$ defined by $t = a$
of the elliptic fibration $f : Y \longrightarrow {\bf P}^{1}$ is equal to
$a^{24}/(a + 1)^{10}(a^2 + a + 1)^2$ (cf. section 3). Consider
the multiple fiber $2E'_{a}$ on the elliptic fibration on the Enriques surface $X$ which is the image of $E_a$.
Since we have a purely inseparable
morphism of degree 2 from $E_{a}$ to $E'_{a}$, we see that the $j$-invariant of $E'_{a}$
is equal to $a^{48}/(a + 1)^{20}(a^2 + a + 1)^4$. Hence infinitely many non-isomorphic elliptic curves appear as multiple fibers of elliptic fibrations on the Enriques surfaces in our family with parameter $a$. Since each surface in the family admits only finitely many elliptic fibrations, our family contains infinitely many non-isomorphic Enriques surfaces (see also Katsura-Kond\=o \cite{KK},
Remark 4.9).
\end{proof}
\begin{remark}
The pullback of an elliptic fibration $\pi : X\to {\bf P}^1$ to the covering
$K3$ surface $Y$ gives an elliptic fibration $\tilde{\pi} : Y\to {\bf P}^1$. The type of reducible singular fibers of $\tilde{\pi}$ is $({\rm I}_{10}, {\rm I}_{10}, {\rm I}_2, {\rm I_2})$ if $\pi$ is of type $\tilde{A}_4\oplus \tilde{A}_4$, $({\rm I}_{16}, {\rm I}_1^*)$
if $\pi$ is of type $\tilde{A}_7\oplus \tilde{A}_1$, $({\rm I}_{12}, {\rm III}^*, {\rm I}_4)$ if $\pi$ is of type $\tilde{A}_5\oplus \tilde{A}_2\oplus \tilde{A}_1$, and
type $({\rm I}_{18}, {\rm I}_2, {\rm I}_{2}, {\rm I}_2)$ if $\pi$ is of type $\tilde{A}_8$, respectively.
\end{remark}
The following theorem is due to M. Sch\"utt and H. Ito.
\begin{theorem}\label{non-existVII}
There are no singular Enriques surfaces with the dual graph of type ${\rm VII}$.
\end{theorem}
\begin{proof}
Assume that there exists an Enriques surface $S$ with the dual graph of type ${\rm VII}$.
In the dual graph of type {\rm VII} there exists a parabolic subdiagram $\tilde{A}_5\oplus \tilde{A}_2 \oplus \tilde{A}_1$. By Proposition \ref{singular-fiber}, it corresponds to an elliptic fibration on $S$ with singular fibers of type $({\rm IV}, {\rm I}_2, {\rm I}_6)$. For example, the linear system $|2(E_1+E_2+E_{14})|$ defines such a fibration. Moreover, the dual graph of type ${\rm VII}$ tells us that the singular fiber $E_1+E_2+E_{14}$ of type ${\rm IV}$ is a multiple fiber because $E_3$ is a 2-section of this fibration (see Figure \ref{Figure7-7}). This contradicts Proposition \ref{multi-fiber}, (ii).
\end{proof}
\section{Examples of singular $K3$ surfaces with a finite automorphism group}\label{sec4}
\subsection{Type ${\text{I}}$}\label{type1}
Let $(x_0,x_1,x_2,x_3)$ be homogeneous coordinates of ${\bf P}^3$.
Consider the nonsingular quadric $Q$ in ${\bf P}^3$ defined by
\begin{equation}\label{type1quadric}
x_0x_3 + x_1x_2=0
\end{equation}
which is the image of the map
$${\bf P}^1\times {\bf P}^1 \to {\bf P}^3,\quad ((u_0,u_1),(v_0,v_1)) \to (u_0v_0, u_0v_1, u_1v_0, u_1v_1).$$
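As a quick sanity check (our own illustration, not part of the paper), one can verify numerically that the image of this map lies on the quadric: over any field $x_0x_3 - x_1x_2$ vanishes identically on the image, and in characteristic 2 this is exactly $x_0x_3 + x_1x_2 = 0$. The helper names below are ours.

```python
# Exhaustively verify over F_2 that the Segre map lands on the quadric
# x0*x3 + x1*x2 = 0 (in characteristic 2, minus equals plus).
from itertools import product

def segre(u0, u1, v0, v1):
    # ((u0,u1),(v0,v1)) -> (u0*v0, u0*v1, u1*v0, u1*v1)
    return (u0 * v0, u0 * v1, u1 * v0, u1 * v1)

def on_quadric(x0, x1, x2, x3, p=2):
    return (x0 * x3 + x1 * x2) % p == 0

assert all(on_quadric(*segre(u0, u1, v0, v1))
           for u0, u1, v0, v1 in product(range(2), repeat=4))
```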
The involution of ${\bf P}^1\times {\bf P}^1$
$$((u_0,u_1),(v_0,v_1)) \to ((u_1,u_0),(v_1,v_0))$$
induces an involution
\begin{equation}\label{typeIinv}
\tau : (x_0,x_1,x_2,x_3) \to (x_3,x_2,x_1,x_0)
\end{equation}
of $Q$ whose fixed point set on $Q$ is one point $(1,1,1,1)$.
Consider four lines on $Q$ defined by
$$L_{01}:x_0=x_1=0,\quad L_{02}: x_0=x_2=0,$$
$$L_{13}: x_1=x_3=0, \quad L_{23}: x_2=x_3=0,$$
and a $\tau$-invariant pencil of quadrics
$$C_{\lambda,\mu} : \lambda (x_0+x_3)(x_1+x_2)+ \mu x_0x_3 =0$$
passing through the four vertices
$$(1,0,0,0), \quad (0,1,0,0),\quad (0,0,1,0),\quad (0,0,0,1)$$
of the quadrangle $L_{01}, L_{02}, L_{13}, L_{23}$.
Note that two conics
$$Q_1: x_0+x_3=0, \quad Q_2: x_1+x_2=0$$
are tangent to $C_{\lambda,\mu}$ at two vertices of the quadrangle.
Obviously
$$C_{1,0} = Q_1 +Q_2, \quad C_{0,1} = L_{01}+L_{02} + L_{13}+L_{23},$$
and
$C_{\lambda,\mu}$ $(\lambda\cdot \mu\not=0)$ is a nonsingular elliptic curve.
Thus we have the same configuration of
curves given in \cite{Ko}, Figure 1.1, except that $Q_1$ and $Q_2$ are tangent at $(1,1,1,1)$.
Now we fix $(\lambda_0, \mu_0)\in {\bf P}^1$ $(\lambda_0\cdot \mu_0\not=0)$ and
take the Artin-Schreier covering $S \to Q$ defined by the triple
$(L, a, b)$ where $L= \mathcal{O}_Q(2,2)$, $a \in H^0(Q,L)$ and $b\in H^0(Q,L^{\otimes 2})$
satisfying $Z(a) = C_{0,1}$ and
$Z(b) = C_{0,1} + C_{\lambda_0,\mu_0}$.
The surface $S$ has four singular points over the four vertices of quadrangle given
locally by $z^2 +uvz + uv(u+v)=0$. In the notation in Artin's list (see \cite{A}, \S 3), it is of type $D^1_4$. Let $Y$ be the minimal nonsingular model of $S$.
Then the exceptional divisor over a singular point has the dual graph of type $D_4$.
The canonical bundle formula implies that $Y$ is a $K3$ surface.
The pencil $\{C_{\lambda,\mu}\}_{(\lambda,\mu)\in {\bf P}^1}$ induces an elliptic fibration on $Y$.
The preimage of $L_{01}+L_{02} + L_{13}+L_{23}$ is the singular fiber of type ${\text{I}}_{16}$
and the preimage of $Q_1+Q_2$ is the union of two singular fibers of type ${\text{III}}$.
Note that the pencil has four sections. Thus we have 24 nodal curves on $Y$.
Note that the dual graph of these 24 nodal curves coincides with the one given in
\cite{Ko}, Figure 1.3.
The involution $\tau$
can be lifted to a fixed point free involution $\sigma$ of $Y$ because the branch divisor $C_{0,1}$ does not contain the point $(1,1,1,1)$. By taking the quotient
of $Y$ by $\sigma$, we have a singular Enriques surface $X=Y/\langle \sigma \rangle$.
The above elliptic fibration induces an elliptic pencil on $X$ with singular fibers of type ${\text{I}}_8$ and of type ${\text{III}}$.
Since the ramification divisor of the covering $S\to Q$ is the preimage of $L_{01}+L_{02} + L_{13}+L_{23}$, the multiple fiber of this pencil is the singular fiber of type ${\text{I}}_8$.
By construction, $X$ contains twelve nonsingular rational curves whose dual graph coincides with the one given in \cite{Ko}, Figure 1.4.
It follows from Vinberg's criterion (Proposition \ref{Vinberg}) that $W(X)$ is of finite index in ${\text{O}}({\text{Num}}(X))$, and hence
the automorphism group ${\text{Aut}}(X)$ is finite (Proposition \ref{finiteness}). The same argument as in the proof of \cite{Ko}, Theorem 3.1.1 shows that ${\text{Aut}}(X)$ is isomorphic to the dihedral group $D_4$ of order 8. Thus we have the following theorem.
\begin{theorem}\label{Ithm}
These $X$ form a one dimensional family of singular Enriques surfaces whose dual graph of nonsingular rational curves is of type ${\rm I}$. The automorphism group ${\rm Aut}(X)$ is isomorphic to the dihedral group $D_4$ of order $8$.
\end{theorem}
\begin{theorem}\label{non-existI}
There are no
classical and supersingular Enriques surfaces with the dual graph of type ${\rm I}$.
\end{theorem}
\begin{proof}
From the dual graph of type ${\text{I}}$, we can see that such an Enriques surface has an elliptic fibration with
a multiple fiber of type ${\text{I}}_8$. The assertion now follows from Proposition \ref{multi-fiber}.
\end{proof}
\begin{remark}\label{typeInumtrivial}
In the above, we consider special quadrics $C_{\lambda, \mu}$ tangent to $Q_1, Q_2$.
If we drop this condition and consider general $\tau$-invariant quadrics through
the four vertices of the quadrangle $L_{01}, L_{02}, L_{13}, L_{23}$, we have a two dimensional family of singular Enriques surfaces $X$. The covering transformation of $Y \to S$ descends to a numerically trivial involution of $X$, that is, an involution of $X$ acting trivially on ${\text{Num}}(X)$. In the appendix \ref{kummer}, we discuss Enriques surfaces with a numerically trivial involution.
\end{remark}
\subsection{Type ${\text{II}}$}\label{type2}
We use the same notation as in (\ref{type1}).
We consider a $\tau$-invariant pencil of quadrics defined by
$$C_{\lambda,\mu} : \lambda (x_0+x_1+x_2+x_3)^2+ \mu x_0x_3 =0
$$
which is tangent to the quadrangle $L_{01}, L_{02}, L_{13}, L_{23}$
at $(0,0,1,1)$, $(0,1,0,1)$, $(1,0,1,0)$, $(1,1,0,0)$ respectively.
Let
$$L_1: x_0+x_1=x_2+x_3=0,\quad L_2: x_0+x_2=x_1+x_3=0$$
be two lines on $Q$ which pass through the tangent points of $C_{\lambda,\mu}$ with
the quadrangle $L_{01}, L_{02}, L_{13}, L_{23}$.
Note that
$$C_{1,0}= 2L_1+ 2L_2, \quad C_{0,1}= L_{01}+L_{02} + L_{13}+L_{23},$$
and
$C_{\lambda,\mu}$ $(\lambda\cdot \mu\not=0)$ is a nonsingular elliptic curve.
Thus we have the same configuration of
curves given in \cite{Ko}, Figure 2.1.
Now we fix $(\lambda_0, \mu_0)\in {\bf P}^1$ $(\lambda_0 \cdot \mu_0 \not=0)$ and
take Artin-Schreier covering $S \to Q$ defined by the triple
$(L, a, b)$ where $L= \mathcal{O}_Q(2,2)$, $a \in H^0(Q,L)$ and $b\in H^0(Q,L^{\otimes 2})$
satisfying $Z(a) = C_{0,1}$ and
$Z(b) = C_{0,1} + C_{\lambda_0,\mu_0}$.
The surface $S$ has four singular points over the four tangent points
of $C_{\lambda_0,\mu_0}$ with the quadrangle and
four singular points over the four vertices of the quadrangle.
A local equation of each of the first four singular points
is given by $z^2 +uz + u(u+v^2)=0$, and that of each of the latter four by $z^2 + uvz + uv=0$.
In the first case, by the change of coordinates
$$
t=z +\omega u + v^2,\quad s = z +\omega^2 u + v^2,\quad v = v
$$
($\omega^3=1$, $\omega\not= 1$),
we obtain $v^4 +ts=0$, which gives a rational double point of type $A_3$. In the second case, it is obviously a rational double point of type $A_1$.
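The identity $v^4 + ts = 0$ on this locus can be double-checked by brute force over the field $\mathbf{F}_4 = \mathbf{F}_2[\omega]/(\omega^2+\omega+1)$. The following sketch (our own, with ad-hoc field-arithmetic helpers) runs through every point of the surface $z^2 + uz + u(u+v^2) = 0$ with coordinates in $\mathbf{F}_4$.

```python
# Verify t*s = v^4 on the surface z^2 + u*z + u*(u + v^2) = 0 over F_4,
# where t = z + w*u + v^2, s = z + w^2*u + v^2 and w^2 = w + 1.
from itertools import product

def add(x, y):                      # addition in F_4 (componentwise XOR)
    return (x[0] ^ y[0], x[1] ^ y[1])

def mul(x, y):                      # (a+bw)(c+dw) with w^2 = w + 1
    a, b = x
    c, d = y
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]
w, w2 = (0, 1), (1, 1)              # w and w^2 = w + 1

for u, v, z in product(F4, repeat=3):
    lhs = add(add(mul(z, z), mul(u, z)), mul(u, add(u, mul(v, v))))
    if lhs == (0, 0):               # point on the surface
        t = add(add(z, mul(w, u)), mul(v, v))
        s = add(add(z, mul(w2, u)), mul(v, v))
        v4 = mul(mul(v, v), mul(v, v))
        assert add(v4, mul(t, s)) == (0, 0)   # v^4 + t*s = 0
```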
Let $Y$ be the minimal nonsingular model of $S$.
Then the exceptional divisor over a singular point in the first case has the dual graph of type $A_3$ and in the second case the dual graph of type $A_1$.
The canonical bundle formula implies that $Y$ is a $K3$ surface.
The pencil $\{C_{\lambda,\mu}\}_{(\lambda,\mu)\in {\bf P}^1}$ induces an elliptic fibration on $Y$.
The preimage of $L_{01}+L_{02} + L_{13}+L_{23}$ is the singular fiber of type ${\text{I}}_{8}$
and the preimage of $C_{1,0}$ is the union of two singular fibers of type ${\text{I}}_1^*$.
Note that the pencil has four sections. Thus we have 24 nodal curves on $Y$.
Note that the dual graph of these 24 nodal curves coincides with the one given in
\cite{Ko}, Figure 2.3.
The involution $\tau$
can be lifted to a fixed point free involution $\sigma$ of $Y$ because the branch divisor $C_{0,1}$ does not contain the point $(1,1,1,1)$. By taking the quotient
of $Y$ by $\sigma$, we have a singular Enriques surface $X=Y/\langle \sigma \rangle$.
The above elliptic fibration induces an elliptic pencil on $X$ with singular fibers of type ${\text{I}}_4$ and of type ${\text{I}}_1^*$.
Since the ramification divisor of the covering $S\to Q$ is the preimage of $L_{01}+L_{02} + L_{13}+L_{23}$, the multiple fiber of this pencil is the singular fiber of type ${\text{I}}_4$.
By construction, $X$ contains twelve nonsingular rational curves whose dual graph $\Gamma$ coincides with the one
given in \cite{Ko}, Figure 2.4. The same argument as in the proof of \cite{Ko}, Theorem 3.2.1 shows that $W(X)$ is of finite index in ${\text{O}}({\text{Num}}(X))$ and $X$ contains only these twelve nonsingular rational curves. It now follows from Proposition \ref{finiteness} that
the automorphism group ${\text{Aut}}(X)$ is finite.
By a similar argument to the one in the proof of Lemma \ref{injectiv}, we see that
the map $\rho : {\text{Aut}}(X) \to {\text{O}}({\text{Num}}(X))$ is injective. Moreover, by the same argument
as in the proof of \cite{Ko}, Theorem 3.2.1, ${\text{Aut}}(X)$ is isomorphic to ${\text{Aut}}(\Gamma) \cong \mathfrak{S}_4$.
Thus we have the following theorem.
\begin{theorem}\label{IIthm}
These $X$ form a one dimensional family of singular Enriques surfaces whose dual graph of nonsingular rational curves is of type ${\rm II}$. The automorphism group ${\rm Aut}(X)$ is isomorphic to the symmetric group $\mathfrak{S}_4$ of degree four.
\end{theorem}
\begin{theorem}\label{non-existII}
There are no
classical and supersingular Enriques surfaces with the dual graph of type ${\rm II}$.
\end{theorem}
\begin{proof}
From the dual graph of type ${\text{II}}$, we can see that such an Enriques surface has an elliptic fibration with
a multiple fiber of type ${\text{I}}_4$. The assertion now follows from Proposition \ref{multi-fiber}.
\end{proof}
\subsection{Type ${\text{VI}}$}\label{type6}
Over the field of complex numbers, the following example was studied
by Dardanelli and van Geemen \cite{DvG}, Remark 2.4. This surface $X$ is isomorphic to
the Enriques surface of type ${\text{VI}}$ given in \cite{Ko} (In \cite{DvG}, Remark 2.4, they claimed that $X$ is of type ${\text{IV}}$, but this is a misprint). Their construction
works well in characteristic 2.
Let $(x_1,\cdots , x_5)$ be homogeneous coordinates of ${\bf P}^4$.
Consider the surface $S$ in ${\bf P}^4$ defined by
\begin{equation}
\sum_{i=1}^{5} x_i = \sum_{i=1}^5 {1/x_i} = 0.
\end{equation}
\noindent
Let
$$\ell_{ij} : x_i=x_j=0 \ \ (1\leq i<j \leq 5),$$
$$p_{ijk} : x_i=x_j=x_k=0 \ \ (1\leq i<j<k\leq 5).$$
The ten lines $\ell_{ij}$ and ten points $p_{ijk}$ lie on $S$.
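To make this concrete: clearing denominators, the condition $\sum_i 1/x_i = 0$ on $S$ becomes the vanishing of the fourth elementary symmetric polynomial $e_4$ of the coordinates (this reduction is our reading, not stated in the text). The sketch below checks over $\mathbf{F}_2$ that the ten points $p_{ijk}$, i.e. the points with two coordinates equal to $1$ and the other three equal to $0$, satisfy $e_1 = e_4 = 0$.

```python
# Check that the ten triple-coordinate points lie on e1 = e4 = 0 over F_2,
# where e_k is the k-th elementary symmetric polynomial of the coordinates.
from itertools import combinations

def e(k, xs, p=2):                  # k-th elementary symmetric polynomial mod p
    total = 0
    for idx in combinations(range(len(xs)), k):
        term = 1
        for i in idx:
            term *= xs[i]
        total += term
    return total % p

for i, j in combinations(range(5), 2):
    pt = [0] * 5
    pt[i] = pt[j] = 1               # p_{klm} for the complementary indices {k,l,m}
    assert e(1, pt) == 0 and e(4, pt) == 0
```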
By taking partial derivatives, we see that $S$ has ten nodes at $p_{ijk}$. Let $Y$ be the minimal
nonsingular model of $S$. Then $Y$ is a $K3$ surface.
Denote by $L_{ij}$ the proper transform of $\ell_{ij}$ and by $E_{ijk}$ the exceptional curve over $p_{ijk}$. The Cremona transformation
$$(x_i) \to \left({1/x_i}\right)$$
acts on $Y$ as an automorphism $\sigma$ of order 2. Note that the fixed point set of
the Cremona transformation is
exactly one point $(1,1,1,1,1)$. Hence $\sigma$ is a fixed point free involution of $Y$.
The quotient surface $X=Y/\langle \sigma \rangle$ is a singular Enriques surface.
Obviously the permutation group $\mathfrak{S}_5$ acts on $S$, and this action commutes with $\sigma$. Therefore $\mathfrak{S}_5$ acts on $X$ as automorphisms. The involution $\sigma$ interchanges $L_{ij}$ and $E_{klm}$, where $\{i,j,k,l,m\} =\{1,2,3,4,5\}$.
The images of twenty nonsingular rational curves $L_{ij}$, $E_{ijk}$ give ten nonsingular rational curves on $X$ whose
dual graph is given by the following Figure \ref{petersen}. Note that this graph is
well known as the Petersen graph.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=60mm]{petersen.eps}
\end{center}
\caption{}
\label{petersen}
\end{figure}
\noindent
Here $\bar{L}_{ij}$ is the image of $L_{ij}$ (and $E_{klm}$).
Note that $\mathfrak{S}_5$ is the automorphism group of the Petersen graph.
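This can be checked mechanically through the standard model of the Petersen graph as the Kneser graph $K(5,2)$: vertices are the 2-element subsets of a 5-element set, with edges between disjoint subsets. The sketch below (our own; it exhibits $\mathfrak{S}_5$ inside the automorphism group rather than proving equality) verifies that every permutation of the 5-element set preserves the edge set.

```python
# Build the Petersen graph as the Kneser graph K(5,2) and check that each
# permutation in S_5 induces a graph automorphism.
from itertools import combinations, permutations

verts = [frozenset(c) for c in combinations(range(5), 2)]
edges = {frozenset({a, b}) for a in verts for b in verts if not a & b}
assert len(verts) == 10 and len(edges) == 15    # 3-regular on 10 vertices

for g in permutations(range(5)):                # each g in S_5 ...
    img = {frozenset({frozenset(g[i] for i in a), frozenset(g[i] for i in b)})
           for a, b in (tuple(e) for e in edges)}
    assert img == edges                         # ... preserves the edge set
```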
The hyperplane section $x_i+x_j=0$ on $S$ is the union of the double line $2\ell_{ij}$ and two lines through $p_{klm}$ defined by $x_kx_l+x_kx_m+x_lx_m=0$. Thus we have
additional twenty nodal curves on $Y$.
Note that the Cremona transformation interchanges the
two lines defined by $x_kx_l+x_kx_m+x_lx_m=0$.
Thus $X$ contains twenty nonsingular rational curves
whose dual graph $\Gamma$ coincides with the one
of the Enriques surface of type {\rm VI} (see Fig.6.4 in \cite{Ko}).
It now follows from Proposition \ref{finiteness} that
the automorphism group ${\text{Aut}}(X)$ is finite.
The same argument as in the proof of \cite{Ko}, Theorem 3.1.1 shows
that $X$ contains
only these 20 nonsingular rational curves.
By a similar argument to the one in the proof of Lemma \ref{injectiv}, we see that
the map $\rho : {\text{Aut}}(X) \to {\text{O}}({\text{Num}}(X))$ is injective.
Since the classes of twenty nonsingular rational curves generate ${\text{Num}}(X)\otimes {\bf Q}$,
${\text{Aut}}(X)$ is isomorphic to ${\text{Aut}}(\Gamma) \cong \mathfrak{S}_5$.
Thus we have the following theorem.
\begin{theorem}\label{VIthm}
The surface $X$ is a singular Enriques surface whose dual graph of nonsingular rational curves is of type ${\rm VI}$. The automorphism group ${\text{Aut}}(X)$ is isomorphic to the symmetric group $\mathfrak{S}_5$ of degree five.
\end{theorem}
\begin{theorem}\label{non-existVI}
There are no
classical and supersingular Enriques surfaces with the dual graph of type ${\rm VI}$.
\end{theorem}
\begin{proof}
A pentagon in Figure \ref{petersen}, for example,
$|\bar{L}_{12} + \bar{L}_{34}+ \bar{L}_{15}+ \bar{L}_{24}+\bar{L}_{35}|$,
defines an elliptic fibration on $X$.
The multiple fiber of this fibration is nothing but the pentagon, that is, of type
${\text{I}}_5$.
The assertion now follows from Proposition \ref{multi-fiber}.
\end{proof}
\begin{remark}\label{type7}
Over the field of complex numbers, Ohashi found that the Enriques surface
of type ${\text{VII}}$ in \cite{Ko} is
isomorphic to the following surface (see \cite{MO}, \S 1.2).
Let $(x_1,\cdots , x_5)$ be homogeneous coordinates of ${\bf P}^4$.
Consider the surface in ${\bf P}^4$ defined by
\begin{equation}
\sum_{i< j} x_i x_j = \sum_{i<j<k} x_ix_jx_k = 0
\end{equation}
which has five nodes at the coordinate points and whose minimal resolution is a $K3$ surface $Y$.
The standard Cremona transformation
$$(x_i) \to \left({1/ x_i}\right)$$
acts on $Y$ as a fixed point free involution $\sigma$. Thus
the quotient surface $X=Y/\langle \sigma \rangle$ is a complex Enriques surface.
In characteristic 2, the involution $\sigma$ has a fixed point $(1,1,1,1,1)$ on $Y$, and
hence the quotient is not an Enriques surface.
\end{remark}
\subsection{Type ${\text{III}}, {\text{IV}}, {\text{V}}$}\label{type345}
In each case of type ${\text{III}}$, ${\text{IV}}$, ${\text{V}}$, from the dual graph (cf. Kondo \cite{Ko}, Figures 3.5, 4.4, 5.5)
we can find an elliptic fibration
which has two reducible multiple fibers. In fact, the parabolic subdiagram
of type $\tilde{D}_6 \oplus \tilde{A}_1 \oplus \tilde{A}_1$ in case ${\text{III}}$
(of type $\tilde{A}_3 \oplus \tilde{A}_3 \oplus \tilde{A}_1 \oplus \tilde{A}_1$ in case ${\text{IV}}$, of type $\tilde{A}_5 \oplus \tilde{A}_2 \oplus \tilde{A}_1$ in case ${\text{V}}$) defines such an elliptic fibration
(see \cite{Ko}, Table 2, page 274). Hence if an Enriques
surface with the same dual graph of nodal curves exists in characteristic 2,
then it should be classical (Proposition \ref{multi-fiber}). On the other hand, in each case of type ${\text{III}}$, ${\text{IV}}$, ${\text{V}}$,
there exists an elliptic fibration which has a reducible multiple fiber
of multiplicative type (see \cite{Ko}, Table 2, page 274).
However this is impossible
because any multiple fiber of an elliptic fibration on a classical Enriques surface
is nonsingular or singular of additive type (Proposition \ref{multi-fiber}).
Thus we have proved the following theorem.
\begin{theorem}\label{non-existIII}
There are no Enriques surfaces with the same dual graph as
in case of type ${\rm III}$, ${\rm IV}$ or ${\rm V}$.
\end{theorem}
Combining Theorems \ref{main2}, \ref{non-existVII}, \ref{Ithm}, \ref{non-existI}, \ref{IIthm}, \ref{non-existII}, \ref{VIthm}, \ref{non-existVI}, \ref{non-existIII},
we have the Table \ref{Table1} in the introduction.
\begin{remark}\label{extra}
In characteristic 2, there exist Enriques surfaces with a finite group of automorphisms
whose dual graphs of all nonsingular rational curves do not appear
in the case of complex surfaces. For example,
it is known that there exists an Enriques surface $X$ which has a genus 1 fibration
with a multiple singular fiber of type $\tilde{E}_8$ and with a 2-section
(Ekedahl and Shepherd-Barron\cite{ES}, Theorem A, Salomonsson\cite{Sa}, Theorem 1).
We have ten nonsingular rational curves on $X$, that is, nine components of the singular fiber and
a 2-section, whose dual graph is given in Figure \ref{E10Dynkin}.
\begin{figure}[htbp]
\begin{center}
\begin{picture}(120,30)
\put(0, 20){\circle{5}}
\put(0, 30){\makebox(0, 0){}}
\put(2, 20){\line(1, 0){15}}
\put(20, 20){\circle{5}}
\put(20, 30){\makebox(0, 0){}}
\put(22, 20){\line(1, 0){15}}
\put(40, 20){\circle{5}}
\put(40, 30){\makebox(0, 0){}}
\put(42, 20){\line(1, 0){15}}
\put(60, 20){\circle{5}}
\put(60, 30){\makebox(0, 0){}}
\put(62, 20){\line(1, 0){15}}
\put(80, 20){\circle{5}}
\put(80, 30){\makebox(0, 0){}}
\put(82, 20){\line(1, 0){15}}
\put(100, 20){\circle{5}}
\put(100, 30){\makebox(0, 0){}}
\put(102, 20){\line(1, 0){15}}
\put(120, 20){\circle{5}}
\put(120, 30){\makebox(0, 0){}}
\put(122, 20){\line(1, 0){15}}
\put(140, 20){\circle{5}}
\put(140, 30){\makebox(0, 0){}}
\put(142, 20){\line(1, 0){15}}
\put(160, 20){\circle{5}}
\put(160, 30){\makebox(0, 0){}}
\put(40, 17){\line(0, -1){15}}
\put(40, 0){\circle{5}}
\put(50, 0){\makebox(0, 0){}}
\put(120, 0){\makebox(0, 0){}}
\end{picture}
\caption{}
\label{E10Dynkin}
\end{center}
\end{figure}
\noindent
It is easy to see that they generate ${\text{Num}}(X) \cong U\oplus E_8$. Moreover it is known that the reflection subgroup generated by reflections associated with these $(-2)$-vectors is of
finite index in ${\text{O}}({\text{Num}}(X))$ (Vinberg \cite{V}, Table 4; also see Proposition \ref{Vinberg}) and hence ${\text{Aut}}(X)$ is finite (Proposition \ref{finiteness}).
\end{remark}
\section{Introduction}
It is not a secret that today's success of person re-identification (ReID) is mainly due to the accumulation of visible images. Deep models trained with RGB data have achieved beyond-human performance over various benchmarks \cite{zheng2015scalable}. Despite these advances, it is worth noting that visible cameras are sensitive to illumination variations, often failing to capture valid visual information in low lighting conditions (\textit{e.g.}, at night). Fortunately, owing to the inherent illumination robustness, infrared (IR) images provide an effective complement for building a 24-hour ReID system. This has attracted considerable attention to pedestrian retrieval across RGB and IR sensing modalities, \textit{i.e.}, RGB-IR ReID \cite{wu2017rgb}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{fig1.pdf}
\end{center}
\vspace{-1cm}
\caption{Comparison of Modality-Aware Multiple Granularity (MMGL) and ImageNet pre-training. MMGL boosts performance with solid data efficiency and also outperforms other self-supervised methods. Times are measured on an Nvidia 2080Ti GPU.}
\label{fig:problem}
\end{figure}
Facing the lack of large-scale public datasets, pre-training backbones on ImageNet \cite{deng2009imagenet}, and then fine-tuning on target datasets has become a de-facto paradigm for ReID tasks \cite{ye2021deep}. However, in RGB-IR ReID, the domain shift between ImageNet and target multi-modality datasets is so large that the marginal benefit from large-scale data \cite{he2019rethinking} is partly counteracted by the \textit{modality bias training} issue \cite{huang2021alleviating}. Huge amounts of pre-learned RGB information can overwhelm the `scarce' IR spectrum during fine-tuning, leading to biased representations. Recently, starting from ImageNet pre-trained checkpoints, \cite{ye2021deep} present state-of-the-art results for different ReID tasks. It achieves 95\% Rank-1 accuracy for single-modality ReID, but only small performance gains on RGB-IR datasets (see Fig. \ref{fig:problem}, Rank-1 increases only from 25\% to 48\%). This motivates us to think --- \textit{Is ImageNet pre-training the only starting point for cross-modality image retrieval?}
One intuitive alternative might be `without pre-training', but `training from scratch' is almost impossible owing to the notorious \textit{over-fitting} problem. As shown by the brown line in Fig. \ref{fig:problem}, directly training on RGB-IR ReID datasets from random initialization does not achieve results as competitive as \cite{he2019rethinking}. To mitigate over-fitting, \cite{fu2021unsupervised} perform unsupervised pre-training on the huge LUPerson dataset, gaining a significant performance boost on mainstream ReID tasks. However, such improvement still heavily relies on \textit{expensive} human-collected data. Besides, this dataset contains only RGB data, which is unable to alleviate the above-mentioned \textit{modality bias training} issue.
This paper questions \textit{modality bias training} and \textit{expensive pre-training} by exploring a novel regime: we report that competitive cross-modality ReID accuracy is achievable when directly pre-training on \textit{target RGB-IR datasets, without additional data and manual labels}. More interestingly, we even \textit{surpass} ImageNet supervised pre-training by a large margin with $<$5\% data size \textit{(0.05M vs. 1M)} and no sophisticated tuning tricks. The secret to our success lies in two novel designs. \textbf{First}, we develop a \textit{permutation recovery} pretext task, training an encoder end-to-end to recover the original order of randomly shuffled person images. By globally mapping RGB-IR image pairs into the same permutations, a shared permutation latent space will be learned to narrow the distribution gap between RGB and IR pixels, yielding modality-invariant representations. \textbf{Second}, based on the shuffled image patches, we further advance a part-aware cycle-contrastive learning strategy to capture fine-grained visual cues at the local granularity. Given a specific patch, we treat it as a query to derive two attentive representations within/across modality using soft nearest neighbor retrieval. The retrieved patches form a cycle between modalities, acting as a positive pair for contrastive learning \cite{hjelm2018learning}. Employing such cross-modality cycle-consistency enables contrastive learning for unpaired multi-modal images, effectively improving the local discriminability of learned representations. Compared to prior work \cite{fu2021unsupervised}, our formulation uses natural cross-modality correlations to avoid laborious data collection and augmentation, allowing efficient RGB-IR ReID.
When directly pre-training a simple baseline from scratch \cite{ye2021deep}, our MMGL paradigm achieves \textbf{6.47\% absolute improvement} over its ImageNet counterpart (Fig. \ref{fig:problem}, Purple Line). Extensive experiments demonstrate that this comparability is retained when applying MMGL to various state-of-the-arts. Furthermore, the pre-trained models also show better generalization capability in cross-dataset settings. Our contributions are three-fold: \textbf{(1)} We are the first to explore pre-training solutions for RGB-IR ReID and pioneer a non-ImageNet-powered paradigm MMGL to enable self-supervised pre-training directly on target datasets, effectively solving the \textit{modality bias training} issue with promising data efficiency. \textbf{(2)} We propose a part-aware cycle-contrastive learning strategy to increase the discriminability of cycle-consistent RGB-IR patches, significantly improving the performance of RGB-IR ReID. \textbf{(3)} We conduct extensive experiments to show the good generalization ability of MMGL over various state-of-the-art models, losses, and datasets.
\section{Related Work}
\textbf{RGB-IR Person ReID} focuses on tackling pixel and feature misalignment issues between modalities. Currently, there are mainly two lines of literature: image synthesis and shared feature learning. Image synthesis methods usually adopt generative adversarial networks (GAN) \cite{goodfellow2014generative} to minimize the pixel-level difference across modalities by synthesizing IR/RGB counterparts for RGB/IR images. Following this vein, \cite{wang2019rgb} first propose to combine pixel and feature alignment to learn modality-invariant and discriminative representations. Several studies also apply cross-modality paired image synthesis \cite{wang2020cross}, feature disentanglement \cite{choi2020hi}, and intermediate modality generation \cite{li2020infrared} to enhance the quality of generated images. However, it is ill-posed for GAN-based methods to recover color appearances for IR images, resulting in the deficiency of identity information in fake RGB images \cite{ye2019bi}. Shared feature learning attempts to discover a common feature subspace to align modality distributions. \cite{wu2017rgb} first release a multimodal ReID dataset (SYSU-MM01) and present a deep zero-padding network for RGB-IR ReID. \cite{ye2021channel} propose a two-stream network to extract shared features. Recently, various feature selection approaches have been designed to enhance representation discriminability, including graph neural networks \cite{ye2020dynamic} and automated feature search \cite{chen2021neural}. However, they all adopt ImageNet pre-training as an outset, suffering from the modality bias brought by cross-modality ReID.
\noindent\textbf{Self-Supervised Learning (SSL)} is currently the fastest-growing branch of unsupervised learning, which is mainly exploited to pre-train networks to solve a pre-designed \textit{pretext} task, aiming to learn `universal' representations for downstream tasks from unlabeled raw data using the data itself as supervisory signals \cite{jing2020self}. Over the last decade, self-supervised pre-training has witnessed a wide range of pretext task designs based on data generation \cite{zhang2016colorful}, spatial or temporal contexts \cite{noroozi2016unsupervised}, and multi-modal correspondence \cite{zou2018df}. Across these pretext tasks, contrastive supervision aims to maximize agreement between different data views to forge a discriminative feature subspace \cite{caron2020unsupervised,grill2020bootstrap,zbontar2021barlow,chen2021jigsaw,chen2021exploring}, which has proved effective in learning high-quality representations for downstream tasks \cite{chen2020simple,he2020momentum}. Nevertheless, contrastive correspondence is hard to mine in multi-modal ReID scenarios due to the unpaired nature of heterogeneous images. In this paper, we leverage cycle-consistency \cite{zhu2017unpaired} between human body parts to enable contrastive learning for RGB-IR ReID.
\section{Methodology}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{pipeline.pdf}
\end{center}
\caption{An overview of the proposed Modality-Aware Multiple Granularity Learning (MMGL) pipeline. Generally, MMGL consists of a permutation recovery branch and a cycle-contrast branch at the global and local granularities, respectively. The former aims to learn a shared permutation latent space by recovering the original order of each cross-modality shuffled image pair. The latter seeks to enhance patch discriminative power by maximizing the agreement between patch representations derived with cross-modality cycle-consistency.}
\label{fig:pipeline}
\end{figure*}
Prior work \cite{ye2020dynamic} suggests that `universal' RGB-IR ReID representations should be both modality-invariant and discriminative. This motivates two novel designs of our MMGL pre-training paradigm: \textbf{(a)} \textit{Permutation Recovery} pretext task that maps randomly shuffled RGB-IR image pairs into a shared permutation latent space for global invariant learning. \textbf{(b)} \textit{Part-aware Cycle-Contrastive Learning} strategy that maximizes agreement between cycle-consistent RGB-IR image patches to improve local discriminability. Fig. \ref{fig:pipeline} illustrates the idea, introduced next.
\subsection{Cross-Modality Permutation Recovery}
As shown in Fig. \ref{fig:pipeline}, given a ranking vector $\hat{O}$ that represents a randomly shuffled image patch sequence, the permutation recovery task aims to learn to reconstruct its original counterpart $O$ with a \textit{permutation matrix} $P$ \cite{mena2018learning}. Mathematically, $P$ belongs to the set of 0-1 doubly stochastic matrices, where each non-zero element in the $i$-th row and $j$-th column suggests that the current $i$-th patch should be assigned to the $j$-th place of the sequence. This leads to a regression problem $O=P_{\Theta, \hat{O}}^{-1}\hat{O}$, where we seek to derive $P$ with network parameters $\Theta$. However, the discrete nature of this problem poses great challenges to model optimization. Below, we show how to make this task-solving process compatible with backpropagation.
\noindent\textbf{Permutation Generation.} We introduce a modality-shared shuffling operator $G(X_{rgb},X_{ir},\hat{O})$ that transforms randomly composed cross-modality image pairs $\{X_{rgb},X_{ir}\}$ to their shuffled counterparts $\{\hat{X}_{rgb},\hat{X}_{ir}\}$ (see Fig. \ref{fig:pipeline}, left). Suppose the shuffled image contains $N$ patches; $\hat{O}$ is then a random permutation of the array $[1,\cdot\cdot\cdot,N]$ sampled from the uniform distribution, which denotes the mapping from original image patches to their randomly shuffled positions. For each image with height $H$ and width $W$, the shuffling generates a rearranged sequence of image patches (each of size $H/N\times W$) by choosing the $i$-th patch from the $\hat{O}_{i}$-th position of the original sequence. Note that $\hat{O}$ is shared within each cross-modality image pair, which maps both modality images into a common permutation subspace. This is beneficial for learning invariant features for global modality alignment.
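A minimal sketch of this shared shuffling operator (our own illustration with toy list-based `images'; all names are ours, and a real implementation would operate on tensors) might look as follows:

```python
# Cut each image of an RGB-IR pair into n_patches horizontal strips and
# rearrange both with the SAME random ranking vector o_hat.
import random

def shuffle_pair(x_rgb, x_ir, n_patches, rng=random):
    h = len(x_rgb) // n_patches                      # strip height H/N
    o_hat = list(range(n_patches))
    rng.shuffle(o_hat)                               # one permutation, shared

    def apply(img):
        strips = [img[i * h:(i + 1) * h] for i in range(n_patches)]
        return [row for i in o_hat for row in strips[i]]

    return apply(x_rgb), apply(x_ir), o_hat

rgb = [[v] * 4 for v in range(8)]                    # 8x4 toy "images"
ir = [[v + 100] * 4 for v in range(8)]
s_rgb, s_ir, o_hat = shuffle_pair(rgb, ir, n_patches=4)
# Both modalities are scrambled consistently: strip k of s_rgb and s_ir
# come from the same original position o_hat[k].
assert [r[0] % 100 for r in s_ir] == [r[0] for r in s_rgb]
```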
\noindent\textbf{Permutation Recovery.} We map shuffled images to corresponding affinity matrices $\hat{P}\in\mathbb{R}^{N\times N}$ to recover their original orders. Such mappings can be fitted by an encoder $\mathcal{F}(\hat{X}_{rgb}, \hat{X}_{ir}, \Theta)$ with parameters $\Theta$ that transforms each image into an $N^{2}$-dim feature representation. Specifically, for each $\{\hat{X}_{rgb}, \hat{X}_{ir}\}$, $\mathcal{F}$ learns two global representations $f_{rgb}$ and $f_{ir}$. We then reduce their dimensions to $N^{2}$ using a shared fully-connected layer $\mathcal{G}$, \textit{i.e.,} $\hat{f}_{rgb}, \hat{f}_{ir}=\mathcal{G}(f_{rgb}, f_{ir})$. It is worth noting that the selection of $\mathcal{F}$ should be consonant with downstream supervised models. We transform $\hat{f}$ into an $N\times N$ matrix $\hat{P}$, in which each row and column can be regarded as a logit vector that denotes the probability of a patch belonging to a serial position (\textit{i.e.,} a category).
Nevertheless, it is difficult to directly fit the permutation matrix $P$ with the learned $\hat{P}$, because patch assignments are discrete, making the approximation process non-differentiable. To this end, we introduce the Gumbel-Sinkhorn operator \cite{mena2018learning} to relax $\hat{f}$ to the continuous domain so that it fits a categorical distribution. The Sinkhorn operator is defined as:
\begin{equation}
\begin{aligned}
\operatorname{Sinkhorn}^{0}(\hat{P}) &=\exp (\hat{P}), \\
\operatorname{Sinkhorn}^{l}(\hat{P}) &=\mathcal{T}_{c}\left(\mathcal{T}_{r}\left(\operatorname{Sinkhorn}^{l-1}(\hat{P})\right)\right), \\
\operatorname{Sinkhorn}(\hat{P}) &=\lim _{l \rightarrow \infty} \operatorname{Sinkhorn}^{l}(\hat{P}),
\end{aligned}
\end{equation}
where $\mathcal{T}_{r}(\hat{P})=\hat{P} \oslash\left(\hat{P} \mathbf{1}_{N} \mathbf{1}_{N}^{\top}\right)$ and $\mathcal{T}_{c}(\hat{P})=\hat{P} \oslash\left(\mathbf{1}_{N} \mathbf{1}_{N}^{\top} \hat{P}\right)$ denote the row- and column-wise normalization of a matrix, respectively, $\oslash$ denotes element-wise division, $\mathbf{1}_{N}$ is a column vector of ones, and $l$ is the number of iterations.
Based on the Sinkhorn operator, we can reparameterize the hard choice of the permutation matrix with Gumbel-Softmax distribution \cite{jang2016categorical}:
\begin{equation}
\operatorname{G-Sinkhorn}(\hat{P} / \tau)=\operatorname{Sinkhorn}\left((\hat{P}+\gamma) / \tau\right),
\end{equation}
where $\gamma$ denotes random noises sampled from a Gumbel distribution, $\tau$ is the temperature hyperparameter.
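The truncated Sinkhorn iteration and its Gumbel perturbation can be sketched in NumPy as follows (function names, the max-subtraction stability shift, and the noise sampling scheme are illustrative assumptions; in practice the iteration is truncated at a finite $l$):

```python
import numpy as np

def sinkhorn(p, n_iters=20):
    """Alternate row/column normalization of exp(p); converges toward
    a doubly stochastic matrix (truncated at n_iters in practice)."""
    s = np.exp(p - p.max())  # subtract the max for numerical stability
    for _ in range(n_iters):
        s = s / s.sum(axis=1, keepdims=True)  # T_r: row normalization
        s = s / s.sum(axis=0, keepdims=True)  # T_c: column normalization
    return s

def gumbel_sinkhorn(p, tau=1.0, n_iters=20, rng=None):
    """Relaxation with Gumbel noise gamma and temperature tau."""
    rng = rng or np.random.default_rng()
    gamma = -np.log(-np.log(rng.uniform(size=p.shape)))
    return sinkhorn((p + gamma) / tau, n_iters)
```

Lowering `tau` sharpens the output toward a hard permutation matrix, while the Gumbel noise makes the hard choice reparameterizable for backpropagation.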
After relaxing the learned affinity matrix $\hat{P}$, we could gradually approach the ground truth $P$ via backpropagation, leading to the following permutation reconstruction error:
\begin{equation}
\mathcal{L}_{p}=\sum_{i=1}^{M}\left\|O_{i}-\hat{P}^{-1} \hat{O}_{i}\right\|^{2},
\end{equation}
where $M$ is the size of a mini-batch, and $O$ and $\hat{O}$ are the original and shuffled ranking vectors, respectively.
\subsection{Part-Aware Cycle-Contrastive Learning}
With cross-modality permutation recovery, the pre-trained model is able to learn modality-invariant biometrics (\textit{e.g.,} shape and texture) in favour of modality alignment. However, it may collapse to \textit{low-loss} solutions that hinder it from learning the desired representations, \textit{e.g.,} by directly utilizing boundary patterns and textures that continue across patches to solve the task. Such `\textit{shortcuts}' suppress intra-class compactness, leading to fuzzy decision boundaries for identity recognition.
Recent advances in contrastive learning show its promising capability in learning discriminative representations \cite{chen2020simple,he2020momentum}. With delicately designed data augmentation strategies, it maximizes the agreement between different views of the same image to achieve better intra-class compactness and inter-class discriminability. Nonetheless, due to the unpaired nature of heterogeneous images in our task, it is infeasible to directly apply off-the-shelf contrastive learning pipelines to multi-modal scenarios. As most augmentations are intra-modality, they also cannot reflect cross-modality correlations of identity semantics.
Here, we propose a part-aware cycle-contrastive (PCC) constraint that uses cross-modality cycle consistency to enable contrastive learning at the local granularity. For each image patch, we utilize a \textit{PCB-style} projection head \cite{sun2019learning} to map it into a normalized 256-dim representation (see Fig. \ref{fig:pipeline}, lower branch). After projection, we introduce a forward-backward nearest neighbor process to capture cross-modality cycle-consistency. Given a query representation $q_{i}$, we first derive its soft nearest neighbor $\hat{q}$ from the universal representation set $U$ of counterpart modality. Then, we compute the soft nearest neighbor of $\hat{q}$ backwards within the patch set of the same image. The cycle-consistency is satisfied when the two retrieved soft nearest neighbors are similar. The retrieval process is defined as:
\begin{equation}
\hat{q}_{i}=\sum_{u \in U} \alpha_{q_{i}, u} u,\quad\alpha_{q_{i}, u}=\frac{\exp \left(\operatorname{sim}\left(q_{i}, u\right) / \tau\right)}{\sum_{u^{\prime} \in U} \exp \left(\operatorname{sim}\left(q_{i}, u^{\prime}\right) / \tau\right)},
\end{equation}
where $\tau$ is the temperature and $\operatorname{sim}$ is the cosine similarity.
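The forward retrieval step amounts to a softmax-weighted average over the counterpart set; a sketch assuming l2-normalized embeddings (so the dot product equals cosine similarity):

```python
import numpy as np

def soft_nn(q, U, tau=0.07):
    """Soft nearest neighbor of query q in set U.
    Rows of U and q are assumed l2-normalized, so U @ q is cosine
    similarity and alpha is the softmax attention weight from the
    equation above."""
    alpha = np.exp(U @ q / tau)
    alpha = alpha / alpha.sum()  # softmax over similarities
    return alpha @ U             # convex combination of the rows of U
```

Cycle consistency is then checked by applying `soft_nn` forward into the counterpart modality and backward into the patch set of the original image.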
After the similarity-based forward-backward retrieval, we obtain a cross-modality patch pair $\{\hat{q}_{ir},\hat{q}_{rgb}\}$ with similar semantics and two sets of retrieved representations $\{V_{ir}, V_{rgb}\}$. We regard $\{\hat{q}_{ir},\hat{q}_{rgb}\}$ as a positive pair and $\{V_{ir}, V_{rgb}\}$ as negative sets for contrastive learning:
\begin{equation}
\label{eq:PCC}
\mathcal{L}_{\text {PCC}}=-\log \frac{\exp \left(\operatorname{sim}\left(\hat{q}_{ir}, \hat{q}_{rgb}\right) / \tau\right)}{\sum_{u \in\left\{V_{ir}, V_{rgb}\right\}} \exp \left(\operatorname{sim}\left(\hat{q}_{i}, u\right) / \tau\right)}.
\end{equation}
By pulling together $\{\hat{q}_{ir},\hat{q}_{rgb}\}$ whilst pushing away all negative pairs, the deep network learns to discover modality correspondence across semantically similar body partitions. This explicit supervision facilitates the fine-grained alignment of heterogeneous images, helping to transfer better modality invariance and discriminative power to downstream cross-modality ReID models.
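Eq. (\ref{eq:PCC}) follows the InfoNCE form; the following sketch computes it for a single positive pair (including the positive term in the denominator is our assumption, following common InfoNCE practice, and all vectors are assumed l2-normalized):

```python
import numpy as np

def pcc_loss(q_ir, q_rgb, negatives, tau=0.07):
    """Contrastive loss pulling the cycle-retrieved RGB-IR pair together
    and pushing away negatives (rows of `negatives` stack V_ir and V_rgb).
    Hypothetical single-pair version of the PCC objective."""
    pos = np.exp(q_ir @ q_rgb / tau)
    neg = np.exp(negatives @ q_ir / tau).sum()
    return -np.log(pos / (pos + neg))
```

The loss decreases as the cycle-retrieved pair becomes more similar relative to the negative sets, which is the behaviour the paragraph above describes.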
The overall learning objective of MMGL is formulated as:
\begin{equation}
\mathcal{L}_{\text{MMGL}}=\mathcal{L}_{p}+\lambda\mathcal{L}_{\text {PCC}},
\end{equation}
where $\lambda$ is a trade-off factor to balance each objective.
\subsection{Supervised Fine-Tuning for RGB-IR ReID}
In the fine-tuning stage, we transfer the MMGL pre-trained backbone to downstream models and perform supervised learning for cross-modality image retrieval. Following existing RGB-IR ReID studies \cite{ye2020dynamic}, we optimize the identity cross-entropy loss $\mathcal{L}_{\text{id}}$ and the triplet loss $\mathcal{L}_\text{triplet}$ \cite{hermans2017defense} as follows:
\begin{equation}
\label{loss:ReID}
\mathcal{L}_{\text{ReID}}=\mathcal{L}_{\text{id}}+\mathcal{L}_{\text{triplet}}.
\end{equation}
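The triplet term can be sketched in its standard hinge form (the margin value and the batch-hard mining of \cite{hermans2017defense} are omitted assumptions, not specified here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge triplet loss on Euclidean distances: push the anchor at
    least `margin` closer to the positive than to the negative."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```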
\section{Experiments}
We evaluate MMGL on standard RGB-IR ReID benchmarks. Please refer to the \textbf{Appendix} for more experimental results.
\subsection{Datasets and Evaluation Protocols}
\textbf{Datasets.} Our experiments are based on SYSU-MM01 \cite{wu2017rgb} and RegDB \cite{nguyen2017person} benchmarks.
SYSU-MM01 is currently the largest RGB-IR ReID dataset, collected by 4 RGB and 2 IR cameras. Statistically, the training set contains 22,258 RGB and 11,909 IR images of 395 persons, while the testing set is divided into a query set of 3,803 IR images and an RGB gallery set (96 identities each). The gallery set has two versions according to the evaluation mode. In the \textit{indoor search} mode, only images captured by the two indoor cameras are involved. In the \textit{all search} mode, all images obtained by the four RGB cameras are used.
RegDB is a relatively small dataset acquired by a dual-camera system (\textit{i.e.,} paired RGB and thermal cameras). It includes 412 persons, each with 10 visible and 10 thermal images. Images of 206 randomly sampled persons are used for training, while the rest are used for testing. There are also two evaluation modes, \textit{i.e., visible-thermal} and \textit{thermal-visible}, obtained by alternately using all visible/thermal images as the query set.
\renewcommand\arraystretch{1.2}
\begin{table*}[t]
\caption{MMGL with common baselines on SYSU-MM01 with Rank-1, 10, 20 (\%) and mAP (\%) evaluation metrics.}
\vspace{-0.3cm}
\label{Table:SYSU}
\resizebox{1\textwidth}{!}{
\begin{tabular}{c|c|c|cccc|cccc|cccccccc}
\multirow{3}{*}{Method} &\multirow{3}{*}{Venue} & \multirow{3}{*}{Pre-Train} & \multicolumn{8}{c|}{All-Search} & \multicolumn{8}{c}{Indoor-Search} \\
& & & \multicolumn{4}{c|}{Single-Shot} & \multicolumn{4}{c|}{Multi-Shot} & \multicolumn{4}{c|}{Single-Shot} & \multicolumn{4}{c}{Multi-Shot} \\
& & & r1 & r10 & r20 & mAP & r1 & r10 & r20 & mAP & r1 & r10 & r20 & \multicolumn{1}{c|}{mAP} & r1 & r10 & r20 & mAP \\ \Xhline{1.5pt}
\multirow{3}{*}{One-Stream} &\multirow{3}{*}{TPAMI 2021} & \textcolor{gray}{Random Init.} &\textcolor{gray}{26.29} &\textcolor{gray}{67.86} &\textcolor{gray}{81.26} &\textcolor{gray}{27.28} &\textcolor{gray}{30.83} &\textcolor{gray}{73.07} &\textcolor{gray}{85.53} &\textcolor{gray}{20.81} &\textcolor{gray}{26.05} &\textcolor{gray}{72.15} &\textcolor{gray}{87.99} & \multicolumn{1}{c|}{\textcolor{gray}{36.24}} &\textcolor{gray}{31.39} &\textcolor{gray}{78.86} &\textcolor{gray}{91.82} &\textcolor{gray}{25.96} \\
& & ImageNet-1k &43.66 &83.92 &92.23 &43.70 &50.16 &87.10 &94.50 &37.20 &48.93 &89.28 &96.40 & \multicolumn{1}{c|}{57.86} &56.80 &92.47 &97.11 &49.11 \\
& & MMGL (Ours) &\textbf{50.77} &\textbf{88.01} &\textbf{94.08} &\textbf{49.61} &\textbf{58.77} &\textbf{91.66} &\textbf{96.39} &\textbf{43.13} &\textbf{54.00} &\textbf{91.73} &\textbf{96.77} &\multicolumn{1}{c|}{\textbf{62.57}} &\textbf{62.91} &\textbf{94.01} &\textbf{97.58} &\textbf{53.48} \\ \hline
\multirow{3}{*}{AGW} &\multirow{3}{*}{TPAMI 2021} & \textcolor{gray}{Random Init.} &\textcolor{gray}{25.47} &\textcolor{gray}{66.06} &\textcolor{gray}{80.11} &\textcolor{gray}{26.07} &\textcolor{gray}{32.67} &\textcolor{gray}{75.81} &\textcolor{gray}{87.13} &\textcolor{gray}{22.98} &\textcolor{gray}{26.38} &\textcolor{gray}{70.61} &\textcolor{gray}{86.08} & \multicolumn{1}{c|}{\textcolor{gray}{35.95}} &\textcolor{gray}{33.37} &\textcolor{gray}{80.01} &\textcolor{gray}{92.15} &\textcolor{gray}{27.83} \\
& & ImageNet-1k &48.58 &87.58 &94.83 &49.37 &54.73 &92.46 &96.72 &42.90 &54.67 &89.56 &95.91 & \multicolumn{1}{c|}{42.90} &62.13 &93.42 &97.06 & 54.56 \\
& & MMGL (Ours) &\textbf{55.05} &\textbf{89.61} &\textbf{95.06} &\textbf{53.05} &\textbf{61.64} &\textbf{93.14} &\textbf{97.33} &\textbf{46.09} &\textbf{58.93} &\textbf{93.23} &\textbf{97.51} & \multicolumn{1}{c|}{\textbf{66.34}} &\textbf{65.35} &\textbf{95.11} &\textbf{97.90} &\textbf{57.04} \\
\hline
\multirow{3}{*}{DDAG} &\multirow{3}{*}{ECCV 2020} & \textcolor{gray}{Random Init.} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} & \multicolumn{1}{c|}{\textcolor{gray}{fail}} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} \\
& & ImageNet-1k &54.75 &90.39 &95.81 &53.02 &\textbf{61.83} &92.68 &97.49 &\textbf{47.06} &\textbf{61.02} &94.06 &\textbf{98.41} & \multicolumn{1}{c|}{\textbf{67.98}} &\textbf{69.23} &95.13 &\textbf{98.31} &\textbf{59.42} \\
& & MMGL (Ours) &\textbf{55.65} &\textbf{91.10} &\textbf{96.06} &\textbf{53.51} &60.97 &\textbf{92.88} &\textbf{97.53} &46.85 &59.39 &\textbf{94.32} &97.93 & \multicolumn{1}{c|}{66.96} &69.20 &\textbf{95.35} &98.02 &58.89 \\
\hline
\multirow{3}{*}{NFS} &\multirow{3}{*}{CVPR 2021} & \textcolor{gray}{Random Init.} &\textcolor{gray}{30.45} &\textcolor{gray}{71.83} &\textcolor{gray}{82.97} &\textcolor{gray}{31.43} &\textcolor{gray}{34.18} &\textcolor{gray}{76.63} &\textcolor{gray}{87.34} &\textcolor{gray}{25.01} &\textcolor{gray}{30.03} &\textcolor{gray}{75.59} &\textcolor{gray}{89.57} & \multicolumn{1}{c|}{\textcolor{gray}{40.38}} &\textcolor{gray}{34.70} &\textcolor{gray}{82.66} &\textcolor{gray}{93.16} &\textcolor{gray}{30.09} \\
& & ImageNet-1k &56.91 &91.34 &96.52 &55.45 &63.51 &94.42 &97.81 &48.56 &62.79 &96.53 &99.07 & \multicolumn{1}{c|}{69.79} &70.03 &97.70 &99.51 & 61.45 \\
& & MMGL (Ours) &\textbf{60.83} &\textbf{92.20} &\textbf{97.51} &\textbf{58.36} &\textbf{66.37} &\textbf{96.97} &\textbf{98.58} &\textbf{51.79} &\textbf{66.71} &\textbf{97.83} &\textbf{99.28} &\multicolumn{1}{c|}{\textbf{71.32}} &\textbf{74.57} &\textbf{98.42} &\textbf{99.62} &\textbf{64.31}
\\ \hline
\multirow{3}{*}{CAJ} &\multirow{3}{*}{ICCV 2021} & \textcolor{gray}{Random Init.} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} & \multicolumn{1}{c|}{\textcolor{gray}{fail}} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} \\
& & ImageNet-1k &66.62 &95.44 &98.56 &64.04 &74.07 &96.88 &98.93 &56.75 &71.39 &96.81 &99.31 & \multicolumn{1}{c|}{76.47} &82.11 &97.95 &99.03 &70.22 \\
& & MMGL (Ours) & \textbf{67.82} &\textbf{96.02} &\textbf{98.88} &\textbf{65.25} &\textbf{76.23} &\textbf{97.66} &\textbf{99.29} &\textbf{58.36} &\textbf{74.55} &\textbf{97.88} &\textbf{99.52} & \multicolumn{1}{c|}{\textbf{78.75}} &\textbf{84.32} &\textbf{99.13} &\textbf{99.85} &\textbf{72.42} \\
\end{tabular}
}
\end{table*}
\renewcommand\arraystretch{1}
\begin{table*}[t]\scriptsize
\centering
\caption{Cross-dataset performance evaluation on RegDB with Rank-1, 10, 20 (\%) and mAP (\%) metrics.}
\vspace{-0.3cm}
\label{Table:RegDB}
\begin{tabular}{c|c|c|c|cccc|cccc}
\multirow{2}{*}{Method} & \multirow{2}{*}{Venue} & \multirow{2}{*}{Pre-Train} & \multirow{2}{*}{Source} & \multicolumn{4}{c|}{Visible-Thermal} & \multicolumn{4}{c}{Thermal-Visible} \\
& & & & r1 & r10 & r20 & mAP & r1 & r10 & r20 & mAP \\ \Xhline{1pt}
\multirow{3}{*}{One-Stream} & \multirow{3}{*}{TPAMI 2021} & \textcolor{gray}{Random Init.} & \textcolor{gray}{RegDB} &\textcolor{gray}{17.04} &\textcolor{gray}{33.74} &\textcolor{gray}{44.76} &\textcolor{gray}{19.81} &\textcolor{gray}{16.70} &\textcolor{gray}{33.20} &\textcolor{gray}{44.37} &\textcolor{gray}{19.79} \\
& & ImageNet & ImageNet-1k & 63.17 & 84.02 & 91.89 & 61.32 & 61.39 & 83.27 & 90.99 & 60.12 \\
& & MMGL (Ours) & SYSU-MM01 &\textbf{65.72} &\textbf{85.31} &\textbf{92.25} &\textbf{63.23} &\textbf{64.71} &\textbf{85.28} &\textbf{92.73} &\textbf{63.85} \\ \hline
\multirow{3}{*}{AGW} & \multirow{3}{*}{TPAMI 2021} & \textcolor{gray}{Random Init.} & \textcolor{gray}{RegDB} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} \\
& & ImageNet & ImageNet-1k & 70.73 & 86.46 & 91.41 & 65.04 & 69.85 & 86.31 & 89.62 & 63.66 \\
& & MMGL (Ours) & SYSU-MM01 &\textbf{73.56} &\textbf{88.01} &\textbf{92.87} &\textbf{68.28} &\textbf{72.75} &\textbf{87.03} &\textbf{90.25} &\textbf{68.03} \\ \hline
\multirow{3}{*}{DDAG} & \multirow{3}{*}{ECCV 2020} & \textcolor{gray}{Random Init.} & \textcolor{gray}{RegDB} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} \\
& & ImageNet & ImageNet-1k & \textbf{69.34} & \textbf{86.19} & \textbf{91.49} & \textbf{63.46} & \textbf{68.06} & \textbf{85.15} & \textbf{90.31} & \textbf{61.80} \\
& & MMGL (Ours) & SYSU-MM01 & \multicolumn{1}{c}{68.51} & \multicolumn{1}{c}{85.88} & \multicolumn{1}{c}{90.45} & \multicolumn{1}{c|}{62.73} & \multicolumn{1}{c}{67.89} & \multicolumn{1}{c}{85.00} & \multicolumn{1}{c}{89.64} & \multicolumn{1}{c}{60.27} \\ \hline
\multirow{3}{*}{NFS} & \multirow{3}{*}{CVPR 2021} & \textcolor{gray}{Random Init.} & \textcolor{gray}{RegDB} & \multicolumn{1}{c}{\textcolor{gray}{37.92}} & \multicolumn{1}{c}{\textcolor{gray}{63.57}} & \multicolumn{1}{c}{\textcolor{gray}{72.83}} & \multicolumn{1}{c|}{\textcolor{gray}{38.05}} & \multicolumn{1}{c}{\textcolor{gray}{37.35}} & \multicolumn{1}{c}{\textcolor{gray}{62.81}} & \multicolumn{1}{c}{\textcolor{gray}{71.33}} & \multicolumn{1}{c}{\textcolor{gray}{37.69}} \\
& & ImageNet & ImageNet-1k & \multicolumn{1}{c}{80.54} & \multicolumn{1}{c}{91.96} & \multicolumn{1}{c}{95.07} & \multicolumn{1}{c|}{72.10} & \multicolumn{1}{c}{77.95} & \multicolumn{1}{c}{90.45} & \multicolumn{1}{c}{93.62} & \multicolumn{1}{c}{69.79} \\
& & MMGL (Ours) & SYSU-MM01 & \multicolumn{1}{c}{\textbf{82.24}} & \multicolumn{1}{c}{\textbf{92.38}} & \multicolumn{1}{c}{\textbf{96.16}} & \multicolumn{1}{c|}{\textbf{74.98}} & \multicolumn{1}{c}{\textbf{80.32}} & \multicolumn{1}{c}{\textbf{92.08}} & \multicolumn{1}{c}{\textbf{94.76}} & \multicolumn{1}{c}{\textbf{73.91}}\\ \hline
\multirow{3}{*}{CAJ} & \multirow{3}{*}{ICCV 2021} & \textcolor{gray}{Random Init.} & \textcolor{gray}{RegDB} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} &\textcolor{gray}{fail} \\
& & ImageNet & ImageNet-1k & 83.25 & 94.29 & 97.04 & 77.31 & 82.63 & 94.58 & 97.11 & 75.96 \\
& & MMGL (Ours) & SYSU-MM01 & \multicolumn{1}{c}{\textbf{85.51}} & \multicolumn{1}{c}{\textbf{95.77}} & \multicolumn{1}{c}{\textbf{98.02}} & \multicolumn{1}{c|}{\textbf{79.03}} & \multicolumn{1}{c}{\textbf{84.04}} & \multicolumn{1}{c}{\textbf{95.93}} & \multicolumn{1}{c}{\textbf{97.38}} & \multicolumn{1}{c}{\textbf{77.59}}
\end{tabular}
\end{table*}
\noindent\textbf{Evaluation Protocols.} We adopt the standard RGB-IR ReID evaluation protocols \cite{ye2020dynamic} and use the Cumulative Matching Characteristic (CMC) and mean Average Precision (mAP) as evaluation metrics. Following \cite{wu2017rgb}, experimental results on SYSU-MM01 are based on the average of ten random splits of the gallery and query sets. Following \cite{ye2020dynamic}, we conduct 10-fold cross validation on RegDB and report the average performance.
\subsection{Implementation Details}
We implement MMGL with PyTorch on an Nvidia 2080Ti GPU. All images are resized to 288$\times$144. An SGD optimizer with 0.9 momentum and 0.0005 weight decay is used for optimization. We set the initial learning rate to 0.1 with linear warm-up \cite{ye2020dynamic} and decay it with a cosine schedule without restarts, for a total of 100 epochs.
\noindent\textbf{Pre-Training Stage.} For pre-training, we randomly sample 56 RGB and 56 IR images without labels for each training batch. Each image is split horizontally into 6 equal stripes. Conventional random cropping with zero-padding, horizontal flipping, AugMix \cite{hendrycks2019augmix} and random erasing \cite{zhong2020random} are chosen for data augmentation. Following \cite{mena2018learning}, we use $l=20$ Sinkhorn operator iterations. For each sample, we generate 10 reconstructions using Gumbel perturbations. Following \cite{he2020momentum}, we set $\tau$ to 0.07, and $\lambda$ is empirically set to 0.2.
\noindent\textbf{Fine-Tuning Stage.} For fine-tuning, we drop the \textit{PCB-style} projection head and initialize the model with MMGL checkpoints, while \textit{leaving other default settings (\textit{e.g.,} hyperparameters and data augmentation strategies) unchanged}. Note that these settings are not customized for MMGL; tuning them would likely lead to better results. We leave this to future work.
\subsection{RGB-IR ReID with MMGL}
We first evaluate the proposed method with five state-of-the-art models, including One-stream and AGW \cite{ye2021deep}, DDAG \cite{ye2020dynamic}, NFS \cite{chen2021neural}, and CAJ \cite{ye2021channel}. On SYSU-MM01, a randomly initialized backbone is first pre-trained with MMGL. Then we fine-tune the pre-trained checkpoint with default settings to perform supervised RGB-IR ReID. More experiments on other RGB-IR ReID models are provided in the \textbf{Appendix}.
As shown in Table \ref{Table:SYSU}, when performing supervised training on SYSU-MM01 directly from random initialization, substantially inferior accuracy is observed for all methods. DDAG and CAJ even encounter gradient explosion and fail to converge. This suggests that serious over-fitting occurs due to insufficient RGB-IR training samples.
Our proposed MMGL pre-training strategy achieves consistent and significant improvement (more than \textbf{25\%} Rank-1 and mAP boost on average) over baselines without pre-training. These performance gains are obtained without additional training data or supervised pre-training, which demonstrates that the proposed self-learning technique and part-aware cycle-contrastive constraint provide data-efficient and powerful regularization against over-fitting.
More strikingly, the MMGL pre-trained model even outperforms its ImageNet-supervised counterpart on most state-of-the-art baselines. This indicates that eliminating modality bias has a strong positive effect on cross-modality ReID. Furthermore, the promising results on AGW and NFS demonstrate that MMGL is robust to different backbones and loss functions.
Another noteworthy feature of our approach is its fast convergence. As shown in Fig. \ref{fig:problem}, when pre-training on SYSU-MM01 with the AGW baseline, MMGL reduces the pre-training time from the seven days taken by the ImageNet model to four hours, without any increase in fine-tuning cost. This results in more efficient RGB-IR ReID.
\subsection{Generalization Ability across Datasets}
We further evaluate the cross-dataset generalization capability of MMGL features. Specifically, we pre-train the aforementioned five models with MMGL on SYSU-MM01, and then transfer the learned features to solve RGB-IR ReID on RegDB. We use the same fine-tuning hyperparameters as the ImageNet-supervised model, and provide random initialization results as references. More transfer learning results are included in the \textbf{Appendix}.
Table \ref{Table:RegDB} shows the fine-tuning results on RegDB. The performance gap between no pre-training and ImageNet pre-training widens compared with SYSU-MM01, possibly owing to the even smaller size of RegDB. Severe over-fitting also makes it hard to train models from scratch --- AGW, DDAG, and CAJ even encounter gradient explosion on RegDB.
Surprisingly, MMGL pre-training demonstrates consistent performance and good transferability across datasets, and even surpasses its ImageNet-supervised counterpart when applied to various off-the-shelf models. Note that this improvement is achieved under a large domain shift, as SYSU-MM01 is captured by near-infrared cameras while RegDB is collected by far-infrared sensors. All of the above results suggest that MMGL features are robust and generalizable.
\subsection{Ablation Studies}
We evaluate the effectiveness of each MMGL component on the SYSU-MM01 dataset in both \textit{all search} and \textit{indoor search} modes. For fair comparison, all ablations are conducted on AGW baseline \cite{ye2021deep} with fixed default hyper-parameters. Specifically, $\mathcal{P}$ denotes the permutation recovery, and $\mathcal{C}$ represents the PCC constraint (Eq. \ref{eq:PCC}).
\vspace{-0.5cm}
\renewcommand\arraystretch{1.2}
\begin{table}[h]
\linespread{2}
\caption{Ablation Studies on SYSU-MM01.}
\vspace{-0.5cm}
\label{Table:ablation}
\begin{center}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\multirow{2}{*}{Pre-Train} & \multicolumn{3}{c|}{All-Search} & \multicolumn{3}{c}{Indoor-Search} \\
& r1 & r10 & mAP & r1 & r10 & mAP \\ \Xhline{1.5pt}
\textcolor{gray}{Rand Init} & \textcolor{gray}{25.47} & \textcolor{gray}{66.06} & \textcolor{gray}{26.07} & \textcolor{gray}{26.38} & \textcolor{gray}{70.61} & \textcolor{gray}{35.95} \\
ImageNet-1k & 48.58 & 87.58 & 49.37 & 54.73 & 92.46 & 63.72 \\
$\mathcal{P}$ & 50.99 & 88.25 & 49.77 & 55.53 & 91.29 & 63.33 \\
$\mathcal{P}+\mathcal{C}$ & \textbf{55.05} & \textbf{89.61} & \textbf{53.05} & \textbf{58.93} & \textbf{93.23} & \textbf{66.34} \\
\end{tabular}
}
\end{center}
\end{table}
\vspace{-0.5cm}
As shown in Table \ref{Table:ablation}, when pre-training only with the $\mathcal{P}$ task, AGW presents encouraging performance, even better than ImageNet pre-training. One possible explanation is that the permutation recovery task explicitly exploits body topology, a natural supervision signal that is invariant across modalities. Each cross-modality image pair is mapped into a shared permutation latent space where modality distributions can be better aligned. It is noteworthy that this result is achieved without extra data and labels, suggesting that supervised pre-training on large RGB image sets does not bring much improvement to downstream cross-modality tasks.
Note that the performance boost from permutation recovery alone is modest, as the model may easily find undesired trivial solutions, \textit{e.g.,} by directly exploiting boundary patterns of patches to solve the task. Surprisingly, when imposing the proposed PCC constraint on patch embeddings ($\mathcal{P} + \mathcal{C}$), significant improvement can be observed (\textbf{+3.70\%} Rank-1, \textbf{+3.21\%} mAP). On the one hand, PCC offers discriminative features via patch discrimination. On the other hand, it ensures invariance by attracting cycle-retrieved RGB-IR pairs.
\subsection{Comparison with Other SSL Methods}
We compare MMGL with eight self-supervised learning methods on the SYSU-MM01 dataset. The involved methods can be divided into two categories: \textbf{1)} pretext task design (Colorization, Jigsaw, PSL) and \textbf{2)} contrastive learning (SimCLR, SwAV, BYOL, SimSiam, and Barlow Twins).
\begin{table}[h]\small
\caption{Comparison with other SSL Methods on SYSU-MM01 (Single-Shot \& All-Search).}
\vspace{-0.3cm}
\centering
\label{Table:SSL}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{l|ccc}
Method & r1 & r10 & mAP \\ \Xhline{1.5pt}
Colorization \cite{zhang2016colorful} & 30.57 & 69.02 & 29.49 \\
Jigsaw \cite{noroozi2016unsupervised} & 40.74 & 82.07 & 37.71 \\
PSL \cite{li2021progressive} & 50.86 & 86.12 & 49.35 \\ \hline
SimCLR \cite{chen2020simple} & 46.17 & 84.93 & 43.38 \\
SwAV \cite{caron2020unsupervised} & 48.25 & 85.62 & 45.07 \\
BYOL \cite{grill2020bootstrap} & 52.37 & 87.34 & 50.03 \\
SimSiam \cite{chen2021exploring} & \textcolor{gray}{fail} & \textcolor{gray}{fail} & \textcolor{gray}{fail} \\
Barlow Twins \cite{zbontar2021barlow} & 51.78 & 86.45 & 50.23 \\ \hline
MMGL (Ours) &\textbf{55.05} &\textbf{89.61} &\textbf{53.05}
\end{tabular}
}
\end{table}
From the results in Table \ref{Table:SSL}, we see that Colorization produces modest accuracy on the RGB-IR ReID task. This is because IR images inherently contain no color information. Jigsaw and PSL are more complex pretext tasks and the ones most similar to permutation recovery. Counter-intuitively, they yield suboptimal performance. One possible reason is that pose variations in person images cause severe spatial misalignment in the horizontal direction, rendering them less effective than MMGL in learning discriminative features.
Contrastive learning (CL) methods share a similar learning objective with our proposed PCC constraint, namely instance discrimination \cite{he2020momentum}. They perform better than the other pretext tasks, but still attain inferior results to MMGL. Notably, SimSiam even fails to converge on SYSU-MM01. This is because existing CL methods rely heavily on intra-modality data augmentation to create multiple views of the same sample, which does not explicitly consider the modality discrepancy in RGB-IR ReID. Instead, PCC utilizes cross-modality cycle-consistency to generate positive patch pairs, leading to more modality-invariant representations.
Moreover, it has been shown that existing CL methods do not work well with small batch sizes \cite{chen2020simple}. However, person ReID is typically a single-GPU task where efficiency greatly matters. Our proposed paradigm exploits body partitions to increase the number of negative samples with negligible memory cost. Overall, MMGL consistently outperforms state-of-the-art SSL methods.
\section{Conclusion}
This paper makes the first attempt to investigate the pre-training solution for RGB-IR cross-modality person ReID. To overcome the modality bias issue raised by ImageNet pre-training, we propose a self-supervised MMGL pre-training paradigm that allows ReID models to be pre-trained and fine-tuned directly on existing cross-modality pedestrian datasets, showing superior performance and robustness against over-fitting. By solving a permutation recovery pretext task, MMGL learns highly-invariant representations across modalities. We further propose a part-aware cycle-contrastive learning strategy to learn correspondence between unpaired RGB-IR patches, significantly improving the discriminability of local features. Extensive experiments reveal the effectiveness and transferability of MMGL, even surpassing its ImageNet supervised counterpart without extra data or manual labels.
\newpage
\bibliographystyle{named}
{\fontsize{7.5pt}{7.5pt}\selectfont
\section{Introduction}
Topological phases have attracted much attention in the context of solid
state materials\cite{Hasan,Qi} with the emergence of topological edge
states. They are generalized to higher-order topological phases\cite{Fan,Science,APS,Peng,Lang,Song,Bena,Schin,FuRot,EzawaKagome,Khalaf}, where topological corner states and topological hinge states emerge. Recently, they are also found in various linear systems such as photonic\cite{KhaniPhoto,Hafe2,Hafezi,WuHu,TopoPhoto,Ozawa16,Ley,KhaniSh,Zhou,Jean,Ota18,Ozawa,Ota19,OzawaR,Hassan,Ota,Li,Yoshimi,Kim,Iwamoto21}, acoustic\cite{Prodan,TopoAco,Berto,Xiao,He,Abba,Xue,Ni,Wei,Xue2}, mechanical\cite{Lubensky,Chen,Nash,Paul,Sus,Sss,Huber,Mee,Kariyado,Hannay,Po,Rock,Takahashi,Mat,Taka,Ghatak,Wakao} and electric circuit\cite{TECNature,ComPhys,Hel,Lu,YLi,EzawaTEC,Research,Zhao,EzawaLCR,EzawaSkin,Garcia,Hofmann,EzawaMajo,Tjunc,Lee,Kot} systems. Now, nonlinear topological photonics is an emerging field\cite{Ley,Zhou,Smi,Kruk,MacZ}, where nonlinearity is naturally introduced by the
Kerr effect. Nonlinear higher-order topological phases have been
experimentally studied in photonics\cite{Zange,Kirch}. Topological edge
states and topological corner states have been observed in nonlinear systems
just as in linear systems.
It is a hard task to construct a general theory of the topological physics
in nonlinear systems because there are many ways to introduce nonlinearity.
It would be necessary to make individual studies of typical nonlinear models
to achieve at a systematic understanding. We studied the dimerized
Toda-lattice model\cite{TopoToda} and a nonlinear mechanical system\cite{MechaRot} in previous works. These models contain the Su-Schrieffer-Heeger
(SSH) model as an essential term. Indeed, these models are reduced to the
dynamical SSH model provided the nonlinear term is ignored, where the
topological number is well defined and the zero-mode edge state emerges in
the topological phase. Then, we carried out numerical analysis to show the
topological physics is valid even in the presence of the nonlinear term.
These models have only two phases, the topological phase and the trivial
phase in the phase diagram in the ($\lambda ,\xi $) plane, with $\lambda $
the\ dimerization parameter and $\xi $ the nonlinearity parameter.
\begin{figure}[t]
\centerline{\includegraphics[width=0.48\textwidth]{KagomeIllust}}
\caption{Illustration of (a) a dimerized lattice and (b) a breathing Kagome
lattice with (1) $t_{A}=0$, (2) $t_{A}t_{B}\neq 0$ and (3) $t_{B}=0$. A line
(triangle) contains many small segments (triangles). At the edges (corners)
of the chain (triangle), there are two (three) isolated atoms for $t_{A}=0$,
while there are dimer (trimer) states for $t_{B}=0$. They are marked by
dotted circles. The size of the line (triangle) is $L=5$. }
\label{FigKagomeIllust}
\end{figure}
In this paper, we study the quench dynamics governed by a nonlinear Schr\"{o}dinger equation consisting of the hopping term with the hopping matrix $M_{nm}$ and the nonlinear term proportional to the nonlinearity parameter $\xi $. In the quench dynamics, we give a pulse to a lattice point and
explore its time evolution. The dynamics is sensitive to the presence of the
topological edge and corner states. We perform a numerical analysis in a
wide region of parameters and construct phase diagrams in the ($\lambda ,\xi
$) plane. We are interested in the systems which describe nontrivial
topological dynamics in the linear limit ($\xi =0$). As explicit examples,
we take $M_{nm}$ on the SSH lattice and on the breathing Kagome lattice. We
confirm analytically the validity of the topological dynamics in the weak\
nonlinearity regime ($\xi \ll 1$) based on the first-order perturbation
theory in $\xi $. We show that the topological phase boundary between the
topological and trivial phases is well defined and\ not modified in this
weak nonlinearity regime.\ In the strong nonlinearity regime ($\xi \gg 1$)
where the nonlinear term is dominant, we obtain analytically the
nonlinearity-induced localization phase, where the state is localized due to
the nonlinear term. It is unrelated to the topological physics because the
term $M_{nm}$ is irrelevant in this regime. The transition from the weak to
the strong nonlinearity regime is a transition from extended states to
localized states. We have also found a new phase formed by a cooperative
effect of these two terms, which is the oscillation-mode phase in the
vicinity of the dimerized nonlinear SSH model and the trimerized breathing
Kagome model illustrated in Fig.\ref{FigKagomeIllust}(a3) and (b3),
respectively.
This paper is composed as follows. In Section \ref{SecSelfTrap}, we review
the nonlinear Schr\"{o}dinger equation. We discuss it analytically in the
linear limit ($\xi =0$), in the weak nonlinearity regime ($\xi \ll 1$) and
in the strong nonlinearity regime ($\xi \gg 1$). We find that the
topological phase transition point does not change in the weak nonlinear
regime. On the other hand, the system turns into the nonlinearity-induced
localization phase in the strong nonlinearity regime. In Section \ref{SecSSH}, we explicitly study the nonlinear SSH model, where the phase
diagram is determined by a numerical analysis. It consists of the
topological phase, the trivial phase, the nonlinearity-induced localization
phase and the dimer phase. We discuss the origin of the dimer phase as a
cooperative effect of the hopping term and the nonlinear term. In Section \ref{SecKagome} we explicitly study the nonlinear second-order topological phase
on the breathing Kagome lattice, where the phase diagram is constructed by a
numerical analysis. The analysis and the results are quite similar to those
in the nonlinear SSH model except for the trimer phase replacing the dimer
phase.
\section{Nonlinear Schr\"{o}dinger equation\label{SecSelfTrap}}
A typical nonlinear equation is the nonlinear Schr\"{o}dinger equation
\begin{equation}
i\frac{\partial \psi }{\partial t}+\varepsilon \frac{\partial ^{2}\psi }{\partial x^{2}}+\xi \left\vert \psi \right\vert ^{2}\psi =0,
\end{equation}
where the third term is a nonlinear term. It is introduced by the Kerr
effect in the case of photonic systems\cite{Szameit,Chris}. The nonlinearity
is controlled by the parameter $\xi $, where large $\xi $ indicates strong
nonlinearity.
There is a lattice version of the above equation,
\begin{equation}
i\frac{d\psi _{n}}{dt}+\varepsilon \left( \psi _{n+1}-2\psi _{n}+\psi
_{n-1}\right) +\xi \left\vert \psi _{n}\right\vert ^{2}\psi _{n}=0,
\label{DNLS}
\end{equation}
which is called the discrete nonlinear Schr\"{o}dinger equation\cite{Cai,Kev}. There are two conserved quantities. One is the Hamiltonian\cite{Eil,Szameit,Korab}
\begin{equation}
H=\sum_{n=1}^{N}\left( \varepsilon \left\vert \psi _{n+1}-\psi
_{n}\right\vert ^{2}-\frac{\xi }{2}\left\vert \psi _{n}\right\vert
^{4}\right) ,
\end{equation}
and the other is the excitation number
\begin{equation}
N_{\text{exc}}=\sum_{n=1}^{N}\left\vert \psi _{n}\right\vert ^{2}.
\label{Normal}
\end{equation}
The discrete nonlinear Schr\"{o}dinger equation (\ref{DNLS}) is defined on
the one-dimensional lattice. It is generalized to a nonlinear equation on an
arbitrary lattice\cite{Eil,Szameit,Chris},
\begin{equation}
i\frac{d\psi _{n}}{dt}+\sum_{m=1}^{N}M_{nm}\psi _{m}+\xi \left\vert \psi
_{n}\right\vert ^{2}\psi _{n}=0, \label{DST}
\end{equation}
where $M_{nm}$ represents a hopping matrix, and $N$ is the number of
lattice sites.
We investigate such
a system that contains the topological and trivial phases provided the
nonlinear term is ignored. We study analytically and numerically the phase
diagram of the model (\ref{DST}). The main issue is how the topological
phase defined in the linear model ($\xi =0$) is robust against the
introduction of the nonlinear term.
There are two conserved quantities\cite{Korab}. One is the Hamiltonian
\begin{equation}
H=-\sum_{n,m=1}^{N}M_{nm}\psi _{n}^{\ast }\psi _{m}-\frac{\xi }{2}\sum_{n=1}^{N}\left\vert \psi _{n}\right\vert ^{4},
\end{equation}
and the other is the excitation number (\ref{Normal}).
We analyze the quench dynamics by imposing an initial condition
\begin{equation}
\psi _{n}\left( t\right) =\delta _{n,m}\qquad \text{at}\qquad t=0.
\label{IniCon}
\end{equation}
Namely, giving a delta-function-type input at the site $m$ initially, we
study its time evolution. Because of the conservation rule (\ref{Normal}),
the condition
\begin{equation}
\sum_{n=1}^{N}\left\vert \psi _{n}\right\vert ^{2}=1 \label{NormConser}
\end{equation}
is imposed throughout the time evolution.
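As an illustrative sketch (our own addition, with arbitrary parameters), the quench protocol of Eqs.(\ref{DST}) and (\ref{IniCon}) can be integrated numerically; the uniform chain, the step size and the fixed-step Runge-Kutta integrator below are illustrative choices, and the conserved excitation number (\ref{NormConser}) serves as a consistency check.

```python
import numpy as np

# Quench dynamics of i dpsi/dt + M psi + xi |psi|^2 psi = 0 with a
# delta-function input, integrated by fixed-step RK4 (illustrative choices).

def rhs(psi, M, xi):
    # equation of motion rearranged: dpsi/dt = i (M psi + xi |psi|^2 psi)
    return 1j * (M @ psi + xi * np.abs(psi) ** 2 * psi)

def evolve(psi0, M, xi, t_max, dt=1e-3):
    psi = np.asarray(psi0, dtype=complex)
    for _ in range(int(round(t_max / dt))):
        k1 = rhs(psi, M, xi)
        k2 = rhs(psi + 0.5 * dt * k1, M, xi)
        k3 = rhs(psi + 0.5 * dt * k2, M, xi)
        k4 = rhs(psi + dt * k3, M, xi)
        psi = psi + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return psi

N, m = 20, 0
# uniform chain: hopping 1 with the diagonal of the discrete Laplacian
M = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
psi0 = np.zeros(N)
psi0[m] = 1.0                      # delta-function input at site m
psi = evolve(psi0, M, xi=0.5, t_max=5.0)
norm = np.sum(np.abs(psi) ** 2)    # excitation number, conserved in time
print(norm)
```

The norm stays at unity up to the integrator error, which is the numerical counterpart of the conservation rule (\ref{NormConser}).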
A comment is in order. It is possible to eliminate the nonlinearity
parameter $\xi $ entirely from Eq.(\ref{DST}). By setting $\psi _{j}=\psi
_{j}^{\prime }/\sqrt{\xi }$, we may rewrite (\ref{DST}) as
\begin{equation}
i\frac{d\psi _{n}^{\prime }}{dt}+\sum_{m}M_{nm}\psi _{m}^{\prime
}+\left\vert \psi _{n}^{\prime }\right\vert ^{2}\psi _{n}^{\prime }=0.
\label{DST1}
\end{equation}
The initial condition (\ref{IniCon}) is replaced by
\begin{equation}
\psi _{n}^{\prime }\left( t=0\right) =\sqrt{\xi }\delta _{n,m}.
\label{IniCon22}
\end{equation}
Namely, the quench dynamics subject to Eq.(\ref{DST}) is reproduced by the
nonlinear equation (\ref{DST1}) with the modified initial condition (\ref{IniCon22}). Consequently, it is possible to use a single sample to
investigate the quench dynamics at various nonlinearity $\xi $ only by
changing the initial condition as in (\ref{IniCon22}). Nevertheless, we use
the form of Eq.(\ref{DST}) throughout the paper to make the nonlinear effect
manifest.
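This rescaling can be verified numerically; the short sketch below (an illustration of ours, with arbitrary chain size, site and $\xi$) evolves Eq.(\ref{DST}) and Eq.(\ref{DST1}) side by side and compares the two trajectories.

```python
import numpy as np

# Check of the rescaling psi = psi'/sqrt(xi): evolving Eq. (DST) with
# strength xi coincides with evolving Eq. (DST1) (xi = 1) from the rescaled
# input sqrt(xi) delta_{n,m}, after dividing by sqrt(xi).

def rhs(psi, M, xi):
    return 1j * (M @ psi + xi * np.abs(psi) ** 2 * psi)

def evolve(psi0, M, xi, t_max, dt=1e-3):
    psi = np.asarray(psi0, dtype=complex)
    for _ in range(int(round(t_max / dt))):
        k1 = rhs(psi, M, xi)
        k2 = rhs(psi + 0.5 * dt * k1, M, xi)
        k3 = rhs(psi + 0.5 * dt * k2, M, xi)
        k4 = rhs(psi + dt * k3, M, xi)
        psi = psi + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return psi

N, m, xi = 10, 3, 0.7                 # arbitrary test values
M = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
delta = np.zeros(N)
delta[m] = 1.0
psi = evolve(delta, M, xi, t_max=2.0)                   # Eq. (DST)
psi_p = evolve(np.sqrt(xi) * delta, M, 1.0, t_max=2.0)  # Eq. (DST1)
mismatch = np.max(np.abs(psi - psi_p / np.sqrt(xi)))
print(mismatch)
```

Since the Runge-Kutta map commutes with the linear rescaling, the two trajectories agree to floating-point accuracy, not merely to the integrator error.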
\begin{figure*}[t]
\centerline{\includegraphics[width=0.98\textwidth]{Dynamics}}
\caption{Bird's eye's view of time evolution of the amplitude $|\protect\psi
_{n}|$\ in the nonlinear SSH model. The horizontal axes are the site index
$n$ and the time $t$ ranging $0\leq t \leq 30$. (a1)$\sim$(a4) Topological
phase ($\protect\lambda =-0.5$). (b1)$\sim $(b4) Trivial phase ($\protect\lambda =0.5$). We have set $\protect\xi =0$ for (a1) and (b1), $\protect\xi
=0.1$ for (a2) and (b2), $\protect\xi =0.5$ for (a3) and (b3), and $\protect\xi =1$ for (a4) and (b4). }
\label{FigDynamics}
\end{figure*}
\subsection{Linearized model}
We first study the linear limit by setting $\xi =0$,
\begin{equation}
i\frac{d\psi _{n}}{dt}+\sum_{m}M_{nm}\psi _{m}=0. \label{TopoDyna}
\end{equation}
We diagonalize $M_{nm}$ as
\begin{equation}
M\bar{\psi}_{p}=E_{p}\bar{\psi}_{p}, \label{EigenA}
\end{equation}
where $p$ labels the eigen index, $1\leq p\leq N$. Then, we obtain decoupled
equations
\begin{equation}
i\frac{d\bar{\psi}_{p}}{dt}+E_{p}\bar{\psi}_{p}=0,
\end{equation}
whose solutions are given by
\begin{equation}
\bar{\psi}_{p}\left( t\right) =\exp \left[ itE_{p}\right] \bar{\psi}_{p}\left( 0\right) . \label{LSol}
\end{equation}
The initial state is expanded as
\begin{equation}
\psi _{n}\left( 0\right) =\delta _{n,m}=\sum_{p}c_{p}\bar{\psi}_{p}\left(
0\right) . \label{Expand}
\end{equation}
Because Eq.(\ref{TopoDyna}) is a linear model, the topological numbers
defined with respect to $M_{nm}$ determine the topological phases of the
system.
There are localized states in a topological phase known as zero-mode edge
states in the one-dimensional topological phase and zero-mode corner states
in the two-dimensional second-order topological phase. We impose the
initial condition (\ref{IniCon}) with $m=1$, or
\begin{equation}
\psi _{n}\left( 0\right) =\delta _{n,1}, \label{IniConA}
\end{equation}
where the site $n=1$ denotes the left edge of a chain or the top corner of a
triangle. The zero-mode edge (corner) state is given in terms of an
eigenstate of $M_{nm}$, which we assume to be $\bar{\psi}_{1}$ with $E_{1}=0$
in (\ref{EigenA}). With the use of the expansion (\ref{Expand}), the
zero-mode edge (corner) state $\bar{\psi}_{1}$ is well approximated by $\psi
_{1}$ at $t=0$, or
\begin{equation}
\psi _{1}\left( 0\right) \simeq c_{1}\bar{\psi}_{1}\left( 0\right) .
\end{equation}
Since the zero-mode edge (corner) state has the zero energy, there is no
dynamics,
\begin{equation}
\psi _{1}\left( t\right) =c_{1}\bar{\psi}_{1}\left( 0\right) .
\end{equation}
As a result, there remains a finite component $c_{1}$ at the edge (corner)
site even after time evolution.
On the other hand, there is no zero-mode localized state at the edge
(corner) in the trivial phase, and the state $\psi _{n}\left( t\right) $
rapidly penetrates into the bulk. Consequently, it is possible to
differentiate the topological and trivial phases numerically by checking
whether there remains a finite component or not under the initial condition (\ref{IniConA}).
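This linear diagnostic admits a compact spectral implementation; the sketch below (our own illustration, with arbitrary chain length and evolution time) evolves the quench exactly through the eigendecomposition of an SSH-type hopping matrix. The uniform on-site term $-(t_{A}+t_{B})$ only contributes a global phase and is dropped here.

```python
import numpy as np

# Spectral solution of the linearized quench: psi(t) = exp(iMt) delta_{n,1}.
# For SSH alternating bonds, |psi_1(t)| stays finite in the topological
# phase (t_A < t_B) and decays in the trivial phase (t_A > t_B).

def ssh_hopping(n_sites, tA, tB):
    t = np.where(np.arange(n_sites - 1) % 2 == 0, tA, tB)  # alternating bonds
    return np.diag(t, 1) + np.diag(t, -1)

def edge_amplitude(lam, n_sites=40, t=30.0):
    tA, tB = (1 + lam) / 2, (1 - lam) / 2     # lambda = (tA - tB)/(tA + tB)
    E, V = np.linalg.eigh(ssh_hopping(n_sites, tA, tB))
    c = V.conj().T[:, 0]                      # expansion of delta_{n,1}
    return abs((V @ (np.exp(1j * E * t) * c))[0])

topo = edge_amplitude(-0.5)   # topological side: edge weight survives
triv = edge_amplitude(+0.5)   # trivial side: the pulse spreads into the bulk
print(topo, triv)
```

The surviving edge amplitude on the topological side is close to the weight of the zero-mode edge state in the initial delta-function input.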
\begin{figure*}[t]
\centerline{\includegraphics[width=0.98\textwidth]{SSHDiagram}}
\caption{(a1)$\sim$(a3) Phase diagram of the nonlinear SSH model. (a1)
Bird's eye's view, (a2) top view and (a3) schematic illustration of the
phase diagram. (b1)$\sim$(b4) Amplitude $|\protect\psi_1|$ as a function of
$\protect\lambda $ for various $\protect\xi $. (b1) $\protect\xi =0$, (b2)
$\protect\xi =0.1$, (b3) $\protect\xi =1$, and (b4) $\protect\xi =5$.}
\label{FigSSHDiagram}
\end{figure*}
\subsection{Weak nonlinear regime}
\label{SecWeak}
We study the weak nonlinear regime of Eq.(\ref{DST}), which is the regime
where the first-order perturbation in $\xi $ is valid. We may insert the
linear solution (\ref{LSol}) to the nonlinear term proportional to $\xi $ in Eq.(\ref{DST}), i.e.
\begin{equation}
\xi \left\vert \psi _{n}(t)\right\vert ^{2}\psi _{n}(t)=\xi \left\vert \psi
_{n}\left( 0\right) \right\vert ^{2}\psi _{n}(t)+O(\xi ^{2}),
\end{equation}
and obtain
\begin{equation}
i\frac{d\psi _{n}}{dt}+\sum_{m}\overline{M}_{nm}\psi _{m}=0,
\label{TopoDynaA}
\end{equation}
where
\begin{equation}
\overline{M}_{nm}\equiv M_{nm}+\delta _{nm}\xi \left\vert \psi _{n}\left(
0\right) \right\vert ^{2}.
\end{equation}
The second term $\delta _{nm}\xi \left\vert \psi _{n}\left( 0\right)
\right\vert ^{2}$ may be regarded as an on-site random potential in the
linearized model. Then, the topological phase is robust against the on-site
potential as far as the bulk gap does not close or equivalently $\xi
\left\vert \psi _{n}\left( 0\right) \right\vert ^{2}$ is smaller than the
gap of $M_{nm}$. Consequently, in the weak nonlinearity regime, the
topological number is well defined, and the topological phase boundary is
unchanged as $\xi $\ increases. See explicit examples in Sec.\ref{SecSSH}
and Sec.\ref{SecKagome}, where the topological phase diagrams are
numerically constructed.
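The gap-protection argument can be made quantitative with a small numerical sketch (our own illustration, with arbitrary parameters): for the quench input $\psi _{n}(0)=\delta _{n,1}$, the effective matrix $\overline{M}_{nm}$ differs from $M_{nm}$ only by an on-site potential $\xi $ at the edge site, and the midgap edge level shifts with $\xi $ while remaining inside the bulk gap for small $\xi $.

```python
import numpy as np

# Effective matrix Mbar of Eq. (TopoDynaA) for the SSH hopping matrix:
# the nonlinearity acts as an on-site potential xi at the edge site only.

def ssh_hopping(n_sites, tA, tB):
    t = np.where(np.arange(n_sites - 1) % 2 == 0, tA, tB)
    return np.diag(t, 1) + np.diag(t, -1)

tA, tB = 0.25, 0.75        # topological side; the bulk gap edge sits at |tB - tA| = 0.5
edge_energy = {}
for xi in (0.0, 0.1, 0.3):
    Mbar = ssh_hopping(40, tA, tB)
    Mbar[0, 0] += xi        # xi |psi_n(0)|^2 = xi delta_{n,1}
    E, V = np.linalg.eigh(Mbar)
    w = int(np.argmax(np.abs(V[0, :])))   # state with the largest edge weight
    edge_energy[xi] = float(E[w])
print(edge_energy)          # midgap level moves with xi yet stays inside the gap
```

The first-order shift is $\xi \left\vert \bar{\psi}_{1}(1)\right\vert ^{2}$, so the edge level leaves the gap only when $\xi $ becomes comparable to the gap of $M_{nm}$, in line with the argument above.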
\subsection{Strong nonlinear regime}
\label{SecStrong}
We next study the strong nonlinear regime ($\xi \gg 1$), which is the regime
where the hopping term is negligible with respect to the nonlinear term. We
may approximate Eq.(\ref{DST}) as
\begin{equation}
i\frac{d\psi _{n}}{dt}=-\xi \left\vert \psi _{n}\right\vert ^{2}\psi _{n},
\end{equation}
where all equations are separated. We set
\begin{equation}
\psi _{n}\left( t\right) =r_{n}e^{i\theta _{n}\left( t\right) }, \label{rt}
\end{equation}
and make an ansatz that $r_{n}$ is a constant in the time $t$. This ansatz
is confirmed numerically in Sec.\ref{SecSSH} and Sec.\ref{SecKagome}. Then,
the solution is given by
\begin{equation}
\theta _{n}=\xi r_{n}^{2}t+c.
\end{equation}
Hence, the amplitude does not decrease. Due to the norm conservation (\ref{NormConser}),
we find $\left\vert \psi _{n}\left( t\right) \right\vert =\delta _{nm}$. Namely,
the state $\psi _{n}$ does not spread under the initial condition (\ref{IniCon}),
as we will see by taking an explicit model in Sec.\ref{SecDNS}.
This phase may be referred to as the nonlinearity-induced localization phase.
We note that there is no concept of topology in the strong nonlinear regime
because the $M_{nm}$ term is irrelevant. This property is confirmed in Sec.\ref{SecSSH} and Sec.\ref{SecKagome} based on explicit examples.
\subsection{Dynamics of edge or corner state}
We consider the case where the edge (corner) is perfectly decoupled from
all sites in the bulk. See Fig.\ref{FigKagomeIllust}(a1) and (b1) for
examples. It is enough to solve the single differential equation,
\begin{equation}
i\frac{d\psi _{1}}{dt}=\varepsilon \psi _{1}-\xi \left\vert \psi
_{1}\right\vert ^{2}\psi _{1},
\end{equation}
where $\varepsilon $ is the on-site energy of the site $n=1$. As in the
strong nonlinear regime, we assume the condition (\ref{rt}), and we obtain
\begin{equation}
-r_{1}e^{i\theta _{1}\left( t\right) }\frac{d\theta _{1}\left( t\right) }{dt}
=\varepsilon r_{1}e^{i\theta _{1}\left( t\right) }-\xi r_{1}^{3}e^{i\theta
_{1}\left( t\right) },
\end{equation}
or
\begin{equation}
\frac{d\theta _{1}\left( t\right) }{dt}=-\varepsilon +\xi r_{1}^{2}.
\end{equation}
The solution is given by
\begin{equation}
\theta _{1}=\left( -\varepsilon +\xi r_{1}^{2}\right) t+c,
\end{equation}
with a constant $c$.
with a constant $c$. It shows that the amplitude does not change as a
function of the time $t$.
\section{Nonlinear SSH model\label{SecSSH}}
\subsection{Model}
We consider explicit models. The first example is the nonlinear SSH model\cite{Hadad,Gor,Tulo,Zhou}, where the nonzero elements of $M_{nm}$ are given by
\begin{equation}
M_{nn}=-\left( t_{A}+t_{B}\right) ,\quad M_{2n-1,2n}=M_{2n,2n-1}=t_{A},\quad M_{2n,2n+1}=M_{2n+1,2n}=t_{B}. \label{HoppingMmn}
\end{equation}
We illustrate the lattice model of the SSH model in Fig.\ref{FigKagomeIllust}, which is a dimerized lattice. For $t_{A}=0$, two edge sites are perfectly
decoupled whereas all other bulk sites are dimerized as in Fig.\ref{FigKagomeIllust}(a1). On the other hand, for $t_{B}=0$, all of the sites
are dimerized as in Fig.\ref{FigKagomeIllust}(a3).
The equations of motion (\ref{DST}) read
\begin{align}
i\frac{d\psi _{2n-1}}{dt}& =t_{B}\left( \psi _{2n-2}-\psi _{2n-1}\right)
+t_{A}\left( \psi _{2n}-\psi _{2n-1}\right) \notag \\
& -\xi \left\vert \psi _{2n-1}\right\vert ^{2}\psi _{2n-1}, \label{EqA} \\
i\frac{d\psi _{2n}}{dt}& =t_{A}\left( \psi _{2n-1}-\psi _{2n}\right)
+t_{B}\left( \psi _{2n+1}-\psi _{2n}\right) \notag \\
& -\xi \left\vert \psi _{2n}\right\vert ^{2}\psi _{2n}, \label{EqB}
\end{align}
with alternating bondings $t_{A}$ and $t_{B}$. We introduce the dimerization
control parameter defined by
\begin{equation}
\lambda =\frac{t_{A}-t_{B}}{t_{A}+t_{B}}, \label{SpringCon}
\end{equation}
where $|\lambda |\leq 1$.
\begin{figure}[t]
\centerline{\includegraphics[width=0.48\textwidth]{DimerDynamics}}
\caption{(a1)$\sim$(a4) Time evolution of Re[$\protect\psi_1(t)$] in the
nonlinear Schr\"{o}dinger model on the dimer described by Eqs.(\protect\ref{dimer1}) and (\protect\ref{dimer2}).
(a1), (b1) $\protect\xi =0$; (a2),
(b2) $\protect\xi =0.1$; (a3), (b3) $\protect\xi =0.5$; (a4), (b4) $\protect\xi =1$. (a1)$\sim$(a4) The vertical axis is Re[$\protect\psi_1(t)$] and the
horizontal axis is the time $t$. (b1)$\sim$(b4) Fourier component of Re[$\protect\psi_1(\protect\omega )$]. The horizontal axis is the frequency
$\protect\omega$, which is the Fourier component of the time $t$, while the
vertical axis is Re[$\protect\psi_1(\protect\omega )$].}
\label{FigDimerDynamics}
\end{figure}
\begin{figure*}[t]
\centerline{\includegraphics[width=0.98\textwidth]{Localized}}
\caption{ (a1)$\sim$(c1) Amplitude $|\protect\psi _{1}|$ after enough time
as a function of $\protect\xi$. (a2)$\sim $(a7), (b2)$\sim $(b7) and (c2)$\sim $(c7) Time evolution of $|\protect\psi _{n}|$ in the discrete nonlinear
Schr\"{o}dinger equation (\protect\ref{DNLS}). (a1)$\sim$(a7) topological
phase at $\protect\lambda =-0.5$, (b1)$\sim$(b7) topological phase boundary
at $\protect\lambda =0$, and (c1)$\sim$(c7) trivial phase at $\protect\lambda =0.5$.}
\label{FigLocalized}
\end{figure*}
\subsection{Phase diagram}
Starting from the initial condition (\ref{IniConA}), we explore the time
evolution of $\psi _{n}$ for various $\xi $ and show the results in Fig.\ref{FigDynamics}(b). As indicated by a general consideration given before, we
find that there remains a finite component at the edge site ($n=1$) in the
topological phase, while it is almost zero in the trivial phase.
We show the absolute value of $\psi _{1}$ after enough time as a function of
$\lambda $ for various $\xi $ in Fig.\ref{FigSSHDiagram}(b1)$\sim $(b4).
First, we study the linear model as shown in Fig.\ref{FigSSHDiagram}(b1).
The amplitude $\left\vert \psi _{1}\right\vert $ is finite in the
topological phase, while it is almost zero in the trivial phase. The overall
structure is almost identical in the linear limit ($\xi =0$) and in the weak
nonlinear regime ($\xi =0.1$) as shown in Fig.\ref{FigSSHDiagram}(b2). For
medium nonlinearity ($\xi =1$), there appears an oscillation mode for
$\lambda \geq 0.55$, as shown in Fig.\ref{FigSSHDiagram}(b3). We will argue
that this is due to the dimerization effect in Sec.\ref{SecDimer}. For
strong nonlinearity ($\xi =5$), the amplitude $\left\vert \psi
_{1}\right\vert $ is almost 1 for $\lambda \leq 0.52$. We have already
argued that this is due to the nonlinearity-induced localization in Sec.\ref{SecStrong}.
There are four phases. First, we have the topological and trivial phases in
the weak nonlinear regime. The topological phase boundary is almost
independent of the nonlinearity $\xi $. The amplitude gradually decreases
from $1$ to $0$ depending on the dimerization from $\lambda =-1$ to $\lambda
=0$, as we have argued in Sec.\ref{SecWeak}. On the other hand, there is a
nonlinearity-induced localization phase for large $\xi $. The amplitude is
almost $1$ entirely in the nonlinearity-induced localization phase, as we
have argued in Sec.\ref{SecStrong}.
In addition, there is a dimer phase in the vicinity of $\lambda \simeq 1$,
where the system is almost dimerized. The states $\psi _{1}$ and $\psi _{2}$ oscillate between the two adjacent sites ($n=1,2$) at the edge.
Furthermore, the trivial phase penetrates into the dimer phase for $\lambda
\geq 0.25$ in Fig.\ref{FigSSHDiagram}(a2) as in (a3).
\subsection{Topological number}
The hopping matrix (\ref{HoppingMmn}) leads to the SSH Hamiltonian in the
momentum space,
\begin{equation}
M\left( k\right) =-\left( t_{A}+t_{B}\right) I_{2}+\left(
\begin{array}{cc}
0 & t_{A}+t_{B}e^{-ik} \\
t_{A}+t_{B}e^{ik} & 0
\end{array}
\right) . \label{EqK}
\end{equation}
The topological number is the Berry phase defined by
\begin{equation}
\Gamma =\frac{1}{2\pi }\int_{0}^{2\pi }A\left( k\right) dk,
\label{ChiralIndex}
\end{equation}
where $A\left( k\right) =-i\left\langle \psi (k)\right\vert \partial
_{k}\left\vert \psi (k)\right\rangle $ is the Berry connection with $\psi
(k) $ the eigenfunction of $M\left( k\right) $. We obtain $\Gamma =1$ for
$\lambda <0$ and $\Gamma =0$ for $\lambda >0$. It is known that the SSH
system is topological for $\lambda <0$ and trivial for $\lambda >0$. There
are two isolated edge states in the limit $\lambda \simeq -1$, while all of
the states are dimerized in the limit $\lambda \simeq 1$: See Fig.\ref{FigKagomeIllust}(a).
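The Berry phase (\ref{ChiralIndex}) can be evaluated numerically as a gauge-invariant Wilson loop over the discretized Brillouin zone; the sketch below is our own illustration (not the computation performed in the paper), with an arbitrary discretization $N_{k}$.

```python
import numpy as np

# Discretized Berry (Zak) phase of the lower band of the Bloch matrix (EqK).
# The Wilson-loop phase is pi in the topological phase (t_A < t_B) and 0 in
# the trivial phase (t_A > t_B); Gamma is read off as |phase|/pi.

def gamma(tA, tB, Nk=400):
    u = []
    for k in np.linspace(0, 2 * np.pi, Nk, endpoint=False):
        h = np.array([[0, tA + tB * np.exp(-1j * k)],
                      [tA + tB * np.exp(1j * k), 0]])
        _, V = np.linalg.eigh(h)
        u.append(V[:, 0])                  # lower-band eigenvector
    u.append(u[0])                         # close the loop in k space
    link = 1.0 + 0j
    for a, b in zip(u[:-1], u[1:]):
        link *= np.vdot(a, b)              # gauge-invariant link product
    return round(abs(np.angle(link)) / np.pi) % 2

print(gamma(0.25, 0.75), gamma(0.75, 0.25))   # 1 (topological), 0 (trivial)
```

The uniform on-site term $-(t_{A}+t_{B})I_{2}$ does not affect the eigenvectors and is omitted; the quantization follows from the winding of $t_{A}+t_{B}e^{-ik}$ around the origin.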
\begin{figure*}[t]
\centerline{\includegraphics[width=0.98\textwidth]{KagomeColor}}
\caption{Time evolution of the spatial distribution of the amplitude $|\protect\psi _{n}|$ in the nonlinear breathing Kagome model. (a1)$\sim$(b8)
linear model with $\protect\xi =0$ for various time. (c1)$\sim$(d8) weak
nonlinear model with $\protect\xi =1$. (e1)$\sim$(f8) strong nonlinear model
with $\protect\xi =4$, where the system is in the nonlinearity-induced
localized phase. The color density indicates the amplitude $|\protect\psi
_{n}|$. We have set $\protect\lambda =-0.5$ for (a1)$\sim$(a8), (c1)$\sim$(c8) and (e1)$\sim$(e8), where the system is topological, while we have set
$\protect\lambda =0.5$ for (b1)$\sim$(b8), (d1)$\sim$(d8) and (f1)$\sim$(f8),
where the system is trivial. }
\label{FigKagomeColor}
\end{figure*}
\subsection{Dimer limit}
\label{SecDimer}
Next, we study the dimer limit with $t_{B}=0$ as in Fig.\ref{FigKagomeIllust}(a3), where $\lambda =1$. The differential equations
are explicitly given by
\begin{align}
i\frac{d\psi _{1}}{dt}& =t_{A}\left( \psi _{2}-\psi _{1}\right) -\xi
\left\vert \psi _{1}\right\vert ^{2}\psi _{1}, \label{dimer1} \\
i\frac{d\psi _{2}}{dt}& =t_{A}\left( \psi _{1}-\psi _{2}\right) -\xi
\left\vert \psi _{2}\right\vert ^{2}\psi _{2}. \label{dimer2}
\end{align}
We show a numerical solution of the time evolution of $\psi _{1}$ and $\psi
_{2}$ in Fig.\ref{FigDimerDynamics}.
In the linear model ($\xi =0$), they oscillate alternately without changing
their amplitudes,
\begin{equation}
\psi _{1}=e^{it_{A}t}\cos t_{A}t,\qquad \psi _{2}=-ie^{it_{A}t}\sin t_{A}t,
\end{equation}
where the amplitude oscillations are out of phase by $\pi $. Once the nonlinearity is
introduced, there appears an oscillation whose period is much longer than
the original period. The overall oscillation period becomes shorter as the
nonlinearity increases. It shows a complicated behavior for strong
nonlinearity as in Fig.\ref{FigDimerDynamics}.
In the nonlinear model ($\xi \neq 0$), there is an oscillatory behavior
with long and short periods in the dimer phase as in Fig.\ref{FigDimerDynamics}. This is easily seen by examining the Fourier component
$\psi \left( \omega \right) $, where $\omega $ is the frequency, as in Fig.\ref{FigDimerDynamics}(b1)$\sim $(b4). There are two sharp peaks in $\psi
\left( \omega \right) $, which indicates that there are short-period and
long-period modes.
This oscillatory behavior may be understood as follows. By using an ansatz
\begin{equation}
\psi _{2}=-\psi _{1}, \label{Psi12}
\end{equation}
the equations (\ref{dimer1}) and (\ref{dimer2}) are summarized into one
equation,
\begin{equation}
i\frac{d\psi _{1}}{dt}=-2t_{A}\psi _{1}-\xi \left\vert \psi _{1}\right\vert
^{2}\psi _{1},
\end{equation}
whose solution is given by
\begin{equation}
\theta _{1}=\left( 2t_{A}+\xi r_{1}^{2}\right) t+c, \label{DiAna}
\end{equation}
and a constant $r_{1}$ with the polar expression (\ref{rt}). This explains the
short-period oscillation mode in Fig.\ref{FigDimerDynamics}.
However, the ansatz (\ref{Psi12}) for the analytical solution is not
compatible with the initial condition (\ref{IniCon}). Thus, we cannot apply
the analytic solution (\ref{DiAna}) to the quench dynamics, although the
ansatz (\ref{Psi12}) holds after enough time. In order to adjust the ansatz
(\ref{Psi12}) to the initial condition (\ref{IniCon}), the long-period
oscillation mode would appear.
These dimer oscillations give rise to the dimer phase in the vicinity of
$\lambda =1$ in the phase diagram in Fig.\ref{FigSSHDiagram}(a3).
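The short-period line is easy to extract numerically; the sketch below (our own illustration, with arbitrary $t_{A}$, evolution window and step size) integrates the dimer equations from the quench input and Fourier-transforms Re$[\psi _{1}(t)]$, whose dominant frequency in the linear limit is $\omega =2t_{A}$ since Re$[\psi _{1}(t)]=\cos ^{2}t_{A}t$.

```python
import numpy as np

# Dimer quench dynamics and its Fourier spectrum. At xi = 0 the dominant
# nonzero frequency of Re[psi_1(t)] is 2 t_A; a nonlinearity adds a second,
# long-period line (not asserted here).

def rhs(p, tA, xi):
    return -1j * np.array([tA * (p[1] - p[0]) - xi * abs(p[0]) ** 2 * p[0],
                           tA * (p[0] - p[1]) - xi * abs(p[1]) ** 2 * p[1]])

def dimer_series(xi, tA=1.0, t_max=50.0, dt=2e-3):
    p = np.array([1.0 + 0j, 0j])           # quench input on site 1
    out = []
    for _ in range(int(round(t_max / dt))):
        k1 = rhs(p, tA, xi)
        k2 = rhs(p + 0.5 * dt * k1, tA, xi)
        k3 = rhs(p + 0.5 * dt * k2, tA, xi)
        k4 = rhs(p + dt * k3, tA, xi)
        p = p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out.append(p[0].real)
    return np.array(out)

s = dimer_series(xi=0.0)
spec = np.abs(np.fft.rfft(s - s.mean()))          # drop the DC component
omega = 2 * np.pi * np.fft.rfftfreq(s.size, d=2e-3)
peak = omega[int(np.argmax(spec))]
print(peak)   # close to 2 t_A = 2
```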
\begin{figure*}[t]
\centerline{\includegraphics[width=0.98\textwidth]{KagomeDiagram}}
\caption{(a1)$\sim $(a3) Phase diagram of the nonlinear breathing Kagome
model. (a1) Bird's eye's view, (a2) top view, and (a3) schematic
illustration of the phase diagram. (b1)$\sim $(b4) Amplitude $|\protect\psi
_{1}|$ as a function of $\protect\lambda $ for various $\protect\xi $. (b1)
$\protect\xi =0$, (b2) $\protect\xi =0.1$, (b3) $\protect\xi =1$, and (b4)
$\protect\xi =5$.}
\label{FigKagomeDiagram}
\end{figure*}
\subsection{Discrete nonlinear Schr\"{o}dinger equation}
\label{SecDNS}
The discrete nonlinear Schr\"{o}dinger equation (\ref{DNLS}) is a limit of
the nonlinear SSH model (\ref{EqA}) and (\ref{EqB}) by setting $\lambda =0$.
We show the time evolution of $\psi _{n}$ starting from the initial
condition (\ref{IniCon}), where the initial state is taken at the site $m$
in the bulk in Fig.\ref{FigLocalized}. For weak nonlinearity $\xi \lesssim 4$, the state rapidly spreads as shown in Fig.\ref{FigLocalized}(b2)$\sim$(b7). On the other hand, for strong nonlinearity $\xi \gtrsim 4$, the state
almost remains at the initial site $m$ as shown in Fig.\ref{FigLocalized}(b6) and (b7). We also show $\left\vert \psi _{1}\right\vert $ as a function
of $\xi $ in Fig.\ref{FigLocalized}(b1). It shows a drastic change at $\xi
\simeq 4$ for $\lambda =0$, which indicates the nonlinearity-induced
localization transition.
We have also shown the time evolution of $\psi _{n}$ starting from the
initial condition (\ref{IniCon}) in the case of the nonlinear SSH model in
Fig.\ref{FigLocalized}(a2)$\sim $(a7) for the topological phase with $\lambda
=-0.5$ and Fig.\ref{FigLocalized}(c2)$\sim $(c7) for the trivial phase with
$\lambda =0.5$. The nonlinearity-induced localization transition is found to
occur at $\xi \simeq 2$.
It is to be noted in Fig.\ref{FigLocalized} that there is almost no
difference in the dynamics of $\left\vert \psi _{n}\right\vert $ between the
topological and trivial phases for $\lambda =\mp 0.5$. It dictates that we
cannot differentiate the topological and trivial phases by starting from a
site in the bulk. It is because the bulk state is almost identical between
the topological and trivial phases. The key difference is the presence
of topological edge or corner states in the topological phase.
As a result, the nonlinearity-induced localization occurs irrespective of
the dimerization $\lambda $. It is because the nonlinearity effect is
dominant in the dynamics for large $\xi $.
\section{Nonlinear second-order topological phases\label{SecKagome}}
\subsection{Model}
Recently, the nonlinear second-order topological phase has been studied in
photonics\cite{Kirch}. We proceed to study the case where the matrix $M_{nm}$
describes the breathing Kagome lattice, whose lattice structure is
illustrated in Fig.\ref{FigKagomeIllust}(b). The matrix $M$ in the momentum
space is given by\cite{EzawaKagome}
\begin{equation}
M\left( \mathbf{k}\right) =-\left(
\begin{array}{ccc}
0 & h_{12} & h_{13} \\
h_{12}^{\ast } & 0 & h_{23} \\
h_{13}^{\ast } & h_{23}^{\ast } & 0
\end{array}
\right) , \label{H3}
\end{equation}
with
\begin{align}
h_{12}& =t_{A}e^{i\left( k_{x}/2+\sqrt{3}k_{y}/2\right) }+t_{B}e^{-i\left(
k_{x}/2+\sqrt{3}k_{y}/2\right) }, \\
h_{23}& =t_{A}e^{i\left( k_{x}/2-\sqrt{3}k_{y}/2\right) }+t_{B}e^{i\left(
-k_{x}/2+\sqrt{3}k_{y}/2\right) }, \\
h_{13}& =t_{A}e^{ik_{x}}+t_{B}e^{-ik_{x}},
\end{align}
where we have introduced two hopping parameters $t_{A}$ and $t_{B}$
corresponding to upward and downward triangles, as shown in Fig.\ref{FigKagomeIllust}(b).
\subsection{Topological number}
There are three mirror symmetries for the breathing Kagome lattice. They are
the mirror symmetries $M_{x}$ with respect to the $x$ axis, and $M_{\pm }$
with respect to the two lines obtained by rotating the $x$ axis by $\pm 2\pi
/3$. The polarization along the $x_{i}$ axis is the expectation value of the
position,
\begin{equation}
p_{i}=\frac{1}{S}\int_{\text{BZ}}A_{i}d^{2}\mathbf{k}, \label{PolarP}
\end{equation}
where $A_{i}=-i\left\langle \psi (\mathbf{k})\right\vert \partial
_{k_{i}}\left\vert \psi (\mathbf{k})\right\rangle $ is the Berry connection
with $x_{i}=x,y$, and $S=8\pi ^{2}/\sqrt{3}$ is the area of the Brillouin
zone; $\psi (\mathbf{k})$ is the eigenfunction of $M\left( \mathbf{k}\right) $.
The topological number is defined by\cite{EzawaKagome}
\begin{equation}
\Gamma =3\left( p_{x}^{2}+p_{y}^{2}\right) .
\end{equation}
We obtain $\Gamma =0$ for $t_{A}/t_{B}<-1$ and $t_{A}/t_{B}>2$, which is the
trivial phase with no zero-mode corner states. On the other hand, we obtain
$\Gamma =1$ for $-1<t_{A}/t_{B}<1/2$, which is the topological phase with the
emergence of three zero-mode corner states. Finally, $\Gamma $ is not
quantized for $1/2<t_{A}/t_{B}<2$, which is the metallic phase.
For $t_{A}=0$, three corner sites are perfectly decoupled from the bulk as
in Fig.\ref{FigKagomeIllust}(b1). On the other hand, for $t_{B}=0$, all
sites are trimerized as in Fig.\ref{FigKagomeIllust}(b3). We study a quench
dynamics starting from the initial condition (\ref{IniConA}), where the
state is perfectly localized at the top corner site. We note that we use the
tight-binding model although the continuum model is used in the previous work\cite{Kirch}, where the essential physics is identical.
\subsection{Phase diagram}
We show the phase diagram in Fig.\ref{FigKagomeDiagram}. There are four
phases in the nonlinear breathing Kagome model. The trimer phase appears
instead of the dimer phase characteristic of the nonlinear SSH model. The
trimer phase and the nonlinearity-induced localization phase are smoothly
connected, both of which are irrelevant to the topological number. The
difference is that there is an oscillatory behavior in the trimer phase but
not in the nonlinearity-induced localization phase. On the other hand, there
is a sharp transition between the trivial and nonlinearity-induced
localization phases. This is also the case for the transition between the
trivial and trimer phases. As in the case of the nonlinear SSH model, the
topological phase boundary between the topological and trivial phases is
almost unchanged for $0\leq \xi \lesssim 3$ as in Fig.\ref{FigKagomeDiagram}(a3).
We start with the study of the linear model ($\xi =0$). We show the spatial
distribution of the amplitude $|\psi _{n}|$ for various time in Fig.\ref{FigKagomeColor}(a1)$\sim $(a8) and Fig.\ref{FigKagomeColor}(b1)$\sim $(b8).
In the topological phase, the amplitude remains finite at the top corner
site. On the other hand, the amplitude rapidly spreads into the bulk and
disappears in the trivial phase.
The weak nonlinear regime ($\xi \simeq 0$) is analyzed just as in Sec.\ref{SecWeak}. Namely, the topological analysis based on the formula (\ref{TopoDynaA}) is valid as in the linear model. We have numerically confirmed
this observation in Fig.\ref{FigKagomeColor}(c1)$\sim $(c8). Indeed, we may
regard the system even with $\xi =1$ as the one in the weak nonlinear
regime. We show the spatial distribution for $\xi =1$ in Fig.\ref{FigKagomeColor}(c1)$\sim $(c8), which is almost identical to the one in the
linearized model ($\xi =0$) in Fig.\ref{FigKagomeColor}(a1)$\sim $(a8). This
is also the case for the trivial phase as shown in Fig.\ref{FigKagomeColor}(d1)$\sim $(d8). The amplitude at the top corner site after enough time is
shown in Fig.\ref{FigKagomeDiagram}(b1)$\sim $(b4). We note that the overall
feature is quite similar to the one in the nonlinear SSH model. On the other
hand, the state is localized in both of the topological and trivial phases
for $\xi =4$ in Fig.\ref{FigKagomeColor}(e1)$\sim $(e8) and Fig.\ref{FigKagomeColor}(f1)$\sim $(f8). It indicates that the state is in the
nonlinearity-induced localization phase.
\begin{figure}[t]
\centerline{\includegraphics[width=0.48\textwidth]{TrimerDynamics}}
\caption{(a1)$\sim$(a4) Time evolution of Re[$\protect\psi_1(t)$] in the
nonlinear Schr\"{o}dinger model on the trimer described by Eqs.(\protect\ref{trimer1}), (\protect\ref{trimer2}) and (\protect\ref{trimer3}). (a1), (b1)
$\protect\xi =0$; (a2), (b2) $\protect\xi =0.1$; (a3), (b3) $\protect\xi =0.5$; (a4), (b4) $\protect\xi =1$. (a1)$\sim$(a4) The vertical axis is Re[$\protect\psi_1(t)$] and the horizontal axis is time. (b1)$\sim$(b4) Fourier
component of Re[$\protect\psi_1(\protect\omega )$]. The horizontal axis is
the frequency $\protect\omega$, which is the Fourier component of the time
$t$, while the vertical axis is Re[$\protect\psi_1(\protect\omega )$]. }
\label{FigTrimerDynamics}
\end{figure}
\subsection{Trimer limit}
Next, we study\ the trimer limit with $t_{B}=0$ as in Fig.\re
{FigKagomeIllust}(b3), where $\lambda =1$. The differential equations
are explicitly given by
\begin{align}
i\frac{d\psi _{1}}{dt}& =\varepsilon \psi _{1}+t_{A}\left( \psi _{1}-\psi
_{2}\right) +t_{A}\left( \psi _{1}-\psi _{3}\right) -\xi \left\vert \psi
_{1}\right\vert ^{2}\psi _{1}, \label{trimer1} \\
i\frac{d\psi _{2}}{dt}& =\varepsilon \psi _{2}+t_{A}\left( \psi _{2}-\psi
_{1}\right) +t_{A}\left( \psi _{2}-\psi _{3}\right) -\xi \left\vert \psi
_{2}\right\vert ^{2}\psi _{2}, \label{trimer2} \\
i\frac{d\psi _{3}}{dt}& =\varepsilon \psi _{3}+t_{A}\left( \psi _{3}-\psi
_{1}\right) +t_{A}\left( \psi _{3}-\psi _{2}\right) -\xi \left\vert \psi
_{3}\right\vert ^{2}\psi _{3}. \label{trimer3}
\end{align}
Without loss of generality, we may set $\psi _{2}=\psi _{3}$ and obtain
\begin{align}
i\frac{d\psi _{1}}{dt}& =\varepsilon \psi _{1}+2t_{A}\left( \psi _{1}-\psi
_{2}\right) -\xi \left\vert \psi _{1}\right\vert ^{2}\psi _{1}, \\
i\frac{d\psi _{2}}{dt}& =\varepsilon \psi _{2}+t_{A}\left( \psi _{2}-\psi
_{1}\right) -\xi \left\vert \psi _{2}\right\vert ^{2}\psi _{2}.
\end{align}
It is hard to solve these equations analytically except in the linear model, where the solution is given by
\begin{align}
\psi _{1}& =\frac{1}{3}\left( 1+2e^{-3it_{A}t}\right) , \\
\psi _{2}& =\psi _{3}=\frac{1}{3}\left( 1-e^{-3it_{A}t}\right) .
\end{align}
Although this solution is modified smoothly as a function of $\xi $, it retains the short-period oscillation. A long-period oscillation emerges once the nonlinearity $\xi $ is introduced. We show the Fourier component $\psi \left( \omega \right) $ in Fig.\ref{FigTrimerDynamics}(b1)-(b4), where we see two sharp peaks in $\psi \left( \omega \right) $ corresponding to the short-period and long-period oscillations.
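These dynamics are easy to reproduce numerically. The following Python sketch (our illustration, not part of the paper; we set $t_A=1$ and $\varepsilon=0$, the case covered by the analytic linear solution above) integrates Eqs.~(\ref{trimer1})--(\ref{trimer3}) with a fourth-order Runge--Kutta step and reproduces the $\xi=0$ solution:

```python
import cmath

def trimer_rhs(psi, t_A=1.0, eps=0.0, xi=0.0):
    # d(psi)/dt = -i H(psi) for the trimer equations (trimer1)-(trimer3)
    p1, p2, p3 = psi
    h1 = eps*p1 + t_A*(p1 - p2) + t_A*(p1 - p3) - xi*abs(p1)**2*p1
    h2 = eps*p2 + t_A*(p2 - p1) + t_A*(p2 - p3) - xi*abs(p2)**2*p2
    h3 = eps*p3 + t_A*(p3 - p1) + t_A*(p3 - p2) - xi*abs(p3)**2*p3
    return [-1j*h1, -1j*h2, -1j*h3]

def rk4_evolve(psi0, t_end, dt=1e-3, **kw):
    # standard fourth-order Runge-Kutta integration of the trimer equations
    psi, t = list(psi0), 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = trimer_rhs(psi, **kw)
        k2 = trimer_rhs([p + 0.5*h*k for p, k in zip(psi, k1)], **kw)
        k3 = trimer_rhs([p + 0.5*h*k for p, k in zip(psi, k2)], **kw)
        k4 = trimer_rhs([p + h*k for p, k in zip(psi, k3)], **kw)
        psi = [p + h*(a + 2*b + 2*c + d)/6
               for p, a, b, c, d in zip(psi, k1, k2, k3, k4)]
        t += h
    return psi

# Linear case: psi_1(t) = (1 + 2 exp(-3 i t_A t))/3 for psi(0) = (1, 0, 0)
psi = rk4_evolve([1.0, 0.0, 0.0], t_end=1.0, xi=0.0)
assert abs(psi[0] - (1 + 2*cmath.exp(-3j))/3) < 1e-8
```

For $\xi>0$ the same routine generates the time series whose Fourier transform exhibits the two peaks discussed above.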
These trimer oscillations give rise to the trimer phase in the vicinity of $\lambda =1$ in the phase diagram in Fig.\ref{FigKagomeDiagram}(a3).
\section{Conclusion}
Topological physics has been developed in linear systems such as condensed matter, electric circuits and acoustic systems. Its key issue is the emergence of topological edge or corner states in the topological phase, which has been firmly established by various experimental observations. There are now attempts to generalize it to nonlinear systems.
In the present work, we investigated the one-dimensional nonlinear SSH model
and the two-dimensional nonlinear breathing Kagome model. These models
contain the hopping term $\sum_{m}M_{nm}\psi _{m}$ and the nonlinear term
\xi \left\vert \psi _{n}\right\vert ^{2}\psi _{n}$. Dynamics is determined
as a result of the competition between these two terms. As long as the hopping term is dominant, the topological dynamics remains valid in these models.
On the other hand, when the nonlinear term is dominant, the
nonlinearity-induced localization phase emerges. There is another phenomenon
due to a cooperative effect of these two terms, which is the oscillation
mode in the dimer (trimer) limit of the nonlinear SSH (breathing Kagome)
model. We have studied these new phenomena analytically and numerically. Our
results are summarized in the phase diagrams in Fig.\ref{FigSSHDiagram} and
in Fig.\ref{FigKagomeDiagram}.
These results show that nonlinearity affects topological phases in a variety of ways. It is an interesting problem to study other nonlinear topological systems.
The author is very much grateful to N. Nagaosa for helpful discussions on
the subject. This work is supported by the Grants-in-Aid for Scientific
Research from MEXT KAKENHI (Grants No. JP17K05490 and No. JP18H03676). This
work is also supported by CREST, JST (JPMJCR16F1 and JPMJCR20T2).
\section{Introduction}
The Multiprocessor Scheduling Problem (MSP) is the problem of assigning a set of tasks $\{j_1, j_2, \dots, j_n\}$ to a set of processors $\{p_1, p_2, \dots, p_m\}$ in such a way that the makespan, or total time required for the completion of the resulting schedule, is as small as possible. The tasks may have arbitrary dependency constraints, so they can be modeled as a DAG in which tasks correspond to vertices and edges encode dependencies between tasks. MSP has been well studied in both theoretical computer science and operations research. Its applications range from industrial project management to tasking cloud-based distributed systems.
MSP is one problem in a large taxonomy of scheduling problems. Related problems take into account heterogeneous processors, multiple resource types, communication cost between processors, and the amount of information known to the scheduler. Work on these variants is described in Section 1.3. We chose to focus our work on the basic MSP instead of one of its more esoteric cousins because we are ultimately interested in doing exactly what the problem describes: scheduling multiprocessors.
Before describing Fujita's branch and bound algorithm and our implementation and analysis of it, we provide an introduction to the terminology and notation used to describe MSP and other scheduling problems. We also give a brief survey of the approximate and exact methods and algorithms used to solve MSP.
\subsection{Graham's Notation}
Graham proposed a widely used notation \cite{graham:notation} for succinctly classifying scheduling problems. In Graham's notation a scheduling problem is described in three fields as in $\alpha | \beta | \gamma$. The $\alpha$ field describes the number of processors, $\beta$ describes task configuration options, and $\gamma$ describes the objective function.
In particular, $\alpha$ is $Pn$ if we have $n$ identical processors, $Qn$ if we have $n$ uniform processors meaning that each processor has a different compute speed, and $Rn$ if we have $n$ unrelated processors meaning that each processor has a different compute speed for each task. When there is no $n$, the problem is for any number of processors.
$\beta$ is a set that may contain any number of the following options: $r_j$ if tasks have specified release dates, $d_j$ if they have deadlines, $p_j = x$ if each task has weight $x$, $prec$ if tasks have general precedence constraints, and $pmtn$ if tasks can be preempted, meaning they can be stopped and resumed arbitrarily, even moving to other processors.
Finally, $\gamma$ can be any number of different objective functions including the makespan denoted by $C_{max}$, the mean flow-time (completion time minus release date) denoted by $\sum C_i$, or maximum lateness $L_{max} = max(0, C_i - d_i)$.
\subsection{Model}
For our purposes, we are primarily interested in the NP-hard $Pn| prec | C_{max}$ problem. In this precedence-constrained problem, the task graph can be represented as a DAG where each vertex $u$ is associated with a task cost $c(u)$ and each edge $(u, v) \in E$ implies that task $v$ can be started only after $u$ is finished.
Without loss of generality, we can require that the DAGs we schedule contain a single source vertex and a single sink vertex. If there is no unique source or sink in the DAG, we can simply add a source vertex $u$ with weight $c(u)=0$ as a predecessor of all vertices with zero in-degree and a sink vertex $v$ with $c(v)=0$ as a successor of all vertices with zero out-degree to enforce this requirement.
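A minimal sketch of this normalization step in Python (our illustration; the DAG is given as a cost map and successor lists, and the names `s` and `t` for the added vertices are assumed not to collide with existing ones):

```python
def add_source_sink(cost, succ):
    # Add a zero-cost source 's' preceding every in-degree-0 vertex and a
    # zero-cost sink 't' succeeding every out-degree-0 vertex.
    # cost: dict vertex -> task cost; succ: dict vertex -> successor list.
    indeg = {v: 0 for v in cost}
    for outs in succ.values():
        for w in outs:
            indeg[w] += 1
    roots = [v for v in cost if indeg[v] == 0]
    leaves = [v for v in cost if not succ.get(v)]
    cost = dict(cost, s=0, t=0)
    succ = {v: list(outs) for v, outs in succ.items()}
    succ['s'] = roots                      # source precedes every former root
    for v in leaves:
        succ.setdefault(v, []).append('t')  # every former leaf precedes the sink
    succ['t'] = []
    return cost, succ
```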
We adopt the definitions and notation used by Fujita to describe the problem. The only difference is that Fujita considers a generalization of the MSP in which there is allowed to be a communication cost associated with scheduling a successor task on a different processor than its predecessors. This more realistically models the application of scheduling tasks on modern NUMA machines, but we omit communication costs from our model for simplicity.
In our model, we say that a schedule of our task graph $G$ on $p$ processors is a mapping from a vertex $v$ to a tuple $(p, \tau)$ where $p$ is a processor which will process $v$ on the time interval $[\tau, \tau + c(v)]$.
\vspace{2mm}
$\textbf{Definition 1 (Feasible Solution)\cite{fujita}.}$ A Schedule $f$ is said to be feasible, if it satisfies the following two conditions:
\begin{enumerate}
\item For any $u,v \in V$, if $f(u) = (p, \tau')$ and $f(v) = (p, \tau'')$, then $\tau' + c(u) \leq \tau''$ or $\tau'' + c(v) \leq \tau'$.
\item For any $(u,v) \in E$, if $f(u) = (p', \tau')$ and $f(v) = (p'', \tau'')$, then $\tau'' \geq \tau' + c(u)$
\end{enumerate}
\vspace{2mm}
The makespan of $f$ is defined to be the completion time of the exit task $v$ under schedule $f$. The static cost of a path in $G$ is defined as the summation of the execution costs on the path. A path with a maximum static cost is called a critical path in G. Furthermore, we call $t_{cp}$ the static cost of a critical path in G. Lastly, we define
\vspace{2mm}
$\textbf{Definition 2 (Topological Sort)\cite{fujita}.}$ A topological sort of $G = (V, E)$ is a bijection $\phi$ from $V$ to $\{1, 2, \dots, |V|\}$ such that for any $u, v \in V$, if $u$ is a predecessor of $v$, then $\phi(u) < \phi(v)$.
\vspace{2mm}
This representation of the precedence constraints will be useful in describing our Branch-and-Bound algorithm. It also helps us define the concept of a partial solution.
\vspace{2mm}
$\textbf{Definition 3 (Partial Solution)\cite{fujita}.}$ Let the graph $G(V,E)$ represent the precedence constraints. A partial solution $x$ is a feasible schedule for a subset of the vertices in $G$. Let $U$ be this subset; then we have that $\phi(u) < \phi(v)$ for all $u \in U$ and all $v \in V \setminus U$.
\vspace{2mm}
We note that a solution or a partial solution can be represented as a permutation of the vertices that it schedules. A permutation uniquely represents a schedule, and a partial permutation uniquely represents a partial schedule. To derive a schedule from a partial permutation of the vertices, we iterate through the permutation and assign each task to the first available machine once all its predecessors have finished their execution. Since we only consider those permutations that form feasible partial schedules, we know when we choose how to assign a task that all of its predecessors have already been assigned in the schedule.
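This decoding step can be sketched as follows (a simplified Python illustration of the rule just described; the task data in the test are hypothetical):

```python
import heapq

def decode(perm, cost, preds, m):
    # Turn a feasible permutation of tasks into a schedule on m identical
    # machines: each task is placed on the first machine that becomes
    # available, no earlier than the finish times of all its predecessors.
    # Returns (makespan, {task: (machine, start_time)}).
    free = [(0.0, i) for i in range(m)]   # (time machine becomes free, id)
    heapq.heapify(free)
    finish, sched = {}, {}
    for v in perm:
        ready = max((finish[p] for p in preds.get(v, [])), default=0.0)
        t, i = heapq.heappop(free)        # earliest available machine
        start = max(t, ready)
        finish[v] = start + cost[v]
        sched[v] = (i, start)
        heapq.heappush(free, (finish[v], i))
    return max(finish.values(), default=0.0), sched
```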
\subsection{Known Solutions}
To contextualize our work in the current state of the field, we mention several other scheduling problems similar to MSP and list their best-known runtimes \cite{brucker:textbook}. While the general $P | prec | C_{max}$ problem is NP-hard, some variants are easily solved while others are polynomial but have very high degree. Among the problems known to be solvable in polynomial time are:
\begin{itemize}
\item $P | p_i = p; tree | C_{max}$ which Hu \cite{hu} solved in $O(n)$.
\item $P2 | p_i = 1; prec; r_i | \sum C_i$ for which Baptiste and Timkowski \cite{baptiste} found an $O(n^9)$ solution
\item $R || \sum C_i$ which was solved in $O(mn^3)$ \cite{bruno}
\end{itemize}
On the other hand, the best-known solutions for variants like $Qm | r_i | C_{max}$ run in pseudo-polynomial time \cite{lawler}, and even simplified versions like $P | p_i = 1; prec | C_{max}$ are known to be NP-hard.
Solutions to this intractable problem have migrated towards approximation schemes. These schemes fall into three categories. The first category encompasses standalone approximation algorithms for the online problem, like the guessing scheme of Albers et al. \cite{albers:approx} that achieves a $(4/3 + \epsilon)$-competitive algorithm by building a polynomial number, $O((m/\epsilon)^{O(\log(1/\epsilon)/\epsilon)})$, of schedules. Integer programming approaches have also proven feasible for graphs with 30--50 jobs \cite{patterson}. The second category consists of heuristics based on Graham's original List Scheduling algorithm \cite{graham:list}. However, the accuracy of these approximation strategies is limited: Ullman showed that a polynomial-time algorithm achieving a ratio better than $4/3$ would imply $P = NP$ \cite{ullman}. The third category consists of meta-heuristic strategies. We expand on the last two strategies here.
\subsection{List Scheduling}
This algorithm is essentially a greedy strategy that maintains a list of ready tasks (ones whose dependencies have all completed) and greedily assigns tasks from the ready set to available processors as early as possible based on some priority rule. Regardless of the priority rule, List Scheduling is guaranteed to achieve a $(2 - 1/m)$-approximation. This result can be proved quite simply:
\begin{lemma}
List Scheduling with any priority rule achieves a makespan of at most $(2 - 1/m)\, M_{OPT}$.
\end{lemma}
\begin{proof}
Given a list scheduling of jobs on $m$ processors with makespan $M$, where the sum of all task weights is $S$, we can construct a path of dependent tasks (start from the task that finishes last and repeatedly move to its latest-finishing predecessor) such that at any point in time, either a task on this path is running on a processor, or no processor is idle. We call $I$ the total idle time and $L$ the total length of this path. Consequently, we know that:
\begin{itemize}
\item $I \leq (m-1)L$ since processors can be idle only when a task from our path is running.
\item $L \leq M_{OPT}$ since the optimal makespan is longer than any path in the DAG
\item $M_{OPT} \geq S/m$ since $S/m$ describes the makespan with zero idle time
\item $m \times M = I + S$ since the idle time plus sum of all tasks must give us the total "time" given by makespan times number of processors
\item $m \times M \leq (m-1)M_{OPT} + m M_{OPT}$ implies that $M \leq (2-1/m) M_{OPT}$
\end{itemize}
\end{proof}
One important priority rule is the Critical Path heuristic which prioritizes tasks on the Critical Path, or longest path from the task to the sink. Other classical priority rules include Most Total Successors (MTS), Latest Finish Time (LFT), and Minimum Slack. Consider, for example, Figure 1.
\begin{figure}
\centering
\includegraphics[width=100px]{DAG.png}
\caption{DAG of Tasks and Precedence Constraints}
\end{figure}
When at the source node $s$, List Scheduling would maintain a ready set with tasks $v_1$ and $v_2$. With a Latest Finish Time priority rule, $v_1$ would be first assigned to a processor since it finishes at 4 time steps. With a Critical Path heuristic, either task could be selected since the maximum-length path to the sink vertex is 4 for any path taken.
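The greedy loop described above can be sketched as follows (our Python illustration, not the implementation of any cited paper; the priority rule is passed in as a key function with smaller key meaning higher priority, so rules such as LFT or a critical-path level can be plugged in):

```python
import heapq

def list_schedule(cost, preds, m, priority):
    # Greedy List Scheduling: repeatedly pick the ready task with the
    # smallest priority key and start it as early as possible on the
    # machine that becomes free first. Returns the makespan.
    succ = {v: [] for v in cost}
    indeg = {v: len(preds.get(v, [])) for v in cost}
    for v, ps in preds.items():
        for p in ps:
            succ[p].append(v)
    ready = [(priority(v), v) for v in cost if indeg[v] == 0]
    heapq.heapify(ready)
    machines = [0.0] * m          # times at which each machine becomes free
    heapq.heapify(machines)
    finish = {}
    while ready:
        _, v = heapq.heappop(ready)
        est = max((finish[p] for p in preds.get(v, [])), default=0.0)
        start = max(heapq.heappop(machines), est)
        finish[v] = start + cost[v]
        heapq.heappush(machines, finish[v])
        for w in succ[v]:         # successors become ready once all
            indeg[w] -= 1         # predecessors have been scheduled
            if indeg[w] == 0:
                heapq.heappush(ready, (priority(w), w))
    return max(finish.values(), default=0.0)
```

For example, `priority=lambda v: -level[v]` would implement a highest-level-first rule given precomputed levels.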
Kolisch \cite{kolisch:priority} gives an analysis of four modern priority rules with better experimental accuracy: Resource Scheduling Method (RSM), Improved RSM (IRSM), Worst Case Slack (WCS), and Average Case Slack (ACS). In particular, he found that WCS performed best, followed by ACS, IRSM, and LFT. Our List Scheduling implementation utilizes priority rules of this type and attempts to improve upon them by combining them with a branch-and-bound algorithm.
\subsection{Meta-Heuristics}
More recently, research has moved towards using meta-heuristics, a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms. For MSP, several strategies have been proposed including utilizing simulated annealing \cite{bouleimen:99}, genetic algorithms \cite{auyeung:genetic}, and even Ant-Colony optimization \cite{selvan}.
While these meta-heuristics can provide modest improvements in most cases, the largest increases in efficiency are accomplished when heuristics are customized to the MSP problem structure. These meta-heuristics also fail to give a guarantee on the quality of the result, and can converge to local optima. While meta-heuristics can give decent approximations in sub-exponential time, in some situations, obtaining an exact optimal solution is desirable.
\section{BRANCH AND BOUND METHOD}
The branch-and-bound (BB) method, which is essentially a search over a tree representing an expansion of all possible assignments, provides an exact solution to MSP. In general, the BB method attempts to reduce the number of expanded sub-trees by pruning those that are guaranteed to generate worse solutions than the current best solution. This reduces the number of solutions explored, which would otherwise grow as the factorial of the number of tasks.
Given a graph $G(V,E)$ with an associated partial ordering $\phi$, we can construct the following search tree. The root of the tree is a partial solution containing only the source node of the graph. Each node in the tree corresponds to a partial solution $x$ with respect to a subset $U \subseteq V$, in the form of a permutation of vertices. This means that $x$ provides a scheduling for the nodes in $U$. The leaf nodes are complete feasible solutions. A child of a partial solution $x$ is itself a partial solution that schedules all nodes according to $x$ and also schedules one additional node. Formally, the children of a partial solution $x$ with respect to a subset $U \subseteq V$ are partial solutions with respect to a subset $U \cup \{u\}$ such that every predecessor of $u$ is contained in $U$. This means that each vertex that has all its predecessors already scheduled leads to a new child node and starts a new sub-tree. Many nodes produce schedules with respect to the same subset of vertices; however, they represent different permutations of the vertices in the subset. The leaves of the tree contain all permutations of the vertices that lead to feasible schedules. This derives directly from our construction of the tree.
In the BB method, we explore the tree with a depth-first search. The initial node is the root of the tree, which only contains the trivial schedule for the source of the graph. We expand subsequent nodes according to a priority rule of the same type as those described above. The priority rule that we adopt in our implementation is HLFET (highest level first). Fujita \cite{fujita} uses the same priority rule in his implementation of the BB algorithm. Both Adam and Canon have studied the performance and robustness of priority rules \cite{Adam, Canon} in the context of the List Scheduling algorithm described in the previous section. In both numerical experiments the authors have shown that HLFET performs consistently well. Other priority rules and heuristic methods that produce better estimates of the best node to expand next have been developed, e.g., genetic algorithms and simulated annealing \cite{Auyeung, bouleimen:99}. These give better results than the simple priority rules and are therefore generally used in approximation algorithms for the MSP problem, such as Graham's List Scheduling algorithm \cite{graham:list}. However, they also require significantly longer computation time than HLFET. In the BB algorithm, where the heuristic has to be evaluated at every node of the search tree, such computationally expensive methods do not produce any beneficial results.
In our implementation, the priority rule HLFET assigns a level to every vertex in the graph. The level of a vertex is defined as the sum of the weights of all vertices along the longest path from the vertex to the sink. The search part of the BB algorithm is therefore a depth-first search in which the priority of nodes in the queue is determined according to HLFET. At each step the BB algorithm expands the node with the highest priority first. Intuitively, this prioritizes nodes that have a long chain of dependent tasks. A naive search of this type without any bounding component would require visiting all leaf nodes in the search tree. This corresponds to evaluating the schedule quality of all permutations leading to a feasible result, whose number grows as $O(n!)$ where $n = |V|$.
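The HLFET levels can be computed in a single backward pass over the DAG; a minimal Python sketch (our illustration) of the level just defined:

```python
def hlfet_levels(cost, succ):
    # Level of v = maximum total weight of a path from v to the sink,
    # including c(v) itself. Memoized recursion; an explicit reverse
    # topological pass would avoid deep recursion on very large DAGs.
    memo = {}
    def level(v):
        if v not in memo:
            memo[v] = cost[v] + max((level(w) for w in succ.get(v, [])),
                                    default=0)
        return memo[v]
    for v in cost:
        level(v)
    return memo
```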
The core idea of the branch-and-bound algorithm is to prune off all sub-trees that are guaranteed to generate worse solutions than the current best solution. This will significantly reduce the number of nodes that are expanded in practice. We now need to find a method that produces such a guarantee - the difficulty being that it has to be a guarantee on all solutions that can be reached in a given sub-tree. In the next section we describe two methods to find a lower bound on the makespan of all complete feasible solutions based on a given partial solution.
It is interesting to note that the BB algorithm generates the solution produced by Graham's list scheduling algorithm \cite{graham:list} with priority rule HLFET as its first solution. The first path expanded in the BB algorithm is composed of the sequence of ready nodes with the highest priority at each step, just as in Graham's list scheduling algorithm. The priority rule ensures that the search starts with a good estimate of the optimal solution, which maximizes the number of sub-trees that are pruned.
\section{Fernandez and Fujita Bounds}
We present here the two lower bounding techniques that we implemented. We first describe the Fernandez bound \cite{fernandez}, which is a generalization of Hu's bound \cite{hu} among others. Then we explain the Fujita Bound \cite{fujita}, which generally produces a better lower bound than Fernandez, but is more computationally expensive. Both of these bounds rely on estimating the minimum number of machines required to keep the makespan under a certain total time.
\subsection{Fernandez Bound}
We first need to define $S_x$ the set of complete feasible solutions that can be reached by expanding a given partial solution $x$. All solutions in $S_x$ are represented by permutations in which the initial vertices are exactly the same vertices as in the permutation representing $x$.
Suppose we are given some partial solution $x$; we will now show how to obtain a lower bound on the makespan of all schedules in $S_x$. Fujita \cite{fujita} does not define the quantities correctly, which is very misleading. We are going to follow the logic and definitions directly from Fernandez, but stick to the simpler notation employed by Fujita. Let $\theta \subseteq [0,t_{cp})$ be a subinterval and let $\sigma \in S_x$ be a permutation defining a complete solution in $S_x$.
Suppose that we want to impose a bound on the makespan. Let this bound be $t_{cp}$, the static cost of the critical path. We define the absolute minimum end time of a task as the earliest time at which it can complete given its precedence constraints, and the absolute maximum start time of a task as the latest time at which it can start while still allowing it and its successors to complete within $t_{cp}$. We will refer to these two quantities as $\text{mnEnd}$ and $\text{mxStart}$. Note that these quantities are completely determined by the graph of precedence constraints and do not depend on the number of machines.
We can formally define $\text{mnEnd}$ and $\text{mxStart}$ recursively, which provides an $O(|V|+|E|)$ method for their computation:
\begin{eqnarray}
\text{mnEnd}(u) &=& c(u) + \max_{v\in V_p(u)} \text{mnEnd}(v)\\
\text{mxStart}(u) &=& \min \left\{t_{cp}, \min_{v\in V_s(u)} \text{mxStart}(v)\right\} - c(u)
\end{eqnarray}
where $V_s(u)$ and $V_p(u)$ are respectively the set of successors and predecessors of $u$.
To determine these quantities given a partial schedule $x$, we fix the start and end times of the tasks in $x$ and calculate $\text{mxStart}$ and $\text{mnEnd}$ with these additional constraints. For vertices that are not in the partial schedule $x$, we note that $\text{mxStart}$ does not depend on $x$. On the other hand, $\text{mnEnd}$ depends on $x$ even for nodes that are not in $x$. Note that the dependence on the number of machines enters only through the estimated execution times of the tasks in the partial schedule $x$.
Consider schedules in $S_x$. We are interested in finding the minimum total active time across all machines during a certain interval $\theta$, while bounding the makespan of the schedules $\sigma\in S_x$ to $t_{cp}$. We define this quantity as $R(\theta)$ and refer to it as the minimum density function. We will calculate $R(\theta)$ using the previous definitions of $\text{mnEnd}$ and $\text{mxStart}$; we show the detailed derivation at the end of this section. Given this quantity, we can determine the minimum number of machines needed to terminate in time $t_{cp}$ with the following equation:
\begin{equation}
m_L(t_{cp}) = \max_{\theta \subseteq [0,t_{cp})}\left\lceil\frac{R(\theta)}{|\theta|}\right\rceil
\label{eq:1}
\end{equation}
If the number of machines $m$ that we have available satisfies $m \geq m_L(t_{cp})$, the length of the critical path is the best bound that we can give using this approach. If $m < m_L(t_{cp})$, we can find a better lower bound using the approach described by Fernandez \cite{fernandez}. The Fernandez bound on the makespan is $t_{cp} + q$, where $q$ is defined as:
\begin{equation}
q = \max_{\theta \subseteq [0,t_{cp})}\left\lceil -|\theta| + \frac{R(\theta)}{m}\right\rceil
\end{equation}
Intuitively, if $m < m_L(t_{cp})$ we do not have enough machines to complete in $t_{cp}$. During the interval of time that requires the most machines, which is the interval with the largest minimum activity, the work takes longer than $|\theta|$ because we have fewer machines than required. We therefore add this extra work $q$, averaged out across all machines, to $t_{cp}$.
\subsection{Fujita Bound}
The bound proposed by Fujita relies on equation \ref{eq:1}. The general idea is that we vary the time bound used to calculate $\text{mxStart}$ and $\text{mnEnd}$, and find the largest time $T$ such that $m < m_L(T)$. This is certainly a lower bound on the makespan, since it is the largest time bound for which a schedule is guaranteed not to be feasible because we do not have enough machines. The Fujita bound relies on calculating $m_L(T)$ multiple times and is therefore more computationally intensive.
There are two steps in finding this bound. The first step consists in finding an interval within which the bound lies; then we use binary search to determine the largest time $T$ such that $m < m_L(T)$. Here again, Fujita made an error which makes the logic of the algorithm wrong (the signs of the inequalities are in the wrong direction).
To find the interval, we evaluate $m_L(t_{cp} + \Delta)$ for $\Delta = 1, 2, 4, 8, \dots$, until we get $m_L(t_{cp} + \Delta) \leq m$. This gives us the interval $[t_{cp} + 2^{n-1}, t_{cp}+ 2^n)$ within which the bound lies. We then use binary search in this interval to find the largest time $T$ such that $m < m_L(T)$. This requires a total of $O(\log\Delta_{\text{final}})$ evaluations of $m_L$.
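The two-step search can be sketched as follows (a Python illustration under our reading of the corrected inequalities; `m_L` is any callable implementing Eq.~(\ref{eq:1}), assumed non-increasing in the time bound, and times are taken to be integers; the stub used in the test is a toy function, not a real bound):

```python
def fujita_bound(m, t_cp, m_L):
    # Largest integer T with m < m_L(T), assuming m_L is non-increasing
    # and m < m_L(t_cp) (otherwise t_cp itself is the bound).
    delta = 1
    while m < m_L(t_cp + delta):               # doubling phase
        delta *= 2
    lo, hi = t_cp + delta // 2, t_cp + delta   # m < m_L(lo), m_L(hi) <= m
    while hi - lo > 1:                         # binary search phase
        mid = (lo + hi) // 2
        if m < m_L(mid):
            lo = mid
        else:
            hi = mid
    return lo
```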
\subsection{Minimum density function}
Now we just have to show how to determine the minimum density function $R(\theta)$ given a partial schedule $x$ and a time bound $T$. The minimum density function is the minimum active time across all machines during a certain interval $\theta$, while bounding the makespan of the schedules $\sigma\in S_x$ to $T$.
Let $A$ be the list of all $\text{mnEnd}(u)$ and let $B$ be the list of all $\text{mxStart}(u)$ $\forall u \in V$. We create a sorted list $C$ by merging in linear time the two sorted lists $A$ and $B$. The two lists $A$ and $B$ are constructed recursively, and are sorted by construction.
We now notice that the density function will change only at the time instances corresponding to elements $t_i$ of $C$. This is because the set of tasks that could intersect the interval $\theta$ change only at time instances $t_i\in C$. Furthermore, as shown by Fernandez and Fujita, both $R(\theta)/|\theta|$ and $R(\theta)/m - |\theta|$ decrease monotonically as we increase $\theta$. We will therefore only consider the elements $t_i$ of $C$ as possible limits for the interval $\theta$.
The minimum density function is thus the sum, over jobs that necessarily intersect the interval $\theta_{ij} = [t_i, t_j)$, of the minimum possible overlap between each job's execution interval and $\theta_{ij}$. We define $A^*$ as the set of tasks $u \in V$ such that $t_i < \text{mnEnd}(u)$ and $B^*$ as the set of tasks such that $t_j > \text{mxStart}(u)$. The intersection $A^* \cap B^*$ is the set of tasks that necessarily intersect the interval $\theta_{ij}$. Using the set $A^*\cap B^*$ we can determine the minimum density function:
\begin{equation}
R(\theta_{ij}) = \sum_{u \in A^* \cap B^*} \min \{\text{mnEnd}(u) - t_i, c(u), t_j - \text{mxStart}(u), t_j - t_i\}
\end{equation}
where $c(u)$ is the weight of task $u$. We see that for each intersecting job, we take the minimum possible intersection time to be factored into the minimum density function.
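Given precomputed $\text{mnEnd}$ and $\text{mxStart}$ values, $R(\theta_{ij})$ and $m_L$ from Eq.~(\ref{eq:1}) can be evaluated directly; a simple Python sketch (our illustration, iterating over all candidate interval endpoints):

```python
import math

def min_density(mnEnd, mxStart, cost, ti, tj):
    # R(theta_ij): minimum total busy time that tasks in A* ∩ B* must
    # spend inside the interval [ti, tj).
    r = 0.0
    for u in cost:
        if ti < mnEnd[u] and tj > mxStart[u]:   # u in A* ∩ B*
            r += min(mnEnd[u] - ti, cost[u], tj - mxStart[u], tj - ti)
    return r

def m_lower(mnEnd, mxStart, cost):
    # m_L: maximum over candidate intervals of ceil(R(theta)/|theta|);
    # only endpoints drawn from the mnEnd/mxStart values need be tried.
    times = sorted(set(mnEnd.values()) | set(mxStart.values()))
    best = 1
    for i, ti in enumerate(times):
        for tj in times[i + 1:]:
            best = max(best,
                       math.ceil(min_density(mnEnd, mxStart, cost, ti, tj)
                                 / (tj - ti)))
    return best
```

For instance, two unit-cost tasks both forced into $[0,1)$ yield $m_L = 2$.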
This computation takes $O(n)$ time per interval in our implementation, which makes the computation of the Fernandez bound $O(n^3)$. In the Fujita bound, we have to repeat this $O(n^3)$ computation to find the correct interval and to binary search the optimal time bound. Our implementation is publicly available at \cite{github}.
\section{Experiments}
To evaluate our implementation, we run it on DAGs generated with the RanGen project generator \cite{Demeulemeester}. Although RanGen produces problem instances for project scheduling problems that contain multiple resource types, we simply set the number of resources to zero to generate DAGs appropriate for our problem. To control the complexity of the generated DAGs we set the order strength parameter in RanGen to 0.1; the order strength is the number of precedence constraints in the generated DAG divided by the largest possible number of precedence constraints. We found that an order strength of 0.1 produced reasonable-looking DAGs that had plenty of edges but were still solvable by our implementation on a moderate number of machines within the time limit. Although it is unclear that the quality of our implementation on randomly generated DAGs exactly corresponds to its quality on real problems, we believe that being able to precisely control the size and complexity of our test set lets us more thoroughly evaluate and understand the performance of the algorithm.
\begin{figure}[htpb]
\includegraphics[width=8cm]{smallcompleted4}
\centering
\caption{The percent of DAGs of different sizes that were scheduled on four machines in less than one minute using the Fernandez bound and Fujita's binary search bound}
\label{fig:smallcompleted4}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{smallcompleted8}
\centering
\caption{The percent of DAGs of different sizes that were scheduled on eight machines in less than one minute using the Fernandez bound and Fujita's binary search bound}
\label{fig:smallcompleted8}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{smallrunFB}
\centering
\caption{The run times in seconds of successfully scheduled DAGs of different sizes using the Fernandez bound. Top left: m=4. Top right: m=8. Bottom: m=16.}
\label{fig:smallrunFB}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{smallrunFujita}
\centering
\caption{The run times in seconds of successfully scheduled DAGs of different sizes using Fujita's bound. Top left: m=4. Top right: m=8. Bottom: m=16.}
\label{fig:smallrunFujita}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{largecompleted24}
\centering
\caption{The percent of DAGs of different sizes that were scheduled on 24 machines in less than one minute using the Fernandez bound and Fujita's binary search bound}
\label{fig:largecompleted24}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{largecompleted28}
\centering
\caption{The percent of DAGs of different sizes that were scheduled on 28 machines in less than one minute using the Fernandez bound and Fujita's binary search bound}
\label{fig:largecompleted28}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{largerunFB}
\centering
\caption{The run times in seconds of successfully scheduled DAGs of different sizes using the Fernandez bound. Top left: m=24. Top right: m=32. Bottom left: m=36. Bottom right: m=40.}
\label{fig:largerunFB}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=8cm]{largerunFujita}
\centering
\caption{The run times in seconds of successfully scheduled DAGs of different sizes using Fujita's bound. Top left: m=24. Top right: m=32. Bottom left: m=36. Bottom right: m=40.}
\label{fig:largerunFujita}
\end{figure}
Our goals in the experiments are to explore how the runtime of the implementation changes with the inputs to the problem and how Fujita's binary search method for lower bounding the makespan of partial solutions compares to using the Fernandez bound. The first experiment explores the runtime of the algorithm when finding schedules for 4, 8, and 16 machines on DAGs with between 12 and 25 vertices. Figure \ref{fig:smallcompleted4} shows what percent out of thirty DAGs of each size were able to be scheduled on four machines in less than the sixty allotted seconds. Unsurprisingly, the larger the DAG, the harder it is to schedule. However, we were surprised to see that Fujita's binary search bounding method performed worse than just using the Fernandez bound, since Fujita had claimed his method to be an improvement\cite{fujita}.
We were also surprised to find that increasing the number of machines made the scheduling problem easier, though upon reflection this makes sense because having more machines available gives the scheduler more flexibility to make different choices without making the schedule much worse, leading to a better lower bound early on in the execution. Figure \ref{fig:smallcompleted8} shows the percentage of DAGs successfully scheduled in under a minute for eight machines. These are the same DAGs as in Figure \ref{fig:smallcompleted4}, but with eight machines only the largest of the DAGs could not be scheduled. Scheduling for sixteen machines completes in under a minute for all thirty DAGs. For those DAGs that could be scheduled in under a minute, the amount of time each size DAG took to schedule is shown in Figures \ref{fig:smallrunFB} and \ref{fig:smallrunFujita} for the Fernandez and Fujita bounds, respectively. Note that for each DAG, either the DAG is represented in this figure or the DAG took more than sixty seconds to schedule.
The second experiment investigated the execution time of the algorithm for much larger DAGs. Sixteen DAGs each of sizes $100, 105, \dots, 150$ were scheduled on $24, 28, 32, 36, \text{ and } 40$ machines. With any fewer machines, even the 100 vertex DAGs timed out too often to be useful. Overall, the trends seen for large DAGs and large numbers of machines reflect the trends seen with the smaller numbers. Using the Fernandez bound was still more efficient than using Fujita's binary search bounding method, though the gap did seem to close a little. It is possible that with even larger graphs Fujita's method would become beneficial. As with the smaller DAGs, using more machines continued to make the problem easier. Figures \ref{fig:largecompleted24} and \ref{fig:largecompleted28} show the percent of the large DAGs that were successfully scheduled in under a minute on 24 and 28 machines, respectively. For 32 or more machines, all the DAGs could be scheduled in under a minute. Of those DAGs that could be scheduled in under a minute, the time it took to schedule each of the large DAG sizes is given in Figures \ref{fig:largerunFB} and \ref{fig:largerunFujita} for the implementation using the Fernandez bound and the implementation with Fujita's bound, respectively.
During development of our implementation we saw that Fujita's binary search bounding method does indeed produce lower bounds at least as good as the Fernandez bound. The only reason the Fernandez bound performs better in our experiments is that Fujita's bound is more computationally expensive to calculate. Although the binary search procedure requires only a number of steps logarithmic in the difference between the lower bound of the current partial schedule and the critical path length of the DAG, each one of those steps requires recomputing the minimum end times, maximum start times, and the minimum work density. Fujita presented a method for calculating the minimum work density in linear time, but our current implementation calculates it in quadratic time. It is therefore possible that reimplementing this calculation to run in linear time would make our implementation using Fujita's bound faster than our implementation using the Fernandez bound.
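The cost trade-off above can be made concrete with a schematic sketch of the binary search itself. The predicate `feasible` below stands in for the per-probe recomputation of minimum end times, maximum start times, and minimum work density; the function and parameter names are illustrative, not taken from our implementation.

```python
def binary_search_lower_bound(lb, ub, feasible):
    """Return the smallest T in [lb, ub] with feasible(T) True,
    assuming feasible is monotone (all False, then all True).
    Only O(log(ub - lb)) probes are made, but in Fujita's scheme
    each probe must recompute the minimum end times, maximum start
    times, and minimum work density, which dominates the cost."""
    while lb < ub:
        mid = (lb + ub) // 2
        if feasible(mid):
            ub = mid          # mid is achievable, search below it
        else:
            lb = mid + 1      # mid is infeasible, bound must be higher
    return lb
```

In the scheduling setting, the interval endpoints would correspond to the critical path length and the current partial schedule's bound, so the number of probes is logarithmic in their difference.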
\section{Future work}
One of the most interesting things about the experimental results is that DAGs seem to be either easy or hard to schedule, either taking at most a couple of seconds to schedule or taking over sixty seconds. Although there were a few DAGs whose scheduling time fell in between while staying under sixty seconds, they were rare. This phenomenon suggests that there might be some way to analyze DAGs and classify them as hard or easy for certain heuristics. If so, the branch and bound algorithm could statically or dynamically choose to use different heuristics for determining the next vertex from the ready set to reduce the number of hard cases.
There are also a number of more immediate ideas we would like to investigate. For example, we would like to quantify how many fewer partial schedules are evaluated when the lower bounding procedure is improved. If we knew how much an improvement in the lower bounding made a difference, we might be able to predict for which DAGs using a more expensive but more exact lower bounding procedure such as Fujita's binary search method would be beneficial.
Finally, we would like to further investigate and compare heuristic algorithms for DAG scheduling. One way we can do this is by halting the branch and bound algorithm after a fixed number of steps and returning the best schedule found so far. Another way is to multiply the lower bound at each step by $(1 + \varepsilon)$ to more aggressively prune the search tree. This would produce an approximation algorithm reaching $(1+\varepsilon)$OPT. It would be interesting to compare the computation time of the algorithm using this approximation method to that of other approximation algorithms. We could also investigate improving the branch and bound algorithm performance by implementing multiple list scheduling priority rules, evaluating them, and using them to select new vertices from the ready set in the branch and bound algorithm.
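The $(1+\varepsilon)$ pruning idea can be sketched as the predicate used inside a generic branch-and-bound loop. The function below is illustrative and not part of our implementation:

```python
def prune(lower_bound, incumbent, eps=0.0):
    """Return True if a partial schedule can be discarded.
    With eps = 0 this is exact pruning; with eps > 0 subtrees that
    could improve the incumbent by less than a (1 + eps) factor are
    also cut, so the final schedule is within (1 + eps) * OPT."""
    return (1.0 + eps) * lower_bound >= incumbent
```

For example, with an incumbent makespan of 105 and a node whose lower bound is 100, exact pruning keeps the node, while $\varepsilon = 0.1$ discards it, trading optimality for a smaller search tree.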
\section{Conclusions}
In this paper we analyze the Multiprocessor Scheduling Problem, specifically the problem $Pn|prec|c_{max}$ in Graham notation. We describe several approaches used in the literature to solve this $NP$-hard problem. We first explore an approximation algorithm, and then an algorithm that finds the optimal result. In particular, we derive the $(2-1/m)$OPT bound on the list scheduling algorithm proposed by Graham. We then analyze the Branch-and-Bound method proposed by Fernandez and Fujita, correcting two mistakes in Fujita's exposition of the algorithm.
We have implemented and numerically tested the Branch-and-Bound algorithm, with both the Fernandez bound and the Fujita bound. Experiments were performed on data generated with RanGen, a tool specifically designed for benchmark tests of scheduling algorithms. With both bounds the algorithm obtains OPT in a few seconds on DAGs of size up to 150 nodes. Our tests demonstrated that Fujita's bound does indeed produce better lower bounds than the Fernandez bound in general. We show, however, that this improvement does not justify the increase in computation time.
\bibliographystyle{abbrv}
\section{Introduction}
\IEEEPARstart{W}{ith} the development of computer vision techniques,
many learning, detection, matching, and tracking algorithms that are based on small features of images have appeared recently. However, many of these algorithms are quite sensitive to the weather conditions under which the image is taken. In this work, we consider recovering an image with good visual quality from a single color image that is corrupted by rain during capture.
Weather conditions can be classified into steady and dynamic according to the constituent particles \cite{Garg_2004_CVPR}. The former contains small particles (e.g., fog) and the latter includes large particles (e.g., rain and snow). In the steady condition, small particles cannot be captured by cameras, while in the dynamic condition, droplets of rain and snow can be clearly filmed. He \emph{et al.} proposed a de-haze approach that is based on dark channel priors and has achieved excellent results on various challenging examples \cite{He_2011_PAMI}. Another fast image dehazing method, based on a linear transformation, was proposed by Wang \emph{et al.} \cite{Wang_2017_TMM}. However, in the case of dynamic weather, the existing rain removal methods still need to be improved. The difficulty lies in two aspects: rain droplets appear in an image randomly, and large droplets interfere with the original image contents.
The earliest work on rain dates back to the study of statistical characteristics of rain in the atmospheric science in 1948 \cite{Marshall_1948_JM}. According to these characteristics,
rain in a picture appears quite random and takes different shapes, which makes it difficult to detect and remove rain streaks from a single image. Therefore, most works pay attention to rain removal in videos
\cite{Garg_2004_CVPR,Zhang_2006_ICME,Brewer_2008_CS,Bossu_2011_CV,Barnum_2007_PACV,Barnum_2010_CV}, where the rain detection is relatively easier. For example, Barnum \emph{et al.} detected and removed rain streaks for videos in the frequency domain \cite{Barnum_2007_PACV,Barnum_2010_CV}. To the best of our knowledge, dealing with rain removing in a single image started in 2009 when Roser \emph{et al.} detected rain streaks in single image \cite{Roser_2009_CV}. Later on, several other rain removal works based on a single image were proposed, e.g., \cite{Fu_2011_ASSP,Kang_2012_TIP,Chen_2014_CSVT,Xu_2012_CIS,Kim_2013_ICIP,Ding_2015_MTA,Huang_2014_TMM}.
This paper aims at removing rain from a single image. Although the detection and removal of rain in a single image is more challenging as compared with videos, there are still some observations that can be utilized for the rain identification. On one hand, rain streaks are more reflective than other parts of images, leading to higher pixel intensities compared with non-rain pixels. On the other hand, rain streaks usually do not occlude objects completely due to their semi-transparency property. The former one can be utilized for the rain detection while the latter one facilitates the reconstruction in the gradient domain after computing image gradients on all non-rain locations.
There are two main challenges: accurate rain streak identification and high-quality rain-removed image recovery. The detection of rain streaks will not be accurate if only pixel intensities are involved, because other objects with similar or even higher pixel intensities would be mis-classified as rain pixels. After the rain pixels are identified, the final result needs to be reconstructed without introducing noticeable artifacts (e.g., blurring of image contents). Methods such as the weighted mean of neighbouring non-rain pixels and image inpainting~\cite{Bertalmio_2000_CGIT} are all possible candidates that have been considered previously.
In this paper, in order to deal with the first challenge, we first over-detect rain pixels in a single rain image by the method in \cite{Wang_2016_ICIP}. To improve the accuracy, we further employ a morphological processing technique \cite{Gonzalez_2002_PUSR} to refine all detected rain pixels. For the second challenge, we decompose the input image into a rain layer and a non-rain layer in the gradient domain after the rain pixels are identified. We reconstruct the final result by using the non-rain layer under the image quasi-sparsity priors.
The contributions of our work are as follows. (1) We propose a novel detection method for rain streaks. (2) We simplify the sparsity prior of \cite{Levin_2007_PAMI} to quasi-sparsity and combine it with the detection of rain to complete the rain-removal task. After simplifying the sparsity to quasi-sparsity, the loss function reduces to an $L_{1}$-norm minimization problem. (3) An additional constraint is added to successfully solve the color shift problem that often appears in \cite{Levin_2007_PAMI}. An example of our rain-removed results is shown in Fig. \ref{fig:rain_derain}.
The rest of the paper is organized as follows. We briefly review some related works in Section \ref{sec:RelatedWorks}. We propose the rain streaks detection algorithm in Section \ref{sec:RainStreaksDetection}. The reconstruction of rain-removed images is presented in Section \ref{sec:ImageReconstruction}. The results and comparisons are presented and discussed in Section \ref{sec:ExperimentalResults}. Finally, some conclusions are drawn in Section \ref{sec:Conclusion}.
\section{Related Works}
\label{sec:RelatedWorks}
Rain removal can be performed in the spatial domain or the frequency domain, and some are focused on the single-image scenario. A brief review of the existing algorithms is presented in the following.
\textbf{Rain removal from videos in the spatial domain:} Garg and Nayar analyzed the visual effect of rain streaks comprehensively \cite{Garg_2004_CVPR} by developing a correlation model to describe rain's dynamics and a motion blur model to explain the photometry of rain. Through these two models, rain streaks can be detected efficiently and then removed in videos. To make the study more complete, Garg and Nayar further built a rain appearance model based on a rain oscillation model that was developed in the atmospheric science in \cite{Garg_2006_TG}. They also developed an image-based rain-rendering algorithm by creating a database to describe different kinds of rain appearances under various lighting and viewing directions. In \cite{Garg_2007_CV}, Garg and Nayar analyzed various factors that influence the visual effect of rain. Based on these analyses, an efficient algorithm was developed to control rain. Besides, by modeling the distortion of raindrop, they accomplished photorealistic rain-rendering.
Another rain removal algorithm that is based on both temporal and chromatic characteristics of rain streaks in video was proposed by Zhang \emph{et al.} \cite{Zhang_2006_ICME}. This work shows that a certain area is not always infected by rain streaks. On the other hand, when indeed affected by rain, the intensity changes of chromatic components (namely, R, G, B) of a pixel approximately equal to each other. These two properties have been utilized to detect and then remove rain streaks in videos. However, constrained by temporal properties, this method can only deal with the videos that are obtained by using a stationary camera.
In \cite{Brewer_2008_CS}, Brewer and Liu suggested that (1) a region with instantaneous intensity spike be probably affected by rain streaks and (2) streak-like objects in a region with a nearly consistent range of aspect ratios be considered as rain streaks. Once detected, rain streaks can be removed by calculating the mean value of two neighbouring frames. A rain streaks detection method that uses a histogram of orientation of streaks (HOS) was introduced by Bossu \emph{et al.} in \cite{Bossu_2011_CV}. This method proposes to decompose an image sequence into foreground and background, while potential rain streaks are detected in foreground. Then, HOS is calculated, which follows a model of
Gaussian-uniform mixture. Finally, the Gaussian distribution whose amplitude stands for rain presence and the uniform distribution standing for noise are separated by an algorithm of expectation maximization.
\textbf{Rain removal from videos in the frequency domain:}
In \cite{Barnum_2007_PACV}, Barnum \emph{et al.} combined
a physical model of rain streaks (for determining the general shape and brightness of rain) and some statistical properties of rain streaks to show the influence of rain on image sequences in the frequency domain.
Once detected, the spectrum of rain streaks can be suppressed to obtain rain-removed image sequences. Later on, they combined a shape model
with statistical properties of rain streaks to detect and remove rain
streaks, also in the frequency domain, and demonstrated a better accuracy \cite{Barnum_2010_CV}.
\begin{figure}[t]
\begin{minipage}{0.48\linewidth}
\centering{\includegraphics[width=1\linewidth]{images/test25}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.48\linewidth}
\centering{\includegraphics[width=1\linewidth]{images/test25_I_nd}}
\centerline{(b)}
\end{minipage}
\caption{(a) Original rain image. (b) Rain-removed image by our method.}
\label{fig:rain_derain}
\end{figure}
\begin{figure*}
\centering
\begin{minipage}{1\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/pipeline}}
\centerline{}
\end{minipage}
\caption{Pipeline of our method. We first identify rain locations from the input rain image (a) to generate an initial rain map (b). Then, a morphological processing is used to refine these initial locations, producing a new non-rain map (c) and rain locations (d) (white pixels). The final result (e) and optionally a rain layer (f) can be reconstructed based on (c) and (d), respectively.}
\label{fig:pipeline}
\end{figure*}
\textbf{Single image rain removal:} Roser \emph{et al.} detected rain streaks in a single image monocularly, based on a photometric raindrop model \cite{Roser_2009_CV}. Meanwhile, Halimeh \emph{et al.} detected raindrops on car windshield by utilizing a model that describes the shape of raindrop and a relationship between raindrops and the environment \cite{Halimeh_2009_VS}.
For the first time, Fu \emph{et al.} accomplished the rain-removal task for a single image by utilizing morphological component analysis (MCA) \cite{Fu_2011_ASSP}. Some improved or extended versions have been proposed by Kang \emph{et al.} \cite{Kang_2012_TIP} ,Chen \emph{et al.} \cite{Chen_2014_CSVT}, Wang \emph{et al.} \cite{Wang_2016_ICIP} and Wang \emph{et al.}
\cite{Wang_2017_TIP}. In particular, Kang \emph{et al.} used the histogram of oriented gradients (HOG) \cite{Dalal_2005_CVPR} to separate rain and non-rain dictionary atoms, while Chen \emph{et al.} extended rain removal task to a single color image.
The denoising work \cite{Huang_2014_TMM} in TMM, which treats rain as a kind of noise, removes rain streaks by a self-learning based image decomposition method.
In \cite{Wang_2016_ICIP,Wang_2017_TIP}, Wang \emph{et al.} developed a rough detection method of rain to remove bright rain streaks.
More recently, Luo \emph{et al.} proposed that a rain image be decomposed into the rain layer and non-rain layer by a highly discriminative code on a learned dictionary that is based on a screen blend model \cite{Luo_2015_ICCV}. On the other hand, a novel rain removal method based on the guided filter was proposed by Xu \emph{et al.} \cite{Xu_2012_CIS}, in which a rain-free guidance image is constructed and a guided filter \cite{He_2013_PAMI} is used to remove rain in a single image. In \cite{Kim_2013_ICIP}, Kim \emph{et al.} assumed that rain streaks have an elliptical shape. Then, a kernel regression method \cite{Takeda_2007_TIP} is used to extract elliptical components in the image to detect rain streaks. Once detected, rain streaks are removed by non-local mean filter \cite{Buades_2005_CVPR}. In the meantime, Chen \emph{et al.} proposed a low-rank model of rain streaks to capture the spatio-temporally correlated rain streaks \cite{Chen_2013_ICCV}.
Lately, a rain removal method based on the $L_0$ gradient minimization was proposed by Ding \emph{et al.} \cite{Ding_2015_MTA}. By this method, majority of rain streaks can be restrained, but a lot of image details also vanish with the rain streaks removal. Another novel rain removal method was developed by Li \emph{et al.} \cite{Li_2016_CVPR}, in which some patch-based priors for both the background layer and rain layer are used to accomplish the rain removal task. Because these priors are based on Gaussian mixture models and can accommodate the rain streaks with multiple orientations and scales, this method obtains the state-of-the-art effectiveness.
\textbf{Deep learning based methods:} In recent years, deep learning is utilized in many computer vision tasks, including rain
removal. In \cite{Fu_2017_TIP}, Fu \emph{et al.} designed a DerainNet to learn the mapping relationship between rain and clean images. They also proposed a deep detail network which directly reduces the mapping range to simplify the learning process and then removes rain streaks in single color images \cite{Fu_2017_CVPR}. Yang \emph{et al.} built a new model for rain images and designed a multi-task deep learning architecture
to remove rain streaks in single images \cite{Yang_2017_CVPR}.
A DID-MDN network first estimates the density of rain and then removes rain streaks \cite{Zhang_2018_CVPR}.
\section{Rain Streaks Detection}
\label{sec:RainStreaksDetection}
Fig. \ref{fig:pipeline} shows the pipeline of our method.
Given an input rain image (Fig. \ref{fig:pipeline}(a)), we first detect the rain
locations according to pixel intensities. Since the initial
locations (Fig. \ref{fig:pipeline}(b)) are usually inaccurate,
they will be refined using the proposed morphology approach, generating a
refined rain location map (Fig. \ref{fig:pipeline}(d)) as well as a non-rain location map (Fig. \ref{fig:pipeline}(c)).
The final result (Fig. \ref{fig:pipeline}(e)) is reconstructed from the image gradients
on all non-rain locations.
Optionally, we can also reconstruct an image that contains rain only (Fig. \ref{fig:pipeline}(f)).
In this section, we utilize the rain image in Fig. \ref{fig:pipeline}(a)
as an example to present the details of rain streaks detection.
First, initial rain locations are obtained by the method in \cite{Wang_2016_ICIP}.
Then, mis-detections are corrected by morphological processing and principal component analysis (PCA).
\subsection{Initial detection of rain streaks}
Rain pixels often possess higher values than
their neighbouring non-rain pixels. Therefore, Wang \emph{et al.} \cite{Wang_2016_ICIP}
over-detected rain locations based on this characteristic.
For each pixel $I(i,j)$ in a given rain image $I$,
Wang \emph{et al.} calculate 5 mean values $\bar{I}^{(k)}$ $(k=1,2,3,4,5)$
in the windows $w^{(k)}$ with pixel $I(i,j)$ located in the center,
top-left, top-right, bottom-left, and bottom-right of the window, respectively.
If the following inequalities
\begin{equation}\label{eq:detect_condition}
I(i, j)>\bar{I}^{(k)}=\frac{\sum_{\{m,n\}\in w^{(k)}}I(m,n)}{|w^{(k)}|}, k\!=\!\{1,2,3,4,5\},
\end{equation}
where $|w^{(k)}|$ stands for the window size,
are satisfied for all color channels, $I(i,j)$ is recognized as a rain pixel
and the corresponding term $S_R(i,j)$ in the so-called binary location map $S_R$ is set to be 1;
otherwise $S_R(i,j)$ is assigned as 0.
The detection result $S_{R}$ is a binary image, as shown in Fig. \ref{fig:initial_location}.
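A minimal sketch of this test for a single channel is given below; the function name and window size are our own assumptions, not prescribed by \cite{Wang_2016_ICIP}.

```python
import numpy as np

def detect_rain_candidates(I, w=5):
    """Binary map S_R from the inequality above for one channel: a
    pixel is a rain candidate only if it exceeds the mean of all five
    w-by-w windows that place it at the center, top-left, top-right,
    bottom-left and bottom-right.  For a color image the test must
    hold in every channel."""
    H, W = I.shape
    r = w // 2
    S = np.zeros((H, W), dtype=bool)
    # top-left corner offsets of the five window placements
    offsets = [(-r, -r), (0, 0), (0, 1 - w), (1 - w, 0), (1 - w, 1 - w)]
    for i in range(H):
        for j in range(W):
            ok = True
            for di, dj in offsets:
                i0, j0 = i + di, j + dj
                win = I[max(i0, 0):min(i0 + w, H),
                        max(j0, 0):min(j0 + w, W)]
                if I[i, j] <= win.mean():   # fails inequality (1)
                    ok = False
                    break
            S[i, j] = ok
    return S
```

An isolated bright pixel on a dark background passes all five window tests, while flat regions fail, which is why the procedure over-detects bright structures of any kind and a refinement stage is still needed.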
\begin{figure}[t]
\centering
\begin{minipage}{0.48\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_loca}}
\centerline{(a)}
\end{minipage}
\caption{Initial rain location map.}
\label{fig:initial_location}
\end{figure}
\subsection{An analysis of mis-detections}
It can be seen from Fig. \ref{fig:initial_location}
that not only rain streaks but also some non-rain components
appear in the rain detection result. How to recognize those
non-rain components and eliminate their influence is
thus very critical; otherwise, a lot of image details and useful
information would get lost during the removal of rain streaks.
In order to separate rain from non-rain objects,
some characteristics of rain can be useful.
We describe them as follows:
\begin{itemize}
\item rain streaks are usually not very wide,
\item the directions of all rain streaks in a scene are nearly consistent,
\item the color of a rain streak is usually pale white, and
\item the length of a rain streak is usually larger than its width.
\end{itemize}
These characteristics are very robust for describing rain, and
some of them have been utilized in existing rain-removal
works, such as \cite{Chen_2014_CSVT} and \cite{Kim_2013_ICIP}.
Later on, we will see that when these characteristics are
combined with our proposed morphological processing,
mis-detections are largely reduced.
\subsection{Refining of initial locations of rain streaks}
\noindent \textbf{First}, all connected components shown in Fig. \ref{fig:initial_location}
are extracted by the morphology method and the details can be referred to \cite{Gonzalez_2002_PUSR}.
\begin{figure}[t]
\begin{minipage}{0.48\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L_single_streak}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.48\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L_width}}
\centerline{(b)}
\end{minipage}
\caption{(a) An example of PCA description. (b) Refined result by connected component width.}
\label{fig:single_streak}
\end{figure}
\noindent \textbf{Second}, PCA is used to describe the shape of every connected component.
In order to describe this step more visually, we select one connected component
from Fig. \ref{fig:initial_location} as an example
to show the refining process; the selected component is shown in Fig. \ref{fig:single_streak}(a).
Because some colors cannot be seen clearly on a black background,
we have changed the selected streak to black and the background to white in Fig. \ref{fig:single_streak}(a).
\begin{figure*}[t]
\begin{minipage}{0.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L_angle}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L_color}}
\centerline{(b)}
\end{minipage}
\hfill
\begin{minipage}{.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L_aspect_ratio}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}{.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L}}
\centerline{(d)}
\end{minipage}
\caption{(a) Refined result by the connected component angle. (b) Refined result by the connected component color. (c) Refined result by the connected component aspect ratio. (d) Dilation of rain streaks.}
\label{fig:revision}
\end{figure*}
For $p^{th}( p=1, 2, ..., P)$ connected component, we calculate the covariance matrix of location vectors of all pixels in it.
Suppose that there are $N$ pixels in $p^{th}$ connected component.
Hence, there are $N$ sample vectors of pixel locations so that the mean location
vector $\bm{m}_{\bm{z}}$ and covariance matrix $\bm{C}_{\bm{z}}$ can be calculated as
\begin{equation} \label{eq:mean_approximation}
\bm{m}_{\bm{z}} = \frac {1}{N} \sum^{N}_{n=1} \bm{z}_{n}
\end{equation}
\begin{equation} \label{eq:covariance_matrix2}
\bm{C}_{\bm{z}} = \frac {1}{N} \sum^{N}_{n=1} \bm{z}_{n}\bm{z}_{n}^{T} - \bm{m}_{\bm{z}}\bm{m}_{\bm{z}}^{T}
\end{equation}
where $\bm{z}_{n}= [x_{n}, y_{n}]^{T}$, and $x_n$ and $y_n$ are respectively
the corresponding coordinates of the $n^{th}$ pixel ($n=1, 2, \cdots, N$).
After the covariance matrix $\bm{C}_{\bm{z}}$ of $p^{th}$ connected component is obtained,
we perform the eigenvalue decomposition of $\bm{C}_{\bm{z}}$
and obtain the eigenvalues $\lambda_{1}$, $\lambda_{2}$ and
their corresponding eigenvector $\bm{e}_{1}$, $\bm{e}_{2}$ ($\lambda_{1}$ is the larger eigenvalue).
The PCA description of the shape of a connected
component is shown in Fig. \ref{fig:single_streak}(a).
The red arrows stand for two eigenvectors, while two yellow arrows denote
the coordinate axes. Here, $\theta$ is the angle between $x$-axis and eigenvector
$\bm{e}_{1}$ and it can be calculated as $\theta=\arctan(\frac {\bm{e}_{1}(2)}{\bm{e}_{1}(1)})$.
Notice that in order to avoid the red direction arrow from occluding the connected component,
the origin of the coordinate system is not placed on the connected component.
From Fig. \ref{fig:single_streak}(a), we learn that $\bm{e}_{1}$
(corresponding to the larger eigenvalue $\lambda_{1}$) points to the direction
where the location variance has the maximum value; whereas $\bm{e}_{2}$
(corresponding to the smaller eigenvalue $\lambda_{2}$) is perpendicular to the maximum variance direction.
Accordingly, we define the length of a connected component as
\begin{equation} \label{eq:length}
L=c\lambda_{1}
\end{equation}
and its width as
\begin{equation} \label{eq:width}
W=c\lambda_{2}
\end{equation}
where $c$ is a proportional parameter. We assume that $c$ is a constant in an image.
The specific value of $c$ is not important, because it does not affect the ratio of the
length and width of a connected component.
The more important quantity is the direction angle of a connected component,
which is denoted as $\theta$ in Fig. \ref{fig:single_streak}(a). We define
\begin{equation} \label{eq:angle}
D=\theta
\end{equation}
and call $D$ the direction of the connected component.
In our experiment, the values $\lambda_{1}$, $\lambda_{2}$, $\bm{e}_{1}$,
$\bm{e}_{2}$ and $D$ are calculated for all connected components
$p=1, 2, ..., P$, where $P$ is the number of connected components.
As an example, these values of the given connected components
in Fig. \ref{fig:single_streak}(a) are
$\lambda_{1}=172.8949$, $\lambda_{2}=0.5852$,
$\bm{e}_{1}=(0.9309, 0.3653)^{T}$,
$\bm{e}_{2}=(-0.3653, 0.9309)^{T}$ and $D=21.4286^{\circ}$ respectively.
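These per-component quantities can be computed as sketched below; the function name is our own, and the coordinates are treated as $(x, y)$ pairs as in the equations above.

```python
import numpy as np

def component_shape(coords):
    """PCA description of one connected component. coords is an
    (N, 2) sequence of (x, y) pixel locations. Returns
    (lam1, lam2, theta) where lam1 >= lam2 are the eigenvalues of
    the location covariance matrix C_z = (1/N) sum z z^T - m m^T,
    and theta is the direction angle (degrees) of the leading
    eigenvector e_1 relative to the x-axis."""
    z = np.asarray(coords, dtype=float)
    m = z.mean(axis=0)
    C = (z.T @ z) / len(z) - np.outer(m, m)
    lam, E = np.linalg.eigh(C)           # eigenvalues in ascending order
    lam1, lam2 = lam[1], lam[0]
    e1 = E[:, 1]                         # eigenvector of lam1
    theta = np.degrees(np.arctan2(e1[1], e1[0]))
    # eigenvector sign is arbitrary; fold the angle into (-90, 90]
    if theta > 90:
        theta -= 180
    elif theta <= -90:
        theta += 180
    return lam1, lam2, theta
```

For a perfectly straight streak the pixels lie on a line, so $\lambda_2$ is essentially zero and the aspect ratio $\lambda_1/\lambda_2$ becomes very large, consistent with the length and width definitions in Eqs. (\ref{eq:length}) and (\ref{eq:width}).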
\noindent \textbf{Third}, after obtaining the quantified characteristics of all connected components,
we recognize non-rain connected components as follows.
\begin{itemize}
\item As we said above, rain streaks usually do not have large width
as compared to some non-rain objects.
Hence, $K$-means clustering is used here to classify the connected components by their width $W$.
The connected components with larger width are mis-detected non-rain components, and
we set their corresponding values in the location map $S_{R}$ to $0$.
The refined result in this way is shown in Fig. \ref{fig:single_streak}(b).
There are not many wide non-rain objects in this image, hence
the refinement by width is not very apparent.
We can see that some non-rain components at the bottom-right
corner disappear in Fig. \ref{fig:single_streak}(b).
This is because when the textures of an image are complex,
some non-rain streaks merge together and form
a larger connected component, so that its width becomes large.
\item An apparent characteristic of rain streaks is that they follow
nearly the same falling direction and the angle will not be too large generally.
If we use the direction angle $D$ of connected component defined in Equation
(\ref{eq:angle}) to describe this characteristic, $\vert D \vert$
of rain components must be less than a threshold $T1$ ($\vert D \vert$ is
the absolute value of $D$).
Hence, by the threshold $T1$, we can recognize the mis-detected non-rain connected components
in Fig. \ref{fig:single_streak} (b). Then the non-rain connected
components are set to be 0, and the refined result is shown in Fig. \ref{fig:revision}(a).
\item After refining by the width and direction constraints,
majority of non-rain components are recognized. However, some non-rain components that
are similar in shape to the rain streaks still remain.
Rain streaks usually possess neutral color.
According to this feature, Chen \emph{et al.} \cite{Chen_2014_CSVT} proposed to
identify rain dictionary atoms by the eigen color feature \cite{Tsai_2008_IET_CV}.
In our work, we utilize the color characteristics of
rain to revise the mis-detected non-rain connected components.
For $p^{th}$ connected component in Fig. \ref{fig:revision}(a),
we calculate the mean color vector of all pixels in it,
and denote as $[\bar{R}, \bar{G}, \bar{B}]$.
Then we transform this 3-D RGB color vector into a 2-D vector as follows:
\begin{small}
\begin{equation}\label{eq:color_transform}
\begin{split}
u & =\frac{2\Phi-\bar{G}-\bar{B}} {\Phi} \\
v & =max \left \{
\frac{\Phi-\bar{G}}{\Phi}, \frac{\Phi-\bar{B}}{\Phi}
\right \}
\end{split}
\end{equation}
\end{small}
where $\Phi=\frac{1}{3}(\bar{R}+\bar{G}+\bar{B})$.
It is clear from (\ref{eq:color_transform}) that,
after the transform, any connected component having a neutral color will
be clustered around $(0, 0)$ in the $u$-$v$ space.
Hence, we calculate the magnitude of the 2-D vector $(u, v)$
(i.e., the Euclidean distance to the origin of the $u$-$v$ space);
if the magnitude is larger than a preset threshold $T2$,
the $p^{th}$ connected component is recognized as a mis-detected non-rain connected component.
We repeat this process for all remaining connected components
in Fig. \ref{fig:revision}(a) and revise the mis-detected non-rain connected components.
The refined result is shown in Fig. \ref{fig:revision}(b).
\item According to \cite{Kim_2013_ICIP}, a rain streak is larger
in length than in width. Hence, we classify connected components
whose aspect ratios are less than $\mu$ as non-rain components.
By excluding the connected components with small aspect ratios,
the refined result is shown in Fig. \ref{fig:revision}(c).
\item Finally, to prevent slim rain edges
from remaining in our final rain-removed result,
we dilate the connected components in
Fig. \ref{fig:revision}(c) with a $3\times 3$ `disk' mask
to obtain the final rain streak detection result,
as shown in Fig. \ref{fig:revision}(d).
\end{itemize}
Our rain detection is a stepwise revision method.
By utilizing morphology and PCA, we quantify the
characteristics of rain and detect rain streaks relatively accurately.
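As a concrete illustration of the color-based step above, the following Python sketch (our implementation is in MATLAB; the function name, the threshold default, and the toy inputs here are our own) maps a component's mean RGB color into the $u$-$v$ space of Equation (\ref{eq:color_transform}) and thresholds its distance to the origin with $T2$:

```python
import numpy as np

def is_neutral_color(mean_rgb, T2=0.08):
    """Map a component's mean RGB vector to the 2-D (u, v) space of
    the color transform; neutral (gray-ish) colors cluster near the
    origin, so a small magnitude indicates a rain component."""
    R, G, B = mean_rgb
    phi = (R + G + B) / 3.0
    u = (2.0 * phi - G - B) / phi
    v = max((phi - G) / phi, (phi - B) / phi)
    return np.hypot(u, v) <= T2

# A purely gray component maps exactly to the origin:
print(is_neutral_color((120.0, 120.0, 120.0)))  # True
# A strongly colored (red) component lies far from the origin:
print(is_neutral_color((200.0, 40.0, 40.0)))    # False
```

A purely gray component has $\bar{R}=\bar{G}=\bar{B}=\Phi$, so both $u$ and $v$ vanish and the component is kept as rain; a strongly colored component is rejected.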
\section{Image Reconstruction}
\label{sec:ImageReconstruction}
In this section, we verify the sparsity
of natural rain images and utilize a single Laplacian distribution
to approximate the sparsity prior of natural rain images;
we name this approximate prior the \emph{quasi-sparsity prior}.
Then, based on quasi-sparsity and several constraints,
the rain-removed result is obtained by
separating a rain image into a rain layer and a background layer.
\subsection{Quasi-sparsity of rain images}
In \cite{Levin_2007_PAMI}, Levin and Weiss tried to separate
the background and reflection layers of an image using the sparsity prior of natural images.
We also utilize image sparsity in our rain removal task.
The sparsity of an image, as described in \cite{Levin_2007_PAMI}, can
be depicted as follows: when a derivative filter is applied to an image,
the logarithm of the histogram of the resulting gradient image peaks
at zero and falls off much faster than a Gaussian.
Levin \emph{et al.} demonstrated that sparse distributions
lead to better image decompositions \cite{Levin_2002_NIPS}.
Hence, the sparsity of a natural
image is crucial to its decomposition into several layers.
\begin{figure}[t]
\centering
\begin{minipage}{0.48\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_distribution_compare}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.48\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_sparsity_prior}}
\centerline{(b)}
\end{minipage}
\caption{(a) Log-probability of several distributions. (b) Sparsity verification on one rain image.}
\label{fig:sparsity}
\end{figure}
\begin{figure*}[t]
\centering
\begin{minipage}{0.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{0.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_I_nd}}
\centerline{(b)}
\end{minipage}
\hfill
\begin{minipage}{.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_removed_rain}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}{.24\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/rain18_L_nonrain}}
\centerline{(d)}
\end{minipage}
\caption{(a) Original rain image. (b) Rain-removed image.
(c) Rain component removed from (a).
(d) Non-rain location $S_{NR}$ that is obtained by $S_{NR}=1-S_{R}$ (the white area).}
\label{fig:results}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{minipage}{0.195\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/test75}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{0.195\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/test75_I_nd}}
\centerline{(b)}
\end{minipage}
\hfill
\begin{minipage}{.195\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/test75_I_nd_rain}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}{.195\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/test75_I_nd_revised}}
\centerline{(d)}
\end{minipage}
\hfill
\begin{minipage}{.195\linewidth}
\centering{\includegraphics[width=.9\linewidth]{images/test75_I_nd_rain_revised}}
\centerline{(e)}
\end{minipage}
\caption{(a) One rain image; (b) background layer without the third constraint;
(c) rain layer without the third constraint;
(d) background layer with the third constraint; (e) rain layer with the third constraint.}
\label{fig:correct_separate}
\end{figure*}
Fig. \ref{fig:sparsity}(a) illustrates the logarithmic probabilities of
several distributions. The Laplacian distribution yields exactly a straight line
connecting the maximum and minimum values.
We can see that the Gaussian distribution falls off the slowest and lies above the
straight line, so it is regarded as non-sparse. The other two distributions,
which lie below the straight line, are classified as sparse according to \cite{Levin_2007_PAMI}.
The Laplacian distribution thus lies on the border between
sparsity and non-sparsity.
To verify the sparsity of rain images,
we conduct an experiment on nearly 200 rain images, some of which
are also used in the experiment section. Here, we use the image
in Fig. \ref{fig:results}(a) as an example to illustrate the sparsity of rain images.
Fig. \ref{fig:sparsity}(b) shows the logarithmic curve (the blue curve)
of the histogram obtained after applying a horizontal derivative filter to it.
The result clearly reveals that the rain image satisfies the sparsity requirement.
However, decomposing a rain image $I$ into the rain layer $I_{R}$ and background layer $I_{NR}$ as
\begin{equation} \label{eq:image_decomposition}
I=I_{R}+I_{NR}
\end{equation}
is a massively ill-posed problem. To simplify this kind of problem,
Levin \emph{et al.} proposed that users label some edges or areas belonging
to $I_{R}$ and others belonging to $I_{NR}$,
thereby strengthening the constraints on the problem \cite{Levin_2007_PAMI}.
Sparsity ensures that an edge of unit contrast will not be split,
and will appear in one layer \cite{Levin_2007_PAMI}.
In our task, we have detected nearly all rain locations, and
the remaining region is labelled as the non-rain area.
Our detection offers better constraints on this ill-posed problem
than the manual labelling in \cite{Levin_2007_PAMI},
and also exploits the role of sparsity to a certain degree.
Unlike \cite{Levin_2007_PAMI}, we relax the probability constraint and utilize a single
Laplacian function to approximate the sparsity of rain images,
which we name the \emph{quasi-sparse distribution}:
\begin{equation} \label{eq:histogram_approximation}
P(x)=e^{- \vert x \vert}
\end{equation}
Hence, the quasi-sparsity prior over the whole image $I$ is as follows:
\begin{equation}\label{eq:laplacian_approximation}
P(I)=\prod_{i, k}e^{- \vert \omega_{i, k} \cdot I \vert}
\end{equation}
where $\omega_{i,k}$ is the $k^{th}$ filter centered
at the $i^{th}$ pixel. Filters
with two orientations (horizontal and vertical) and two orders (the
first and second derivatives) are used here.
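The sparsity check described above can be reproduced with a small NumPy sketch (illustrative only; the function name and the synthetic test image are our own): it computes the log-histogram of horizontal-derivative responses, which for a (quasi-)sparse image peaks at zero, in line with Fig. \ref{fig:sparsity}(b):

```python
import numpy as np

def log_gradient_histogram(img, bins=41):
    """Log-histogram of horizontal-derivative responses; for a
    (quasi-)sparse image this curve peaks at zero and falls off
    at least as fast as the Laplacian's straight line."""
    grad = np.diff(img.astype(np.float64), axis=1).ravel()
    hist, edges = np.histogram(grad, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Guard empty bins before taking the logarithm.
    return centers, np.log(np.maximum(hist, 1e-12))

# Piecewise-constant test "image": most horizontal gradients are zero,
# so the log-histogram peaks in the bin containing zero.
rng = np.random.default_rng(0)
img = np.repeat(rng.integers(0, 255, (64, 8)), 8, axis=1)
centers, logp = log_gradient_histogram(img)
peak = centers[np.argmax(logp)]
print(abs(peak) < (centers[1] - centers[0]))  # True: the peak bin straddles zero
```

For a natural rain image the same curve falls off roughly linearly in $|x|$, which is what motivates the single-Laplacian approximation of Equation (\ref{eq:histogram_approximation}).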
\subsection{Optimization}
For a given rain image $I$, $S_{R}$ is the detected rain location,
and the non-rain location is obtained by $S_{NR}=1-S_{R}$.
The following constraints must be satisfied to separate a rain image
into the rain layer $I_{R}$ and the background (non-rain) layer $I_{NR}$:
\begin{enumerate}
\item $I=I_{R}+I_{NR}$;
\item the gradients of $I_{R}$ and $I_{NR}$ at their corresponding locations
in $S_{R}$ and $S_{NR}$ respectively agree with the gradient of image $I$;
\item the values of $I_{NR}$ at location $S_{NR}$ are close to the value of $I$.
\end{enumerate}
The first two constraints are also utilized in \cite{Levin_2007_PAMI}.
As shown later, using them alone leads to abnormal separation for some specific images.
To improve the separation, we add the third constraint,
which acts as a boundary condition.
As in \cite{Weiss_2001_ICCV}, we assume that
derivative filters are independent over space and orientation, and that
the rain layer $I_{R}$ and the background layer $I_{NR}$ are independent.
According to the first constraint, the quasi-sparsity prior
can then be written as follows:
\begin{equation} \label{eq:prior_define}
P(I)=P(I_{R})P(I_{NR})=\prod_{i, k}e^{- ( \vert \omega_{i, k} \cdot I_{R} \vert + \vert \omega_{i, k} \cdot I_{NR} \vert)}
\end{equation}
We seek $I_{R}$ and $I_{NR}$ that maximize the
above likelihood, which is equivalent to minimizing the following loss function:
\begin{equation}\label{eq:loss_function}
J(I_{R}, I_{NR})=\sum_{i, k} \vert \omega_{i, k} \cdot I_{R} \vert + \vert \omega_{i, k} \cdot I_{NR} \vert
\end{equation}
Combined with the second and third constraints,
we rewrite Equation (\ref{eq:loss_function}) as
\begin{equation}\label{eq:loss_function1}
\begin{split}
&J_{1}(I_{R})=\sum_{i, k} \vert \omega_{i, k} \cdot I_{R} \vert + \vert \omega_{i, k} \cdot (I-I_{R}) \vert \\
& \qquad \quad + \lambda \sum_{i \in S_{R}, k} \vert \omega_{i,k} \cdot I_{R} - \omega_{i, k} \cdot I \vert \\
& \qquad \quad +\lambda \sum_{i \in S_{NR}, k} \vert \omega_{i, k} \cdot I_{R} \vert \\
& \qquad \quad +\eta \sum_{i \in S_{NR}} \vert I_{R} \vert
\end{split}
\end{equation}
where $\lambda$ and $\eta$ are regularization parameters.
If $v$ is defined as the vectorized version of image $I_{R}$,
Equation (\ref{eq:loss_function1}) becomes
\begin{equation}\label{eq:loss_function2}
J_{2}(v)= \Vert Av-b \Vert_{1}
\end{equation}
where $\Vert \cdot \Vert_{1}$ is the $L_{1}$ norm, $A$
is determined by the derivative filters and the parameters $\lambda$ and $\eta$,
and $b$ is determined by the image derivatives,
the values of $I$ at the locations $S_{NR}$, zeros, and $\lambda$, $\eta$.
This is an $L_{1}$-norm optimization problem,
which can be solved by iteratively reweighted least squares (IRLS) \cite{Burrus_2009_CPAM}.
We summarize the process in Algorithm \ref{alg:whole_algorithm}.
Once $v$ is obtained, we reshape it back into the rain-layer image $I_{R}$.
Then, the rain-removed image $I_{NR}$ can be obtained as
\begin{equation}\label{eq:rain_remove}
I_{NR}=I-I_{R}
\end{equation}
An example rain image and its rain-removed result are shown in Fig. \ref{fig:results}(a) and (b), respectively.
In Fig. \ref{fig:results}(c)(d), we show the constructed rain layer and the non-rain location $S_{NR}$.
As mentioned above, the third constraint plays an important role
in the correct separation of rain images. Here, we show
an example in Fig. \ref{fig:correct_separate} to illustrate the role of this constraint.
In Fig. \ref{fig:correct_separate}(b)(c), we can see that a serious
color shift (meaning that the colors of non-rain details in (b) are abnormal)
appears without the third constraint.
The reason is that some colors leak into the rain layer (c) under the under-determined conditions.
By adding the third constraint, the separation quality
is improved and we obtain a natural rain-removed image.
\begin{algorithm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\caption{IRLS}
\label{alg:whole_algorithm}
\begin{algorithmic}
\REQUIRE $A$, $b$, $Iter$
\renewcommand{\algorithmicrequire}{Initialization}
\REQUIRE $v=[ A^{T}A ]^{-1}A^{T}b$
\FOR{$t$=1 to $Iter$ }
\STATE $e=abs(Av-b)$
\STATE $z(i) = e(i)^{-0.5}, i=1, 2, ...$
\STATE $\Omega = diag(z)$
\STATE $v =[ A^{T}\Omega^{T}\Omega A ]^{-1}A^{T}\Omega^{T}\Omega b$
\ENDFOR
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\ENSURE $v$
\end{algorithmic}
\end{algorithm}
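A minimal dense-matrix sketch of the IRLS iteration in Algorithm \ref{alg:whole_algorithm} is given below in Python (the paper's $A$ is a large sparse filter matrix; here we use a small dense toy system, and the constant \texttt{eps} is our own guard against division by zero in the reweighting step):

```python
import numpy as np

def irls_l1(A, b, iters=3, eps=1e-8):
    """Iteratively reweighted least squares for min_v ||A v - b||_1:
    each pass solves a weighted least-squares problem with
    weights |A v - b|^(-1/2), as in Algorithm 1."""
    # Least-squares initialization v = (A^T A)^{-1} A^T b.
    v = np.linalg.solve(A.T @ A, A.T @ b)
    for _ in range(iters):
        e = np.abs(A @ v - b)
        z = (e + eps) ** -0.5          # eps avoids division by zero
        Om = np.diag(z)
        M = A.T @ Om.T @ Om @ A
        v = np.linalg.solve(M, A.T @ Om.T @ Om @ b)
    return v

# Overdetermined toy system with a consistent right-hand side:
# IRLS recovers the generating coefficients.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
v_true = rng.standard_normal(5)
b = A @ v_true
v = irls_l1(A, b)
print(np.allclose(v, v_true, atol=1e-4))
```

In practice each iteration only requires one sparse weighted least-squares solve, which is why a small iteration count (we use $3$) already suffices.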
\section{Experimental Results}
\label{sec:ExperimentalResults}
\begin{table*} [t]
\centering
\caption{The Average Time Consumed by Selected Methods on $256 \times 256$ Images.}
\begin{tabular}{lccccccc}
\hline
Method & \cite{Ding_2015_MTA} & \cite{Chen_2014_CSVT} & \cite{Luo_2015_ICCV} & \cite{Li_2016_CVPR} & \cite{Fu_2017_CVPR} & \cite{Zhang_2018_CVPR} & Ours \\
Time(s) & 1.25s & 97.15s & 69.69s & 1260.40s & 5.30s & 0.20s & 28.01s \\
\hline
\end{tabular}
\label{tab:time}
\end{table*}
\begin{table*} [t]
\small
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{Image Performances (Top: \textbf{PSNR}, Bottom: \textbf{SSIM}) of Different Methods (Rows) on $11$ Synthesized Rain Images (Columns) against Ground-truth.}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
\hline
& Image 1 & Image 2 & Image 3 & Image 4 & Image 5 & Image 6 & Image 7 & Image 8 & Image 9 & Image 10 & Image 11 \\
\hline
\cite{Ding_2015_MTA} & \tabincell{c}{34.65 \\ 0.867} & \tabincell{c}{33.70 \\ 0.889} & \tabincell{c}{33.89 \\ 0.802} & \tabincell{c}{34.17 \\ 0.805} & \tabincell{c}{35.16 \\ 0.861} & \tabincell{c}{35.93 \\ 0.835} & \tabincell{c}{41.29 \\ 0.796} & \tabincell{c}{31.77 \\ 0.811} & \tabincell{c}{32.50 \\ 0.874} & \tabincell{c}{34.58 \\ 0.907} & \tabincell{c}{33.22 \\0.832 } \\
\hline
\cite{Chen_2014_CSVT} & \tabincell{c}{34.31 \\ 0.803} & \tabincell{c}{32.36 \\ 0.759} & \tabincell{c}{34.92 \\ 0.750} & \tabincell{c}{34.68 \\ 0.738} & \tabincell{c}{34.95 \\ 0.774} & \tabincell{c}{32.55 \\ 0.824} & \tabincell{c}{38.58 \\ 0.775} & \tabincell{c}{31.84 \\ 0.602} & \tabincell{c}{32.11 \\ 0.704} & \tabincell{c}{34.59 \\ 0.854} & \tabincell{c}{34.15 \\ 0.784} \\
\hline
\cite{Luo_2015_ICCV} & \tabincell{c}{32.69 \\ 0.767} & \tabincell{c}{30.23 \\ 0.703} & \tabincell{c}{31.53 \\ 0.748} & \tabincell{c}{32.43 \\ 0.820} & \tabincell{c}{33.73 \\ 0.888} & \tabincell{c}{29.45 \\ 0.841} & \tabincell{c}{35.95 \\ 0.784} & \tabincell{c}{29.45 \\ 0.790} & \tabincell{c}{30.43 \\ 0.879} & \tabincell{c}{31.63 \\ 0.864} & \tabincell{c}{32.99 \\ 0.843} \\
\hline
\cite{Li_2016_CVPR} & \tabincell{c}{31.55 \\ 0.701} & \tabincell{c}{30.45 \\ 0.686} & \tabincell{c}{31.23 \\ 0.789} & \tabincell{c}{32.27 \\ 0.691} & \tabincell{c}{33.34 \\ 0.748} & \tabincell{c}{31.13 \\ 0.754} & \tabincell{c}{36.39 \\ 0.681} & \tabincell{c}{29.54 \\ 0.570} & \tabincell{c}{30.32 \\ 0.686} & \tabincell{c}{32.35 \\ 0.786} & \tabincell{c}{32.42 \\ 0.749} \\
\hline
Ours & \tabincell{c}{\textbf{35.46} \\ \textbf{0.886}} & \tabincell{c}{\textbf{35.30} \\ \textbf{0.901}} & \tabincell{c}{\textbf{35.04} \\ \textbf{0.827}} & \tabincell{c}{\textbf{34.86} \\ \textbf{0.832}} & \tabincell{c}{\textbf{35.38} \\ \textbf{0.897}} & \tabincell{c}{\textbf{36.03} \\ \textbf{0.842}} & \tabincell{c}{\textbf{41.31} \\ \textbf{0.846}} & \tabincell{c}{\textbf{31.94} \\ \textbf{0.854}} & \tabincell{c}{\textbf{33.42} \\ \textbf{0.883}} & \tabincell{c}{\textbf{34.91} \\ \textbf{0.916}} & \tabincell{c}{\textbf{34.53} \\ \textbf{0.866}} \\
\hline
\end{tabular}
\label{tab:psnrssim}
\end{table*}
\begin{figure*}[t]
\centering
\begin{minipage}{0.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/001_GT}}
\end{minipage}
\begin{minipage}{0.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd_by_Ding}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd_by_Kang}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd_by_luo}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd_by_li}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd_by_Fu_label}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd_by_Zhang_label}}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR1_I_nd}}
\end{minipage} \\
\vspace{0.5mm}
\begin{minipage}{0.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/005_GT}}
\centerline{(a)}
\end{minipage}
\begin{minipage}{0.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5}}
\centerline{(b)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd_by_Ding}}
\centerline{(c)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd_by_Kang}}
\centerline{(d)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd_by_luo}}
\centerline{(e)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd_by_li}}
\centerline{(f)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd_by_Fu_label}}
\centerline{(g)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd_by_Zhang_label}}
\centerline{(h)}
\end{minipage}
\begin{minipage}{.1\linewidth}
\centering{\includegraphics[width=.99\linewidth]{images/RR5_I_nd}}
\centerline{(i)}
\end{minipage}
\caption{(a) Ground truth. (b) Original synthesized rain images. (c) Results by Ding \emph{et al.} in \cite{Ding_2015_MTA}.
(d) Results by Chen \emph{et al.} in \cite{Chen_2014_CSVT}. (e) Results by Luo \emph{et al.} in \cite{Luo_2015_ICCV}.
(f) Results by Li \emph{et al.} in \cite{Li_2016_CVPR}. (g) Results by Fu \emph{et al.} in \cite{Fu_2017_CVPR}.
(h) Results by Zhang \emph{et al.} in \cite{Zhang_2018_CVPR}. (i) Results by our method.}
\label{fig:result_render_compare}
\end{figure*}
\begin{figure}[t]
\begin{minipage}{0.48\linewidth}
\centering{\includegraphics[width=1\linewidth]{images/PSNR_Comparison}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.48\linewidth}
\centering{\includegraphics[width=1\linewidth]{images/SSIM_Comparison}}
\centerline{(b)}
\end{minipage}
\caption{Objective comparisons with two state-of-the-art deep learning works: (a) PSNR comparison. (b) SSIM comparison.}
\label{fig:PSNR_SSIM}
\end{figure}
In order to verify the effectiveness of our method, several state-of-the-art traditional and deep learning based rain removal works are selected for comparison. The method by Ding \emph{et al.} \cite{Ding_2015_MTA} removes rain streaks from a single image with an $L_0$ smoothing filter derived from the guided filter of He \emph{et al.} \cite{He_2013_PAMI}. This work produces excellent rain removal results for some kinds of images and maintains good visual quality. Meanwhile, several rain removal works based on dictionary learning have appeared in recent years \cite{Fu_2011_ASSP,Kang_2012_TIP,Chen_2014_CSVT}. Among them, the work by Chen \emph{et al.} \cite{Chen_2014_CSVT} produces the best rain removal effect. In addition, two recent works, by Luo
\emph{et al.} \cite{Luo_2015_ICCV} and Li \emph{et al.} \cite{Li_2016_CVPR}, respectively, are also selected for our comparisons.
For deep learning based rain removal methods, we select the two most recent works \cite{Fu_2017_CVPR} and \cite{Zhang_2018_CVPR}.
Compared with other deep learning based works, these two are more robust and obtain better rain-removed visual quality.
We implement our rain removal algorithm in MATLAB on
an Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.5 GHz (2 processors) with 64 GB RAM.
The parameters used in our work are as follows: the size of the window in
Equation (\ref{eq:detect_condition}) is $7 \times 7$;
the number of $K$-means iterations in Section \ref{sec:RainStreaksDetection} is 100;
the thresholds $T1$, $T2$ and $\mu$ are 10, 0.08 and 2, respectively;
the regularization parameters $\lambda$ and $\eta$ in the loss function (\ref{eq:loss_function1}) are $0.25$ and $0.1$;
and the number of IRLS iterations is $3$.
These parameter values are robust in our experiments.
The parameter $T1$, however, may be adjusted slightly for different images.
Because rain falls downward
and its direction $D$ is close to $0$ in most images,
we set the threshold $T1$ to $10$ in this paper.
The rain direction $D$ can easily be estimated approximately by the user.
For rain with a large falling angle (e.g., the sixth row in Fig. \ref{fig:result_compare}),
the threshold $T1$ can be increased.
We first test the run time of the selected methods on images of size $256 \times 256$. Our method takes $28.01$ seconds: the initial detection of rain streaks takes $5.30$ seconds; refining the rain streaks by morphology takes $2.19$ seconds; and the majority of the time, $20.02$ seconds, is spent on separating the rain and non-rain layers using the quasi-sparsity prior. The average times consumed by the other selected methods on images of the same size are listed in Table \ref{tab:time}. By comparison, our algorithm is the fourth fastest among the selected methods.
Because the task of this work is to remove rain streaks from single images,
we evaluate the effectiveness of our algorithm both subjectively and objectively. For objective evaluation, we synthesize rain images from clean images. Two such ground-truth images and the corresponding synthesized rain images are shown in Fig. \ref{fig:result_render_compare}(a) and (b), respectively, and the other columns show the corresponding rain-removed results by different state-of-the-art algorithms and by our method.
We also collect many real rain images and present the corresponding rain removal results in Figs. \ref{fig:result_compare} and \ref{fig:result_compare1} for subjective assessment.
\begin{table*} [t]
\centering
\caption{User Study Result. The Numbers Are the Percentages of Votes Obtained by Each Method.}
\begin{tabular}{lccccccc}
\hline
Method & \cite{Ding_2015_MTA} & \cite{Chen_2014_CSVT} & \cite{Luo_2015_ICCV} & \cite{Li_2016_CVPR} & \cite{Fu_2017_CVPR} & \cite{Zhang_2018_CVPR} & Ours \\
Percentage & 5.50\% & 1.25\% & 2.50\% & 3.75\% & 21.00\% & 9.50\% & 56.50\% \\
\hline
\end{tabular}
\label{tab:statistics}
\end{table*}
\subsection{Objective assessment}
To evaluate the performance of different methods more completely
and accurately, we synthesize rain images using the method in \cite{Luo_2015_ICCV} and apply the different rain removal algorithms to these synthesized images. Then, we calculate the PSNR and SSIM \cite{Wang_2004_TIP} values between the rain-removed images and the ground-truth images.
Fig. \ref{fig:result_render_compare} shows two examples where each row presents a ground-truth image, the rain image (obtained by synthesis),
and the rain-removed results by different methods. Note that we show the corresponding PSNR/SSIM values at the top-left corner of each rain-removed image. The PSNR/SSIM values of more examples by selected traditional methods are shown in Table \ref{tab:psnrssim}. The comparisons of PSNR/SSIM with deep learning methods are shown in Fig. \ref{fig:PSNR_SSIM}.
According to the PSNR/SSIM values, the method by Ding \emph{et al.} \cite{Ding_2015_MTA} produces very good results compared with the other traditional methods. Because of the use of an $L_0$ threshold, objects with large structures in the image will usually be preserved well, thus leading to higher SSIM values. In the meantime, rain streaks below the $L_0$ threshold will be removed, leading to higher PSNR values.
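For reference, the PSNR metric used in these comparisons can be sketched as follows (this is the standard definition for 8-bit images; the toy images and function name are our own):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth
    image and a rain-removed result."""
    err = ref.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(err ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 10 gray levels gives MSE = 100,
# hence PSNR = 10 * log10(255^2 / 100) ~ 28.13 dB.
ref = np.full((8, 8), 100.0)
test = ref + 10.0
print(round(psnr(ref, test), 2))  # 28.13
```

Higher PSNR indicates that fewer rain-induced pixel errors remain, while SSIM complements it by measuring how well image structure is preserved.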
\begin{figure*}[!htb]
\centering
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Ding_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Kang_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_luo_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_li_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Fu_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_by_Zhang_part}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain77_I_nd_part}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain74_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test78_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test9_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test113_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test25_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test105_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test154_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test26_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd_by_Ding}}
\centerline{(b)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd_by_Kang}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd_by_luo}}
\centerline{(d)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd_by_li}}
\centerline{(e)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd_by_Fu}}
\centerline{(f)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd_by_Zhang}}
\centerline{(g)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test110_I_nd}}
\centerline{(h)}
\end{minipage}
\vspace{0.5mm}
\caption{(a) Original rain images. (b) Results by Ding \emph{et al.} in \cite{Ding_2015_MTA}.
(c) Results by Chen \emph{et al.} in \cite{Chen_2014_CSVT}. (d) Results by Luo \emph{et al.} in \cite{Luo_2015_ICCV}.
(e) Results by Li \emph{et al.} in \cite{Li_2016_CVPR}. (f) Results by Fu \emph{et al.} in \cite{Fu_2017_CVPR}.
(g) Results by Zhang \emph{et al.} in \cite{Zhang_2018_CVPR}. (h) Results by our proposed method.}
\label{fig:result_compare}
\end{figure*}
The method by Chen \emph{et al.} \cite{Chen_2014_CSVT} can remove the rain streaks that possess lower pixel intensities, but the rain streaks with higher intensities remain (the reason will be described later).
Furthermore, because the HOG descriptor used in this method cannot distinguish rain streaks from tenuous details well, it loses many details (the second image in Fig. \ref{fig:result_render_compare}).
For these two reasons, its PSNR/SSIM values are relatively lower than those of the method by Ding \emph{et al.}
The work by Luo \emph{et al.} \cite{Luo_2015_ICCV} cannot remove rain streaks well; it only makes rain streaks thinner and weaker in intensity. We show the results of Li \emph{et al.} \cite{Li_2016_CVPR} in the sixth column of Fig. \ref{fig:result_render_compare}. This method removes rain streaks quite well; however, many image details are removed at the same time. It can be seen from Table \ref{tab:psnrssim} that these two methods produce lower PSNR and SSIM values.
Finally, it is seen from Table \ref{tab:psnrssim} that our proposed method consistently produces the best PSNR/SSIM results for all 11 test images compared with the selected traditional methods. For some test images (5 out of 11), the PSNR value of our method is about 1 dB higher than that of the second best method (i.e., Ding's method).
We can see from Fig. \ref{fig:PSNR_SSIM} that our method produces PSNR/SSIM values comparable to those of the state-of-the-art
deep learning methods. For the two rendered rain images shown in Fig. \ref{fig:result_render_compare}, the work by Fu \emph{et al.} \cite{Fu_2017_CVPR} removes nearly all rain streaks and preserves image details relatively well, yet its PSNR/SSIM values are slightly lower than ours. The reason is that our method removes rain streaks only at
the detected rain pixels and leaves the non-rain pixels nearly unchanged.
Though a few light rain streaks remain in our results for these two images, our method preserves image details well in the majority of the image. The work by Zhang \emph{et al.} removes rain streaks well, but seriously loses image details.
That is why this method has high PSNR values while its SSIM values are low.
Besides, these two methods cannot remove all rain streaks in some rendered rain images, whereas our method achieves
good results, especially for practical images, as will be shown later. Deep learning methods
are very good at dealing with rendered rain images because they are trained on them; real-world images are really challenging
for them.
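For reference, the PSNR metric used in the comparison above can be sketched in a few lines of NumPy (SSIM involves local luminance, contrast, and structure statistics and is omitted here); the toy images and the peak value of 255 are illustrative assumptions, not data from our experiments.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a constant offset of 16 gray levels gives MSE = 256.
clean = np.zeros((8, 8))
derained = np.full((8, 8), 16.0)
print(round(psnr(clean, derained), 2))  # 24.05
```

Higher PSNR means the derained image is numerically closer to the rain-free ground truth, which is why a roughly 1 dB gap is a meaningful difference.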
\subsection{User study}
To conduct a visual (subjective) evaluation of the performances of the selected methods, we invited 20 viewers (14 males and 6 females, all of whom are undergraduate, master's, or Ph.D. students in the computer vision field) to evaluate the visual quality of the different methods in terms of the following three aspects:
\begin{itemize}
\item less rain residual,
\item the maintenance of the image details, and
\item overall perception.
\end{itemize}
In the evaluation, $20$ groups of results are selected, and every group contains the results by Ding \emph{et al.}, Chen \emph{et al.}, Luo \emph{et al.}, Li \emph{et al.}, Fu \emph{et al.}, Zhang \emph{et al.}, and our method. To ensure fairness, the results in each group are arranged randomly. For each group, the viewers are asked to select only the one result which they like most by considering the three criteria together.
The evaluation result is shown in Table \ref{tab:statistics}. It is clear that our rain removal results are favored by a majority of viewers (56.50\%).
\begin{figure*}[!htb]
\centering
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test57_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain9_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain17_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain23_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain36_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain53_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/rain73_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test8_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd_by_Ding}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd_by_Kang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd_by_luo}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd_by_li}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd_by_Fu}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd_by_Zhang}}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test58_I_nd}}
\end{minipage}
\vspace{0.5mm}
\vfill
\begin{minipage}{0.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd_by_Ding}}
\centerline{(b)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd_by_Kang}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd_by_luo}}
\centerline{(d)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd_by_li}}
\centerline{(e)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd_by_Fu}}
\centerline{(f)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd_by_Zhang}}
\centerline{(g)}
\end{minipage}
\hfill
\begin{minipage}{.115\linewidth}
\centering{\includegraphics[width=.995\linewidth]{images/test67_I_nd}}
\centerline{(h)}
\end{minipage}
\vspace{0.5mm}
\caption{(a) Original rain images. (b) Results by Ding \emph{et al.} in \cite{Ding_2015_MTA}.
(c) Results by Chen \emph{et al.} in \cite{Chen_2014_CSVT}. (d) Results by Luo \emph{et al.} in \cite{Luo_2015_ICCV}.
(e) Results by Li \emph{et al.} in \cite{Li_2016_CVPR}.
(f) Results by Fu \emph{et al.} in \cite{Fu_2017_CVPR}. (g) Results by Zhang \emph{et al.} in \cite{Zhang_2018_CVPR}.
(h) Results by our proposed method.}
\label{fig:result_compare1}
\end{figure*}
\subsection{Results analysis}
In this subsection, we analyze the rain removal effectiveness
of the different methods. The advantages and disadvantages of each
method are discussed according to the rain-removed results
obtained by applying the selected methods to practical rain images. Notice that some images employed in these experiments are large, so the rain streaks look tenuous.
\textbf{Method by Ding \emph{et al.}:} The first row of Fig. \ref{fig:result_compare} shows a rain image with light rain streaks. The result by Ding \emph{et al.}, as shown in the second column, seems to have removed the rain streaks quite well at first glance. However, when the picture is zoomed in, it is found that a lot of non-rain details are lost. To verify this point more clearly, a small part of the rain picture and its corresponding rain-removed results by the selected methods are shown in the second row of Fig. \ref{fig:result_compare}. Now, it becomes obvious that some details of the tree leaves have been removed together with the rain streaks. This is due to the threshold of the $L_0$ filters used in \cite{Ding_2015_MTA}: non-rain objects whose size is relatively small are mistreated as rain streaks and removed.
The third row still shows a light rain image, but with denser rain streaks. When zoomed in, the detail loss becomes more apparent. The heavy rain streaks in the images shown in the sixth, seventh, and eighth rows of Fig. \ref{fig:result_compare} cannot be removed by the method of Ding \emph{et al.}, because the size of the rain streaks in these images is beyond the preset threshold of the $L_0$ filters. If we set a larger threshold, wider rain streaks will be removed; however, more image details will also be removed at the same time. For the light rain images that have fewer tenuous details (the third, fourth, and sixth rows in Fig. \ref{fig:result_compare1}), this method has satisfactory rain removal effectiveness.
\textbf{Method by Chen \emph{et al.}:} The results by Chen \emph{et al.} are shown in the third column. For the light rain images that have fewer subtle details (such as the image in the fifth row of Fig. \ref{fig:result_compare} and the third, fourth, and sixth rows in Fig. \ref{fig:result_compare1}), this method can obtain good rain removal results. However, if the rain images possess subtle details (such as the first, third, and fourth rows of Fig. \ref{fig:result_compare}), detail loss and image blurring are inevitable. The reason is that the HOG descriptor used here cannot separate rain streaks from subtle details well.
The lost details can be seen clearly in the third image of the second row of Fig. \ref{fig:result_compare}, which is obtained by zooming in on a part of the image in the first row. Moreover, the low-pass filter cannot remove bright
rain streaks completely. Consequently, the method by Chen \emph{et al.} cannot deal with heavy rain images (such as the images in the sixth and seventh rows of Fig. \ref{fig:result_compare}).
\textbf{Method by Luo \emph{et al.}:} The results by Luo \emph{et al.} are in the fourth column of Figs. \ref{fig:result_compare} and \ref{fig:result_compare1}. Obviously, this method cannot remove rain streaks well. This is due to the limited discriminative power of the sparse codes used in this work, which cannot separate a rain image into the rain layer
and the non-rain layer well. However, this method can make the intensity of rain streaks a little weaker. Hence, for the tenuous rain streaks considered in their work, their method seems to remove rain well; when rain streaks become brighter or wider, they cannot be removed well.
\textbf{Method by Li \emph{et al.}:} Li \emph{et al.} used priors for both the background and the rain layer (both based on Gaussian mixture models) to remove rain streaks. We show the results of this method in the fifth column. For the images that have few subtle details (the fifth, seventh, ninth, and tenth images in Fig. \ref{fig:result_compare}, as well as
the third, fourth, and sixth images in Fig. \ref{fig:result_compare1}),
this method can obtain good rain removal effectiveness. However, for rain images that have subtle details (e.g., the first, third, and fourth rows of Fig. \ref{fig:result_compare}), many subtle details are lost. This point can be seen clearly in the fifth image of the second row of Fig. \ref{fig:result_compare}.
As mentioned above, this image is a part of the image in the first row that is zoomed in to show the details more clearly.
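To make the Gaussian-mixture-model priors mentioned above concrete, the following is a minimal sketch of expectation-maximisation for a two-component one-dimensional Gaussian mixture; the synthetic data and the component count are illustrative assumptions, not the patch-based priors actually used in \cite{Li_2016_CVPR}.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture model."""
    # Crude initialisation from the data quantiles.
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.RandomState(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
pi, mu, var = fit_gmm_1d(x)
print(np.sort(mu))  # means recovered near 0 and 5
```

In a rain-removal setting, one mixture would model background patch statistics and another the rain layer, and the fitted densities act as priors during layer separation.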
\textbf{Method by Fu \emph{et al.}:} For the majority of the selected practical images, this work achieves good results,
but there are still some defects. The first apparent one is that this method can cause slight blur in some rainy images,
such as the second and eighth images in Fig. \ref{fig:result_compare}. This is also the reason why this method has
lower PSNR/SSIM values than ours for the images in Fig. \ref{fig:result_render_compare}. The second is generalization:
this method cannot handle some rain images; for example, in the seventh and eighth images in Fig. \ref{fig:result_compare}, rain streaks are left in the results.
\textbf{Method by Zhang \emph{et al.}:} The work by Zhang \emph{et al.} is the most recent one, published at CVPR. We can see that this method faces similar problems to the work by Fu \emph{et al.} \cite{Fu_2017_CVPR}.
Details are lost seriously for some practical images, especially images with slim details (the last images in Fig.
\ref{fig:result_compare} and Fig. \ref{fig:result_compare1}, respectively; the images in this paper can be enlarged to see this clearly).
This method also cannot deal with some rainy images, and apparent rain streaks are left in some rain-removed results.
\textbf{Our work:} The results by our proposed method are shown in the eighth column. Compared with the other traditional rain removal works, our proposed approach achieves better rain removal results. When compared with the deep-learning-based works,
our method produces comparable results for the majority of rain images; for some other rain images that the selected deep-learning-based methods cannot handle well, better rain-removed results are obtained by our method.
Because our method acquires relatively accurate rain locations, the remaining image details can be preserved well. Besides, the image quasi-sparsity prior offers a robust tool for image recovery. Hence, better PSNR/SSIM values and good visual quality are achieved by our proposed method.
\subsection{Limitations}
Experiments show that our method can deal with the majority of rain images.
However, every algorithm has its drawbacks, and ours is no exception. For images containing non-rain objects whose shape and color are very similar to those of rain streaks, some mis-detections are inevitable, which results in the loss of some useful information. Besides, when the rain is very heavy, accumulated rain streaks produce a fog-like effect. A preliminary idea for this situation is to remove the rain streaks with our method first and then apply a dehazing method to remove the haze caused by heavy rain. We note that this situation has been discussed in a very recent work \cite{Li_2016_CVPR}. We will continue to work on this situation in the future. Another direction for future work is to further improve the rain detection.
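As an illustration of the dehazing idea mentioned above, the following sketch computes the dark channel of an image, the building block of the classical dark channel prior for dehazing; this is not part of our method, and the patch size and toy image are illustrative assumptions.

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel: per-pixel minimum over RGB, followed by a local
    minimum filter over a (patch x patch) neighbourhood."""
    min_rgb = image.min(axis=2)
    h, w = min_rgb.shape
    r = patch // 2
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = min_rgb[max(0, i - r):i + r + 1,
                                max(0, j - r):j + r + 1].min()
    return out

# A hazy region tends to have a bright dark channel, while a haze-free
# colourful region has a dark one -- the cue used to estimate haze.
img = np.ones((5, 5, 3)) * 0.9   # bright, "hazy"-looking image
img[2, 2] = [0.9, 0.1, 0.9]      # one saturated, haze-free pixel
dc = dark_channel(img)
print(dc.min(), dc.max())  # 0.1 0.9
```

A full dehazer would use this map to estimate the transmission and atmospheric light; here only the prior itself is sketched.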
\section{Conclusions}
\label{sec:Conclusion}
In this paper, we have proposed a new rain streak detection and removal method for a single color image. Our results suggest that using morphological image processing to extract connected components and quantifying the characteristics of the extracted connected components by
principal component analysis (PCA) are effective in detecting rain streaks. Once rain streaks are detected, we employ an image sparsity prior to accurately decompose a rain image into the rain layer and the non-rain layer, which has also been proven to be effective. In addition, quantitative (objective) evaluations and a user study (subjective) validate the overall rain removal effectiveness of our method, which outperforms four selected traditional methods and is comparable to the most recent deep-learning-based works, all of which were proposed very recently and are widely regarded as the state of the art.
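The detection pipeline summarised above can be sketched as follows: connected components of a binary candidate mask are labelled, and PCA of each component's pixel coordinates yields its elongation and orientation, two cues that distinguish streak-like shapes. The toy mask, the 8-connectivity, and the thresholds below are illustrative assumptions, not our exact implementation.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """8-connected component labelling of a binary mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and labels[si, sj] == 0:
                current += 1
                labels[si, sj] = current
                queue = deque([(si, sj)])
                while queue:
                    i, j = queue.popleft()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                    and mask[ni, nj] and labels[ni, nj] == 0):
                                labels[ni, nj] = current
                                queue.append((ni, nj))
    return labels, current

def elongation_and_angle(coords):
    """PCA of pixel coordinates: elongation ratio and principal angle."""
    coords = coords - coords.mean(axis=0)
    cov = coords.T @ coords / len(coords)
    eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major = eigvec[:, -1]
    ratio = eigval[-1] / (eigval[0] + 1e-12)
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return ratio, angle

mask = np.zeros((12, 12), dtype=bool)
for k in range(10):            # a thin 45-degree "streak"
    mask[k, k] = True
labels, n = connected_components(mask)
coords = np.argwhere(labels == 1).astype(float)
ratio, angle = elongation_and_angle(coords)
print(n, round(angle))  # 1 45
```

Components with a high elongation ratio and an orientation consistent with the dominant rain direction would be kept as rain candidates; roundish components would be rejected.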
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
As public databases of chemical compounds, such as PubChem \cite{kim2015pubchem}\cite{wang2016pubchem} or ChEMBL \cite{bento2014chembl}, and private databases owned by pharmaceutical companies are developed,
there is a growing demand to apply them to improve the estimation of molecular characteristics or molecular design in medicinal and material science.
One of the most difficult obstacles to achieving this goal is that it can be almost impossible to collect annotated labels. To predict the effectiveness of drugs for a disease, longitudinal studies of patients would be needed to collect ground-truth labels. For rare diseases, just acquiring a database of patients may require its own research project. Although high-throughput screening technologies have been developed and the effects of molecules can be evaluated \textit{in vitro}, there still remains a huge gap between experiments \textit{in vitro} and actual effects on a human body, as we can see from the fact that less than 10 percent of drugs passed from Phase I of clinical trials to approval between 2006 and 2015 \cite{mullard2016parsing}. Likewise, in material science, although we can calculate chemical characteristics that correspond to ground-truth labels with first-principles calculations or molecular dynamics, the simulation of many-particle systems is still time-consuming.
Therefore, semi-supervised learning, in which a vast number of unlabeled samples are incorporated with labeled ones to enhance the accuracy of models, will play a key role in the mining of molecules in these areas.
In this paper, we propose a novel extension of the \textit{Paragraph Vector} algorithm \cite{le2014distributed}, a well-known unsupervised representation learning algorithm for documents, to arbitrary graphs, and we further extend it to semi-supervised learning. There have been several approaches to learning representations of graphs. Among them, graph2vec \cite{narayanan2017graph2vec} and PATCHY-SAN \cite{niepert2016learning} are two representatives that use the Weisfeiler-Lehman (WL) relabelling algorithm \cite{weisfeiler1968reduction}\cite{shervashidze2011weisfeiler} to enumerate rooted subgraphs. Instead, our algorithm is based on neural message passing \cite{gilmer2017neural} to build representations.
We implemented our algorithm using the Chainer neural network framework \cite{chainer_learningsys2015} and experimentally demonstrated the following: 1) our unsupervised algorithm for learning graph representations outperforms other previously proposed methods on several benchmark datasets; 2) its extension to semi-supervised tasks achieves better predictive performance than supervised learning that uses only labeled molecules.
\section{Related works}
\subsection{Graph convolution}
A \textit{fingerprint} is a fixed- or variable-length vector of binary or float values that reflects the chemical characteristics of a molecule. It is used in several ways, such as in similarity search of chemical compound databases or data mining with machine learning algorithms.
An Extended-Connectivity Fingerprint (ECFP) \cite{rogers2010extended}, which is one of the most widely used fingerprint-construction algorithms, encodes all subgraphs whose radius is smaller than some fixed number with a hash function. It uses the Morgan algorithm \cite{morgan1965generation} to enumerate the subgraphs of a graph.
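As a rough illustration of the iterative-neighbourhood idea behind ECFP, the following sketch performs Morgan-style relabelling on a toy molecular graph and folds the collected substructure identifiers into a fixed-length bit vector; the atom invariants (atomic numbers), the hashing, and the folding used here are simplified assumptions rather than the exact ECFP procedure.

```python
def ecfp_like(atoms, bonds, radius=2, n_bits=64):
    """Morgan-style iterative relabelling folded into a bit vector.
    atoms: list of initial atom invariants (atomic numbers here);
    bonds: list of (i, j) index pairs."""
    neigh = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neigh[i].append(j)
        neigh[j].append(i)
    ids = [hash(a) for a in atoms]        # level-0 identifiers
    seen = set(ids)
    for _ in range(radius):
        # Each atom's new identifier hashes its own id together with the
        # sorted ids of its neighbours (order-independent).
        new_ids = []
        for i in range(len(atoms)):
            env = (ids[i],) + tuple(sorted(ids[j] for j in neigh[i]))
            new_ids.append(hash(env))
        ids = new_ids
        seen.update(ids)
    bits = [0] * n_bits                   # fold identifiers into bits
    for s in seen:
        bits[s % n_bits] = 1
    return tuple(bits)

# Two isomorphic labellings of the same C-C-O chain give identical
# fingerprints; a C-C-C chain will generally differ.
fp1 = ecfp_like([6, 6, 8], [(0, 1), (1, 2)])
fp2 = ecfp_like([8, 6, 6], [(0, 1), (1, 2)])
fp3 = ecfp_like([6, 6, 6], [(0, 1), (1, 2)])
print(fp1 == fp2)  # True
```

Because relabelling is order-independent, isomorphic graphs produce the same set of identifiers, which is the property ECFP relies on.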
Recently, many learning-based fingerprint algorithms have been proposed. \textit{Graph convolutions}, which are extensions of the convolution operation from multi-dimensional arrays such as images or texts to arbitrary graphs, are attracting much attention. Roughly speaking, they are divided into two types: the message passing neural network (MPNN) \cite{gilmer2017neural} approach and the spectral approach \cite{defferrard2016convolutional}. Our algorithm is inspired by MPNNs. Gilmer et al. showed in \cite{gilmer2017neural} that several MPNN graph convolution algorithms, including neural fingerprints (NFP) \cite{duvenaud2015convolutional} and Gated Graph Neural Networks (GG-NN) \cite{li2015gated}, can be formulated in a unified manner with \textit{message}, \textit{update}, and \textit{readout} functions. Note that NFP can be considered a ``soft'' version of ECFP. As these models consist of differentiable operations, we can train them with backpropagation.
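To make the \textit{message}/\textit{update}/\textit{readout} formulation concrete, the following is a minimal NumPy sketch of one message-passing layer on a toy graph; the specific weight matrices, the sum aggregation, and the ReLU update are illustrative choices, not the exact NFP or GG-NN layers.

```python
import numpy as np

def mpnn_layer(h, adj, w_msg, w_upd):
    """One message-passing step:
    m_v = sum_{u in N(v)} W_m h_u;  h_v' = ReLU(W_u [h_v ; m_v])."""
    m = adj @ (h @ w_msg)               # message: aggregate neighbours
    z = np.concatenate([h, m], axis=1)  # concatenate state and message
    return np.maximum(z @ w_upd, 0.0)   # update with a ReLU

def readout(h):
    """Permutation-invariant readout: sum over node states."""
    return h.sum(axis=0)

# Toy path graph 0 - 1 - 2 with one-hot node features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h = np.eye(3)
w_msg = np.eye(3)                          # identity weights, for clarity
w_upd = np.vstack([np.eye(3), np.eye(3)])  # so that h' = ReLU(h + m)
h1 = mpnn_layer(h, adj, w_msg, w_upd)
print(h1[1], readout(h1))  # [1. 1. 1.] [2. 3. 2.]
```

After one step, the middle node's state already carries information from both neighbours, and the readout gives a fixed-length graph-level vector regardless of node ordering.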
\subsection{Paragraph vector and its extension}
Representation learning for graphs has mainly dealt with supervised learning, but recently, several researchers have proposed algorithms that learn graph representations in an unsupervised manner.
\textit{Continuous Skip-gram model} \cite{mikolov2013efficient} is an unsupervised algorithm that learns a vector representation of a word. Models are trained so that the representation of a word can predict words that surround it. Specifically, let $W$ be a finite set of distinct words and $D = (w_1, \cdots, w_{|D|})$ be a document where $w_d \in W$ and $|\cdot|$ is the cardinality of a multiset. We write the representation of a word $w\in W$ as $v_w\in \mathbb{R}^d$ where $d$ is some fixed integer. The objective function of the continuous Skip-gram model can be written as follows:
\begin{equation} \label{eq:word2vec}
\sum_{i=1}^{|D|} \sum_{-c\leq j \leq c, j\not = 0} \log P(w_{i+j} \mid w_i).
\end{equation}
Here, $P(w'\mid w) = \exp(v_{w'}^T v_w) / \sum_{u\in W} \exp (v_u^T v_w)$ for $w, w' \in W$ and $c$ is a hyper-parameter that determines the window size. $v^T$ denotes a transpose of a vector $v$ (we use column vectors throughout this paper).
As the computation of eq.\eqref{eq:word2vec} is intractable, Mikolov et al. proposed \textit{negative sampling} \cite{mikolov2013distributed} to change the objective function as
\begin{equation*} \label{eq:ns}
\sum_{i=1}^{|D|} \sum_{-c\leq j \leq c, j \not = 0} \left( \log \sigma (v_{w_{i+j}}^T v_{w_i}) + k \mathbb{E}_{w'\sim P_n} \left[ \log \sigma(-v_{w'}^T v_{w_{i}}) \right] \right)
\end{equation*}
where $\sigma(\cdot)$ is a sigmoid function $\sigma(x) = 1/(1+\exp(-x))$, $k$ is a positive integer, and $P_n$ is some distribution over $W$ called noise distribution. One typical example of the noise distribution is a uniform distribution.
This objective function can be interpreted as a Noise Contrastive Estimation (NCE) \cite{Gutman2012NCE}, as indicated in \cite{mikolov2013distributed}.
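The negative-sampling objective above can be sketched directly. The following is a minimal reference implementation over toy dense vectors with a uniform noise distribution; it only evaluates the objective and is not an optimized trainer.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def skipgram_ns_loss(doc, vec, c=2, k=2, rng=random.Random(0)):
    """Negative-sampling objective (to be maximized) for one document.
    doc: list of words, vec: {word: list[float]}, uniform noise distribution."""
    vocab = list(vec)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    total = 0.0
    for i, w in enumerate(doc):
        for j in range(max(0, i - c), min(len(doc), i + c + 1)):
            if j == i:
                continue
            # positive term: context word w_{i+j} around center word w_i
            total += math.log(sigmoid(dot(vec[doc[j]], vec[w])))
            # k Monte-Carlo samples approximate the noise expectation
            for _ in range(k):
                w_neg = rng.choice(vocab)
                total += math.log(sigmoid(-dot(vec[w_neg], vec[w])))
    return total
```

Since $\log\sigma(\cdot)$ is always negative, the objective is bounded above by zero and training pushes it toward zero from below.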
The \textit{Paragraph Vector algorithm} \cite{le2014distributed} is an extension of the continuous Skip-gram model that predicts the words in a document from the representation of that document. Formally, we are given a set of documents $\mathcal{D} = \{D_1, \cdots, D_{|\mathcal{D}|}\}$. The $i$-th document $D_i$ is composed of $|D_i|$ words: $D_i = (w_1^i, \cdots, w_{|D_i|}^i)$ where $w_n^i \in W$. We associate a representation $v_D\in \mathbb{R}^d$ with each document $D\in \mathcal{D}$ and $v_w \in \mathbb{R}^d$ with each word $w\in W$. $d$ is again some fixed integer. The model is trained so as to maximize the log-likelihood:
\begin{equation*}
\sum_{i=1}^{|\mathcal{D}|} \sum_{n=1}^{|D_i|}\log P(v_{w_n^i} \mid v_{D_i}),
\end{equation*}
where $P(v_{w} \mid v_{D}) = \exp(v_{w}^T v_D) / \sum_{u\in W} \exp (v_u^T v_D)$.
We can apply negative sampling to this objective function in the same way as the continuous Skip-gram model.
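As a small illustration, the softmax prediction $P(v_w \mid v_D)$ can be computed directly from the vectors. This is a toy sketch of the probability model only, not the actual Paragraph Vector training code.

```python
import math

def doc_word_prob(v_doc, word_vecs, target):
    """Softmax P(w | D) = exp(v_w^T v_D) / sum_u exp(v_u^T v_D).
    v_doc: document vector, word_vecs: {word: vector}."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    scores = {w: math.exp(dot(v, v_doc)) for w, v in word_vecs.items()}
    z = sum(scores.values())          # partition function over the vocabulary
    return scores[target] / z
```

The partition function $z$ is exactly the sum over the vocabulary that makes the exact objective intractable for large $W$, which is why negative sampling is used in practice.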
Narayanan et al. extended Paragraph Vector to arbitrary graphs and termed the model \textit{graph2vec} \cite{narayanan2017graph2vec}. Intuitively, for graph2vec, a graph and the rooted subgraphs in it correspond to a document and its words in Paragraph Vector, respectively. One of the technical contributions of the paper is the use of the Weisfeiler-Lehman relabelling algorithm \cite{weisfeiler1968reduction} \cite{shervashidze2011weisfeiler} to enumerate all rooted subgraphs up to some specified depth. Our algorithm can also be interpreted as an extension of Paragraph Vector, but instead of enumerating rooted subgraphs explicitly, we apply the neural message passing algorithm recursively to obtain multi-resolution representations of subgraphs.
\section{Proposed method}
In this section, we first present our method for the first contribution: learning hierarchical substructure representations of molecules in an unsupervised setting. We then present a method that utilizes the substructure representations for classification in a semi-supervised setting.
\subsection{Hierarchical substructure representation learning}
Each molecule exhibits hierarchical correlations: atoms tend to bond with specific other atoms, the neighborhoods of atoms tend to form specific groups called substructures, these substructures tend to form much larger specific substructures, and so on.
Note that this kind of hierarchical correlation is widely observed in other domains, so our feature extraction method is applicable to general graph-mining tasks.
Such a correlation at each level, in turn, implies that there exist compact representations of the substructures that characterize the molecule, and such a compact representation, or feature, is often beneficial for the supervised task, especially when the training data are small.
To obtain such a feature for each hierarchical level, we utilize negative sampling \cite{Gutman2012NCE}\cite{mikolov2013distributed}, which optimizes the feature by solving a classification task.
Let $h^{l}_{v} \in \mathbb{R}^d$ denote a discriminative feature vector at level $l$ that is calculated from the information around atom $v \in V_m$ of molecule $m \in M$, where $M$ is a given molecule dataset and $V_m$ is the set of all atoms in molecule $m$. We assume that the feature $h^{l}_{v}$ correlates with the molecule vector $u_{m} \in \mathbb{R}^d$ only when the substructure corresponding to $h^{l}_{v}$ is included in the molecule $m$ (we denote this case as $C=1$), and decorrelates with $u_{m}$ otherwise ($C=0$). The loss function to obtain the feature is given as follows
\begin{eqnarray}
\sum _{m \in M} \sum_{l=1}^L \sum_{v \in V_{m}}
\left(
\log p(C=1|h_v^{l}, u_{m}) +
k {\mathbb{E}}_{h_{v'}^{l}\sim p_v^l} \left[\log p(C=0|h_{v'}^{l}, u_{m}) \right] \right) \label{eq:discriminative loss1}
\end{eqnarray}
where $\mathbb{E}_{h_{v'}^{l}\sim p_v^l} [\cdot]$ denotes an expected value with respect to the negative sampler that samples the substructure feature $h_{v'}^{l}$ of molecule $m'$.
The molecule $m'$ is sampled uniformly from the given molecule dataset $M$, and $h^{l}_{v'}$ is computed for a randomly chosen atom $v'$ at level $l$. If the sampled substructure $h^{l}_{v'}$ matches $h^{l}_{v}$, the molecule $m'$ is rejected, another molecule is resampled, and this procedure is repeated until the molecule $m'$ is accepted.
$k$ denotes a positive integer that determines the number of samples taken by the negative sampler $p_v^l$. In the experiments, we set $k=10$.
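The rejection step of the negative sampler can be sketched as follows. The nested-dictionary layout of the features is a hypothetical choice made here for illustration only.

```python
import random

def negative_sample(features, m, v, l, rng=random.Random(0), max_tries=100):
    """Draw a substructure feature h_{v'}^{l} from a uniformly chosen
    molecule m', rejecting draws identical to h_v^l of molecule m.
    features: {molecule: {level: {atom: feature}}} (hypothetical layout)."""
    target = features[m][l][v]
    molecules = list(features)
    for _ in range(max_tries):
        m2 = rng.choice(molecules)                 # uniform over the dataset
        v2 = rng.choice(list(features[m2][l]))     # random atom at level l
        if features[m2][l][v2] != target:          # accept only a mismatch
            return m2, v2, features[m2][l][v2]
    raise RuntimeError("no negative sample found")
```

In practice the comparison would be on the substructure identity rather than raw vector equality, but the accept/reject control flow is the same.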
We define the model $p(C=1|h_v^{l}, u_{m})$, with a positive scaling hyperparameter $\gamma$, as
\begin{eqnarray}
p(C=1|h_v^{l}, u_{m}) = \sigma( \gamma\, u_{m}^T h_v^{l}). \label{eq:sigmoid}
\end{eqnarray}
By substituting eq.\eqref{eq:sigmoid} to eq.\eqref{eq:discriminative loss1}, the loss function becomes
\begin{eqnarray}
\sum _{m \in M} \sum_{l=1}^L \sum_{v \in V_{m}}
\left(
\log \sigma(\gamma u_{m}^T h_v^{l}) +
k {\mathbb{E}}_{h_{v'}^{l}\sim p_v^l} \left[\log \sigma(-\gamma u_{m}^T h_{v'}^{l}) \right]
\right). \label{eq:discriminative loss2}
\end{eqnarray}
We maximize the above objective function with respect to the parameters that produce $h_v^{l}$, explained later, and directly with respect to all the molecule vectors $u_{m}$ $(m \in M)$.
During the training, the expectation $\mathbb{E}_{p_v^l} [\cdot]$ is replaced by $k$-times sampling from $p_v^l$.
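A direct Monte-Carlo version of this objective might look as follows. This is a sketch with plain Python lists; for brevity the rejection step of the negative sampler is omitted here.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def substructure_loss(h, u, gamma=0.5, k=10, rng=random.Random(0)):
    """Monte-Carlo estimate of the objective (to be maximized).
    h: {molecule: {level: {atom: feature vector}}}, u: {molecule: vector}."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    molecules = list(h)
    total = 0.0
    for m in molecules:
        for l in h[m]:
            for v, hv in h[m][l].items():
                # positive term: substructure present in molecule m
                total += math.log(sigmoid(gamma * dot(u[m], hv)))
                # k negative samples (rejection step omitted in this sketch)
                for _ in range(k):
                    m2 = rng.choice(molecules)
                    v2 = rng.choice(list(h[m2][l]))
                    total += math.log(sigmoid(-gamma * dot(u[m], h[m2][l][v2])))
    return total
```

In the actual training loop, the gradient of this quantity would be taken with respect to both the network parameters behind $h_v^l$ and the molecule vectors $u_m$.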
The state $h^{l}_v$ is computed by the neural network in a hierarchical manner.
At each level $l$, the computation follows the neural message passing scheme \cite{Gilmer2017NMP}.
\begin{eqnarray}
m^{l+1}_{v} = \sum_{w \in N(v)}H_{e(v, w)}h^{l}_{w} \label{eq: message_passing} \\
h^{l+1}_{v} = \mathbf{ \sigma} (h^{l}_{v} + m^{l+1}_{v})\label{eq: update}
\end{eqnarray}
where $e(v, w)$ indicates the type of the bond between two atoms $v$ and $w$, i.e., one of the four bond types in our implementation: single, double, triple, and aromatic, and $N(v)$ denotes the set of neighboring atoms of atom $v$. $H_{e(v,w)}$ is a $d \times d$ matrix, and $\mathbf{ \sigma}$ is an element-wise sigmoid function (with a slight abuse of notation).
Note that $H_{e(v,w)}$ depends only on the type of the bond and is shared across all atoms and molecules, so that substructure representations can be computed for molecules of any size and atomic composition.
Because all variables are differentiable with respect to the parameters of the neural networks, we can optimize the loss function with any variant of stochastic gradient descent.
As can be seen from eqs.\eqref{eq: message_passing} and \eqref{eq: update}, neural message passing consists of two functions, a message function and an update function. The message function applied at atom $v$ collects information from its neighbors as in \eqref{eq: message_passing}, and the update function updates the feature at $v$ based on the collected information and the former feature of $v$ as in \eqref{eq: update}. By applying \eqref{eq: message_passing} and \eqref{eq: update} several times, the updated feature at atom $v$ comes to represent substructures rooted at atom $v$, as illustrated in Figure \ref{fig:message-passing-update}.
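One message-passing level of the two equations above can be sketched with NumPy. The adjacency-list layout and the string bond-type keys are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def message_passing_step(h, adj, H):
    """One level: m_v = sum_{w in N(v)} H_{e(v,w)} h_w,
    then h_v' = sigmoid(h_v + m_v), applied element-wise.
    h: {atom: (d,) vector}, adj: {atom: [(neighbor, bond_type), ...]},
    H: {bond_type: (d, d) matrix, shared across all atoms/molecules}."""
    new_h = {}
    for v, hv in h.items():
        # message function: aggregate bond-type-weighted neighbor features
        m = sum((H[e] @ h[w] for w, e in adj[v]), np.zeros_like(hv))
        # update function: element-wise sigmoid of residual sum
        new_h[v] = 1.0 / (1.0 + np.exp(-(hv + m)))
    return new_h
```

Applying this function $L$ times yields the features $h_v^1, \dots, h_v^L$ representing substructures of growing radius around each atom.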
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth ]{molecule-substructure_surround3.png}
\caption{Message passing and update mechanism used to represent multi-resolution substructures.
Each circle represents an atom, and the edge between circles represents the connection between adjacent atoms. The atoms used to compute the substructure representation around atom 2 are shaded.
}
\label{fig:message-passing-update}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth ]{architecture.png}
\caption{Overall architecture.
Every multi-resolution substructure computed from the input graph contributes to the computation of the output $y$.}
\label{fig:architecture}
\end{figure}
\subsection{Classification using substructure representation}
After obtaining the representations of rooted substructures and molecules, either of them can be used for classification. However, we argue that directly using molecule representations is not effective, because they lose too much information about the molecule to predict its properties; using the set of features of rooted substructures should therefore be better.
Our method of using substructures for classification is motivated by the neural fingerprint (NFP) proposed in \cite{duvenaud2015convolutional}. More specifically, a given molecule $m$ is composed of substructures at $L$ different levels (including individual atoms at the first level).
We construct the following readout function for the classification:
\begin{equation}
y({\bf{h}}_m) = {\rm{NN}} \left(\sum^{L}_{l=1}\sum_{v\in V_m}f(W h^{l}_{v}) \right)
\end{equation}
where ${\bf{h}}_m$ is a set of features computed from molecule $m$ and $y({\bf{h}}_m)$ is output of the classifier.
$W$ is a weight matrix shared by all substructures, $f$ is a non-linear function, and ${\rm{NN}}$ is a neural network mapping the input to the output. In our experiments, $f$ is a softmax function and ${\rm{NN}}$ is a two-layer neural network.
The overall architecture is shown in Figure \ref{fig:architecture}.
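The readout of the classifier might be sketched as follows. This is a toy NumPy version in which the two-layer network is abstracted into a generic callable `nn`.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())           # shift for numerical stability
    return e / e.sum()

def readout(h_levels, W, nn):
    """y = NN( sum_l sum_v f(W h_v^l) ) with f = softmax, as in the text.
    h_levels: list over levels of {atom: (d,) vector};
    W: shared weight matrix; nn: callable standing in for the 2-layer net."""
    s = np.zeros(W.shape[0])
    for level in h_levels:
        for hv in level.values():
            s += softmax(W @ hv)      # each substructure contributes once
    return nn(s)
```

Because every level-$l$ feature of every atom is summed in, substructures of all resolutions contribute to the final prediction, as depicted in the architecture figure.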
\section{A semi-supervised framework for prediction of molecular properties}
In this section, we describe the second contribution of this work. It is motivated by the fact that the number of molecules with known properties is often small while structural information for many undiscovered ones is available. In other words, we have many unlabeled samples and a relatively small number of labeled samples. Therefore we address the problem of how to take advantage of unlabeled molecules to improve prediction.
The main goal of the first contribution presented above is an efficient way to learn substructures of molecules in an unsupervised setting. We now propose a semi-supervised learning approach for classifying a large number of molecules with unknown properties.
\textbf{Problem setting:} Given a set of labeled molecules $\mathcal{M}^{L}=\{m_{1}, \cdots , m_{|\mathcal{M}^{L}|}\}$ with corresponding output $\{o_{1}, \cdots ,o_{|\mathcal{M}^{L}|}\}$, and a set of undiscovered molecules (or unlabeled samples) $\mathcal{M}^{U}= \{ m_{|\mathcal{M}^{L}| + 1}, \cdots, m_{|\mathcal{M}^{L}| + |\mathcal{M}^{U}|} \}$ where $|\mathcal{M}^{U}| \gg |\mathcal{M}^{L}|$.
We try to minimize the following objective function:
\begin{equation}
\sum_{i=1}^{|\mathcal{M}^{L}|} {\rm{Loss}}(m_i, y({\bf{h}}_{m_i}), o_i) + \lambda \sum_{j=1}^{|\mathcal{M}^{L}| + |\mathcal{M}^{U}|} {\rm{Reg}}(m_j, {\bf{h}}_{m_j}) \label{eq:objective}
\end{equation}
where ${\rm{Loss}}(m_i, y({\bf{h}}_{m_i}), o_i)$ is the loss function of molecule $m_i$ that measures the discrepancy between the classifier output $y({\bf{h}}_{m_i})$ and the true output $o_{i}$, and ${\rm{Reg}}(m_j, {\bf{h}}_{m_j})$ is the regularization term on molecule $m_j$ that optimizes the feature ${\bf{h}}_{m_j}$ as defined in eq.~\eqref{eq:discriminative loss2}.
The hyperparameter $\lambda$ controls the relative weight between the purely supervised loss and regularization term.
As suggested by the objective function, the features ${\bf{h}}_{m_i}$ are trained so that they can predict the molecular property for the labeled dataset while keeping rich discriminative information about the molecules for both the labeled and unlabeled datasets.
Note that the objective function \eqref{eq:objective} can again be optimized by stochastic gradient descent.
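The semi-supervised objective has a very simple skeleton. In this sketch the supervised loss and the unsupervised regularizer are passed in as generic callables standing in for the terms defined above.

```python
def semi_supervised_objective(labeled, unlabeled, loss_fn, reg_fn, lam=0.5):
    """Supervised loss over labeled molecules plus a lambda-weighted
    unsupervised regularizer over all (labeled and unlabeled) molecules.
    labeled: list of (molecule, target); unlabeled: list of molecules."""
    sup = sum(loss_fn(m, o) for m, o in labeled)
    reg = sum(reg_fn(m) for m, _ in labeled) + sum(reg_fn(m) for m in unlabeled)
    return sup + lam * reg
```

The regularizer runs over the union of both sets, which is how the unlabeled molecules influence the learned features.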
\section{Experiments and results}
In this section, we present experiments and results that validate the effectiveness of our hierarchical substructure representation and of the semi-supervised approach based on it. To evaluate the genuine effectiveness of our substructure feature, we train the network in an unsupervised manner. We evaluate the accuracy of our method on both unsupervised and semi-supervised learning tasks, corresponding to the two aforementioned contributions.
Through all the experiments, the dimension of the substructure feature vector $h_v^l$ is set to 100 which is empirically determined.
The hyperparameter $\gamma$ in eq.\eqref{eq:discriminative loss2} was selected among $\gamma \in \{0.1, 0.5 ,1.0\}$ and set to 0.5 based on its performance on the training dataset.
The hyperparameter $\lambda$ in the loss function \eqref{eq:objective} of the semi-supervised learning was likewise set to 0.5 based on preliminary experiments.
For the implementation, we used the neural network framework Chainer \cite{chainer_learningsys2015}. All experiments were conducted on a MacBook computer with a 2.7 GHz Intel Core i5 and 8GB of memory.
\subsection{Unsupervised learning task}
\textbf{Datasets:} We used MUTAG \cite{Debnath1991MUTAG} and PTC \cite{Helma2001}, two benchmark graph classification datasets, for our experiments. MUTAG consists of 188 chemical compounds whose class labels indicate whether the compound has a mutagenic effect on a specific bacterium. PTC comprises 344 compounds whose classes indicate carcinogenicity in rats. Both are binary classification tasks.
\textbf{Comparison:} Our proposed approach is compared with existing methods, including node2vec \cite{Aditya2016node2vec}, WL kernel \cite{shervashidze2011weisfeiler}, Deep WL kernel \cite{deepwlkernel} and graph2vec \cite{narayanan2017graph2vec}.
To evaluate the usefulness of the derived features from each unsupervised learning method, we used the strategy from the previous study \cite{narayanan2017graph2vec} of evaluating the performance on the supervised task with the same SVM classifier, but using the features derived from the unsupervised learning.
For a fair comparison, the detailed experimental setting also follows the previous study \cite{narayanan2017graph2vec}: the ratio of the training dataset to the test dataset is 9:1, the split is random, and the split and training are repeated ten times. The average accuracy and standard deviation over the ten trials are then reported.
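This repeated-holdout protocol can be sketched as follows. The classifier itself (e.g. an SVM on the learned features) is abstracted into a `train_eval` callable, which is an assumption of this sketch.

```python
import random
import statistics

def repeated_holdout(labels, train_eval, trials=10, test_frac=0.1,
                     rng=random.Random(0)):
    """Random 9:1 split repeated `trials` times; reports mean and
    standard deviation of test accuracy.
    train_eval(train_idx, test_idx) -> accuracy (e.g. an SVM wrapper)."""
    n = len(labels)
    accs = []
    for _ in range(trials):
        idx = list(range(n))
        rng.shuffle(idx)
        cut = int(n * test_frac)           # 10% held out for testing
        accs.append(train_eval(idx[cut:], idx[:cut]))
    return statistics.mean(accs), statistics.stdev(accs)
```

Reporting the standard deviation across trials matters here because the datasets are small, so single-split accuracies are noisy.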
The results obtained by our method on the two datasets are summarized in Table \ref{tab:my_label}, together with the results of existing methods as reported in graph2vec \cite{narayanan2017graph2vec} for reference.
They show that our method outperforms the existing methods in predictive performance on both datasets, which implies that our substructure feature is more informative than the others.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Datasets} & \textbf{node2vec} & \textbf{sub2vec} & \textbf{graph2vec} & \textbf{WL kernel} & \textbf{Deep WL} & \textbf{Ours} \\ \hline
\textbf{MUTAG} & 72.63$\pm$10.2 & 61.05$\pm$15.80 & 83.15$\pm$9.25 & 80.63$\pm$3.07 & 82.95$\pm$1.96 & \textbf{86.46}$\pm$5.97 \\ \hline
\textbf{PTC} & 58.85$\pm$8.00 & 59.99$\pm$6.38 & 60.17$\pm$6.86 & 59.61$\pm$2.79& 59.04$\pm$1.09 & \textbf{62.86}$\pm$5.71 \\ \hline
\end{tabular}
\caption{Comparison of the usefulness of the features obtained by unsupervised learning methods. The usefulness is evaluated by the performance on the supervised task using the same SVM classifier. Each cell shows the average accuracy and the standard deviation.}
\label{tab:my_label}
\end{table}
\subsection{Semi-supervised learning task}
\textbf{Datasets:} Two typical datasets, solubility \cite{delaney2004esol} and drug efficacy \cite{gamo2010thousands}, are selected to compare the performance of NFP (a supervised learning model) and the proposed semi-supervised approach. The datasets consist of 1144 molecules (solubility) and 10000 molecules (drug efficacy), respectively. In our experiments, we select a relatively small subset of molecules as the labeled set and treat the rest as unlabeled.
\textbf{Comparison:} Our purpose is to compare the supervised method trained with a small number of labeled samples against the semi-supervised method given an additional set of unlabeled samples. As in the unsupervised task, the training and test datasets are randomly split with ratio 9:1, and training is repeated ten times.
The results are reported in Table 2. It is evident that our proposed semi-supervised method outperforms the supervised method with few labeled samples, showing the effectiveness of our semi-supervised learning approach for the prediction of molecular properties.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{data sets}} & \multirow{2}{*}{\textbf{labeled/ all}} & \multicolumn{2}{|c|}{\textbf{Methods}} \\ \cline{3-4}
& & \textbf{NFP (avg $\pm$ std)} & \textbf{SemiNFP (avg $\pm$ std)} \\ \hline
\multirow{4}{*}{Solubility (log Mol/L)} & 3\% & 2.26 $\pm$ 0.07 & 1.85 $\pm$ 0.03 \\ \cline{2-4}
& 6\% & 1.8 $\pm$ 0.03 & 1.56 $\pm$ 0.07 \\ \cline{2-4}
& 12\% & 1.48 $\pm$ 0.2 & 1.24 $\pm$ 0.1 \\ \cline{2-4}
& 18\% & 1.21 $\pm$ 0.12 & 1.12 $\pm$ 0.35 \\ \hline
\multirow{4}{*}{Drug efficacy $EC_{50}$ in nM} & 3\% & 1.74$\pm$0.24 & 1.59 $\pm$ 0.12 \\ \cline{2-4}
& 6\% & 1.55$\pm$0.17 & 1.42$\pm$0.32 \\ \cline{2-4}
& 12\% & 1.57$\pm$0.11 & 1.41$\pm$0.26 \\ \cline{2-4}
& 18\% & 1.51$\pm$0.21 & 1.35$\pm$0.19 \\ \hline
\end{tabular}
\caption{Comparison of supervised and semi-supervised learning tasks with a few labeled molecules}
\end{center}
\end{table}
\section{Conclusion}
In this paper, we proposed a novel hierarchical feature extraction method that describes molecular characteristics in a compact vector form in an unsupervised setting. The features are trained so that they retain the discriminative information of the molecules as much as possible at each hierarchical level.
This feature extraction method not only yields state-of-the-art performance on the unsupervised task but also makes it possible to introduce a semi-supervised learning framework for supervised tasks, and our experiments successfully demonstrate the effectiveness of this semi-supervised learning.
To the best of our knowledge, this is the first study that brings the semi-supervised learning framework to the prediction task of molecular properties.
\bibliographystyle{plain}